Rubricization misleads people, particularly administrators, into believing that the assessment is objective and valid. Furthermore, fitting people and programs into quantitative rubrics limits creativity and risk-taking, which are essential to supporting and encouraging quality.
The other day, I was struck by a form as I looked over merit evaluation materials. It was called an “Outlets Table,” consisting of a rubric representing the “caliber” of the outlets where individual faculty members publish.
The categories were interesting. They included the name of the press, followed by counts of the individual’s publications accepted or in press at the outlet, the year the press was founded, the number of books or articles the press or journal produces annually, the type of publisher (university, commercial, etc.), and a small space to justify why a particular press was chosen. While the directions included a caveat allowing departments to provide different categories, the examples clarified what administrators valued—metrics that can be interpreted as markers of prestige.
That form is one of many examples I’ve encountered recently where rubrics seem to be dominating how administrators and faculty approach assessing performance and other aspects of academic life.
For example, UC Berkeley’s rubric for assessing candidate contributions to diversity, equity, inclusion, and belonging indicates how to score an individual’s knowledge, understanding, and commitment to DEIB issues.
Where is this love affair with rubrics coming from? I’m unsure if I have an answer. Still, I have suspicions, and a likely culprit is accrediting agencies pushing assessment programs onto all university departments, usually with predictably empty metrics.
Don’t get me wrong; I’m in favor of self-assessment. But my take is based on spending about ten years as the assessment coordinator for two university departments. The pressure to create systems for assessment encourages reducing quality to quantities, often producing a score meant to show whether a program is performing at an arbitrarily defined level.
The protocol involves assigning subjective values to things, such as capstone papers, and then putting those values into a rubric that sets a level at which the assessment target is considered to be achieved. In the case of many assessment programs, it becomes somewhat absurd because student papers, which have already been assessed (graded), are used to assess if students are achieving learning outcomes specified at the department level. But that outcome may have little connection to the aims of faculty teaching those classes.
There are numerous reasons to reject this approach. For one thing, departmental learning outcomes are problematic because they assume all faculty approach the teaching and structuring of their classes in the same way and focus on the same goals. Moreover, that approach undermines academic freedom; it superimposes assumed common ideas about learning on all faculty while ignoring that different faculty can and should emphasize various student expectations and results.
There is no single way to teach or measure teaching quality. In my case, I never provide learning outcomes on syllabi because I find them restrictive and largely meaningless. I’m primarily interested in one outcome—that students are challenged to think in new ways. Other outcomes vary by student. For example, if a student needs writing work, I want that student to develop that skill. On the other hand, another student may be a superb writer, and in that case, I’m more interested in seeing them develop argument skills.
So my approach doesn’t fit well into assessment rubrics that emphasize particular categories, like whether students understood how a concept like religion had been defined by various scholars or improved their essay writing skills. Moreover, emphasizing things like learning outcomes over the quality of learning is a prime example of the neoliberal project that continues to undermine higher education.
It generates a variety of problems, including:
- Promoting quantification of all aspects of education, including the idea that there are clear, observable results that can be “measured” in a class. This isn’t the case for many types of classes and students.
- Focusing on the misguided idea that learning is the responsibility of those teaching. It isn’t; it’s the responsibility of learners. Teachers are responsible for guiding and assisting students in the process. The learning outcomes model shifts learning responsibility to teachers and institutions rather than emphasizing that students should own their learning and education. That’s a recipe for poor education.
- Ignoring that students learn and absorb information in complex ways. Courses focused on critical thinking or understanding other cultures do not lend themselves to simple lists of learning outcomes. In my courses, students broaden their understanding of human behavior and develop critical thinking skills, and how either happens varies significantly from student to student.
Teaching is more than showing students skills like interpreting a text or building an argument. To paraphrase Molly Worthen’s excellent NY Times article, teaching is about cultivating a mindset, helping students immerse themselves in a body of knowledge, and questioning assumptions, including assumptions such as the idea that we can quantify learning.
The problem with most assessment rubrics is that quality is not easily reduced to a set of numbers and categories. Variables like the number of books a company publishes or how old the press is are irrelevant to the quality of its publishing. Indeed, I can think of commercial academic publishers that seem to focus almost entirely on quantity over quality. It may well be that the press publishing the highest quality work on a given topic is a small publisher with rigorous review and careful editing but without a large market.
Smaller journals, such as The Journal of Japanese Studies in my field, may not have large circulations and high impact factors, but they publish important work. Citation counts, for example, do not necessarily reflect the quality of a journal, and counting the number of faculty publications completely ignores the amount of time that may be required to produce just one article or book of superior quality.
Rubricization interferes with the careful, qualitative assessment of faculty or programs because it misleads people, particularly administrators, into believing that the assessment is objective and, therefore, valid. In other words, it reduces subjective qualities to numbers that make it appear as though the assessment being conducted is in some way objective and an accurate representation of quality. Furthermore, it emphasizes conformity because it forces people to fit what they do individually and in their academic programs into a structure that limits creativity and risk-taking, which is essential to supporting and encouraging quality. In short, rubrics are square. As philosopher Robert Pirsig put it, “absence of Quality is the essence of squareness.”
Of course, the goal of all this rubricization isn’t a genuine quality assessment. As the aforementioned merit form showed, administrator interests tend to lie in the “caliber” of faculty and programs as evidenced by quantities over their quality (these two are often erroneously conflated). That is another way of saying that the focus is on prestige and image over content and quality.
High numbers make quantocrats happy because they provide observers with “data” that feel objective, even though they are inherently subjective. Then, the data become promotional tools used by media outlets such as U.S. News & World Report and spun into claims of high value and quality.
Academics need to push back against the rubricization of higher education. We must tell administrators that attempting to quantify every aspect of our work by assigning numbers that fit into little boxes on a form is not a quality measure. Instead, it’s a measure of conformity that maintains the status quo and undermines the qualities that make faculty, researchers, and educational institutions great—innovation, creativity, and commitment to rigor over volume.