College rankings have become as much a part of the college search and selection process as interviews and campus tours. Yet of these informational sources, it is perhaps the campus tour that is the most accurate and valuable. Tours are carefully planned, scripted, and executed by trained tour guides. Weather permitting, the tour experience is likely to be valuable and exciting: students begin to see themselves at the institution, and their tour is most likely led by someone very close to their own age. Interviews, by contrast, reflect all the biases of the interviewer, the limitations of the site, and the nerves of the applicant. And rankings, well, where to begin?
Earlier this year I suggested that college rankings are an experiment we must consider tried, tested, and failed. The biases are now well understood. The financial motivations for publishers are obvious even to the casual observer. The shortcomings have become evident to institutions as they have spent time and money to achieve (or try to achieve) the highest possible ranking. Anyone who asks “have the rankings resulted in better colleges and universities?” comes to the same conclusion: a resounding no. In fact, the following has been found:
1. Institutions are spending considerable amounts of money each year to earn the most points in ranking categories, whether or not those categories align with their own priorities or goals. This takes resources (time and money) away from other priorities and needs.
2. With few exceptions, there is little evidence that institutions have moved up significantly in the rankings. For the most part, schools remain in competition with their academic peers and, at best, we see small movements among and within clusters. For example, one year a school moves up one spot as another drops one; the next year, they may well flip back. Large moves are not common. And when they do occur, well, see the next point.
3. Some colleges and universities have chosen to “game” the system either by finding one or two categories in which, through large investment, they can significantly increase their points, OR (more nefariously) by outright misrepresenting reported information used in the rankings calculations.
One example of “gaming” (or perhaps more accurately “playing to”) the rankings is a campaign to encourage all alumni to give “even one dollar” to the university. This may not make a big difference in the university’s total donations for the year, although it may start some alumni on a path toward greater giving. But it WILL significantly change the “percentage of alumni who donated to the university” category. Another example may be how class and section sizes are defined, tagged, staffed, and reported in order to lower average class size. Yet another example may be how and when students are admitted, with those having lower SAT scores being admitted mid-year so as not to negatively impact the rankings. The list goes on. And even as publications modify their rankings, whether to address legitimate criticisms or to minimize differential abilities (or intents) to game them, institutions always find ways to play them.
But far more egregious, of course, is outright misrepresentation of facts, whether through obfuscation, inflation, manipulation, or outright fabrication. Recent years have seen several high-profile examples where universities have been caught, exposed, and sanctioned. Small numbers, but likely only the tip of the iceberg.
And even more recently, we have seen an increasing number of universities opting out of the rankings altogether. This started with medical schools, largely in reaction to the belief that the rankings were biased against those institutions whose priorities for admission were not only discounted by rankings calculations but were diametrically opposed to them. In other words, they were being penalized for delivering upon their own strategic goals for admissions. This seems to have caught on, as more professional schools (e.g., law schools) and even entire universities are beginning to opt out or show signs they are ready to do so. This, quite predictably, has resulted in the rankings being “updated” to reflect changes in the marketplace and in how students are making their choices, and, of course, to keep as many colleges and universities on the rankings treadmill as possible.
Magazine sales and online subscriptions are not the only ways these companies make money. Many also sell “badges” to the institutions, allowing them “official and sanctioned” bragging rights for whatever ranking accolade they have earned. And there are many categories and subcategories, grouped by size, type of institution, highest degree awarded, geographic region, and more.
The financial engine these rankings represent to their publishers depends on their ability to keep the vast majority of colleges and universities engaged.
Again, the questions can (and should) be asked:
1. How have rankings improved higher education?
2. How have rankings enabled students to make better, and better informed, choices?
3. What other impacts have decades of rankings had on our higher education system and its institutions?
I have explored some of this previously, as have others. The answer to the first question is, in short, that they have not. Rankings have cost time and money, made little difference, and forced some colleges and universities to take attention away from their highest priorities. The answer to the second question is “unlikely.” While there may be value in presenting comparative information in one place, it is not clear this is what makes (or should make) the most difference in a student’s selection process, and there is evidence that reported information is inaccurate, unrepeatable, or at least inconsistent across institutions. Rankings are more likely to distract or confuse students than to enable them.
The third question deserves its own article. For example, rankings can be blamed for costly investments and unwise decisions by some universities to add graduate and research programs. Rankings have diverted funds at colleges and universities, many of them under-resourced, away from where those funds might be more strategically invested or more desperately needed. And rankings have undeniably forced a move toward uniformity (sameness) rather than distinction (differentiation), something no ranking organization would admit is a desired outcome or a good idea. These impacts will be the subject of a future article.
Another reasonable question: why have colleges and universities allowed a news magazine to dictate their values, their priorities, and how and on what they should invest their resources? Why have they abdicated responsibility for asserting their own values and value proposition, or sharing their points of distinction and points of pride? As I have written before, universities should set their own goals, achieve against them, and tell why that commitment and success matter. If they matter, people will pay attention. Allow colleges and universities to articulate, promote, market, and own their priorities, distinctions, values, commitments, and outcomes. Then let the free market determine who goes where and which schools survive, thrive, adapt, or close. Not a magazine.
Some closing points:
1. The USN&WR rankings remain the most widely recognized college rankings today.
2. There is growing belief that the USN&WR rankings are too nuanced, too easily manipulated, and biased.
3. Colleges and universities are opting out of rankings in increasing numbers. This is likely to continue until (even after several modifications in an attempt to address growing concerns and criticisms) the ranking system collapses, i.e., no longer has sufficient participants, and disappears.
4. Recent years have seen an explosion of new college rankings. Nearly all are conducted and reported by major magazines or other media. This has provided a “prize for everyone” landscape in which most institutions are able to claim a ranking in something and, of course, display their purchased “badge.”
5. New rankings and ratings systems are likely to gain in popularity and acceptance AND provide greater utility in the college selection process as one of many information sources used by students and their families.
Finally, this month saw the announcement of a major change by Money magazine, a fast-rising competitor to USN&WR in the rankings game. Colleges on Money’s 2023 list will get a rating (not a numeric ranking) of between 2.5 and 5 stars.
Is Money magazine’s new rating system an improvement? Yes. At least when it comes to its utility for students and their families, and likely also in how colleges and universities will choose to respond. There will be many institutions in each star (or half-star) category. And very likely, colleges and universities will have to accept that that is their place, at least according to this one rating system. This admission and acceptance will allow them to focus on what matters most to them, to their students, and to those they wish to attract in the years ahead.