In higher education, college and university rankings are listings of educational institutions in an order determined by any combination of factors. Rankings can be based on subjectively perceived "quality," on some combination of empirical statistics, or on surveys of educators, scholars, students, prospective students, or others. Prospective students often consult such rankings when deciding where to apply and which school to attend.
Rankings vary significantly from country to country. A Cornell University study found that the rankings in the United States significantly affected colleges' applications and admissions. In the United Kingdom, several newspapers publish league tables which claim to rank universities.
The U.S. News & World Report rankings
The best-known American college and university rankings have been compiled since 1983 by the magazine U.S. News and World Report, based on a combination of institutional statistics and surveys of university faculty and staff members. The college rankings were not published in 1984, but have been published in every year since. The precise methodology used by the U.S. News rankings has changed many times, and not all of the underlying data are available to the public, so peer review of the rankings is limited. (A private 1997 review by the National Opinion Research Center, commissioned by U.S. News itself, was later published by the Washington Monthly; it contained several serious criticisms of the rankings' methodology.)
The U.S. News rankings, unlike some other such lists, create a strict hierarchy of colleges and universities in their "top tier," rather than ranking only groups or "tiers" of schools; the individual schools' order changes significantly every year the rankings are published. The most important factors in the rankings are:
- Peer assessment: a survey of the institution's reputation among presidents, provosts, and deans of admission of other institutions
- Retention: six-year graduation rate and first-year student retention rate
- Student selectivity: standardized test scores of admitted students, proportion of admitted students in upper percentiles of their high-school class, and proportion of applicants accepted
- Faculty resources: average class size, faculty salary, faculty degree level, student-faculty ratio, and proportion of full-time faculty
- Financial resources: per-student spending
- Graduation rate performance: difference between expected and actual graduation rate
- Alumni giving rate
All these factors are combined according to statistical weights determined by U.S. News. The weighting is often changed by U.S. News from year to year, and is not empirically determined (the NORC methodology review said that these weights "lack any defensible empirical or theoretical basis"). The first four such factors account for the great majority of the U.S. News ranking (80%, according to U.S. News's 2005 methodology), and the "reputational measure" (which surveys high-level administrators at similar institutions about their perceived quality ranking of each college and university) is especially important to the final ranking (accounting by itself for 25% of the ranking according to the 2005 methodology).
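The weighted-sum approach described above can be sketched in a few lines of code. The factor names and weights below are illustrative only, loosely echoing the 2005 figures cited above; they are not U.S. News's actual formula, which is proprietary and changes from year to year.

```python
# Illustrative sketch of a weighted-sum composite score.
# Weights are hypothetical (loosely echoing the 2005 figures cited
# in the text), NOT U.S. News's actual methodology.

WEIGHTS = {
    "peer_assessment": 0.25,
    "retention": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "graduation_performance": 0.05,
    "alumni_giving": 0.05,
}

def composite_score(indicators: dict) -> float:
    """Combine normalized factor scores (each on a 0-100 scale)
    into a single weighted composite."""
    return sum(WEIGHTS[k] * indicators[k] for k in WEIGHTS)

# Hypothetical institution, with each factor already normalized to 0-100.
school = {
    "peer_assessment": 90, "retention": 95, "faculty_resources": 85,
    "student_selectivity": 92, "financial_resources": 80,
    "graduation_performance": 70, "alumni_giving": 60,
}
print(round(composite_score(school), 1))  # prints 86.8
```

Note that because the weights themselves are chosen by the publisher rather than derived empirically (the NORC criticism quoted above), different weightings over the same data can reorder the schools.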
Other organizations which compile general annual college and university rankings include the Fiske Guide to Colleges and the Princeton Review. Many specialized rankings are available in guidebooks for undergraduate and graduate students, dealing with individual student interests, fields of study, and other concerns such as geographical location, financial aid, and affordability.
Among the best-known rankings dealing with individual fields of study is the Philosophical Gourmet Report or "Leiter Report" (after its founding author, Brian Leiter of the University of Texas at Austin), a ranking of departments of analytic philosophy. This report has been at least as controversial within its field as the general U.S. News rankings, attracting criticism from many different viewpoints, but it is also extremely popular and well regarded by many in the profession. Notably, practitioners of continental philosophy, who perceive the Leiter report as unfair to their field, have compiled alternative rankings.
The Times Higher Education Supplement, a British publication, annually publishes the Times Higher World University Rankings, a list of 200 ranked universities from around the world.
Criticisms of rankings
College and university rankings, especially the well-known U.S. News rankings, have drawn significant criticism from within and without higher education. Critics feel that the rankings are arbitrary and based on criteria unimportant to education itself (especially wealth and reputation); they also charge that, with little oversight, colleges and universities inflate their reported statistics. Beyond these criticisms, critics claim that the rankings impose ill-considered external priorities on college administrations, whose decisions are sometimes driven by the need to create the most desirable statistics for reporting to U.S. News rather than by sound educational goals.
Some of the specific data used for quantification are also frequently criticized. For instance, Rice University, with a top-five endowment and a generous financial aid program, is ranked in the mid-twenties for "Financial Resources". As another example, the "Peer Assessment" weighs the opinions of administrators at lesser-known schools such as Florida Atlantic and North Dakota State equally with those of, say, Harvard and Stanford. Students with their sights set on the best graduate schools may not care which programs the administrators of lower-ranked schools have heard of, and vice versa.
Other critics, seeing the issue from students' and prospective students' points of view, claim that the quality of a college or university experience is not quantifiable, and that the ratings should thus not be weighed seriously in a decision about which school to attend. Individual, subjective, and random factors all influence the educational experience to such an overwhelming extent, they say, that no general ranking can provide useful information to an individual student.
As these critics illustrate, the difference between an "excellent" school and a "good" one is often that most of the departments in the excellent school are excellent, while only some of the departments in the good school are. Similarly, the difference between an excellent department and a good one might be that most of the professors in the excellent department are excellent, while only some in the good department are. For an individual student, depending on the student's choice of field of study and professors, this will often mean that there is no difference between an excellent college or university and a merely good one; the student will be able to find excellent departments and excellent faculty to work with even at an institution ranked "second-tier" or lower. Statistically, the rankings describe distributions with large variances and small differences between the individual universities' means.
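The statistical point can be illustrated with a toy simulation. All numbers here are invented for illustration: two institutions whose mean departmental quality differs slightly, but whose within-institution spread is large, overlap heavily, so a department at the "good" school frequently outscores one at the "excellent" school.

```python
# Toy illustration (all figures invented): department-quality scores at
# an "excellent" school and a "good" school, with a small difference in
# means and a large spread within each institution.
import random

random.seed(0)
excellent = [random.gauss(75, 10) for _ in range(30)]  # mean 75, sd 10
good = [random.gauss(70, 10) for _ in range(30)]       # mean 70, sd 10

# Fraction of cross-school pairings in which the "good" school's
# department outscores the "excellent" school's department.
wins = sum(g > e for g in good for e in excellent) / (30 * 30)
print(f"good-school department better in {wins:.0%} of pairings")
```

Despite the five-point gap in means, a substantial fraction of pairings favor the lower-ranked school, which is the sense in which the variance swamps the difference between institutional averages for any individual student.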
Complicating matters further, as most educators and students observe, individuals' opinions about the excellence of academic departments and, especially, of professors vary widely with personal preference. And the quality of an individual student's education is determined mostly by whether the student happens to encounter a small number of professors who "click" with and inspire him or her. Similarly, the main difference between a "good" or "second-tier" large state university and an "excellent" or "top-tier" prestigious smaller institution, for the student, is often just that, at the larger school, the student needs to work a bit harder and be a bit more assertive and motivated in order to extract a good education. For many students this difference is not large enough to justify a preference for the smaller institution, though some individuals do prefer a smaller school.
Forget U.S. News Coalition
In the 1990s a coalition of student activists calling themselves the Forget U.S. News Coalition (occasionally substituting a coarser word for "Forget") arose, based initially at Stanford University. FUNC attempted to influence college and university administrations to reconsider their cooperation with the U.S. News rankings. It met with limited success: some administrations encouraged the development of alternatives to the rankings, but most institutions (including Stanford) continued to cooperate with U.S. News. Critics of FUNC question its motives, claiming that the organization objects to the rankings not on principled grounds but because Stanford has ranked below Harvard, Yale, and Princeton for the past ten years.
Colleges and criticism of U.S. News rankings
Reed College has neither cooperated with the U.S. News rankings nor submitted any institutional data to U.S. News since 1994, and its administration has been outspoken in its criticism of the rankings. Some critics charge, and Rolling Stone magazine reported, that Reed's "second-tier" or lower ranking in U.S. News's lists, which is based on U.S. News estimates of the non-submitted data, is artificially depressed by U.S. News as retribution for Reed's harsh criticism of the rankings.
Ohio Wesleyan University has also declined to cooperate with the editors of the U.S. News rankings, citing its criticism of the usefulness of college rankings to students.
Rankings on the Web
World university rankings
- Times Higher Education Supplement Australian Rankings (http://www.australian-universities.com/rankings.php)
- Australian university ratings by The Good Guides (http://ratings.thegoodguides.com.au/)
- Universities (http://www.macleans.ca/universities/article.jsp?content=20031106_133237_3292) by Maclean's Magazine