
Eight years of ranking: What have we learned?
It is more than eight years since Shanghai Jiao Tong University produced its first Academic Ranking of World Universities. Since then, international university rankings have multiplied. There are now two main competitors producing general rankings that include indicators other than research, Quacquarelli Symonds (QS) and Times Higher Education. There are also web-based rankings, Webometrics and 4ICU, and research-based rankings from Taiwan, Turkey and Australia, the last of which seems to have disappeared. Then we have rankings from Russia and France.
Nor should we forget the European U-Multirank project, which has just moved out of the pilot stage, the regional rankings for Asia and Latin America, the various disciplinary sub-rankings or the rankings of business schools.
There are now quite a few things that we have learned about ranking universities.
Measuring research is the easy bit
There are several ways of measuring research. You can count total publications, publications per faculty member, total citations per faculty member, citations per paper, the h-index, international collaboration, money spent or reputation. All of these can be normalised in several different ways.
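To make the disagreement concrete, here is a minimal sketch, with entirely invented figures for two hypothetical universities, showing how two common measures, total citations and citations per paper, can put the same pair of institutions in opposite order.

```python
# Invented figures for two hypothetical universities; neither the names
# nor the numbers are real. The point is only that the metrics disagree.
universities = {
    "Big U":   {"papers": 20000, "citations": 300000},
    "Small U": {"papers": 2000,  "citations": 50000},
}

by_total_citations = sorted(
    universities, key=lambda u: universities[u]["citations"], reverse=True
)
by_citations_per_paper = sorted(
    universities,
    key=lambda u: universities[u]["citations"] / universities[u]["papers"],
    reverse=True,
)

print(by_total_citations)      # ['Big U', 'Small U']  (quantity wins)
print(by_citations_per_paper)  # ['Small U', 'Big U']  (quality wins)
```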
The result is that ranking is beginning to look like heavyweight boxing with no undisputed champion in sight. Cambridge is top of the QS rankings mainly because it has a good reputation for research, Harvard is first in the Shanghai rankings because it produces more of just about everything and Caltech leads in the new Times Higher Education World University Rankings because of an emphasis on quality rather than quantity.
Nobody has figured out how to measure teaching
QS has an indicator that measures the student-faculty ratio but this is, as they admit, a very crude instrument. For one thing, it includes academics who only do research and may never see the inside of a lecture hall. Times Higher Education has a cluster of indicators concerning teaching, but it claims only that these have something to do with the learning environment.
If anyone does try to measure teaching quality seriously, the best bet might be some sort of survey of student satisfaction, as has apparently been done successfully in the U-Multirank pilot project, or perhaps http://ratemyprofessors.com could go global.
In any case, for better students and better schools, teaching is largely irrelevant. Recruiters do not head for Harvard, Oxford and the grandes écoles because they have heard about the enthusiasm with which lecturers jump through outcomes-based education hoops. They go there because that is where the smart people are and smart people are smart before they go to university.
Getting there first is important
The Academic Ranking of World Universities published by Shanghai Jiao Tong University is not noticeably better than the Performance Ranking of Scientific Papers for World Universities produced by the Higher Education Evaluation and Accreditation Council of Taiwan. But it still gets a great deal more publicity.
A very good research-based ranking has been produced by the Middle East Technical University in Ankara, but hardly anybody knows about it: the niche has already been occupied.
Brand names matter
If anyone other than a magazine with the word ‘Times’ in its name and an association with Thomson Reuters had produced a ranking with Alexandria University in the top 200 in the world, or for that matter even put it first in Egypt, it would have been laughed out of existence. The QS rankings have flourished partly because they are linked to a successful graduate recruitment enterprise.
Beware of methodology
The QS rankings are well known for a fistful of methodological changes that have sent universities zooming up and down the tables. Although the methodology has officially stabilised, there have still been unannounced changes.
In 2010, something happened to the curve for citations per faculty (a mathematician could explain exactly what) that boosted the scores of the high fliers, except, of course, for the universities in joint first place, while lowering those of the less favoured. One result was a boost for Cambridge, no doubt to everyone’s astonishment.
Between 2010 and 2011, Times Higher Education made so many changes that talking about improvements over the year was quite pointless.
Weighting is not everything
Weighting is very important, though. It is increasingly common for rankings to have an interactive feature that allows readers to change the weightings and, in effect, to construct their own rankings. It is instructive to fiddle around with the indicators and see just how much difference changing the weighting can make.
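As a rough illustration, all weights and scores invented, the sketch below recomputes an overall score under two weighting schemes; the leader flips even though the underlying indicator scores never change.

```python
# Invented indicator scores (0-100) for two hypothetical universities.
scores = {
    "Alpha U": {"research": 95, "teaching": 60},
    "Beta U":  {"research": 70, "teaching": 90},
}

def overall(name, weights):
    """Weighted sum of indicator scores, the arithmetic behind
    the interactive 'build your own ranking' features."""
    return sum(weights[k] * scores[name][k] for k in weights)

research_heavy = {"research": 0.7, "teaching": 0.3}
teaching_heavy = {"research": 0.3, "teaching": 0.7}

for weights in (research_heavy, teaching_heavy):
    ranking = sorted(scores, key=lambda u: overall(u, weights), reverse=True)
    print(weights, "->", ranking)

# Research-heavy: Alpha U 84.5 vs Beta U 76.0 -> Alpha U leads.
# Teaching-heavy: Alpha U 70.5 vs Beta U 84.0 -> Beta U leads.
```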
The missing indicator
In the final analysis, the quality of a university is largely dependent on the average intelligence of its students, which is why the most keenly scrutinised section of US News’ Best Colleges is the one on ACT and SAT scores.
International rankings have barely begun to tackle this question. I doubt that anyone is very interested in the score on QS’s employer survey or even in the Paris Mines ranking, which counts the number of top bosses among a university’s alumni.
It would probably be quite technically feasible to work out the relative selectivity of universities, but there are likely to be insurmountable political problems.
What next?
There will surely be more international rankings of one sort or another. It is unlikely, though, that any will ever attain the dominant role that US News has achieved.
We can expect more sophistication with increasingly complex statistical analysis, more regional rankings and more disciplinary rankings, perhaps also more silly rankings like a global version of American Best Universities for Squirrels.
But it is unlikely that there will ever be agreement on what makes a good or a great university.
* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Ranking Watch.
Comment
I find the statement ‘in the final analysis, the quality of a university is largely dependent on the average intelligence of its students’ to be fatuous.
One of the more serious ways to judge a university is the difference it makes in the lives of its students.
If we follow the author’s point, we can all game the system by having smaller freshman classes and rejecting totally those who have been less fortunate in their backgrounds in K-12. At least as far as public universities go in the US, this would be truly contrary to their purpose, part of which is about opportunity.
Richard Herman, former Chancellor, University of Illinois, Urbana-Champaign
Comment
I have the honour of being the chairman of a committee that selected the best among Indian universities based on a set of criteria. Those who want to read about it and also use the criteria can do so here.
I will also be happy to help individual countries set up criteria for selecting their best universities and to conduct the competition.
Professor Dr Raju Chandrasekar
Telephone: 00919845543407
Comment
You can see the full THE World University Rankings and read the methodology here.
Phil Baty
phil.baty@tsleducation.com
Comment
I question US News & World Report’s continuing dominance. The recent New York Times report, “Gaming the College Rankings”, is the latest of several reports of institutions creatively altering their data to boost their rankings.
William Patrick Leonard