
GLOBAL: Ranking the rankings

In effect, they did exactly what author Andrejs Rauhvargers argues media and politicians did when they embraced the rankings in the first place: "seek a quick answer to a complicated question".
In 2003, the Institute of Higher Education of Shanghai Jiao Tong University, China, produced a list that compared Chinese universities with the best universities around the world. They did so not to shake up the world of learning, but simply because the Chinese government wanted to know how its universities compared with the rest of the world and where to send the cream of its state-funded international students.
The publication of the ranking set a trend that many have cursed and continue to curse. But the public debate on rankings is quite polarised. Many academics consider them rubbish and not even worthy of a good debate, while the highly ranked universities, media and politicians quote them liberally, perpetually feeding the development of more, different and differentiated rankings.
Although the EUA study vindicates many of the critics, this was not the main purpose of Global University Rankings and their Impact, says Andrejs Rauhvargers, senior adviser to the EUA and author of the study.
"We did not go into this just to criticise rankings," he says. "We went into this to analyse the methodologies. Most of the people who read and quote rankings do not understand them. This is problematic."
Even with his considerable previous knowledge of rankings, Rauhvargers found many new details while going through the raw material.
"When you see how things are actually measured and calculated you are surprised again and again. They choose the most convenient ways. Not with bad intentions, but the consequences are, well... interesting. Just as an example, several rankings say they use peer reviews. But when you read the small print you will find that this comprises little more than asking people to pick names from lists."
The problem, as has been argued over and over again, is that for things to be ranked, they must be measurable and comparable. Many things that affect quality in higher education are, fortunately or unfortunately, unmeasurable. Comparability poses an even greater challenge in a world that has only recently, and only regionally, begun aligning certain aspects of higher education under the pressure of international mobility; in the case of Europe, this was even done with the express intent not to harmonise but to nourish diversity.
For these simple reasons, the big absentee in all rankings is the hugely important issue of teaching quality.
"There are no valid indicators for teaching quality - we can only use proxies," says Rauhvargers.
"Some use the staff:student ratio, but this is so different from subject to subject. In music you may need two teachers for each student. In economics you can do with far fewer. Worse, precisely for this indicator there is now ample evidence of data manipulation."
"Newer indicators are interesting but most are unsuitable for use in an international context. Take as an example the dropout rate. There are countries with hardly any entry requirements. Other countries have extremely tough entry selection processes."
"The time it takes an average student to reach a degree is also a more recently introduced indicator, but what does it really tell us? That teachers are good? Or that the programme is easy?"
Not much good news then from the man who possibly has the best snapshot of international rankings in his head. Regardless of its usefulness, is it at all possible to produce a good ranking?
"I am quite pessimistic. The new attempts like Multirank [EU] try to use a host of different indicators. This may be a step in the right direction but it could just as well still offer an incomplete picture. One is tempted to ask whether using 50 poor indicators is better than just using three or four poor indicators."
The current international emphasis on learning outcomes would seem to justify using these as an indicator. Encouraged by the massive impact of PISA, the OECD is now working on AHELO, which attempts something similar in higher education.
"The idea is good. They do not evaluate the learning outcomes that teachers put on paper; instead, like PISA, they try to measure what the real outcome is, asking the same questions of students across the world."
However, as many studies modularise further and students leave higher education with increasingly tailor-made degrees, the comparability of learning outcomes again becomes an issue in all but the most regulated of studies.
Considering the increased focus on individual students, and the fact that many rankings justify their existence on the grounds that they inform students as the consumers of education, one would think that asking these very students for their own subjective opinions would offer better guidance than blatantly flawed data analyses.
Interestingly, at the presentation of the study, Allan Päll, vice-chair of the European Students' Union, said: "This is all nice and well, but you have to know how a modern student finds information today."
Now, read that again and take heed.
Rather than using openness of information as an argument for rankings, would it not make much more sense to further disentangle undergraduate and postgraduate studies and then further improve international mobility, particularly for postgraduate students? If a solid foundation can be provided at the best universities in each country, the most ambitious graduates of these will find their way towards the best centres of study in their particular field. And they will not use rankings as the basis for their choice. Much more likely, within years we will see a peer-to-peer service recommending and wrecking university programmes, just like tripadvisor.com does for hotels around the globe or, in fact, just like the both loved and detested spickmich.de does, where German students rate their teachers.
Critics of university rankings seem generally to overlook the fact that the target audience of universities has a fairly high average intelligence. By the same token, rankings scrutinise the work of very intelligent people who make a living out of questioning everything on this planet. So rankings were always going to face painstaking criticism.
Are most of us not intelligent enough to ignore the rankings? Clearly not.
What is happening today is a bit like the dynamics of private health insurance in countries with a free public healthcare system. Nobody wants it, because it drains public services by moving resources into private healthcare. But once some people have it, others are forced to take it too, further worsening public services and pushing even more people towards private health insurance.
"Universities tortured by rankings only become more obsessed with them," Rauhvargers sums up.
But is going into extreme detail to criticise university rankings not overkill? Is it not a bit like producing academic proof that politicians do not always tell the truth and that jumping from high buildings is dangerous?
Not according to Rauhvargers.
"One unwanted consequence of global league tables is that universities with other missions than that of being top research universities may feel pressure to revise their profile at a time when mission differentiation is at the top of higher education agendas across Europe," he says.
"One of the reasons for this is that politicians scramble for scarce funds and need solid justifications on which to base their decisions. They love the clear-cut judgments that rankings offer. I know of a university in Italy that was warned by its regional government that, if it signed cooperation agreements and joint degrees with universities ranked lower than it, this would affect its funding. These people need arguments to convince their counterparts in politics and the media of the limited value of rankings."
Asked whether he would drop rankings altogether if it were entirely up to him, Rauhvargers declines to give a straight answer.
"That is not a relevant question. Rankings may in fact be useful for the top universities globally. But for the other 17,000 or so universities they can be a burden, and quite an unnecessary one, because they rank so differently compared with conventional standards."
"I found that in most of the classic rankings, the scores drop with dramatic speed as you progress down the list. In the Shanghai ranking, the university at number 14 has only half the score of the top university, and at the 100th spot only 25% of the score is left. Shanghai does not publish scores beyond position 100, but the Taiwan HEEACT ranking, whose curve is largely similar to Shanghai's, shows that at position 400 the score falls below 10% of the top position. To me, this indicates that such a ranking is only relevant for the very, very best."
Even more importantly, he says, rankings simply will not go away. "We have to understand that we cannot stop them from being produced and published as long as there is an audience for them. And there is. At the presentation of Multirank, which does not produce a direct ranking, a very high-level employer asked: 'Why do you not make a ranking? I do not want to compare indicators, I just want to know who is the best.' That says it all."
Rauhvargers says abolishing them is impossible. The only option is to educate people about them.
"I believe that the academics have to be better armed for discussions on these issues, so that they can give answers to politicians, employers, parents and students. And that is what we are trying to do with our research," he says.
Related Link:
UNESCO debates uses and misuses of rankings