
GLOBAL: THE reputational rankings - a helpful tool?

Anybody who has been following the debate surrounding world university rankings in the last year or so will have raised an eyebrow at the recent publication by Times Higher Education of a ranking based entirely upon academic assessment of institutional reputation. This ranking was an attention-grabbing stunt. But how much new light does it really shed on the world's top universities?

Before the launch of its own rankings last year, THE criticised QS's use of reputation surveys, which it said were "subjective". THE's explanation for producing its own new ranking, rather than continuing to rely on the established QS methodology, was specifically to focus on "objective" university data rather than academic opinions.

Later, on 7 January 2010, it announced plans to use an academic survey with a target of 25,000 respondents within its rankings. Now it has published a ranking based entirely on that survey, drawing on just over 13,000 responses.

So why has THE decided to launch a world ranking based entirely on institutional reputation? Is it for the benefit of institutions like Moscow State University, which did not appear in THE's original top 200 but now appears 33rd in the world?

The data on which the new reputational ranking is based has been available for six months and accounted for 34.5% of the overall score in the World University Rankings published by THE in September 2010.

But this is the first time the magazine has allowed anyone to view this data in isolation. Releasing it six months ago might have attracted less attention, but it would also have spared prospective students some confusion.

The order of the universities in the reputational ranking differs from THE's overall ranking. But no new insights have been offered and nothing has changed. This plays into the hands of those who are sceptical about university rankings.

As the research provider that devised and first implemented the academic reputation survey, QS believes the views of academics are a key consideration for prospective students.

If properly collected and assessed, they measure institutional strength in a discipline-independent way, bypassing the notorious bias toward scientific subjects inherent in bibliometric data such as citation rates. They also give a sense of the real-world market value of a degree. Prestige is in the eye of the beholder and academics are best qualified to pass judgement on it.

QS published its own academic reputation survey - based on 15,000 responses worldwide - as a separate data column in its latest QS World University Rankings®, in September 2010.

Based on the recognition that individual users have different priorities and no one indicator can be definitive, QS encourages users to view its rankings as a collection of indicators, all of which are published simultaneously, which can be re-ordered interactively by the user.

Anyone who wants to see which universities perform best in the QS academic reputation survey may do so at a click on www.topuniversities.com. THE could have done the same on its website and avoided the confusion of republishing old data as a new ranking.

A common complaint from academics who are sceptical about rankings is that fluctuations in position often relate to the way data is collected and presented rather than any genuine change in institutional performance. This 'new' THE ranking risks confirming their belief.

THE repeatedly claimed to have produced the 'gold standard' in university rankings when it published its first World University Rankings last year. It even claimed that "for those who want a comprehensive picture of a world-class university, there is only one global system that really counts".

Yet the table contained results that many commentators regard as obvious errors, such as ranking Alexandria University in Egypt above Harvard and Stanford for research impact. It is hard to see that slicing this imperfect data helps add to the authority of THE's ranking system.

Though they inevitably provide an incomplete summary of higher education achievement, world university rankings are here to stay.

They are an inevitable product of the massification of higher education and, in an environment where individual students are being charged more than ever before for their degrees, they are an effective means of bringing much-needed comparative data into the public domain.

But those of us who produce rankings must be frank about their limitations. They are constrained by the availability of data. What's more, the weightings given to different indicators in any ranking table - no matter how much research, experience and consultation go into deciding them - represent just one possible interpretation.

Yet rankings can offset these objections to a large degree if they are presented transparently. This means publishing the results of all indicators used so that users can adapt the rankings to suit their individual needs and priorities. The emphasis must be on the user rather than the commercial interests of the publisher.

* Danny Byrne is the editor of www.topuniversities.com and of the QS Top Grad School Guide.

Comment:
I could not agree more with this article, and I believe THE have come to realise not only that their methodology was flawed in this dimension, but also that their results lack validity because a number of institutions failed to submit industry income data.

For example, a university that is excellent on the other four or five THE measures, yet lacks significantly high industry income, was permitted to withhold that data. Consequently, the field was omitted from its cumulative total rather than factoring in as zero and damaging its overall performance.
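
To make the effect concrete, here is a minimal sketch in Python, using entirely invented weights and scores (not THE's actual figures), of the difference between scoring a withheld indicator as zero and omitting it with renormalised weights:

    # Hypothetical illustration of two ways to handle a withheld indicator.
    # The weights and scores below are invented for the example and are not
    # THE's actual weightings.
    weights = {"teaching": 0.30, "research": 0.30, "citations": 0.30,
               "international": 0.075, "industry_income": 0.025}
    scores = {"teaching": 80.0, "research": 85.0, "citations": 90.0,
              "international": 70.0, "industry_income": None}  # data withheld

    # Option A: factor the missing field in as zero, dragging the total down.
    as_zero = sum(w * (scores[k] if scores[k] is not None else 0.0)
                  for k, w in weights.items())

    # Option B: omit the field and renormalise the remaining weights, so the
    # university is judged only on the indicators it actually submitted.
    present = {k: w for k, w in weights.items() if scores[k] is not None}
    scale = sum(present.values())
    omitted = sum(w / scale * scores[k] for k, w in present.items())

    print(f"missing field scored as zero: {as_zero:.1f}")   # 81.8
    print(f"missing field omitted:        {omitted:.1f}")   # 83.8

On these invented numbers the withholding university gains roughly two points, which in a closely bunched table can translate into a sizeable jump in position.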

Quite simply, this undermines the ranking's validity, transparency and methodology, which is hypocritical, as I'm sure most people at the institutions being ranked would point out.

All in all, I still believe the QS rankings give a better picture, with more accurate, precise and detailed measures, than the experimental and fatally flawed THE rankings.

James, a significantly irritated postgraduate