GLOBAL

‘Global university rankings data are flawed’ – HEPI

Governments and institutions should ignore leading international university rankings because they are “unreliable” and “methodologically flawed”, according to a new analysis by the United Kingdom higher education think tank, the Higher Education Policy Institute or HEPI.

But ranking organisations say that while they accept their processes can be improved, and they are constantly investing to achieve that, students and governments find their data useful.

Global rankings of universities, such as the Times Higher Education or THE World University Rankings, the QS World University Rankings and the Academic Ranking of World Universities, claim to identify the ‘best’ universities in the world and then list them in rank order. They are enormously influential: universities and even governments alter their policies to improve their position, the HEPI report said.

But HEPI’s new research shows the data that league tables use are “unreliable and sometimes worse”, and it is “unwise and undesirable to give the league tables so much weight”.

Bahram Bekhradnia, the author of the report and HEPI president, said the evidence shows that international rankings are one-dimensional, measuring research activity to the exclusion of almost everything else.

“They do not match the claims made for them. They fail to identify the ‘best’ universities in the world, given the numerous functions universities fulfil that do not feature in the ranking. Indeed, what is arguably their most important activity – educating students – is omitted.”

He said universities, their governing bodies and governments should “focus on their core functions because it is the right thing to do, not because it may improve their position in any rankings”.

The report International University Rankings: For good or ill? considers the inputs for the various international league tables and discusses their overall weaknesses before considering some improvements that could be made. These include:
  • Ranking bodies should audit and validate data provided by universities;
  • League table criteria should move beyond research-related measures;
  • Surveys of reputation should be dropped, given their methodological flaws;
  • League table results should be published in more complex ways than simple numerical rankings; and
  • Universities and governments should not exaggerate the importance of rankings when determining priorities.

The report concludes that since there is a commercial interest in maintaining rankings there is little possibility of halting their rise, but neither does there appear to be much hope of correcting their most serious faults.

Lack of comparable data

Data comparison relies on confidence that the data on which the rankings are based have been gathered to comparable standards and using the same definitions. But “no such confidence exists, other than in relation to research publication data”, the report says, because internationally comparable data simply do not exist except in relation to research.

There is not even a universally used definition of a student: the notion of a full-time equivalent student does not exist everywhere, and in some systems a master’s student is not distinguishable from an undergraduate, the report adds.

The definition of a full-time member of academic staff also varies from country to country, and in some it includes PhD students.

THE and QS rankings do produce their own definitions, but universities supply their own data and the compilers of the rankings accept the data as supplied, the report says.

HEPI says there may be universities that submit erroneous data, deliberately or accidentally, “and there can be no confidence in the accuracy of their data nor whether the data are produced to common definitions and standards”. There is “no effective attempt by the compilers of rankings to audit or assure the quality of the data that are submitted”.

The report says ranking bodies do have automated checks that are intended to ensure, for example, that the data returned are within credible bounds, but these “fall far short of an audit of the data returns themselves”.
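As a sketch of why such bounds checks fall short of an audit, consider the following minimal example in Python. The field names and ranges are invented for illustration and are not any compiler’s actual rules; the point is that a plausible-but-wrong figure sails through.

```python
# A minimal sketch of an automated plausibility check of the kind the report
# describes. Field names and bounds are hypothetical, for illustration only.

CREDIBLE_BOUNDS = {
    "students_fte": (500, 500_000),        # full-time-equivalent students
    "academic_staff_fte": (50, 50_000),    # full-time-equivalent academic staff
    "staff_student_ratio": (0.01, 0.5),    # staff per student
}

def within_credible_bounds(submission: dict) -> list[str]:
    """Flag values outside hard-coded ranges; cannot detect plausible-but-wrong data."""
    warnings = []
    for field, (lo, hi) in CREDIBLE_BOUNDS.items():
        value = submission.get(field)
        if value is None or not lo <= value <= hi:
            warnings.append(f"{field}={value!r} outside credible range [{lo}, {hi}]")
    return warnings

# A figure that is wrong but plausible (e.g. non-teaching staff counted as
# academic staff) passes untouched -- which is HEPI's point about auditing.
print(within_credible_bounds({
    "students_fte": 15_000,
    "academic_staff_fte": 2_400,   # inflated, but inside the credible range
    "staff_student_ratio": 0.16,
}))  # -> [] : no warnings raised
```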

The report argues that ranking bodies should not only provide helpful definitions, as they do now, but should also audit and validate the data provided by universities to ensure that these common definitions are adhered to.

Data scraping

The problem of relying on universities to supply their own data is compounded by the practice, engaged in “by QS, though not apparently THE”, of “data scraping”, the HEPI report says. This involves seeking data from a variety of sources (institutions’ websites, for example) where a university does not itself provide data to the ranking body, and where there is absolutely no control over the data being gleaned.

As an example of the sort of problems this can lead to, the report notes that in 2013 Sultan Qaboos University in Oman found it had dropped 150 places in the QS ranking. When it queried this, it was told that data scraping in previous years had wrongly identified non-teaching staff as teaching staff, thereby greatly inflating the academic staff to student ratio that had been used.
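A toy calculation shows how much such a misclassification can distort the metric. The figures below are invented for illustration, not Sultan Qaboos University’s actual numbers:

```python
# Hypothetical figures illustrating the effect described above: if data
# scraping counts non-teaching staff as academic staff, the staff-to-student
# ratio (and any ranking score built on it) is inflated.

students = 15_000
teaching_staff = 900
non_teaching_staff = 700          # wrongly scraped as teaching staff

true_ratio = teaching_staff / students                            # 0.060
scraped_ratio = (teaching_staff + non_teaching_staff) / students  # 0.107

print(f"true staff:student ratio    {true_ratio:.3f}")
print(f"scraped staff:student ratio {scraped_ratio:.3f}  "
      f"(~{scraped_ratio / true_ratio:.1f}x inflated)")
```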

The report is particularly critical of international surveys of reputation. It says they should be dropped because they are flawed methodologically, as effectively they only measure research performance and the results favour a small number of institutions.

Reputation surveys are especially significant in the QS ranking, the report says, in which they account for 50% of the total weight and where quality control over the data is “especially inadequate”.
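To see how a 50% survey weight can dominate a composite score, consider the following weighted-sum sketch. Only the 50% combined reputation weight comes from the report; the indicator names and individual weights are illustrative assumptions, not QS’s exact published scheme.

```python
# A sketch of how a composite ranking score is formed from weighted indicators.
# The 50% combined reputation weight is from the report; the other names and
# weights here are illustrative assumptions.

WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,   # reputation surveys: 50% combined
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "internationalisation": 0.10,
}

def composite_score(indicator_scores: dict) -> float:
    """Weighted sum of normalised (0-100) indicator scores."""
    return sum(WEIGHTS[k] * indicator_scores[k] for k in WEIGHTS)

# Two universities identical on every hard metric: the survey-driven half of
# the score alone separates them by 15 points.
a = {"academic_reputation": 90, "employer_reputation": 90,
     "faculty_student_ratio": 70, "citations_per_faculty": 70,
     "internationalisation": 70}
b = dict(a, academic_reputation=60, employer_reputation=60)

print(f"{composite_score(a):.1f} vs {composite_score(b):.1f}")  # 80.0 vs 65.0
```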

Nick Hillman, HEPI director, said: “This study is overdue. Many people working in higher education enjoy looking at the league tables to see which universities are up and which are down. But what should be a fun talking point is taken ever more seriously with each passing year.

“Governments are now making funding decisions according to league table positioning and university managers are being held to account for a set of measures which are poorly understood, use questionable data and are limited in scope. This may even cause harm by deflecting institutions from their full range of activities.

“League tables will continue. But we hope those who use league tables will come to take them with a pinch of salt, that league table compilers will improve the data they use and that policy-makers will be very careful before using them to set policy.”

Scrutiny welcomed

Responding to the report, THE and QS complained to University World News that Bekhradnia had not contacted them to discuss how they compiled their rankings. However, both also welcomed further discussion about how rankings can be improved.

Phil Baty, THE World University Rankings editor, told University World News: “We welcome scrutiny of all university rankings. Our rankings, which are based on an audited, widely consulted on and openly published methodology, have for over a decade thrived and improved precisely because of such interest and discussion.”

A statement on the THE World University Rankings 2016-2017 website says that it has been independently audited by professional services firm PricewaterhouseCoopers or PwC.

Sam Tomlinson, partner, PwC UK, is quoted as saying that PwC provides “rigorous independent assurance” over the calculation of the rankings. “Our audit work included testing the key controls to capture and handle data, and a full reperformance of the calculation of the rankings.”

However, it does not specify whether the quality of the data itself is audited.

Accepting that the methodology reflects the mission of a modern, global, research-focused university, Baty agreed that more must be done to establish more advanced methods for measuring factors such as teaching and outreach.

He said: “Indeed we are at the vanguard of such developments and are investing heavily to produce innovations.”

Criticising HEPI’s dismissal of the usefulness of rankings, he cited independent research, published last year, showing that one in three of the five million internationally mobile students each year use the THE rankings to help them select their universities.

He said: “We are more in agreement than disagreement with much of what Bahram [Bekhradnia] says in this paper. Where we take issue is with his final contention that governments should ignore rankings. This does not bear scrutiny when you look at the strategic analysis that higher education data now affords.”

Positive impact

Ben Sowter, head of research at QS, told University World News that rankings are not perfect and that Bekhradnia’s criticisms are not new, although many remain valid; but he said the report had failed to adequately explore the positive impact of rankings.

“We work tirelessly to improve the quality of our data and processes every year, and the notion that we don’t audit the data returns is absurd,” he said.

“We certainly do have a number of automated checks in place as a first line of control, but their existence doesn’t remove the need for the human audit of every data point submitted, a responsibility that we take seriously.

“This is among the most costly and time consuming components of the process and is in a state of continuous improvement.”

Sowter added that QS’s reputation surveys yield “increasingly stable results, well correlated with other measures, and the full details of how we screen quality of respondents, adjust for discipline and country and so forth could not be derived without speaking to us, yet suppositions are made to that effect in this report”.

“The effectiveness of reputation surveys to distinguish between institutions does diminish as the list progresses, which is one of the reasons we break to ranges, but given that we are only assessing the world’s top 5% of institutions, we find they work quite effectively.”

He said it is true that rankings are simplistic. But while some institutions ignore them and others pursue higher rankings as a mission, a third group takes a holistic approach, using rankings only for aspects that they have the power to inform and designing their own metrics otherwise. This latter group is growing, he said, but seems to have been ignored by the HEPI report.

COMMENT

Rankings are a massive joke that do not sufficiently take into account teaching quality and are easily gamed.

Christopher Haggarty-Weir on the University World News Facebook page