Methodology, meaning and usefulness of rankings
Globalisation, assisted by deregulation, has created demand for international rankings. The demand originates from a range of stakeholders: students, employers, supranational institutions, scholars, funding agencies and governments. In addition, there is public interest in rankings for their own sake, whether it be the world's most liveable city or an international ranking of the quality of financial newspapers. At the same time as this expansion in demand, developments in technology, most notably the world wide web, have facilitated the supply of information to meet that demand.
Extract from an article in Australian Universities' Review
International rankings are influencing decision-making within institutions and even affecting national systems of education.
France and Germany suffer in international rankings because quality research performance is spread over many institutions; these are often specialised and a significant number are not universities. The rankings have provided much of the motivation for current policies in these countries to link or consolidate institutions into larger entities.
Salmi and Saroyan (2007) note that in some countries authorities restrict scholarships for studies abroad to students admitted to highly ranked institutions. Donor agencies and foundations also look at international rankings to inform their decision making.
Within universities, Hazelkorn (2007) reports that in her international survey of leaders and senior university administrators, 56 per cent indicated that their institution had a formal internal mechanism for reviewing its rank. Respondents also indicated that league tables played an important role in deciding on international collaborations.
An obvious marketing benefit accrues to a university that is highly ranked in a study. But as with all forms of external appraisal there are a number of more indirect benefits. Rankings provide an incentive for better data collection within institutions, they can expose pockets of institutional weakness and confirm areas of strength, and they are useful for benchmarking against like institutions. Rankings encourage institutions to re-examine mission statements. For the university system as a whole, poor performance can be used to prod governments into action.
The effect of league tables on student choice is more complex. The consensus seems to be that for rankings targeted at school leavers their direct influence is greatest for high achievers. It seems that overall reputation matters most for undergraduate student choice, with rankings one factor feeding into that perception.
However, Marginson (2007b) notes that market research and anecdotal evidence from educational agents indicate that the international rankings published by Shanghai Jiao Tong University are feeding directly into student choice at all levels, even though the rankings are based solely on research performance. Increasingly the international rankings are being interpreted as measuring the international standing of an institution.
Ranking methodologies
At its 2006 meeting the International Ranking Expert Group (IREG) drew up the so-called Berlin principles (Sadlak and Liu 2007), a set of good-practice guidelines for rankers. The principles include: use outputs rather than inputs, be transparent, use verifiable data and recognise diversity of missions.
What attributes should be used in rating or ranking a university's performance? Candidates include research output and its influence, the quality of teaching and research training, and contribution to the formulation and implementation of national policy. Different groups of stakeholders will have different interests; this implies that ratings should be undertaken separately for the different attributes before they are combined into a single measure.
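To make the point concrete, here is a minimal sketch (in Python, with invented ratings and hypothetical stakeholder weights, not taken from the article) of how attribute-level ratings kept separate can then be combined with weights that differ by stakeholder group, yielding different orderings of the same institutions for different audiences.

    # Hypothetical sketch: rate attributes separately, then combine with
    # weights chosen by each stakeholder group. All figures are invented.
    ratings = {
        # institution: rating on a 0-1 scale for each attribute
        "Uni A": {"research": 0.9, "teaching": 0.6, "policy": 0.5},
        "Uni B": {"research": 0.6, "teaching": 0.9, "policy": 0.7},
    }

    stakeholder_weights = {
        "funding agency": {"research": 0.8, "teaching": 0.1, "policy": 0.1},
        "school leaver":  {"research": 0.2, "teaching": 0.7, "policy": 0.1},
    }

    def composite(inst_ratings, weights):
        # Weighted sum of attribute ratings under one stakeholder's weights.
        return sum(weights[a] * inst_ratings[a] for a in weights)

    for group, weights in stakeholder_weights.items():
        order = sorted(ratings, key=lambda u: composite(ratings[u], weights),
                       reverse=True)
        print(group, "->", order)

Under these invented weights the funding agency's ordering puts Uni A first while the school leaver's puts Uni B first, which is why ratings are best kept separate by attribute before any combination into a single measure.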
The methods used to measure research performance in universities form a spectrum: from a survey of peers at one end to the use of quantitative measures of output only, such as publications and citations, at the other end. In the middle of the spectrum lies evaluation obtained by providing peers with representative publications and detailed quantitative information.
In evaluating the quality of teaching the methodology spectrum ranges from surveys of students and employers to quantitative measures such as progression rates, job placements and starting salaries of graduates. There is, however, much less agreement about the appropriate quantitative performance measures for teaching and learning than there is for research.
A university should be ranked highly if it is very good at what it does. This implies that in order to recognise institutional differences whole-of-institution rankings should either be conducted separately for different types of institutions or be obtained by aggregation of rankings at a sub-institutional level. The Carnegie Foundation in the US and Maclean's in Canada categorise universities into types. In Australia, because all universities offer PhD programs and have similar mission statements, categorisation is more problematical.
We are then left with the option of first ranking by sub-institutional unit, most commonly discipline, and then aggregating. Rankings by discipline are of value in themselves, especially to academics, postgraduate students and funding agencies.
The downside of the aggregating-up approach is that it requires much more detailed information, including measures of the importance of each discipline (or some other sub-institutional unit) within the university. However, failing to allow for scope will bias overall rankings in favour of institutions with disciplines where the number of publications produced per academic is large, such as medicine. In our work at the Melbourne Institute on ranking Australian universities (Williams 2008) we found that allowing for scope improves the ranking of the more technologically oriented universities.
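As an illustration of the scope adjustment discussed above, the following sketch (again Python, with made-up numbers) aggregates discipline-level scores, each assumed to be already normalised within its own discipline, into a whole-of-institution score weighted by each discipline's share of academic staff; a naive unweighted average is shown for contrast.

    # Hypothetical illustration of scope-weighted aggregation. Each discipline
    # score is assumed to be normalised within that discipline (so medicine's
    # high raw publication counts do not dominate); staff FTE figures stand in
    # for the importance of the discipline in the institution. Data invented.
    disciplines = {
        # discipline: (normalised research score, academic staff FTE)
        "medicine":    (0.90, 400),
        "engineering": (0.75, 300),
        "humanities":  (0.60, 250),
        "law":         (0.70, 50),
    }

    def scope_weighted_score(disc):
        # Each discipline counts in proportion to its size in the institution.
        total_fte = sum(fte for _, fte in disc.values())
        return sum(score * fte for score, fte in disc.values()) / total_fte

    def unweighted_score(disc):
        # Simple average over disciplines, ignoring their relative size.
        return sum(score for score, _ in disc.values()) / len(disc)

    print(f"scope-weighted: {scope_weighted_score(disciplines):.3f}")
    print(f"unweighted:     {unweighted_score(disciplines):.3f}")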
Disaggregation can be at various levels: research groups, disciplines, departments and faculties. It is inevitable that international rankings will be at the discipline or institutional level, especially if the rankings are based on publicly available information. Only at this level can the independent ranker sitting at a laptop obtain data on a consistent basis.
In general, departments and faculties do not translate well across national frontiers: organisational structures differ too much and departmental affiliations of authors are not always known. While there are international rankings of MBA programmes, these require most information to be collected from institutions, which raises issues of consistency. While national research funding agencies may rate research groups, this requires too much detailed information for international comparisons.
The federal government's Excellence in Research for Australia (ERA) initiative proposes to use discipline as the sub-institutional unit for measuring research. It will have the added benefit of encouraging universities to look at their internal departmental structures.
Williams goes on to look at three categories of data - survey data, data supplied by universities, and data from third-party sources such as government agencies and private-sector citation databanks - at quantitative measures of research performance, at measures of learning and teaching, and at the presentation of results.
He evaluates 'the rankers', pointing out that there are now some 34 countries for which national rankings are available (Salmi and Saroyan 2007). Independent research groups and government agencies have undertaken much of the recent expansion in country rankings, Williams writes. The nature of rankings reflects the interests of the suppliers: the media concentrates on measuring the quality of teaching and learning for undergraduates because of the large market for this information, while governments are most interested in research performance as they see this feeding into national economic performance. International rankings permit a calibration of national standings against the world's best universities.
Williams discusses the Shanghai Jiao Tong University world ranking, with its criteria related to research, and concludes that it performs well against the Berlin principles: its index measures outputs, is transparent and uses verifiable data. He finds that the international ranking published by Times Higher Education in association with QS career and education consultants is dominated by surveys of academics and employers, has a low response rate and is less transparent than the SJTU ranking, although it is improving. He argues that Australian universities need to respond in two ways - improve outcomes in existing rankings and encourage new types of rankings - and probes the question: what is a world-class university?
* Professor Ross Williams is Professorial Fellow at the Melbourne Institute, University of Melbourne. His research publications are in areas as diverse as demand and saving, time-use studies, the cost of civil litigation, housing, federal-state finance, and the economics of education. He is a Fellow of the Academy of Social Sciences in Australia and Principal Fellow of Queen's College.
Full report and full references are on the Australian Universities' Review site