GLOBAL
The consequences of internationalisation rankings
At the IREG-8 Conference in Lisbon on 4-6 May, the central theme was rankings and internationalisation (see also Simon Marginson’s address). The relationship between the two topics is logical, as rankings, in particular the main global rankings, play a key role in international competition in higher education – and because international indicators play a role in positioning higher education institutions. But at the same time, the relationship between the two is problematic, because rankings, through their indicators, influence the way universities and governments internationalise and the way internationalisation is measured.
As I have commented on several occasions in my contributions to University World News, and as I stated in my address to the IREG-8 Conference in Lisbon, the rankings and their international indicators have troublesome and risky consequences.
Rankings measure the number of international students, the number of international staff and the number of internationally co-authored publications. Together, these indicators carry a weighting of 7.5% in the THE rankings and 10% in the QS rankings. The problem with these three indicators is that they lack clear and commonly accepted definitions.
Further, they are only quantitative. If one agrees that internationalisation is not a goal in itself, but a means to enhance the quality of education, research and service, these three separate quantitative international indicators in the rankings have a counterproductive effect.
Universities and governments that aspire to stay high in, or move up, the rankings will focus their internationalisation policy exclusively on increasing the number of international students, staff and co-authored publications they have, and will take action to make that happen: developing recruitment policies, teaching in English, making it attractive for talented international students to stay on after graduation, and so on.
But they will not develop a more long-term and in-depth approach by internationalising the curriculum and teaching and learning, investing in joint research projects and looking at the global dimension of social responsibility at universities, issues that are more important for them to invest in.
Furthermore, as Markus Laitinen, of the University of Helsinki and vice-president of the European Association for International Education, remarked during the conference, the likelihood that improving one’s performance on these three quantitative international indicators will have an impact on the ranking is rather limited, given their low combined weighting of only 15%.
Should the international indicators be deleted?
During the discussion, one of the participants at the conference asked Markus and me an interesting question: if that is all true, would it not be better to remove these three indicators from the rankings? In his response, Markus agreed, and I must admit that I am inclined to agree as well.
Using these indicators as part of the overall rankings of universities makes as little sense as using them separately to define how international a university is – as Times Higher Education does.
How international a university is is defined by the quality of its research, teaching and learning and its service to society. If rankings addressed these dimensions adequately – something one can question, as Simon Marginson rightly commented – they would reflect the international dimension of institutions much better.
Rankings have become a part of higher education, though, and if the rankers had not invented them, other media, governments, higher education institutions or even scholars would still be inclined to rank because it is in our nature to pick winners and losers and to want to know where we stand.
An example is the recent study that Janet Ilieva and Michael Peak wrote for the British Council, The Shape of Global Higher Education: National policies framework for international engagement. The study is an interesting attempt to evaluate national policies on international higher education and to identify areas that are supported by national governments. This is not the place to discuss the methodology and approach the researchers have followed.
Unclear definitions
What I want to address is that, although they state that they have no intention of ranking national policies, in the media the ranking of national policies resulting from the study has attracted the most attention, with Germany and Malaysia as numbers one and two. Most of the media coverage did not even mention that the study only addressed 27 countries, giving the impression that the ranking covers the whole world.
One can argue that the report provides enough ammunition to make a case for ranking. The point is, though, that the authors did not do this and certainly had no intention of doing so, and yet the ranking became the main focus for the media.
Rankings are a given, but they become dangerous when they claim to offer generic qualitative conclusions while being based on unclear definitions and data, and when they do not mention their limitations and context. The debate on rankings will not end soon, but a critical reflection on their foundations is a crucial part of that debate.
Hans de Wit is director of the Center for International Higher Education at Boston College, USA. Email: dewitj@bc.edu.