Simon Marginson: Global university rankings - the best of all possible worlds?

The last four years have seen the emergence of two systems of global university rankings, conducted by the Shanghai Jiao Tong University Institute of Higher Education and the Times Higher Education Supplement. These rankings generate media coverage throughout the world, with one exception, and have begun to exert direct effects on the marketing and development strategies of many individual universities and, in some nations, on the policies and priorities of government - indicating the growing role of global referencing in higher education. (The exception is the United States, where it is taken for granted that the only rankings that matter are the national tables from US News – though it is widely agreed also that the US News ranking has distorted priorities, e.g. the rise of merit-based aid at the expense of needs-based aid).

Rankings have normalized the idea of a worldwide market in higher education and exacerbated competitive pressures within and between nations. More specifically, in some countries such as Germany and the Netherlands the Jiao Tong University research rankings have focused national government attention on actual or possible policies designed to increase the concentration of research activity in a small number of universities, including the recruitment of additional high-citation researchers, a group which significantly affects university performance in the Jiao Tong University rankings. In Europe global rankings are also associated with the formation of the League of European Research Universities (LERU) and the development of a typology of European institutions more or less along American lines. In East and South-East Asia there are discussions concerning the possibility of both a regional typology and a regional ranking system. In some nations new national rankings systems are emerging.

Market research suggests that foreign students' choice of country and institution of study is affected by university rankings data. It is likely that, as with US News in the United States, the global rankings systems are also affecting the flows of doctoral students, elite researchers and the philanthropic and corporate dollar. Given these stakes, the rankings need to be soundly based.

The rise and rise of rankings has led to a widespread interest in well-grounded rankings that have positive effects on institutional performance. Three problems of validity have emerged. The first is the reliability and accuracy of the data, where the principal negative example is the Times Higher: its reputational survey, which comprises 40 per cent of the index, has a response rate of about 1 per cent, and some of its data are sourced from interested parties, the higher education institutions themselves.

A second problem is the arbitrary character of the weightings used to construct composite indexes covering different aspects of quality or performance, the means by which ratings agencies construct a total picture of the institutions that are ranked against each other.
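To make the arbitrariness concrete, here is a minimal sketch in Python, using invented institutions and indicator scores rather than any real ranking's data, of how the same underlying indicators yield a different rank order under two equally defensible weighting schemes.

```python
# Illustrative sketch with invented data: the same indicator scores produce
# different rank orders under two equally plausible weighting schemes.

# Hypothetical normalised indicator scores (0-100) for three institutions.
institutions = {
    "University A": {"research": 90, "teaching": 60, "internationalisation": 70},
    "University B": {"research": 70, "teaching": 85, "internationalisation": 65},
    "University C": {"research": 75, "teaching": 75, "internationalisation": 80},
}

def composite(scores: dict, weights: dict) -> float:
    """Weighted sum of indicator scores, as used in 'omnibus' league tables."""
    return sum(weights[k] * scores[k] for k in weights)

def rank(weights: dict) -> list:
    """Order institutions by their composite score under the given weights."""
    return sorted(
        institutions,
        key=lambda name: composite(institutions[name], weights),
        reverse=True,
    )

# Two weighting schemes, each arguable, neither objectively correct.
research_heavy = {"research": 0.6, "teaching": 0.2, "internationalisation": 0.2}
teaching_heavy = {"research": 0.2, "teaching": 0.6, "internationalisation": 0.2}

print("Research-weighted order:", rank(research_heavy))  # A, C, B
print("Teaching-weighted order:", rank(teaching_heavy))  # B, C, A
```

The choice between the two weighting schemes is precisely the arbitrary decision that every composite ranking has to make, yet that choice alone determines which institution comes out on top.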

A third problem of validity is that reputational rankings tend to be both ill-grounded and circular. In reputation-based rankings known university brands generate ‘halo’ effects. The Times Higher favours universities already well known regardless of merit, tending to recycle existing reputations while blocking newcomer institutions or nations. There is no means of verifying the soundness of subjective judgements of reputation, for example ensuring that they are grounded in actual comparative knowledge, or that they address fundamentals such as the quality of teaching and research. One study of rankings found that one third of survey respondents knew little about the institutions concerned apart from their own. The classic example of these problems is the American survey of students in which Princeton's law school was ranked among the top ten law schools in the country. But Princeton did not have a law school.

Likewise, three problems arise in the use of rankings data, compounding the problems of validity. First, rankings, especially reputational rankings, become an end in themselves, protected from critical scrutiny, without regard to exactly what they measure, whether they are solidly grounded, or whether their use has constructive effects. The desire for rank ordering overrules all else. Often institutions are rank ordered even where the differences in the data are not statistically significant. Moreover, the illusion is created that all institutions have the same capacity to succeed, even though their circumstances are often vastly different. Consider for example the difference between a leading university in the USA and a leading university in Indonesia. The populations of the two countries are of the same order of magnitude, but that is where equality of capacity stops: in 2001 the USA published one thousand times the number of scientific papers that Indonesia published.
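The point about statistical significance can be shown with a small sketch (invented scores and margins of error, not drawn from any actual ranking): a league table imposes a strict first-to-last ordering even when the confidence intervals of adjacent institutions overlap, so the published ranks imply a precision the data cannot support.

```python
# Invented example: scores with margins of error. Adjacent institutions are
# statistically indistinguishable, yet the league-table format still assigns
# each a distinct rank.

# (score, approximate margin of error at 95% confidence) - hypothetical values
results = {
    "University P": (71.2, 2.5),
    "University Q": (70.8, 2.4),
    "University R": (70.1, 2.6),
    "University S": (64.0, 2.3),
}

ranked = sorted(results.items(), key=lambda item: item[1][0], reverse=True)

print("rank  institution    score   95% interval")
for position, (name, (score, moe)) in enumerate(ranked, start=1):
    print(f"{position:>4}  {name:<13} {score:5.1f}   [{score - moe:.1f}, {score + moe:.1f}]")

# P, Q and R have overlapping intervals - the data cannot separate them -
# but the table still reports them as 1st, 2nd and 3rd.
```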

A second problem of use of rankings data is that when the data are not solidly grounded, as in the case of the Times Higher ranking, changes in the rankings do not necessarily reflect changes in actual performance. There is no virtuous link between competition, performance and ranking. The worst-case scenario is when institutions receive an undeserved ‘hit’ in the data, so that the effect of the rankings is capricious and destructive. The now famous example of the University of Malaya in Malaysia is a case in point. In 2004 the University of Malaya was ranked by the Times Higher at 89 in the world. This was seen as a very positive achievement within Malaysia; for example the University’s Vice-Chancellor ordered large banners declaring ‘UM a world’s top 100 university’ placed around the city, and on the edge of the campus facing the main freeway to the airport, where every foreign visitor to Malaysia would see it. But the next year, in 2005, an error in the classification of foreign students at the University was corrected and the outcomes of the Times Higher’s two reputational surveys changed; both changes were to the disadvantage of the University. The University dropped from 89 to 169 in the Times ranking, without any necessary change in its actual performance. The Vice-Chancellor was pilloried in the Malaysian media and, when his position came up for renewal by the government in March 2006, he was not reappointed.
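The disconnect between movement in a ranking and movement in performance can also be illustrated with a toy simulation (all figures invented): each institution's underlying reputation is held constant, but because each 'survey round' draws only a small, noisy sample of ratings, the published order still shifts from year to year.

```python
# Illustrative simulation with invented figures: each institution's 'true'
# reputation never changes, but small noisy survey samples make the published
# rank order shift between rounds without any change in actual performance.
import random
import statistics

random.seed(42)

# Hypothetical fixed underlying reputations (0-100); these never change.
true_reputation = {"Univ W": 72.0, "Univ X": 71.0, "Univ Y": 70.0, "Univ Z": 69.0}

def survey_round(respondents_per_institution: int = 20) -> list:
    """One survey round: each respondent's rating is the true value plus noise."""
    observed = {
        name: statistics.mean(
            random.gauss(mu, 10) for _ in range(respondents_per_institution)
        )
        for name, mu in true_reputation.items()
    }
    return sorted(observed, key=observed.get, reverse=True)

for year in (2004, 2005, 2006):
    print(year, survey_round())
```

Nothing in the simulated institutions' underlying quality changes between 'years', yet the published order does - the same pattern of capricious movement that caught the University of Malaya.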

A third problem of use is that singular rankings systems encourage institutions to reduce the emphasis on activities that do not contribute to rankings performance and, more generally, lead to convergence of behaviour between institutional types and between national systems (and languages of use). Unless there is a broad range of rankings systems with no one system dominant, rankings tend, all else being equal, to work against diversity of provision. This is a serious difficulty, much remarked upon, with no solution in sight.

The foregoing discussion suggests a number of conclusions for practice:

  • Rankings based on surveys of reputation as such should be avoided, as there are no necessary links with fundamental capacity or performance, and reputational rankings generate circular reputation-forming effects;
  • Rather than composite ‘omnibus’ rankings that in reality leave much uncovered and involve arbitrary decisions about weightings, specialist rankings specific to purpose (such as rankings of research, rankings of student achievement, etc.), grounded in data specific to that purpose, should be used, OR comprehensive databases that can be broken down to answer specific questions, such as the CHE database;
  • All else being equal, the greater the number of rankings systems, and the more diverse the qualities they include, the better. Diverse multiple rankings produce more information of use to more people, and undermine the potential of any one ranking to obtain supreme status and thus become a de facto reputational ranking;
  • Ideally rankings should be developed and maintained by independent agencies funded by foundations or governments, situated at arm’s length from the funders. The next best option is for rankings to be managed in university research centres, provided that they are not contaminated by institutional interest and are completely separated from marketing departments;
  • Rankings should not be run by newspaper companies, because their purposes are unsuited to the production of valid rankings: they have no stake in valid social science or in the long-term healthy development of higher education.

Knowledge Rules, 25/02/08