NORWAY

Official study slams university rankings as ‘useless’

A government-commissioned study of the placement of Norwegian universities in global rankings – in particular compared with other Nordic institutions – has concluded that even the top rankings rest so heavily on subjective weightings of factors and on dubious data that they are useless as a source of information if the goal is to improve higher education.

The Norwegian Ministry of Education and Research commissioned the Nordic Institute for Studies in Innovation, Research and Education, or NIFU, to analyse Norwegian universities’ placements on international university rankings.

The ministry specifically wanted to know what the rankings meant for the universities in practice, and if there were factors at the national or institutional level that could explain the differences between Nordic countries.

In turn, NIFU appointed a working group of six staff members and external consultants who produced a 180-page report (in Norwegian only), titled Nordic Universities and International Rankings. What explains the Nordic placements and how do universities relate to these rankings?

The report provides in-depth analysis of the Shanghai Academic Ranking of World Universities – ARWU – and the World University Rankings produced by Times Higher Education, or THE, and discusses the Leiden ranking on publication performance.

The second part discusses how Norwegian universities currently use the rankings, and their potential for further usage for policy and strategic purposes.

Main conclusions

The main conclusions regarding the ARWU and THE are that “placement on those rankings is to a large degree based on a subjective weighting of factors which often have a weak relationship to the quality of education and research.

“The rankings are based on data that to a varying degree are made available and made transparent. The rankings say almost nothing about education.

“The international rankings are therefore not useful as a basis for information and feedback on research and education, if the goal is further improvement of Norwegian higher education institutions.”

The NIFU analysis found that Norwegian universities:
  • Are in general less cited than other Nordic universities.
  • Have fewer researchers – regardless of institutional size – able to publish in Nature and Science, to place their publications among the 10% most cited in the world, or to accumulate enough citations to be included in the Thomson Reuters list of most cited scientists.
  • Are in general less research productive than comparable universities in other Nordic countries, measured by citations per staff member.

Ranking usage

The NIFU report states that these are factors that universities to varying degrees can influence.

Increasing citation rates so that more Norwegian scientists are listed among the world’s most cited will be a great challenge. But the report says universities do have the ability to influence the productivity of their researchers.

In Nordic universities, NIFU says, the rankings are used neither in strategic planning nor in benchmarking against other universities. But the rankings have contributed to a greater focus on quality, and have at least given a clearer picture of the national hierarchy of institutions.

NIFU does not recommend that placement on international ranking lists be used to decide national research policies; it recommends instead that clear national goals be established for raising the quality of higher education and research.

‘Decomposing’ the rankings

The NIFU methodology in ‘decomposing’ the ARWU and THE rankings is extensive.

Each of Norway’s four ‘old’ universities – Oslo, Bergen, the Norwegian University of Science and Technology in Trondheim, and Tromsø – is benchmarked against 18 other Nordic universities.

For each university there is a detailed analysis of which variables explain most of the variance, measured as the percentage difference from the benchmark group of universities on the same variable.

This is a very illuminating exercise, because the standardised measures – for instance in THE – differentiate much better among the top-rated universities than among those lower down the ranking. This methodological ‘fallacy’ is underlined several times in the report:

“In THE there is a 30.7 point difference between Caltech at number one and Pennsylvania State University at place 50, but only a 10.9 point difference between Pennsylvania State and Helsinki University at place 100. There is then only a 3.8 point difference between rank 101 and rank 149, and another 4.2 points between rank 150 and rank 199.

“The trend in both ARWU and THE is that the lower down the list you get, the smaller the difference between universities.”

What is special about THE, NIFU argues, is that 33% of the weighting in the ranking is determined by an international survey of academics – yet these results are not made available in the THE report, where only the first 50 places are documented.

The most recent THE reputation survey was conducted in 2012, NIFU said, with 16,639 academics in 144 countries responding – but THE does not disclose the survey’s response rate.

NIFU discusses in detail which regions the respondents came from – information THE does provide – and says that a slight overrepresentation of respondents from North America probably has no great effect on the rankings, contrary to what has often been argued.

The institute notes that Harvard University, the institution most frequently nominated by respondents, is given a score of 100, and universities further down the list are scored as a percentage of the ‘votes’ Harvard receives. For instance MIT, second on the list, gets 87.6% as many ‘votes’ as Harvard, and therefore a score of 87.6.

This figure is published only for the first 50 entries in the ranking. Most universities below that point may have received less than 1% of the nominations Harvard got, which means the proportions between universities can fluctuate considerably from survey to survey.
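To make the scale concrete, here is a minimal sketch in Python of the normalisation NIFU describes, using hypothetical vote counts (THE publishes scores only for the top 50, so the raw figures below are illustrative assumptions):

    # Sketch of the THE reputation normalisation as NIFU describes it.
    # All vote counts are hypothetical; THE does not publish them.
    votes = {
        "Harvard": 10000,        # most-nominated institution, anchored at 100
        "MIT": 8760,             # scores 87.6, matching the published figure
        "Nordic example": 90,    # hypothetical: under 1% of Harvard's votes
    }

    top = max(votes.values())
    scores = {name: 100 * n / top for name, n in votes.items()}
    for name, score in scores.items():
        print(f"{name}: {score:.1f}")

    # At under 1% of Harvard's nominations, a shift of a handful of 'votes'
    # between surveys moves a lower-ranked university's score by a large
    # relative amount, which is NIFU's point about volatility down the list.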

The institute argues convincingly that the survey’s weight is an advantage only at the very top of the ranking, and that it is unreliable as a differentiation mechanism for universities further down the list – where most Nordic universities are placed.

It collected information from the four ‘old’ universities and found that all of them got comparatively low scores on the survey.

Furthermore, the NIFU report discusses in detail each parameter for each of the four Norwegian universities, and compares these scores with those of the 18 other Nordic universities, giving a detailed picture of the many technicalities behind their rank. This is valuable information.

THE responds

Phil Baty, editor of the THE rankings, told University World News that the magazine had “not been consulted at all by the NIFU”.

Reputation, he said, formed “just a part of a comprehensive range of metrics used to create the Times Higher Education World University Rankings”. In all, 13 performance indicators were used, “covering the full range of university activities – research, knowledge transfer, international outlook and the teaching environment”.

“The majority of the indicators are hard, objective measures, but we feel it is very important to include an element of subjective reputation as it helps to capture the less tangible but important aspects of a university’s performance, which are not well captured by hard data.”

The methodology was devised, Baty stressed, after open consultation and was refined by an expert group of more than 50 leading scholars and administrators around the world. Care was taken to ensure the survey was fair, with countries receiving an appropriate proportion of questionnaires, and it was distributed in multiple languages. Only senior academics who had published in world-leading journals were invited to respond.

“When Norway’s universities break new ground and push forward the boundaries of understanding in any particular academic field, they should be making sure that scholars across the world are aware of the discoveries, through the most appropriate means of dissemination – journal publications and conferences,” said Baty.

“This is the only way to ensure Norwegian universities get the credit they deserve. Other small countries have had tremendous success in the rankings – the Netherlands and Switzerland, for example.”

Local views

The NIFU report was presented at a seminar in Oslo recently, attracting much attention. “Can we trust university rankings?” NIFU wrote on its website. “University rankings criticised,” declared the Ministry of Education and Research in a press release.

“A Kiss of Death for university rankings,” wrote University of Oslo Rector Ole Petter Ottersen in his blog:

“This report should be made available for everyone working within the higher education sector in Norway. Not the least, it should be available on the news desk of Norwegian newspapers.”

Among the many comments, the University of Bergen announced that it was ranked 56 on the indicator for citations per academic. “This confirms Norwegian universities’ love-hate relationship with university rankings,” wrote former Bergen rector Professor Kirsti Koch Christensen on Facebook.