March 7th, 2016

Accounting for Impact? How the Impact Factor is shaping research and what this means for knowledge production.

Why does the impact factor continue to play such a consequential role in academia? Alex Rushforth and Sarah de Rijcke look at how considerations of the metric enter in from the early stages of research planning through to the later stages of publication. Even with initiatives against the use of impact factors, scientists themselves will likely err on the side of caution and continue to provide their scores on applications for funding and promotion.

This piece is part of a series on the Accelerated Academy.

A number of criticisms have emerged in recent times decrying the Journal Impact Factor (JIF) and the perverse effects this indicator is having on biomedical research. One notable feature of these criticisms is that they have often emerged from within the medical research communities themselves. For instance, it is now commonplace to read denunciations of metric cultures in the editorial statements of eminent medical journals, or to see bottom-up movements by professional scientists protesting and seeking to reform the governance of science.

Despite these rumblings, little is known about how the Journal Impact Factor intersects with the knowledge-making practices of researchers on the ‘shop floor’ in biomedicine. Without detailed empirical studies it is difficult to gauge the actual and possible consequences this indicator is having for biomedical knowledge work. For this reason we set out to study the topic in a recently completed project entitled ‘the impact of indicators’. We focussed on three kinds of research groups in two University Medical Centres (academic medical schools) in the Netherlands. Our ethnographic approach looked at groups in basic, translational, and applied areas of biomedicine, a set of distinctions commonly made within the field. These sub-cultures all exhibit different ways of making knowledge (Knorr-Cetina, 1999), and their varying interactions with this indicator in the course of their day-to-day research practices were the focus of our analysis. This included observing how the JIF entered in from the early stages of research planning and collaboration through to the later stages of publication.

Image credit: moise_theodor (CC0)

The lab-based groups in the basic and translational areas exhibited particularly tight coupling with the JIF as the de facto standard for measuring the quality and novelty of their work. For this reason we will briefly outline some of the most striking impressions we took from studying these particular kinds of groups, focusing particularly on the final stages of the knowledge production process, i.e. publishing (for a more detailed account of the production process in all three biomedical areas see Rushforth and de Rijcke, 2015).

Whilst the decisions and practices scientists make about their work remain multi-dimensional, the JIF has become a kind of obligatory passage point through which other kinds of considerations need to be filtered. For instance, basic scholarly decisions such as the types of audience one would like to reach with a paper, whom one should collaborate with, and how much time and resources should be dedicated to performing extra experiments, are all weighed up against the likelihood that the end-product (i.e. a journal article) will land in a higher-impact journal. Such is the conviction that this is the currency through which their work will be received within their own group, peer networks, and department, by funding agencies, and indeed on the academic job market, that our informants could pursue this criterion somewhat religiously. In one lab we observed a debate about whether to submit a manuscript to a particular journal based on the fact that its impact factor was 0.1 points higher than that of the other journal being considered.
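For readers unfamiliar with what was being compared in that debate, the standard two-year impact factor is, in essence, a journal-level average citation rate. A rough sketch of the calculation is given below; the exact counts depend on how the index provider classifies ‘citable items’:

\[
\mathrm{JIF}_{Y} \;=\; \frac{\text{citations received in year } Y \text{ to items the journal published in years } Y{-}1 \text{ and } Y{-}2}{\text{number of citable items the journal published in years } Y{-}1 \text{ and } Y{-}2}
\]

A difference of 0.1 in this ratio thus says very little about the quality of any individual article appearing in either journal.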

As intelligent and reflective individuals, what did our informants make of being caught up in this process? Scientists must accommodate demands to publish high-impact work in order to get on and have a career. There is little way around this. At the same time as fulfilling these demands, they recognise ‘realities being knowingly eclipsed’ (Strathern, 2000). Some were wise to the wider criticisms of the indicator in circulation, be they technical or more sociological. Some decried the fact that they and their colleagues were making arbitrary decisions about where to submit manuscripts based on what was effectively a rather crude ranking exercise. Others, though, offered tentative defences of the impact factor’s effects. Indeed, it is important not to paint the scientists as simply passive victims here. Despite its arbitrariness, the JIF does offer those who play by the rules of the game the pleasures of narcissism (Roberts, 2009) and play (Graeber, 2015). But whether critical or supportive, what is certain is that there could be no stepping away.

So what might the findings from this exploratory study signal?

An important set of concerns one takes from studying the JIF’s effects is the likely consequences all this has for biomedical knowledge production. Unfortunately, as a recent literature review on the effects of indicators, carried out with our colleagues, showed, this is largely impossible to answer in any general, definitive sense (De Rijcke et al., 2015). Nonetheless, there are a few factors we might wish to flag.

With respect to research evaluation, whilst there are temptations in using the JIF as a short-cut to making judgements about the quality of work, there are several reasons to be concerned about increasing reliance on this indicator alone. By deferring to the JIF, one is by extension deferring the complex process of judging and ranking quality to second-order criteria (how well cited the journal in which the article appears is), rather than judging the merits of the work as a standalone contribution to knowledge. This not only reduces the worth of scientific work to a simple numbers game, but also relies solely on the peer review processes of journals to determine scientific good taste. Whereas we might generally trust journals to do this well enough most of the time, even the big-brand journals have been exposed for failing to detect simple errors or misconduct, as Philip Moriarty’s recent keynote at the Accelerated Academy workshop in Prague made evident. It is also commonly argued in the history and sociology of science that journal peer review errs towards conservatism, rejecting radical innovations in favour of incremental, normal science-type contributions. Our observations of biomedical researchers suggest that charges of conservatism made against journal peer review, and reinforced by the JIF, should be taken as more than simply sour grapes from those whose work has been rejected by high-impact journals: given the high stakes that come with publishing in high-impact titles, researchers are being steered towards exercising great caution in their decisions about which projects to pursue. If there is no chance an idea will land them the high-impact outputs they need, it is a non-starter.

Depressingly, this trend looks set to continue. As long as there is hyper-specialisation and pressure to be productive, those evaluating grant applications or making hiring and promotion decisions may not feel equipped, in terms of skills or resources, to read through submissions and make their own judgements. Evaluation committees may be pressured into omitting impact factors from their considerations, but the indicator’s easy availability online, forces of habit, and the discretion entrusted to expert committees make this difficult to police. Even where such initiatives are made, scientists themselves will likely err on the side of caution and continue to provide their scores on applications for funding and promotion. The appeal of the JIF, or some equivalent, thus looks likely to endure in a number of areas of medicine until this cycle is broken up.

This post is part of a series on the Accelerated Academy and is based on the authors’ contribution presented at Power, Acceleration and Metrics in Academic Life (2–4 December 2015, Prague), which was supported by Strategy AV21 – The Czech Academy of Sciences. Videocasts of the conference can be found on the Sociological Review.

Note: This article gives the views of the authors, and not the position of the LSE Impact blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns on posting a comment below.

About the Authors

Alex Rushforth is a researcher at CWTS, Leiden University. He works in the areas of sociology of science, higher education, and organizational sociology. While training at the Universities of Surrey and York in the United Kingdom, he developed interests in the evolving governance of the public sciences, and in particular its impact upon the research process itself.

Sarah de Rijcke is a sociologist of science and technology at CWTS, where she leads the Evaluation Practices in Context (EPIC) research group. Her current research programme in evaluation studies examines interactions between research assessment and practices of knowledge production. This programme is situated at the intersection of Science and Technology Studies and the sociology and anthropology of science.
