Are reproducibility and open science starting to matter in tenure and promotion review?

July 14th, 2017

Tenure and promotion season is underway.  Promotion committees, in the U.S. at least, use the summer to send portfolios to 3 to 10 scholars at other institutions for independent review of the candidate’s credentials.  These independent assessments are a vital part of the review process, particularly for research-intensive universities.  Candidates need to demonstrate that their work is impacting others in the discipline.

Advocates for improving open science and reproducibility rightly worry that the movement will fail if standards for hiring, tenure, and promotion do not change.  If the likelihood of tenure and promotion depends exclusively on publication volume, prestige, and success obtaining grants, incentives for openness will have--at best--an indirect effect on researchers’ behavior through journals and funders.  Successfully nudging the culture of incentives requires that institutions likewise reward scholars for conducting open, rigorous, reproducible research.

Like every other professor who has become senior (i.e., a full professor), I am asked each summer to write tenure and promotion letters for review committees at other institutions.  I just finished my fourth review of the season: three were for promotion to full professor, one for tenure and promotion to associate professor; all were for research-intensive universities.  From this admittedly small sample, I am observing some promising signals that universities are incorporating open science into tenure and promotion consideration, which suggests that we are past the existence-proof stage of shifting incentives toward openness of research.  Here are some insights from this summer’s promotion review requests:

  1. I was asked to review because of the committees’ interest in evaluating the candidates’ work and impact relevant to open science.  Committees usually ask for reviews from experts in the candidate’s substantive domain.  None of the four candidates works in the same subdiscipline as I do.  The substantive work of three was close enough that I could comment on it sensibly, but there are dozens of other scholars who could provide much deeper insight on the substantive issues.  From the candidate profiles, it is clear that I was asked because the committees also want an evaluation of the researcher’s impact in open science.  In fact, one of the four invitation letters explicitly stated, “We are particularly interested in having you speak to Professor XXXXXXX’s contributions in the area of ‘Open Science’ and innovations in methodology and communication in our field.”

  2. There was a lot to say about the candidates’ contributions to open science.  In all four letters, I was able to discuss concrete contributions that each candidate had made to open science -- infrastructure, service, metascience, social media leadership, and their own research practices.  These letters were a delight to write, particularly because the candidates embodied so many characteristics that we idealize for open science practices more broadly.

  3. Committees signaled that they value quality over quantity in their reviews.  Instructions to reviewers emphasized quality, and each package contained a sensible combination of content for my review: invitation letter, candidate CV, three to five representative articles to read, and sometimes personal research and teaching statements.  Of course, counting heuristics, prestige signals from journal names, and total grant dollars can still dominate decision-making.  But, for the most part, the provided packages gave the external reviewer enough information to conduct an in-depth assessment without so much material that the reader is overwhelmed into falling back on counting heuristics as the sole criterion.

There is much to do to improve hiring, tenure, and promotion practices so that incentives align with scholarly values. Particularly useful next steps would be the aggregation and review of tenure and promotion policies across institutions. For example, it would be very useful to know how the portfolios given to reviewers and committee members vary, and what instructions they receive for decision-making. Also, active research on predictors of promotion decisions would reveal whether there is any truth to the common perception that tenure decisions are largely determined by the number of publications and the prestige of their outlets. Showing that this IS NOT the case could produce a rapid sea change in what early-career researchers understand as their incentives for advancement. Showing that it IS the case could prompt institutions to reflect on the extent to which their policies and practices are aligned with their institutional values.

In any case, my experience with promotion review requests this summer suggests that change is occurring, particularly in assigning scholarly value to open science contributions and behavior, and it’s great to see.
