[blog] Open Research Award: Celebrating openness … and randomness?

Recently, the Open Science Community Groningen (OSCG) and the University of Groningen Library (UB) jointly set up an annual Open Research Award, with the first awards presented in October 2020. The goal of the Open Research Award is to (1) raise awareness of and promote Open Research [1] practices, such as pre-registration or Open Access publishing; and (2) incentivize the incorporation of Open Research into research by acknowledging and rewarding it. In this blog, we play devil’s advocate so that we can cover the pros and cons of introducing an Open Research Award and of allocating it via a modified lottery.

Celebrating openness: A wrong signal to give?

One could argue that, if the future is indeed “open”, and we as an OSCG aim to promote Open Research as the new normal, we should not reward it. In fact, handing out an Open Research Award might perversely signal that Open Research is an optional add-on instead of normative behavior. In other words: isn’t the OSCG sending the wrong signal here? The answer is simple: No! It would be awesome if Open Research were the norm, but science is still in transition towards Open Research as a full replacement of the current (closed) norm, and this process can take years, if not decades. Hopefully, “Open Research” will soon be a tautology, because Open Access, transparency, reproducibility, and openness of data and methods will be common practice in science. Currently, however, the majority of papers are not fully Open Access, most studies are not pre-registered, data and code are typically not shared, research is too often not reproducible, and studies across all fields too often fail to replicate because the original findings turned out to be the result of non-transparent research practices. In sum, the Open Research Award does not signal that Open Research is an optional add-on instead of normative behavior; rather, it celebrates and highlights examples of ways in which research can be made more “open”.

Aren’t there already too many awards?

For researchers, awards are a form of recognition, and they can help to build a competitive CV, but there are already many awards within the UG, for example the Aletta Jacobs Prize, the Van Swinderen Prize, and the UG Impact Award (for an overview, see the full list of University of Groningen awards). Do we really need another award? If you ask the OSCG: yes, provided that it rewards Open Research! Ideally, researchers would adopt Open Research practices purely on ideological principle, but if explicit recognition and incentives can encourage that, then why not make use of them? Researchers often consider implementing Open Research principles not rewarding within the current academic system, at least not in the short run. That is, although the Association of Universities in the Netherlands (VSNU) and the Dutch Research Council (NWO) committed themselves to moving away from metric-based evaluations by signing the San Francisco Declaration on Research Assessment (2012), these changes have not yet been incorporated and rewarded at the institutional level. The UG and the University Medical Centre Groningen (UMCG), for example, have not publicly committed themselves to moving away from journal-metric-based evaluations (i.e., they have not signed the declaration)[2]. Researchers still perceive the number of publications and the impact factor of the journal as the main evaluation factors, and that view is confirmed by the award criteria used in most of the UG and UMCG awards currently in place (e.g., awards granted on the basis of impact factor). As to the more general question of whether we really need more awards: maybe not. It would be great if we could get rid of awards that evaluate research and researchers based on journal impact factors!

Aren’t there more effective ways to increase the use of Open Research practices?

Yes, absolutely! One of the most effective ways is training students to apply Open Research practices right from the start, and educating current researchers on how to transition to Open Research. The OSCG tries to meet these needs by taking stock of current Open Research-related curricula, and by offering hands-on workshops on topics such as the importance of Open Research, p-hacking, pre-registration, registered reports, and transparent visualization. It would help if the inclusion of Open Research practices were required in all educational programs of the University. Until then, the Open Research Award is just one of the many tools we use to increase the use of responsible and Open Research, preferably in combination with other initiatives and incentives.

Why would you allocate the money based on a “lucky draw” (i.e., modified lottery system)?

Awards are typically granted by means of peer review: a review committee reviews the submissions and selects a recommended winner. Peer review is the gold standard and is used in many schemes such as grants, scholarships, and awards. However, there is little evidence that this review process is the best way to evaluate submissions. In fact, there has been a lot of criticism of the ranking systems that committees often use: each committee member individually ranks the proposals from best to worst, and these rankings are averaged across all committee members. However, there is often substantial disagreement among committee members ranking the same proposals (Pier et al., 2018), and the averaged ranking score turns out to be a poor predictor of researchers’ subsequent publication and citation productivity (Fang, Bowen, & Casadevall, 2016).

A promising alternative to this “gold standard” of peer-reviewed ranking is a modified lottery. Modified lotteries are similar to the typical award evaluation process described above, but add more randomness to the allocation of the award within the “upper half” of submissions (Fang & Casadevall, 2016). The conventional review process is already somewhat random: the selection of researchers invited to the jury panel is sort-of-random, and it sort-of-randomly varies which of the invited researchers have the time to review and rank the proposals within the given time frame. Moreover, there is often so little variation in quality among the shortlisted submissions that many researchers acknowledge the large component of “luck”. In a modified lottery, all submissions that meet the criteria (i.e., are of good quality) purposely have the same chance of winning the award. Introducing such additional randomness has advantages, such as reducing implicit biases that committee members themselves might not even be aware of (e.g., gender bias, ageism) and improving diversity among awardees (Adam, 2019). Because of these additional advantages, an increasing number of grant-awarding bodies are open to adopting a modified lottery (Liu et al., 2020).

In addition to decreasing bias, the modified lottery also decreases unwanted competitiveness. Many researchers suffer in a system based purely on competitiveness, and an award that stimulates competitiveness does not align with the new balance in the recognition and rewards of academics that the VSNU strives for (see: Room for everyone’s talent). The issue of competitiveness in the Open Research Award is avoided by awarding an Open Research Award certificate to every researcher or research team whose submitted case study fulfills the basic criteria for the Award, and by accepting case studies that describe the challenges and difficulties of making open choices as well as those that celebrate positive experiences and successful outcomes. In addition, instead of one large prize of €1500, three smaller prizes of €500 are randomly allocated among the eligible submissions, moving away from the conventional winner-takes-all approach.
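For the curious, the sketch below illustrates the idea in code, assuming submissions have already been screened against the basic criteria. The function name, submission format, and eligibility check are hypothetical illustrations, not the OSCG’s actual procedure.

```python
import random

def modified_lottery(submissions, is_eligible, n_prizes=3, seed=None):
    """Certify all eligible submissions, then draw prize winners at random.

    `submissions` and `is_eligible` are hypothetical stand-ins: in practice,
    the committee's screening against the basic criteria plays this role.
    """
    rng = random.Random(seed)  # seeded so the draw is reproducible
    certified = [s for s in submissions if is_eligible(s)]  # all get a certificate
    # Every eligible submission has exactly the same chance of winning a prize.
    winners = rng.sample(certified, k=min(n_prizes, len(certified)))
    return certified, winners

# Example: five case studies, four of which meet the basic criteria.
cases = [
    {"team": "A", "meets_criteria": True},
    {"team": "B", "meets_criteria": True},
    {"team": "C", "meets_criteria": False},
    {"team": "D", "meets_criteria": True},
    {"team": "E", "meets_criteria": True},
]
certified, winners = modified_lottery(cases, lambda s: s["meets_criteria"], seed=2020)
print(len(certified), "certificates; €500 prizes for:", [w["team"] for w in winners])
```

Seeding the random number generator is a deliberate design choice in this sketch: it makes the draw reproducible and auditable, which matters when an allocation has to withstand outside scrutiny.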

In conclusion, the Open Research Award rewards transparency without unnecessary competitiveness, and it distributes the available funding in a manner that acknowledges that funding allocation is at least partly a matter of luck. Implementing Open Research principles is not always easy. Many researchers still have to overcome resistance to the transition to Open Research from supervisors and other stakeholders higher up in the hierarchy. Furthermore, it takes time and effort to transform daily practices, and opening up is currently too often perceived as an investment without clear career benefits. Hopefully, the Open Research Award will help researchers who have tried to be more transparent feel acknowledged and valued in their endeavors, and will encourage the rest.

Vera E. Heininga & Maurits Masselink, co-founders of the OSCG

Acknowledgements

We would like to thank Malvin Gattinger, Daan Ornée, Babette Knauer, and Giulia Trentacosti for their input and feedback on earlier versions of this blog.

Footnotes

[1] We use the term Open Research instead of the commonly used Open Science to emphasize that we refer to all research, from all research disciplines.

[2] The UG is currently working on implementing the Strategy Evaluation Protocol of the VSNU, KNAW and NWO, which includes sections about Open Research and evaluation of research(ers) without the use of impact factors. We are looking forward to seeing the results of this implementation.

References

  1. Adam, D. (2019). Science funders gamble on grant lotteries: A growing number of research agencies are assigning money randomly. Nature, 575, 574–575. https://doi.org/10.1038/d41586-019-03572-7
  2. Liu, M., Choy, V., Clarke, P., et al. (2020). The acceptability of using a lottery to allocate research funding: A survey of applicants. Research Integrity and Peer Review, 5, 3. https://doi.org/10.1186/s41073-019-0089-z
  3. Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified lottery. mBio, 7(2), e00422-16. https://doi.org/10.1128/mBio.00422-16
  4. Fang, F. C., Bowen, A., & Casadevall, A. (2016). NIH peer review percentile scores are poorly predictive of grant productivity. eLife, 5, e13323. https://doi.org/10.7554/eLife.13323
  5. Pier, E. L., Brauer, M., Filut, A., Kaatz, A., Raclaw, J., Nathan, M. J., Ford, C. E., & Carnes, M. (2018). Low agreement among reviewers evaluating the same NIH grant applications. Proceedings of the National Academy of Sciences, 115(12), 2952–2957. https://doi.org/10.1073/pnas.1714379115
  6. San Francisco Declaration on Research Assessment (2012). https://sfdora.org/read/