Silly… and unethical scientists

When scientists and non-scientists are confronted with similar problems, you might expect those smart scientists to handle them more sensibly than non-scientists. However, in this blog I show that the opposite may be true: scientists all too often go for solutions that are silly… and unethical.

Non-scientist

Imagine that someone has some time and money to spare. Driven by a desire to contribute something good to society, this person decides to help the homeless by building a homeless shelter. Unfortunately, the contractor has an unpleasant message: “I am very sorry, but with these limited financial and time resources, I cannot build a foundation firm enough to support a stable superstructure. The shelter may look good from a distance, but on close inspection you will see that the whole building is at risk of collapsing.” Our good Samaritan is, of course, massively disappointed, but what will this person do? There are several options: abandoning the idea of building the shelter and opting for something smaller, looking for other ways of helping the homeless, postponing the plans until enough money has been saved, trying to collect extra resources from other parties willing to collaborate, or even setting the plan aside completely. The one thing you would not expect the Samaritan to do is tell the contractor, “That is fine, go ahead and build it anyway.” That would be silly… and unethical.

Scientists

The scenario above is in many ways similar to the problems scientists face. Many scientists are driven by curiosity and a willingness to contribute, directly or indirectly, to the betterment of society through their research. Like our good Samaritan, scientists have only limited resources, in the form of time, money and available research subjects (e.g. human participants, mice, cell materials, etc.). Without sufficient resources, the results of scientific endeavours will not provide firm foundations on which further research can be built, or on which interventions and treatments with any chance of success can be based. What will researchers do when confronted with a situation in which the available resources are insufficient for the desired study? Surely you would not expect them to say “Let’s go ahead anyway!” That would be silly… and unethical.

From a distance, it seems that researchers are able to collect enough resources and perform high-quality, foundation-reinforcing research. Psychology researchers, for example, seem to be very efficient in determining what to study and how to uncover effects, given that around 90% of studies published in scientific journals report positive results (Sterling et al., 1995). There seems to be a solid foundation of research on which stable superstructures of research are built.

However, it turns out that for too long, we have not been looking closely enough. In the past 10 years, psychology has gone through a reproducibility and replicability crisis. Many results of previous studies do not replicate (Open Science Collaboration, 2015). It also became apparent that many studies lacked the resources to have any good chance of finding an effect, even if a real effect was there. It has been estimated that the average power of the typical psychology study is only around 40% (Bakker et al., 2012), meaning that even when a real effect exists, the typical study detects it less than half of the time; flipping a coin would give the right answer more often. The low power of psychology studies implies that, when faced with insufficient resources, researchers said “Let’s go ahead and proceed anyway!”. This is silly… and unethical.
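
To make concrete what 40% power means, here is a minimal simulation sketch. It is not part of the original blog, and its numbers are assumptions chosen purely for illustration: a true effect of d = 0.4 (a typical effect size in psychology) and 36 participants per group, which gives roughly 40% power for a two-sided t-test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n, alpha, n_sims = 0.4, 36, 0.05, 10_000  # assumed true effect and group size

significant = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)    # control group
    treatment = rng.normal(d, 1.0, n)    # treatment group, true effect = d
    _, p = stats.ttest_ind(treatment, control)
    significant += p < alpha

print(f"Estimated power: {significant / n_sims:.2f}")
# Prints roughly 0.40: even though the effect is real, a coin flip would
# "detect" it more often than this study design does.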

It is unethical, because underpowered studies provide very little useful information. Estimates are highly imprecise due to sampling error, and even when an effect is statistically significant, its size is likely to be heavily inflated (Gelman & Carlin, 2014). Because little scientific value is added, underpowered studies waste the time of participants and give patients false hope that something useful will come out of their participation, they risk policy and treatment decisions being based on misleading information, they waste taxpayers’ money, and they waste the researchers’ own precious time and energy.
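
The inflation of significant effects can be illustrated with a similar sketch (again an illustration with assumed numbers, not an analysis from this blog or from Gelman & Carlin): keep only the simulated studies that reach significance and compare their average observed effect size with the true effect.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d, n, alpha, n_sims = 0.4, 36, 0.05, 10_000  # same assumed setup as above (~40% power)

observed_d = []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(d, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    if p < alpha:  # keep only the "publishable" results
        pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
        observed_d.append((treatment.mean() - control.mean()) / pooled_sd)

print(f"True effect size: {d}")
print(f"Mean observed effect size among significant studies: {np.mean(observed_d):.2f}")
# The second number is clearly larger than 0.4: significant results from
# underpowered studies systematically overestimate how big the effect really is.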

The unflattering conclusion is that too many researchers have settled for building research structures that only look good from a distance but collapse upon close inspection.

There are likely many things that have contributed to this sorry situation: the rotten publish-or-perish culture, the pressure to keep coming up with something new and innovative, and perhaps also ignorance and a large portion of wishful thinking. There may also be well-intentioned researchers with a hands-on “we have to work with what we have and make the best of it” attitude, but it is important to realize that, just as a homeless person is not helped by a shelter at risk of collapsing, a patient, the scientific community, or society in general is not helped by research that provides no reliable results, regardless of the intentions. After all, the road to hell is paved with good intentions.

Not acting silly…

So what should scientists do when faced with insufficient resources to carry out their initial research plans? Scientists have the same options as the good Samaritan in the opening scenario of this blog. One option is simply doing nothing, which can be a much better option than doing something. Plans can be set aside until enough funding is available, or research questions and designs may be adjusted to something that can be investigated properly. Probably the most promising option, but one too rarely used, is to gather extra resources through collaboration. All researchers have to deal with limited resources, whether time, money, or access to rare study populations. Pooling the scarce resources of multiple research groups can add up to sufficient resources, and collaborations across countries have the additional benefit of improving the generalizability of research findings. Tools to facilitate collaboration are already in place, for example the Psychological Science Accelerator, a network for collaborative psychology research (https://psysciacc.org/).

Not every option will suit everybody or every situation; the best solution to a specific problem is context dependent. But the one thing researchers should not do is carry on building structures at risk of collapsing. That would be silly… and unethical.

Lead author: Maurits Masselink 

Acknowledgments: Thank you to Vera Heininga, Stefania Barzewa, and all those at ReproducibiliTEA Groningen who provided feedback, and to grammar policewoman Melinda Mađarević

References

Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The Rules of the Game Called Psychological Science. Perspectives on Psychological Science, 7(6), 543–554. https://doi.org/10.1177/1745691612459060

Gelman, A., & Carlin, J. (2014). Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors. Perspectives on Psychological Science, 9(6), 641–651. https://doi.org/10.1177/1745691614551642

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Sterling, T. D., Rosenbaum, W. L., & Weinkam, J. J. (1995). Publication decisions revisited: The effect of the outcome of statistical tests on the decision to publish and vice versa. The American Statistician, 49(1), 108–112.