
Instead of repeating an experiment in a mouse model of disease in their laboratory, researchers in Berlin, Germany, used a coin toss to "confirm" whether a drug protects the brain against stroke, as reported in their paper publishing April 9 in the open-access journal PLOS Biology.

With this provocative and seemingly absurd experiment, Sophie Piper and colleagues from the Berlin Institute of Health (BIH) and the Charité-Universitätsmedizin Berlin starkly expose a problem that potentially affects many studies in experimental biomedicine. Small sample sizes, often below 10 animals per group, combined with the almost universal threshold for statistical significance (five percent), lead to a high rate of false positive results and an overestimation of true effects. Their study alerts researchers that, contrary to common expectation, replicating a study under conditions common in many laboratories worldwide may add no more evidence than tossing a coin would.
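The statistical point can be illustrated with a small simulation. The sketch below uses hypothetical numbers (5 animals per group and a modest standardized effect of 0.5, both assumptions not taken from the paper) and runs many simulated two-group experiments with a t-test at the conventional five-percent threshold: the false positive rate sits near five percent by construction, while the chance of detecting a real effect with such a small sample is far lower, which is why a single "significant" result carries little evidential weight.

```python
import math
import random

random.seed(1)

# Two-sided critical t value for alpha = 0.05 with df = 8
# (two groups of 5, pooled-variance t-test).
T_CRIT = 2.306

def two_sample_t(a, b):
    """Pooled-variance t statistic for two equal-size groups."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

def significant_fraction(true_effect, n_per_group=5, n_experiments=5000):
    """Fraction of simulated experiments declared significant at p < 0.05."""
    hits = 0
    for _ in range(n_experiments):
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [random.gauss(true_effect, 1.0) for _ in range(n_per_group)]
        if abs(two_sample_t(treated, control)) > T_CRIT:
            hits += 1
    return hits / n_experiments

false_positive_rate = significant_fraction(true_effect=0.0)  # ~0.05 by design
power = significant_fraction(true_effect=0.5)                # low with n = 5
print(f"false positive rate: {false_positive_rate:.3f}")
print(f"power for a d = 0.5 effect: {power:.3f}")
```

With such low power, a significant result is only weakly diagnostic of a true effect, and repeating the underpowered design simply repeats the same uncertainty, which is the intuition behind the coin-toss comparison.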

Many research fields are struggling with what has been termed "the replication crisis." Quite often, results from one laboratory cannot be replicated by researchers in another lab, with successful replication rates often falling below 50 percent. This has shaken confidence in the robustness of the scientific enterprise in general and stimulated a search for underlying causes.

In response, many researchers have started to repeat experiments within their own laboratories as an integral part of robust science and good scientific practice. Yet in their article, Piper and colleagues scrutinize the utility of replicating experiments within laboratories and send a surprising message of caution regarding current replication practices. They provide detailed theoretical and practical guidance on how to properly conduct and report replication studies, helping scientists save resources and avoid the futile use of animals while increasing the robustness and reproducibility of their results.

"Replication is fundamental to the scientific process. We can learn from successful and from failed replication—but only if we design, perform, and report them properly," said the authors.