Study That Undercut Psych Research Got It Wrong

Widely reported analysis that said much research couldn’t be reproduced is riddled with its own replication errors, researchers say.

Written by Harvard University | 7 min read

According to two Harvard University professors and their collaborators, a widely reported study released last year that said more than half of all psychology studies cannot be replicated is itself wrong.

In an attempt to determine the “replicability” of psychological science, a consortium of 270 scientists known as the Open Science Collaboration (OSC) tried to reproduce the results of 100 published studies. More than half of those attempts failed, generating sensational headlines worldwide about a “replication crisis” in psychology.

But an in-depth examination of the data by Daniel Gilbert, the Edgar Pierce Professor of Psychology at Harvard, Gary King, the Albert J. Weatherhead III University Professor at Harvard, Stephen Pettigrew, a PhD student in the Department of Government at Harvard, and Timothy Wilson, the Sherrell J. Aston Professor of Psychology at the University of Virginia, has revealed that the OSC made some serious mistakes that make its pessimistic conclusion completely unwarranted.

The methods of many of the replication studies turn out to be remarkably different from the originals and, according to the four researchers, these “infidelities” had two important consequences.

First, the methods introduced statistical error into the data, which led the OSC to significantly underestimate how many of their replications should have failed by chance alone. When this error is taken into account, the number of failures in their data is no greater than one would expect if all 100 of the original findings had been true.
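
To see how failures can pile up by chance alone, consider a minimal simulation sketch. This is illustrative only, not the OSC’s or the authors’ actual analysis; the effect size and sample size below are assumptions chosen to represent a typical psychology experiment.

```python
# A minimal sketch (not the OSC's or the authors' actual analysis) of how
# sampling error alone produces replication "failures" even when every
# original finding is true. Effect size and sample size are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.4   # assumed standardized effect, real in every study
N_PER_GROUP = 50    # assumed participants per group in each replication
N_STUDIES = 100     # the OSC replicated 100 published studies

failures = 0
for _ in range(N_STUDIES):
    # Draw a two-group replication from a population where the effect is real
    control = rng.normal(0.0, 1.0, N_PER_GROUP)
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_GROUP)
    if stats.ttest_ind(treatment, control).pvalue >= 0.05:
        failures += 1  # a nonsignificant replication counts as a "failure"

print(f"{failures} of {N_STUDIES} replications failed, "
      "even though every effect was real")
```

Under these assumed numbers, each attempt has roughly 50 percent statistical power, so about half of the replications miss significance purely by chance.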

Second, Gilbert, King, Pettigrew, and Wilson discovered that the low-fidelity studies were four times more likely to fail than were the high-fidelity studies, suggesting that when replicators strayed from the original methods of conducting research, they caused their own studies to fail.

Finally, the OSC used a “low-powered” design. When the four researchers applied this design to a published data set that was known to have a high replication rate, it too showed a low replication rate, suggesting that the OSC’s design was destined from the start to underestimate the replicability of psychological science.
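
The arithmetic behind that power argument can be sketched in a few lines. The following is a back-of-the-envelope calculation with assumed numbers and a normal approximation, not the authors’ published analysis: a design that gives each study a single significance-tested replication can only “succeed” at the rate set by the statistical power of that one attempt, no matter how many of the findings are true.

```python
# Back-of-the-envelope sketch (assumed effect size, normal approximation;
# not the authors' published analysis): the expected "replication rate" of
# a one-shot significance-test design is capped by per-attempt power.
from math import sqrt
from scipy.stats import norm

def approx_power(n_per_group, effect=0.4, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of a true effect."""
    ncp = effect * sqrt(n_per_group / 2)  # noncentrality parameter
    z_crit = norm.ppf(1 - alpha / 2)      # two-sided critical value
    return 1 - norm.cdf(z_crit - ncp)

for n in (25, 50, 100, 200):
    print(f"n = {n:>3} per group: expected replication rate at most "
          f"{approx_power(n):.2f}, even if every finding is true")
</code>
```

With small samples the ceiling sits near 30 to 50 percent even when nothing is false; only a well-powered design lets the observed rate approach 100 percent.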

Individually, Gilbert and King said, each of these problems would be enough to cast doubt on the conclusion that most people have drawn from this study, but taken together, they completely repudiate it. The flaws are described in a commentary to be published Friday in Science.
