For over three years now, we at the Laura and John Arnold Foundation (LJAF) have been thinking about the problem of reproducibility in science.
One of our first efforts was to launch the Center for Open Science (COS). This afternoon, a flagship COS project, the Reproducibility Project: Psychology, published its findings in Science.
The project took 100 psychology experiments published in top journals in 2008, and then coordinated the replication of those experiments by over 250 scientists from around the world. The effort’s unprecedented size and ambition drew admiring coverage from The New York Times, The Washington Post, The Atlantic, The Economist, Vox, FiveThirtyEight, Wired, and many other media outlets.
The actual results, however, were a bit depressing: fewer than half of the original findings could be successfully replicated by an independent lab.
What does this mean? In our view, there are three important lessons:
First, replication should be a far more routine practice in science. Ironically, while nearly all scientists profess that replication is important, they all too often leave the job to someone else, preferring to focus on the more interesting, and more publishable, task of doing their own original work. And even when replication studies do occur, they often go unpublished, mentioned only at hotel bars during academic conferences.
In many scientific disciplines, we need more outlets that will reward replication studies with the honor of publication. We also need more funders — such as the National Institutes of Health, the National Science Foundation, and private philanthropy — to set aside a percentage of funds for the replication of prior work.
Second, strict replications would not be nearly as necessary if the original studies were more rigorous in the first place. For example, if the authors of an original study pre-register their hypotheses and design, recruit an appropriately sized sample, and apply statistics correctly, the study's findings are far more likely to hold up. Taking the time to do things right the first time would spare other researchers from being led down blind alleys.
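To make the sample-size point concrete, here is a minimal sketch of the kind of prospective power analysis a rigorous lab would run before collecting any data. It is written in Python using the statsmodels library; the effect size (a medium Cohen's d of 0.5), significance level, and power target are illustrative assumptions, not values taken from the Reproducibility Project.

```python
# A minimal power-analysis sketch (the numbers below are assumed
# illustrative values, not figures from the Reproducibility Project).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Solve for the sample size needed per group in a two-sample t-test.
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # conventional false-positive rate
    power=0.80,       # conventional target: 80% chance of detecting a real effect
)

print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```

Running a calculation like this before data collection, and pre-registering the result, commits a lab to a sample large enough to detect the hypothesized effect, which is one of the rigor checks described above.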
Third, the fact that over half of the studies examined failed to replicate does not mean that science is worthless. In fact, it does not even mean that the original studies were necessarily wrong. It may just mean that the world and human behavior are more complicated than any one experiment can capture. In turn, that means that everyone from pundits to policymakers should be more modest in claiming to know what “research shows” about how humans behave or how society works.
We are proud to have funded the Reproducibility Project: Psychology, and we hope it offers a model that funders and journals will extend to many other disciplines.