The Replication Crisis: Making Science True
The Royal Society of London is the oldest national scientific institution in the world. Founded in 1660, it was one of the birthplaces of science as we know it today, and its founding ideal became its motto: “nullius in verba,” meaning “take nobody’s word for it.” Every week, the fellows of the Royal Society would meet to show each other their discoveries, with one simple rule: if you couldn’t demonstrate it, right there on the podium, nobody believed it. In those days, a scientist wasn’t just someone who made discoveries; a scientist was someone who replicated discoveries. More than three and a half centuries later, some scientists worry we’ve forgotten our roots.
Replication is a simple idea: the truth should be the same no matter how many times you look. Suppose a friend claims to be able to read your mind. You decide to test it out by thinking of a number, and indeed your friend guesses it right. Would that make you believe your friend is a mind-reader? Probably not. You might want to try a couple more times, or maybe ask your friend to do something slightly different, like guess what animal you’re thinking of. In any case, you wouldn’t be convinced your friend was telling the truth until you saw it again.
Scientific experiments claim to uncover an underlying truth, like some pattern or connection. When a study is replicated, other scientists check the work and see if the pattern holds up.
One of the leading figures in the study of replication is Professor John Ioannidis, who studies replication at the Meta-Research Innovation Center at Stanford (METRICS). “I always liked doing scientific research of all sorts–I tried my hands on different things and I still try my hands on different things,” he reflects. “There were some common denominators in all of that.” As Ioannidis became exposed to more areas of research, he became intrigued by the similarities in the methods different fields used–and the ways those methods broke down. “The methods usually are the fine print, but I find them very exciting, because the methods determine the science that you get.”
A Waste of Effort?
Aren’t scientists pretty smart, though? Why should we bother checking their work? Shouldn’t we just trust that they didn’t make any mistakes? Although scientists are experts at careful research, doing good science is hard, and even a slight mishap can throw the results off. Professor Ioannidis explains that even scientists recognize this reality. “It doesn’t have to be something that is conscious. It could just be the struggle to make discoveries that makes us stretch and maybe come up with conclusions that are not trustworthy.” After spending a career studying the ways scientists make mistakes, Ioannidis knows better than anyone that mistakes are a critical part of the scientific process. “There’s so many ways that we can fool ourselves, or we can get fooled by data, observations, measurements, analysis, our software, our instruments, our approaches,” he says. “Nothing is perfect.”
A Tool for Every Task
There are many different kinds of replication, each aimed at addressing a specific question. Sometimes, replication is all about double-checking the math. In this type of replication, a scientist makes sure that the methods used by a study are error-free and give consistent results, usually using the same data as the original study. “It’s about having methods that work consistently, so you can trust that next time they will do the same thing,” explains Ioannidis.
Sometimes, however, the issues in a study are more subtle; maybe the data was collected wrong, or the pattern was a fluke. For these problems, scientists use a different kind of replication, where they try to find the same pattern in a new study with all new data.
Finally, to make sure the results mean what we think they do, there’s a third kind of replication. Here scientists go beyond the results and look at the interpretations of the patterns that studies find. When scientists compare notes between multiple studies, they can ask bigger questions about how we think about the patterns in data.
A Scientific Pillar Neglected
Replication may seem like a “nice-to-have” for a study, an extra way to be sure nothing went wrong. But in reality, replication is much more than a spell-checker. “Replication is at the core of the scientific method,” Professor Ioannidis affirms. “It’s about making sure what we get out of science can be repeated, can be seen again, can be expected in the future.” Without replication, scientists couldn’t be sure which conclusions could be trusted, and which were the result of flukes or errors. Non-replicable studies are dangerously misleading and can lead to other studies going down fruitless paths, or even to bogus science being used in the real world. “If we can’t get the same results,” Ioannidis asks, “how can we apply that knowledge?”
Despite its importance, replication is not the most glamorous of topics. Scientists are usually excited about pushing the frontier of knowledge, and verifying results is often an afterthought. Furthermore, even top-notch replication isn’t as good for a scientist’s career as a big discovery. As a result, scientists often neglect replication and forge ahead with new experiments, hoping someone else will pick up the slack. But if no one wants to do it, how much replication actually gets done? “It varies across disciplines,” Professor Ioannidis notes, but the overall pattern is clear: “Most fields are not replicating most of the studies that they do, and this is a problem.” The problem has only been compounded by technological advances in science. Today, scientists can collect vast amounts of data and use automated techniques to analyze it at scale. Counterintuitively, this makes replication harder: with so much data being collected and analyzed, new studies are being produced faster than scientists can check them.
The Curious Case of Replication in Psychology
One discipline in particular has experienced a crisis of replication in recent years. Until recently, replication was a rarity in psychology. Occasionally, worrying signs popped up, like a failed replication of a high-profile study or a lab unable to get the expected results for a textbook experiment. In 2012, the Center for Open Science decided it was time for a sanity check. It launched the Reproducibility Project, an effort to reproduce 100 high-profile, well-regarded studies from three big-name journals. The project enlisted hundreds of researchers and set about replicating the studies, trying to make the process go as smoothly as possible by working with the original authors in almost all cases. Three years later, the answers arrived, and they shocked the field of psychology to its core: 64 of the studies, almost two-thirds, couldn’t be replicated. Even among the studies that were replicated, the effects observed were usually much weaker; on average, replications found effects about half as strong as the originals. While many have raised questions about the results of the Reproducibility Project and how they should be interpreted, one thing is clear: something needs to change. “[The Reproducibility Project] caused a lot of questioning and consternation, but it also led to a lot of thinking,” Professor Ioannidis said. “How are we doing research? Should we change some of our processes? I think this has been a fruitful discussion.”
Turmoil and Trust
The replication crisis has caused much controversy in the scientific world. Many scientists whose work couldn’t be replicated felt that they were being attacked, responding with replications of their own. “It’s unavoidable that people will try to defend their positions and their data,” Professor Ioannidis acknowledged. “I don’t see a problem with that, provided that we stick to the science, rather than make it a war or a shaming process.” Ultimately, scientists have to remember why replication is important in the first place–to get closer to the truth.
Many outside the scientific community have also become alarmed in the wake of the crisis. Some have taken the Reproducibility Project’s results to mean that psychology can’t be trusted. Instead, Professor Ioannidis believes that people should be more trusting of a discipline that is carefully trying to improve its standards and question its methods. The problem would be much worse if, instead of studying their own flaws, scientific disciplines tried to hide and obscure their failings. “I think the general public is becoming a bit skeptical, sometimes even cynical of science,” Ioannidis lamented. “I think we are fueling them when we promise an ideal of science as being perfect and impossible to be mistaken on anything,” he cautioned. “That’s not really science.”
A Thousand Ways Forward
In the face of the replication crisis, many have been asking: what should we do about it? Scientists have proposed a plethora of potential solutions. For one, the increasing digitization of science has made it easier to do large-scale collaborative research, like the Reproducibility Project itself, that can safeguard studies from mistakes. Furthermore, new organizations, like the Center for Open Science, have been making it easier to share data, methods, and software. With better access to the nitty-gritty inner workings of a study (which are usually omitted from a published paper), replication becomes much easier. Some have called for quotas requiring journals to publish more replications. Others think the solution will come from a change in culture, transforming replication from a boring afterthought into a mark of a quality researcher. “I think that it’s unlikely that there’s a silver bullet, that we do one thing and everything is fixed,” Professor Ioannidis speculated. More likely, the solution will be a combination of different changes, both to methods and to culture. One thing is certain: scientists need to work together to solve this crisis.
Science Needs You!
Scientists are working hard to solve the replication crisis, but more than anything they need fresh perspectives. Many scientists have been working in their fields for many years, and have settled on a certain way of doing things. More than ever, science needs new voices–new scientists, people from other disciplines, and anyone who can think outside the box. “If you aim to make a discovery that will change the world,” Professor Ioannidis implored, “change research practices. The impact that you can have, across multiple fields, can be tremendous.”
Professor Ioannidis wants to work with students to bring new voices into the discussion. “I want to have students join our lab meetings and seminars at METRICS and brainstorm potential ideas,” he said. If you’re up to the challenge, contact Professor Ioannidis at firstname.lastname@example.org, or keep thinking (and reading!) about these issues on your own.
- Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). Retrieved from http://science.sciencemag.org/content/349/6251/aac4716.abstract.
- Ioannidis, J. P. (2018, February 28). Scientific replication [Personal interview].