Half of Social Science Studies Can't Be Repeated, New Report Shows

A major study tested nearly 3,900 social science papers and found that about half of the results could not be repeated by other researchers, raising serious questions about how much published research can be trusted.

A major, years-long project known as SCORE, involving hundreds of researchers across numerous countries, has uncovered a concerning trend in the social sciences: approximately half of published research findings cannot be independently replicated. The project, which scrutinized close to 3,900 articles published between 2009 and 2018 across social and behavioral disciplines including economics, political science, psychology, and sociology, found no straightforward way to predict which studies would falter.


The primary factor correlating with a higher chance of replication was the availability of data. Only a third of the papers examined had their data and computational code readily accessible, yet these papers exhibited a significantly better rate of successful reproduction. This points to data accessibility as a crucial, though currently underutilized, element in ensuring research reliability.


The Scope of the Problem

The SCORE initiative, a sprawling effort spanning the United States and many other countries, examined a broad spectrum of social science fields. Researchers meticulously tested previously published results to assess their reproducibility, robustness, and replicability. The findings suggest that the reliability of the scientific literature in these areas is far from absolute, mirroring previous investigations in psychology and biomedical science and raising further questions about the foundations of knowledge in these disciplines.



Data Availability: A Glimmer of Hope?

While the overall replication rate remains low, the SCORE study identified data availability as a key differentiator. Papers that made their underlying data and analytical code publicly accessible demonstrated a markedly higher success rate in replication attempts. This underscores the potential of 'open science' practices to bolster the credibility of research, although the study notes that only one-third of papers in the SCORE sample met this standard.


Broader Implications

The implications of these findings extend to how new research is perceived. Economists involved in replication efforts, like Abel Brodeur, founder of the Institute for Replication at the University of Ottawa, report maintaining a degree of skepticism towards newly published papers. The challenge lies not only in replicating existing findings but also in understanding why replication fails, a complex issue potentially involving variations in research design, participant recruitment, and analytical methods across different studies.


A Historical Context

This broad investigation builds upon earlier projects that have highlighted problems with reproducibility. A 2018 study published in Nature Human Behaviour, for instance, found that only 13 of 21 high-profile social science experiments from top journals could be replicated. These ongoing concerns signal a persistent struggle to establish a consistently reliable body of evidence within the social sciences.

The Nuances of Replication

It is important to distinguish between 'reproducibility,' achieving the same result using the same data and analytical methods, and 'replicability,' which involves testing the same research question with new data. Robustness checks, which re-analyze existing data using different methods, form a third strand of the broader effort to assess research integrity. A failure to replicate does not automatically invalidate the original findings, since differences in design, samples, or analytical approach can legitimately yield different results. It does, however, raise questions about the stability and generalizability of those findings.


Behind the Scenes

The SCORE project, described in publications in Nature and Science, represents a seven-year undertaking. Data and code from the individual replication projects are made available through repositories such as the OSF (Open Science Framework), facilitating transparency. The study's methodology and results are detailed in the supplementary information accompanying the primary publications.

Frequently Asked Questions

Q: What did the SCORE study find about social science research?
The SCORE study, which looked at nearly 3,900 social science papers published from 2009 to 2018, found that about half of the published research findings could not be repeated by other scientists. This means the results might not be reliable.
Q: Why is it hard to repeat social science studies?
A main reason is that data and computer code are often not shared. Only about one-third of the papers studied made their data available. When data is shared, studies are much more likely to be repeated successfully.
Q: Who is affected by the difficulty in repeating social science studies?
This affects students, other researchers, and anyone who uses social science findings to make decisions. If research can't be repeated, it's harder to trust the information and build new knowledge on it.
Q: What is being done to improve the situation?
The study suggests that sharing data and code, known as 'open science' practices, is very important. This makes it easier for others to check the research and helps build trust in scientific findings.
Q: Does failing to repeat a study mean the original research is wrong?
Not always. Sometimes different methods or new data can lead to different results. However, if many studies cannot be repeated, it raises questions about the strength and reliability of the original findings.