A groundbreaking 2015 report that cast doubt on nearly half of a sample of published psychology studies in the United States has exposed deep divisions in the mental health field. Now, researchers who have critiqued the report say the document itself is statistically flawed, The New York Times reports.
The report, known as the Reproducibility Project, suggested that fewer than 40 studies in a sample of 100 psychology papers published in leading American health journals held up when retested by an independent team of researchers. But the researchers who critiqued the report said that when that team’s methodology was adjusted, the retest success rate could actually be closer to 100 percent.
“That study got so much press, and the wrong conclusions were drawn from it,” says Timothy Wilson, PhD, a professor of psychology at the University of Virginia and one of four authors of the recent critique. “It’s a mistake to make generalizations from something that was done poorly, and this we think was done poorly.”
But one researcher countered that the critique was highly biased and rested on assumptions drawn from selectively interpreted study data.
Still, the critique raised questions about how faithfully the replication team adhered to the original design of the studies they retested. The back-and-forth between scientists on both sides of the debate also highlighted a generational shift in the psychology field: younger researchers have begun sharing their data and study designs before publication to improve transparency.
What’s more, the critique may fuel continued debate in the field about better ways to conduct and evaluate projects that replicate studies.