“Why Has the Number of Scientific Retractions Increased?” New study tries to answer
The title of this post is the title of a new study in PLOS ONE by three researchers whose names Retraction Watch readers may find familiar: Grant Steen, Arturo Casadevall, and Ferric Fang. Together and separately, they’ve examined retraction trends in a number of papers we’ve covered.
Their new paper tries to answer a question we’re almost always asked as a follow-up to data showing that the number of retractions grew ten-fold over the first decade of the 21st century. As the authors write:
…it is unclear whether this reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn.
In other words, is there more poor or fraudulent science being published, or are readers and editors just better at finding it — perhaps thanks to better awareness? These explanations aren’t mutually exclusive, of course. Steen et al:
The recent increase in retractions is consistent with two hypotheses: (1) infractions have become more common or (2) infractions are more quickly detected. If infractions are now more common, this would not be expected to affect the time-to-retraction when data are evaluated by year of retraction. If infractions are now detected more quickly, then the time-to-retraction should decrease when evaluated as a function of year of publication.
When the authors looked at 2,047 retracted articles indexed in PubMed, they found:
Time-to-retraction (from publication of article to publication of retraction) averaged 32.91 months. Among 714 retracted articles published in or before 2002, retraction required 49.82 months; among 1,333 retracted articles published after 2002, retraction required 23.82 months (p<0.0001). This suggests that journals are retracting papers more quickly than in the past, although recent articles requiring retraction may not have been recognized yet.
Fang and Casadevall have also shown that high-impact factor (IF) journals are more likely to retract. In the new study, the authors report that
Time-to-retraction was significantly shorter for high-IF journals, but only ~1% of the variance in time-to-retraction was explained by increased scrutiny.
And plagiarism and duplication — the latter having become so frequent a reason for retraction that we can’t cover them all — are relatively new on the landscape, meaning a jump in numbers is to be expected:
The first article retracted for plagiarism was published in 1979 and the first for duplicate publication in 1990, showing that articles are now retracted for reasons not cited in the past.
The effect of those who would have shown up frequently on an earlier version of Retraction Watch — think the earlier-era counterparts of modern-day scientists like Joachim Boldt, Yoshitaka Fujii, and Diederik Stapel — was impressive:
The proportional impact of authors with multiple retractions was greater in 1972–1992 than in the current era (p<0.001). From 1972–1992, 46.0% of retracted papers were written by authors with a single retraction; from 1993 to 2012, 63.1% of retracted papers were written by single-retraction authors (p<0.001).
More details on that:
Authors with multiple retractions have had a considerable impact, both on the total number of retractions and on time-to-retraction. In 2011, 374 articles were retracted; of these, 137 articles (36.6%) were written by authors with >5 retractions. Articles retracted after a long interval (≥60 months after publication) make up 17.9% of all retracted articles; approximately two-thirds (65.7%) of such articles were retracted due to fraud or suspected fraud, a rate of fraud higher than in the overall sample [8]. Among fraudulent articles retracted ≥60 months after publication, only 10.4% (25/241) were written by authors with a single retraction.
We asked Daniele Fanelli, who studies misconduct in science, for his reaction to the findings:
The finding that journals are retracting papers more quickly than in the past is very good news, as it shows how the scientific system of self-correction is improving. All the other data presented in the paper can also be interpreted, most simply, as an improvement in the system of detection. Retractions, whether by single or multiple authors, are growing because more journals are retracting. High-impact factor journals retract more and more rapidly because they have more readers and better policies. Studies have shown that impact factor is the best predictor of a journal having clear and active policies for misconduct. So any correlation between retractions and impact factor has a trivial explanation.
In sum, there is no need to invoke “Lower barriers to publication of flawed articles”, as the authors do. I am not saying that scientific misconduct is not increasing. Maybe it is, maybe it is not. But the evidence is inconclusive, and statistics on retractions have no bearing on the issue. Whatever the current prevalence of misconduct might be, it is most likely higher than the extremely small proportion of papers that are currently retracted each year. So retractions are a good thing, and we should just hope to see more of them in the future.
We happen to agree that the growing number of retractions is a good thing, as we wrote in Australia’s The Conversation last year, and not just because it means we have more to write about. What we’d really like to see, of course, is more transparency in those notices — which is something the authors of the new study end with:
Better understanding of the underlying causes for retractions can potentially inform efforts to change the culture of science [41] and to stem a loss of trust in science among the lay public [42], [43].