A new preprint warns that the number of fake scientific papers in neuroscience and medicine has nearly doubled since 2010. Note that this growth predates the release of, e.g., ChatGPT.
How did the researchers check this?
In Study 1, n=215 neurology articles were manually inspected by an experienced editor; 20.5% (n=44) were deemed suspicious. A questionnaire was sent to all of their authors and, as a control, to 48 authors of non-suspicious papers. It contained questions that authors of fake papers might be reluctant to answer (e.g., “Are you willing to provide original data?” [only 1 author of the 44 suspicious articles did] and “Did you engage a professional agency to help write your paper?” [none did]; see Tab. 3). Despite repeated reminders warning that failure to reply (or replying inadequately) could trigger retraction, the response rate among suspected authors was only 45.4% (20/44), compared with 95.8% (46/48) in the control group. This survey provided the first indicators of red-flagged fake publications (RFPs).
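As a quick aside (my own check, not from the preprint): with counts this lopsided, the response-rate gap is statistically unambiguous. A Fisher's exact test on the 2×2 table of responders versus non-responders makes that concrete:

```python
# My own quick check, not from the preprint: is the response-rate gap
# between suspicious and control authors plausibly due to chance?
from scipy.stats import fisher_exact

# Rows: [replied, did not reply]
suspicious = [20, 24]  # 20 of 44 suspicious-paper authors replied (45.4%)
control = [46, 2]      # 46 of 48 control authors replied (95.8%)

odds_ratio, p_value = fisher_exact([suspicious, control])
print(f"odds ratio: {odds_ratio:.3f}, p-value: {p_value:.1e}")
# The p-value is far below 0.001, so the gap is very unlikely
# to be a sampling artifact.
```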
The statistics in the title come from the second study:
Study 2 analyzed the frequency of these indicators in five randomly chosen neuroscience journals; Study 3 expanded this to a larger sample of articles from those five neuroscience journals plus an additional five medical journals, sampled every two years (2010-2020). The results show rapid growth of RFPs over time in neuroscience (13.4% to 33.7%) and a somewhat smaller and more recent increase in medicine (19.4% to 24%) (Fig. 2). One reason for the steeper rise in neuroscience may be that fake basic-science experiments (biochemistry, in vitro, and in vivo animal studies) are easier to generate because they do not require clinical-trial ethics approval from regulatory authorities.
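To tie these figures back to the headline claim, the relative increases are simple to recompute from the reported rates (my restatement; the rates are the preprint's):

```python
# Recomputing the relative increases from the rates reported in the
# preprint (Fig. 2 and the abstract); the arithmetic is mine, the rates theirs.
rates_2010_vs_2020 = {
    "neuroscience": (13.4, 33.7),  # Study 2/3 sample
    "medicine": (19.4, 24.0),
    "overall": (16.0, 28.0),       # pooled rate from the abstract
}

for field, (start, end) in rates_2010_vs_2020.items():
    print(f"{field:>12}: {start:.1f}% -> {end:.1f}%  (x{end / start:.2f})")

# Output:
# neuroscience: 13.4% -> 33.7%  (x2.51)
#     medicine: 19.4% -> 24.0%  (x1.24)
#      overall: 16.0% -> 28.0%  (x1.75)  <- the "nearly doubling"
```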
Found this preprint via a tweet.
Abstract of the preprint:
Background Integrity of academic publishing is increasingly undermined by fake science publications massively produced by commercial “editing services” (so-called “paper mills”). They use AI-supported, automated production techniques at scale and sell fake publications to students, scientists, and physicians under pressure to advance their careers. Because the scale of fake publications in biomedicine is unknown, we developed a simple method to red-flag them and estimate their number.
Methods To identify indicators able to red-flag fake publications (RFPs), we sent questionnaires to authors. Based on author responses, three indicators were identified: “author’s private email”, “international co-author” and “hospital affiliation”. These were used to analyze 15,120 PubMed®-listed publications regarding date, journal, impact factor, and country of author, and were validated in a sample of 400 known fakes and 400 matched presumed non-fakes using classification (tallying) rules to red-flag potential fakes. For a subsample of 80 papers we used an additional indicator related to the percentage of RFP citations.
Results The classification rules using two (three) indicators had sensitivities of 86% (90%) and false alarm rates of 44% (37%). From 2010 to 2020 the RFP rate increased from 16% to 28%. Given the 1.3 million biomedical Scimago-listed publications in 2020, we estimate a scope of >300,000 RFPs annually. Countries with the highest RFP proportion are Russia, Turkey, China, Egypt, and India (39%-48%), with China, in absolute terms, as the largest contributor of all RFPs (55%).
Conclusions Potential fake publications can be red-flagged using simple-to-use, validated classification rules to earmark them for subsequent scrutiny. RFP rates are increasing, suggesting higher actual fake rates than previously reported. The scale and proliferation of fake publications in biomedicine can damage trust in science, endanger public health, and impact economic spending and security. Easy-to-apply fake detection methods, as proposed here, or more complex automated methods can help prevent further damage to the permanent scientific record and enable the retraction of fake publications at scale.
[…] […]
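The “tallying” classification rule mentioned in the Methods is simple enough to sketch. The idea, as I read it: count how many of the three indicators a paper exhibits and red-flag it once the count reaches a threshold. The function below is my illustration under that reading, not the authors' code; in particular, the polarity of each indicator (e.g., whether the presence or absence of an international co-author is the red flag) is defined in the preprint, not here:

```python
# Minimal sketch of a tallying rule in the spirit of the preprint's
# Methods; my illustration, not the authors' code. Each indicator is
# recorded as True when it points toward "fake"; the preprint defines
# the exact polarity and wording of the three indicators.

def red_flag(indicators: dict[str, bool], threshold: int = 2) -> bool:
    """Tally the indicators; flag the paper if at least `threshold` are present."""
    return sum(indicators.values()) >= threshold

paper = {
    "authors_private_email": True,
    "international_coauthor": False,
    "hospital_affiliation": True,
}

print(red_flag(paper))               # True: 2 of 3 indicators present
print(red_flag(paper, threshold=3))  # False: a stricter rule needs all 3
```

Per the abstract, rules using two (three) indicators achieve 86% (90%) sensitivity at a 44% (37%) false-alarm rate, so this is a coarse screen meant to earmark papers for subsequent scrutiny, not a verdict. (For scale: 28% of the 1.3 million 2020 publications is roughly 364,000, consistent with the >300,000 estimate.)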