The growth of scientific papers and the peer review crisis have created a challenging landscape for academics. So many scientific papers are being published these days that even researchers are losing track. A recent article in The Guardian questions how long this system can last. Millions of papers are now published annually. In 1980, the worldwide total was approximately 450,000 scientific articles per year; by 2022, that number had grown more than fivefold, to around 2.5 million. And that’s without even considering the enormous increase in preprints—papers that are often not peer-reviewed, but are widely distributed. What used to be published in a year now rolls off the presses in a few months. This makes it nearly impossible for scientists to keep track of everything, let alone thoroughly assess it.
Moreover, many of these articles turn out to be superficial, poorly substantiated, or simply fake. The problem isn’t just the quantity, but the entire system behind it. The more you have to publish to keep your job or get promoted, the greater the temptation to prioritise speed over accuracy.
And that pressure is palpable. Peer review—once the hallmark of scientific quality—is groaning under the weight of all that work. Every researcher is expected to review dozens of papers a year for free, often at lightning speed (yes, certain editors, I’m looking at you…). At the same time, preprint platforms and commercial journals are popping up where, for a fee, you can publish just about anything, as long as it resembles an article. Sometimes literally: a growing number of AI-generated papers are appearing in which the very existence of the test subjects is questionable, or with images that make no sense—a rat with six testicles being one of the more famous examples.
Technology complicates matters even further. AI now assists with writing, editing, and even conceptualising texts. Some authors hide prompts in their manuscripts to influence the AI reviewers’ judgment. Picture white text in a PDF: invisible to the human reader, but perfectly legible to the AI. This is disastrous for reviewers who, due to time constraints or convenience, use AI to assess potentially AI-authored articles. And at the same time, as a reader, you’re left scratching your head: which paper is reliable? Which is worth reading? And who has the time to wade through them all?
According to The Guardian, the situation has become so dire that prominent scientists and institutions are now openly advocating for change. Not a minor adjustment, but a thorough overhaul of how we assess and disseminate science. More room for in-depth research, less emphasis on numbers, better filtering systems, and abandoning scholarly communication as a revenue model for publishers. The Royal Society is even working on an alternative system that uses AI to help select what’s truly relevant, rather than publishing everything blindly.
I’m fortunate enough not to be subject to much pressure to publish, but I do notice it in my own assessments and in what I see published, even in top journals. The urge to publish ‘something’, to be able to say you’re not standing still, even when you know it’s not groundbreaking work, is unmistakable. The growing stream of reviews and meta-analyses – which some colleagues grumble about, since there is still primary research to be done – also seems related to this. Remember: science isn’t a bulk process. What we need is a revaluation of reliability. Not everything needs to be published. Not everything needs to be done quickly.
If we want science to remain reliable and to earn trust, perhaps we simply need to dare to cut back. Publishing less also means less pressure. And that means more time, more attention, more space for critical reading, doubt, and correction. Because trust in the worth of science isn’t earned with yet another publication, but with one that counts.
[…] Where the article is strong is in its analysis, which suggests that AI does not solve these problems but actually makes them more pressing. The authors sketch how generative AI undermines research stability: a study on GPT-3 is already outdated by the time GPT-4 or GPT-5 appears. What is more, AI speeds up and broadens literature reviews, but at the same time risks eroding critical depth – something I wrote about earlier in this blog post. […]
[…] The report does more than raise the alarm. Policymakers are urged to rely less on rankings and invest more in expert review. Universities are encouraged to evaluate researchers based on their best work, rather than the sheer number of publications. Individual scholars are called upon to avoid predatory journals. They should speak up when colleagues engage in dubious practices, and teach early-career researchers how to distinguish between genuine and fake research. Granted, this is becoming increasingly difficult due to AI. […]