This blog exists because of the almighty RSS feeds. When I checked my Old Reader today, I couldn’t believe my eyes. This is what I saw:

I haven’t clicked on every single article, but this is an example of one of the retraction notifications:
This editorial was published as part of a guest-edited special section. Following an investigation by the publisher, a large number of articles referenced in this editorial have been or are in the process of being retracted due to overwhelming evidence that many were accepted solely on the basis of a compromised peer review process, as well as a number of further issues. Because a majority of the references in this editorial have been retracted, the journal editors have decided that the editorial is functionally compromised and must therefore be retracted. Authors F. Çelik, M. H. Baturay agree with the retraction. Author E. Namaziandost could not be reached in order to communicate the retraction decision.
I started checking Retraction Watch, but haven’t found any information there yet. Luckily, BERA published an explanation themselves. BERA’s explanation is, in a way, both reassuring and unsettling.
Reassuring, because it shows that journals are not simply letting things slide. They investigated, they found problems, and they acted. That matters. Retractions are not a sign that science is broken; they are a sign that correction mechanisms are still working. It’s the same argument I make about the replication crisis: it’s not a failure, it’s a correction.
At the same time, it is also unsettling, to say the least. The investigation points to something bigger than a few flawed papers. We are talking about compromised peer review processes, questionable authorship, and patterns that resemble what we have come to call “paper mills”. Add to that the growing role of generative AI in producing and reviewing academic work, and you get a system under real pressure.
What struck me most is that this is not framed as an isolated incident. BERA explicitly situates it in a broader evolution: more submissions, more competition, more incentives to publish, and more sophisticated ways to game the system. In other words, this is not about a few bad actors slipping through the cracks. The cracks themselves are getting wider. I wrote about this last year.
And that raises an uncomfortable question. If this happens in well-established journals, with experienced editors and reviewers, what does that mean for the rest of the field?
There is a tendency to see retractions as rare exceptions, something that happens “elsewhere”. But if you follow RSS feeds or platforms like Retraction Watch even casually, you start to see patterns. Clusters. Waves. And that is worrying.
None of this means we should suddenly distrust all research. That would be the wrong conclusion. But it does mean that the way we read research matters more than ever. Who are the authors? What was the review process? Are there signals that something is off? And perhaps most importantly: have the findings been replicated elsewhere? In that sense, this story is less about failure and more about adaptation. Journals are tightening checks, publishers are investing in detection, and organisations like BERA are being transparent about what went wrong.
Still, it is a reminder of something we often forget when we talk about “the evidence”. Evidence is produced in systems. And those systems are human, imperfect, and increasingly under strain. That doesn’t make evidence useless. It makes reading it a more demanding skill.