It can feel as if AI has suddenly become the problem. As if hallucinations, incorrect citations, and superficial texts only appeared with the arrival of ChatGPT and its peers. From my perspective as an education mythbuster, that story does not quite hold. Poor or incorrect citations are not an AI problem. They have always been with us.
While working on education myths, Paul, Casper and I kept running into them. The learning pyramid, which cannot be properly traced back yet is confidently attributed to Glaser, Glasser or Chi, even though Dale comes closest at best. Maslow’s theory, almost always presented as a neat pyramid, even though that shape was added later by an advertising executive. The claim that “65% of the jobs our children will hold do not yet exist”, quoted in a World Economic Forum report that refers to an American study in which the claim does not actually appear. All of this circulated freely in articles, presentations and even academic papers long before anyone could type “AI” without explanation.
What AI does is not so much create something new as make something visible. All at once. And at scale.
Several people on social media pointed me to the same reflex in what we now often call the assessment crisis in (higher) education. Fraud. Essays that are “too good”. Answers that flow suspiciously smoothly. This suddenly feels urgent, but again, it mainly exposes problems that have been there for much longer. Assessments that mostly reward reproducible behaviour. Assignments that are easy to outsource because they demand little real understanding. Reports that reward form more than substance.
We have known this for years. Students getting help with their writing. Parents pitching in. Theses written by someone else altogether. In some cases, entire reports produced by third parties. Tests carefully crammed for and just as carefully forgotten afterwards. None of this is new. AI simply makes it impossible to keep pretending these are rare exceptions.
And that is precisely what makes it uncomfortable. Because if the problem is not “AI”, but our assignments, our assessments, and our relationship with knowledge, then the responsibility does not lie with the technology. It lies with us.
Perhaps AI functions here as a kind of contrast agent. It makes painfully clear where education relies on procedures rather than understanding. Where we trust the output rather than the learning process. Where we implicitly assume that someone who can phrase something fluently must also have understood it.
That does not mean we can fix everything with “better questions” or “different tests”. But it does mean the debate becomes more honest once we stop using AI as a scapegoat. The technology forces us to be more explicit about what we actually value: knowledge that sticks, insight that transfers, and learning that does not disappear the moment the assessment is over.
In that sense, the AI debate about citations and the AI debate about assessment are really the same debate. In both cases, the issue is not technology but how we sometimes carelessly handle knowledge and evidence. We repeat claims without checking them. We judge products without insight into the thinking behind them. And we confuse fluency with understanding, form with substance, plausibility with truth.
AI does not just magnify this; it accelerates it. What used to go wrong slowly and in a scattered way now happens quickly and at scale. That is why it suddenly feels like a crisis. Not because something fundamentally new is breaking down. But because we can no longer look away from what has been skewed for years.
Those who point to AI as the cause miss the point. The real question is whether we are finally willing to be more consistent: in what we accept as knowledge, in how we use sources, and in how we assess learning and understanding. That conversation is harder than banning a tool. But it is also the only one that genuinely moves us forward.
Image: https://www.etsy.com/be/listing/1892133000/de-schuld-van-chat-gpt-mok-chat-gpt