AI in education: why the risks currently outweigh the promises

The debate on AI in education remains strikingly binary. AI is framed either as the inevitable solution to an overstretched education system or as an existential threat to learning, thinking and, by extension, human agency itself. The new Brookings report A New Direction for Students in an AI World seeks to disrupt that pattern. Not by splitting the difference, but by stating an uncomfortable conclusion clearly: at this point in time, the risks of AI for students outweigh its benefits.

This is not an anti-AI position. Quite the opposite. The report, led by Mary Burns, explicitly starts from the premise that AI can enrich learning. Precisely because of that starting point, its analysis is sharp. The key question is not whether AI can help, but under what conditions it actually does.

The central conclusion is straightforward. AI enriches learning only when it strengthens the core of education: the interaction between students, teachers and content. Yet the authors observe that current practice often moves in the opposite direction. Many AI applications take over cognitive work that is essential for learning and development. Thinking, formulating, planning and struggling are increasingly outsourced. This does not result in more efficient learning, but in a gradual erosion of foundational learning processes. For children and young people, this is particularly problematic, because it affects precisely those cognitive, social and emotional capacities that are still developing.

The report identifies six interconnected domains of risk:

  • undermining cognitive development

  • harm to social and emotional growth

  • erosion of trust in education

  • safety and privacy risks

  • loss of autonomy and agency

  • a widening of educational inequality

These risks do not belong to some distant future. They already shape students’ experiences, both inside and outside schools. AI use spills across boundaries that schools can no longer control or contain.

Against this backdrop, the Brookings report dismantles familiar policy reflexes. Banning AI does not work. Uncritical adoption works no better. The authors explicitly reject the idea that technology equals innovation. Decades of educational technology show the same pattern: adding more tools does not automatically improve learning. AI follows that pattern, but raises the stakes.

On this basis, the authors organise their recommendations around three interconnected pillars for policy and practice: prosper, prepare and protect.

Prosper puts learning first. Schools should use AI only when it clearly strengthens deep learning. That demands deliberate judgement: when to use AI, and when to withhold it. Productive struggle, self-regulation and metacognition are not inefficiencies to eliminate, but conditions for learning. AI should reinforce these processes, not substitute for them. This also means that general-purpose, non-educational chatbots rarely function as appropriate learning tools for children.

Prepare builds capacity and understanding. Students, teachers, school leaders and parents all need a realistic grasp of what AI can and cannot do. AI literacy here does not mean learning which buttons to press. It means recognising limitations, biases and behavioural effects. For teachers, this calls for focused professional development rooted in pedagogy and didactics, not isolated tool training. At system level, it requires AI policy that forms part of a broader educational vision rather than a bolt-on response.

Protect may be the most underestimated pillar. The report is unambiguous: protection must be designed in, not added later. Privacy, safety, transparency and child-centred design are not optional safeguards, but basic conditions. Meeting them requires firmer expectations for providers, clear public frameworks and mature governance. If schools and governments do not claim this space, commercial platforms will.

What this report does particularly well is shift the debate. Not: how should education adapt to AI? But: what kind of education do we want, and what role should AI play within it? That is not a technological question, but a normative one. And for that reason, delay is risky. The longer implicit choices remain unexamined, the faster they become embedded in everyday practice.

Brookings’ call to action is deliberately concrete. Organisations are encouraged to choose at least one recommendation and work on it over the next three years. Not everything at once. But intentionally. No panic, no hype, but a clear sense of urgency.

AI will not disappear from students’ lives. The question is therefore not whether education responds, but how thoughtfully it does so. This report does not offer simple answers. It does offer something arguably more valuable: a framework for asking better questions. For now, that may be the most realistic form of progress.