After the warnings: when AI in education can work (possibly)

In a previous post, I discussed the recent Brookings report A New Direction for Students in an AI World. Its central conclusion was uncomfortable but clear: at present, the risks of AI for students outweigh its promises. Not because AI is inherently harmful, but because many current applications undermine learning processes that are essential for development. Protection, boundaries and deliberate choice were key themes.

Since then, the OECD has published its Digital Education Outlook 2026, a substantial report devoted entirely to generative AI in education. I read it at a high level rather than in full detail. Still, one thing is immediately clear: the OECD starts from a different question. This time the question is not so much what can go wrong, but when and how AI can actually contribute to better learning, better teaching and stronger education systems.

I received many responses to my first post, and this OECD report helps address several of them. Where Brookings mainly warns, the OECD tries to be more precise. I genuinely think the two reports complement each other. Taken together, they improve the debate.

The OECD aligns with Brookings on a crucial point: better performance does not automatically translate into better learning. The report shows that generative AI often enables students to complete tasks faster or more accurately. Yet these gains frequently disappear, and sometimes reverse, once the support is withdrawn. A field experiment in Turkey that I discussed earlier illustrates this clearly: students performed better while they had access to AI, but worse once that access was removed. AI can support learning, but it can also displace it in ways that weaken development. For me, this finding raises a curricular question rather than a technological one: what should students be able to do independently, and where is it reasonable for them to rely on technology?

This point connects directly to Brookings’ concern about the outsourcing of cognitive work. The OECD confirms that risk, but pushes the discussion further by asking a design question: what must AI applications actually do if they are to support learning rather than replace it?

Here, the OECD draws a sharp line between general-purpose, commercial AI tools and AI systems designed specifically for education. Many problems arise, the report argues, when educators deploy generic chatbots, optimised for speed and output, in learning contexts. Not because these tools are inherently “bad”, but because they are not built to make thinking visible, to use errors productively, or to support self-regulation.

When developers design AI explicitly as educational technology, with clear pedagogical goals, transparency about intermediate steps and room for teacher guidance, its effects change. AI can then act as a tutor that asks questions instead of delivering answers, or as a feedback tool that reveals learning processes rather than short-circuiting them.

This shift in perspective also matters for teachers. Where Brookings mainly warns about loss of autonomy and professionalism, the OECD develops this further through the idea of teacher–AI teaming. AI can save teachers time, support analysis and enrich feedback, but only if the teacher remains firmly “in the loop”. Not as an executor of AI outputs, but as a professional decision-maker who judges what makes sense in a specific context.

At system level, the OECD adopts a cautiously optimistic stance. Generative AI can reduce administrative burdens, support curriculum analysis and assist research. But the report insists that governance remains decisive. Without clear frameworks for privacy, bias, transparency and accountability, control will inevitably shift towards commercial providers. In this respect, the OECD ultimately reinforces the protective logic that Brookings places at the centre of the debate.

So: two major reports. A great deal of AI. And now what?

Brookings rightly foregrounds protection: of learning, of autonomy and of equity. The OECD shows that protection alone will not suffice. If AI plays a role in education, and it already does, educators and policymakers must deliberately design that role. Neither unrestricted adoption nor outright rejection will do. The central question shifts from whether AI is used to the pedagogical and institutional conditions under which its use is defensible.

AI will not disappear from students’ lives. But whether it strengthens or weakens learning does not follow from technology itself. It follows from choices. And that is precisely where these reports sharpen the debate.