AI in Education: Plenty of Technology, Not Much Learning Theory

Anyone diving into research on AI in education today will find plenty of promise. They will also find quite a few studies that do not fully deliver. There are intelligent systems that provide feedback, robots that support collaboration, and adaptive platforms that personalise learning. Yet the moment you look beneath the surface at the pedagogy and the psychology of learning, the picture becomes far less futuristic. In fact, it becomes much more familiar. What you see is a field that mostly experiments, with little conceptual grounding, and where technology is moving faster than pedagogy.

A new systematic review by Topali and colleagues (2025) makes this remarkably clear. Of the 28 studies that implemented AI in real classroom settings within compulsory education, roughly half did not use any explicit learning theory at all. In those studies there is no reference to how children learn, no design decision informed by cognitive load or sociocultural interaction, and no link to learning goals or task dynamics. AI is often introduced in the same way you might bring a new device into the classroom: you plug it in and hope that it works.

Among the studies that did use a theoretical foundation, one tradition stands out: constructivism, in its various forms, appears most often. Think of robots that act as mediators within a Vygotskian perspective, game-based learning inspired by Piaget, or inquiry-oriented environments where pupils learn by constructing meaning. This is understandable. Many AI tools designed for younger learners are playful, interactive and social, which aligns naturally with constructivist ideas.

What is most striking, however, is what you do not find. Cognitive learning theories, which form a major pillar of contemporary pedagogy, are rare. Bloom or self-regulated learning (SRL) models appear occasionally, but not in depth. Cognitive Load Theory is absent. Explicit instruction is hardly present. Motivation and self-determination theories surface only in a handful of studies, usually as a peripheral frame. Sociocultural theories appear slightly more often, but mostly in targeted projects on collaboration or work with specific learner groups.

The result is a field that leans heavily on broad and sometimes loosely interpreted constructivist ideas. This approach is supplemented with ad hoc uses of motivational frameworks. None of this is necessarily wrong, but it creates a blind spot. There is very little insight into how AI relates to what we know from robust learning science. Concepts like working memory, example-based learning, expertise development, principles of effective feedback, and effective instruction remain largely unexplored. If AI is truly meant to support learning, it needs to do more than add a playful technological layer on top of existing activities.

On top of that, many studies are not only theoretically thin but also methodologically limited. Small samples, short interventions, and strong novelty effects are common. Almost all studies measure short-term learning gains, often in a single subject and sometimes in a single classroom. Teachers themselves rarely play an active role in the research, except as background facilitators who make the AI deployment possible.

The outcome is predictable. We are building systems that automate faster than we understand what they are automating.

That is why the review matters. It highlights what is missing. If we want AI to become more than a classroom gimmick, the field needs to mature. That means clear learning theories, deliberate design choices, careful attention to context, and a broader understanding of what it means to teach and learn. The question is not only whether a robot or platform works, but why it works, for whom, under what conditions, and with what consequences for thinking, motivation and relationships in the classroom.

AI turns out to be a mirror. It does not simply show us what machines can do. It forces us to clarify once again what learning actually is.
