The $65,000 AI School, Revisited: Inside the AI Classroom That Promised Too Much

When I ranted last year about the two-hour school day that costs $65,000, I assumed, or rather hoped, I was reacting to the usual hype cycle. Big promises. Glossy videos. Confident founders. And that familiar “schools are broken, we fixed them” vibe.

What I did not have back then was a detailed look at what this model feels like for actual kids and families when the pitch stops and the system starts. Now, it seems, we do: my dear friend Paul Kirschner sent me this article. And if you’re looking for a twist, there isn’t one. Buckle up. This won’t be pretty.

This Is Not About AI (no, really)

The part that should make you pause is not even the AI branding. It’s the underlying logic: instruction becomes software, and the adults in the room become “guides” whose job is to supervise, not to teach. Yes, this is Carpe Diem all over again. WIRED quotes the head of the Brownsville campus saying the guides “don’t do any teaching.”

When your ‘teacher’ publicly says ‘please don’t use me as a teacher,’ you’re not disrupting education. You’re outsourcing it.

Once you accept that premise, the rest follows with disturbing ease.

What Happens When the System Takes Over

A nine-year-old gets stuck in IXL and is forced to repeat the lesson until the system considers it “mastered.” Not “understood.” Not “worked through with help.” Just: repeated until the metric clears. Her mother describes her doing the same kind of three-digit multiplication task more than twenty times without being allowed to move on, and when the child asks the guide for an exception, the message is basically: no, you have to do it.

The story gets worse. Over the weekend, the parents sit with their daughter for hours; she breaks down crying, telling them she’d rather die than keep going. The parents end up checking answers on a calculator just to get her through the loop. When she finally returns to school, the news isn’t “well done.” It’s that she’s now even further behind her targets, because the time spent stuck counts as lost progress.

When Learning Becomes a Dashboard

This is the part where some people will say: ok, but that’s a one-off. A bad incident. A mismatch.

But it isn’t a one-off. It’s the model.

Because if your system runs on dashboards and pace targets, then “stuck” is not a pedagogical moment. It’s a productivity issue. And when productivity is the core value, kids start behaving like little productivity machines.

In the same WIRED report, the parents are told their daughter isn’t eating lunch. The school’s explanation is almost surreal: she would rather stay inside and work. The child later tells her parents she spent lunch catching up on IXL. Then the family tries to follow medical advice after a doctor notes significant weight loss: send snacks. For a few days, she eats them. Then the snacks start coming back untouched, because staff reportedly told her she hadn’t earned them and wouldn’t get them until she met her learning metrics.

Read that again. Snacks are an output of the dashboard.

At this point, we’re not debating education innovation. We’re debating what kind of environment you create when you treat learning like a KPI. Or how you can become a monster when the computer says no.

And the “AI” part?

Here’s a detail that should have caused a scandal on its own: IXL, the very software presented as the child’s “math teacher,” told WIRED it had deactivated Alpha’s account for violating its terms of service and explicitly stated that it is not intended, and not recommended, as a replacement for “trained, caring teachers.”

That’s not an academic quibble. That’s your core tool distancing itself from your core claim.

Then there’s the surveillance layer, because of course there is. Alpha can record students’ screen activity and mouse/keyboard usage, may use eye-tracking, and frames all of it with a “game film” analogy. The same reporting includes an example of a student working at home who received a notification that she had been flagged for an “anti-pattern,” alongside a webcam video of her in pyjamas that had been captured and sent. When I read this, I started to think that Alpha is combining everything that has gone wrong in other examples into a kind of anti-best-of, a worst-of.

So yes: teaching reduced to software, motivation managed through reward currencies and targets, and behaviour monitored like athletes under performance analytics. If you’re wondering what this resembles, WIRED explicitly brings up Skinner’s teaching machines and the logic of conditioning. But actually, I do think Skinner was better at this because, well, human.

And if you think that’s already enough, 404 Media’s reporting adds another layer: internal documentation reportedly describing AI-generated lesson plans that can do “more harm than good,” large amounts of student data including videos stored in ways that could be accessible to anyone with a link, and former employees describing constant monitoring down to mouse movements, increasing anxiety, and quality issues that don’t match the marketing.

This is the moment where the “we’re just experimenting” defence collapses. Because the experiment isn’t happening in a lab. It’s happening in childhood. To real children. Suddenly, the $65,000 is no longer the highest price you are paying.

We’ve Seen This Pattern Before

What frustrates me is not that someone is trying something new. Try things. Pilot things. Evaluate things. Learn. Fine.

What frustrates me is how quickly this gets repackaged as a blueprint for everyone. The same WIRED piece describes how Alpha’s leaders point to Brownsville as proof that the model can work in “low SES” communities, while also noting that Alpha promised to share data with WIRED and then didn’t.

We keep doing this. A model is run in a tightly controlled environment with intense selection pressures, strong branding, and ample investor/political oxygen. Early results are presented as obvious proof. And by the time the trade-offs become visible, the expansion is already underway.

The most revealing line in the WIRED reporting, to me, isn’t even about AI. It’s about philosophy: the “only way we’re going to know if the apps work is if we let them do the teaching.”

That sentence should not be spoken lightly when the “test subjects” are children.

So no, I’m not impressed. Not by the two-hour school day. Not by the dashboards. And not by the idea that you can “CAT scan” a child’s learning and call 90% correct “mastery.”

What impresses me is how quickly we start calling this education.

And if this is the future, then at the very least, let’s be honest about what the future is: fewer teachers, more metrics, more surveillance, more behaviour shaping through rewards, and kids learning very early that their needs are negotiable, but the dashboard is not.

It isn’t that we haven’t seen this story before.

It’s that we keep acting surprised by the ending.
