What Could Possibly Go Wrong? Deepfake Teachers and AI Marking

Somewhere in the north of England, according to Schoolsweek, someone thought it was a brilliant idea: teachers will soon be able to create AI avatars of themselves, identical voice, face and gestures included, to help pupils catch up on lessons they’ve missed. The Great Schools Trust calls it “deepfake technology for good.”

On paper, it sounds like a time-saving solution. A pupil was ill or suspended? No problem: click the friendly virtual version of your teacher, who explains what you’ve missed, and you’re back on track. The avatars aren’t meant to replace real lessons. They’re meant to act as a “bridge” between pupils and the school. And, the trust says, it will save teachers from endless after-school catch-up sessions.

What could possibly go wrong?

Well, quite a lot, actually, as is often the case with AI.

For a start, the idea that a deepfake version of yourself will reduce workload assumes that teaching is mostly about repeating information. Yet much of what teachers actually do – sensing, motivating, adjusting, improvising – is precisely what can’t be copied. A digital replica of your voice is not an echo of your craft.

Then there’s the uneasy thought that your digital double looks like you, but legally isn’t you. The trust has already clarified that it will own the intellectual property in the videos. The avatar may resemble you perfectly, but once you leave the school, it’s deleted: your digital existence tidied away along with your mug from the staffroom shelf.

And the symbolism is hard to ignore. Deepfakes have a bad reputation for a reason: they’re often used to deceive, harass or worse. That this very technology is now supposed to restore the moral image of education feels, at best, ironic.

The road to hell is paved with good intentions

Of course, the intentions seem sincere. The trust wants to reduce workload and help pupils who fall behind. They’re even experimenting with AI exam marking, claiming it’s faster and more accurate than teachers. When the system initially marked pupils too harshly, it turned out the AI was right: teachers had been too generous. Another reminder that, in the age of automation, the computer is always right – until it isn’t.

I find myself both fascinated and uneasy. It’s a perfect snapshot of our times: technology that promises to save time but quietly trades away something human. At first, it feels convenient – until you realise you’re mostly managing the technology that was supposed to manage things for you.

Or, to put it another way: the teacher’s deepfake explains the homework, while the real teacher checks the deepfake the pupil used to pretend they did it.

What could possibly go wrong, indeed.
