“But sir, miss, did you use AI?”
That might soon become the defining question for the current generation of students. And frankly, they’ve got a point. While teachers and lecturers worry about students secretly using ChatGPT, new research shows that students, too, are starting to lose trust. Not in AI, but in their teachers’ use of it. I found this study via a guest post by Greg Toppo on Larry Cuban’s blog.
When honesty feels risky
In a study at the Education University of Hong Kong, Jiahui Luo (Jess) explored how students experience trust (and distrust) in an era where generative AI has quietly become part of the assessment landscape. What does “trust” even mean when you’re asked to submit your work together with a declaration of AI use? In some cases, your chat history with ChatGPT is also required. Meanwhile, you have no idea how your lecturer handles the same technology.
Luo interviewed eleven students, most of them training to become teachers. Their answers were strikingly similar: fear. “I’ve stopped using any AI tools, not even Grammarly,” said one student. “What if my lecturer thinks I’m cheating?” Their university requires every student to declare any use of AI in assignments. Yet the rules are vague, and no one really knows what counts as “too much.” As one student put it:
“If I acknowledge using AI, who’s to say I won’t be penalised for it?”
The result is predictable. Many students play it safe and avoid AI altogether. They do this not because they believe it’s wrong, but because they don’t trust how their teachers will respond.
Transparency should go both ways
Luo identifies a clear asymmetry: students are asked to be transparent about their use of AI, but teachers rarely are. While students must show their prompts, they have no idea whether teachers themselves use AI to mark or comment on their work. Additionally, they don’t know how teachers interpret “AI scores” from Turnitin. Luo calls this absence of two-way transparency a key reason for distrust:
“When transparency is required only from students, it feels like surveillance, not collaboration.”
The study also found that students now expect more from their teachers than before, not just expertise in their subject, but AI literacy. They want educators who understand the technology and can discuss its risks and benefits with nuance. Students prefer assessments that can’t be mindlessly generated by a chatbot. One student described a lecturer who used part of each class to critique ChatGPT’s answers together with the group: “That made me feel safe to use AI myself.” Another, however, called it hypocrisy when a lecturer banned AI use but was later discovered (via an AI detector) to have written most of their syllabus with ChatGPT.
Trust requires courage
At the heart of Luo’s findings lies a simple but uncomfortable truth: there can be no trust without vulnerability. Students will only be open about their AI use if they feel their teachers are, too. In an age dominated by detection tools, surveillance language and compliance rules, that kind of openness takes courage on both sides.
As Luo puts it, we need educators who don’t just explain what is allowed with AI. They should also show how they themselves navigate uncertainty.
In short: students aren’t asking for a free pass. They’re asking for honesty, reciprocity, and a bit of humanity. And maybe the question “Did you use AI?” isn’t as cheeky as it sounds. Maybe it’s just the latest version of what education has always depended on: trust.