AI tools like ChatGPT have become part of everyday workplace life in just a few years: three-quarters of professionals now use them, often for emails and other forms of business communication. How AI reshapes communication and trust therefore matters more than ever. But what does AI use do to the way we see each other? A new study by Cardon and Coman (2025) didn’t focus on the quality of AI-generated texts – those are usually professional and error-free – but on how people judge a colleague or manager who relies on AI.
Mixed results
The findings are mixed. On the one hand, people often see messages written partly with AI as just as professional and effective as those written by humans alone, sometimes even better. AI fixes typos and can turn a clumsy email into something clear and polished. Anyone who’s ever received an awkward message from a manager knows how valuable that can be.
The hidden downside
On the other hand, heavy AI use makes writers seem less trustworthy, less warm, and less engaged. Managers who lean too much on AI risk losing credibility: they appear less sincere, less caring, and sometimes even less competent. Many respondents pointed to a tipping point between 30 and 50 per cent. If less than half of the text comes from AI, the human still counts as the author. Once AI writes more than that, doubts start to grow.
Cognitive versus affective trust
This creates an important tension. AI boosts the cognitive side of trust – clarity, accuracy, efficiency – but it weakens the affective side: warmth, integrity, authenticity. That matters most in relational communication, such as expressing appreciation. Many employees said they would feel less valued if they found out their boss had used ChatGPT to draft a congratulatory email.
A careful but limited study
The study involved more than 1,100 professionals across various conditions, but it relied on a hypothetical scenario: respondents read a specific message and had to imagine it came from their manager. How this plays out in genuine workplace relationships remains an open question, since context, tone, and history all matter. And there’s the question of habituation: what feels “inauthentic” today may simply become tomorrow’s norm.
The key lesson
AI use in communication is never neutral. It’s not just about the text itself, but also about how that text shapes our perception of the writer. And that perception changes once AI becomes more than a tool. That may be the key insight: AI can be extremely helpful in structuring or polishing a message, but when it takes over the human gesture – the recognition, the small but authentic sign of attention – we lose more than we gain.
And what about education?
That raises a question for education as well. An increasing number of teachers are using AI to generate feedback on students’ work. Do the same dynamics apply here? If students realise that AI produced most of their feedback, will they feel less seen or taken less seriously, just as employees judge their managers as less engaged? For now, we don’t know. But it’s a question worth exploring soon.
Abstract of the study:
This study explores perceptions of managers who use AI-assisted writing for workplace communication tasks. Prior scholarship has established the professionalism of the text of AI-generated workplace writing, but perceptions of message senders who use AI have not previously been explored. To capture professionals’ perceptions of email authorship, a survey was conducted of 1,100 working professionals. In a two (directionality of communication) by four (levels of AI assistance) study, respondents were asked to evaluate the authorship of an AI-assisted message congratulating a team for meeting goals and setting new objectives. The results suggest that, despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance, as respondents in these conditions begin to question the authorship, confidence, caring, sincerity, and ability of senders. These results contribute to ongoing research into the effectiveness of AI-mediated interpersonal workplace communication by suggesting parameters for practical use and directions for future research.