Generative AI and critical thinking: less effort, less sharpness?

Generative AI tools like ChatGPT and Copilot often make our work easier. That’s convenient, but what does it mean for our ability to maintain critical thinking? A new study by Hao-Ping Lee and colleagues, surveying 319 knowledge workers, offers some insight.

The researchers asked participants to provide concrete examples of tasks using GenAI (936 in total) and to describe the extent to which they employed critical thinking. “Critical thinking” was defined broadly: from clearly formulating goals and prompts, to fact-checking, adapting AI texts to context, and integrating AI output into larger work.

The picture is mixed. On the one hand, many knowledge workers see themselves thinking critically, especially when they:

  • are confident in their own abilities,
  • trust their ability to evaluate AI output,
  • or habitually reflect on their work.

On the other hand, the greater the trust in AI, the less critical thinking people report. And in most cases, participants found that GenAI actually made the required thinking less strenuous—sometimes because AI supported them, but sometimes because they simply didn’t think as deeply.

It’s also interesting how critical thinking is shifting. Instead of going into creating content ourselves, our energy increasingly goes into assessing and integrating AI output. This still requires sharpness, but it’s a different kind of cognitive effort. The risk is that, especially with routine tasks or low-stakes work, the critical layer slowly erodes.

The study’s strength lies in the fact that it isn’t a laboratory experiment, but a broad, practice-oriented survey of knowledge workers across various sectors. It clearly maps when and why people think critically while using AI, and paints a rich picture of the shift in cognitive effort, from creating to evaluating. This is valuable for anyone interested in how AI impacts daily work.

That said, this remains self-reported data: people describe how they think they work, but that doesn’t necessarily correspond to their actual behaviour. There’s no objective measurement of critical thinking or of the quality of the final result. Moreover, the sample isn’t representative—it’s made up of Prolific users who already use AI regularly. And “less effort” can mean both efficiency and loss of depth—that distinction isn’t made clearly here.

This study shows that GenAI doesn’t necessarily make us dumber, but that the risk of weakening is real, especially if we trust AI blindly. AI tool designers could encourage users to reflect more, for example, through targeted questions, verification tasks, or alternative perspectives. And what about us? Perhaps we should occasionally ask ourselves the simple question: “Have I checked this, or did I just accept it because it sounded good?”

Abstract of the paper:

The rise of Generative AI (GenAI) in knowledge workflows raises questions about its impact on critical thinking skills and practices. We survey 319 knowledge workers to investigate 1) when and how they perceive the enaction of critical thinking when using GenAI, and 2) when and why GenAI affects their effort to do so. Participants shared 936 first-hand examples of using GenAI in work tasks. Quantitatively, when considering both task- and user-specific factors, a user’s task-specific self-confidence and confidence in GenAI are predictive of whether critical thinking is enacted and the effort of doing so in GenAI-assisted tasks. Specifically, higher confidence in GenAI is associated with less critical thinking, while higher self-confidence is associated with more critical thinking. Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship. Our insights reveal new design challenges and opportunities for developing GenAI tools for knowledge work.
