Artificial intelligence is everywhere. That will not surprise anyone. Large language models such as ChatGPT, Microsoft Copilot, Gemini or Claude have quickly become part of our daily digital landscape. Students use them. Teachers use them. Policymakers use them. And those who don’t sometimes feel a subtle pressure to at least give them a try.
I use them too, quite regularly in fact: not to write the posts on this blog, but to translate them, for example.
But to be honest, I’m starting to feel a bit tired of AI. Let me explain why.
Not because it cannot be useful. On the contrary. AI can genuinely help. In my own case, for example, it can be a practical tool for making blog posts easier to find in search engines.
Still, I notice that some irritations are slowly piling up.
The first issue is recognisability. If you read a lot of AI-generated text, you start to recognise it quite quickly. Sentences often have the same rhythm. Paragraphs follow a predictable structure. The nuance seems to be there, but it often feels as if everything comes from a kind of standard library. I like reading music websites and following popular culture. And yes, AI is clearly used quite a lot there.
The second issue is the small habits that repeat endlessly. Almost every answer ends with a new question or, more recently, with a kind of cliffhanger.
“Would you like me to expand on this?”
“Shall I also explain…?”
“Did you know that…?”
It is understandable. But after the hundredth time, it starts to feel like talking to someone who constantly tries to keep the conversation going. The difference between friends and AI: friends know when it is better to stay silent. And friends do not constantly tell you what you want to hear.
And then there is the last issue: errors. I deliberately avoid the word "hallucinations" here and simply call them what they are. AI can formulate things impressively well. But that does not mean the content is always correct. Sometimes something is slightly off. A source that does not exist. A reasoning step that does not quite work. A study that is described incorrectly.
If you know the topic well, those mistakes often jump out immediately. If you do not, they are much harder to spot. I occasionally run a simple test. I ask AI something about a topic or a book I know well. And tools like ChatGPT or Grok still fail that test more often than one might expect. I also suspect this does not happen only with the topics I check.
Some people respond, “But it is getting better.” Perhaps. But that is honestly not my personal experience.
I suspect this is part of the core of my AI fatigue. AI often works best for people who do not really need it. If you already have enough background knowledge, you can filter out the mistakes and keep what is useful. If you do not have that knowledge, it is much easier to trust the answer as it is. And at the same time, those without that prior knowledge may be the ones who rely on it the most.
None of this means AI will disappear. Search engines did not disappear either when it became clear that the first results could be misleading or incorrect. But it does suggest that the first phase of unconditional fascination may slowly be giving way to something else: familiarity.
And in my case, a little irritation.
Perhaps that is actually a good sign. New technologies often start with hype. Then comes the phase where we begin to see their limitations. Only after that do we arrive at a more realistic way of using them.