Existential AI risks? Real harms today still matter more.

There’s a lot of noise about AI wiping out humanity someday. But what about the very real, very urgent problems AI is causing right now? Bias, misinformation, job disruption—these aren’t future threats. They’re already here.

In a new study published in PNAS, Emma Hoes and Fabrizio Gilardi (University of Zurich) tackled an important question: Does all the hype around AI’s existential risks make people less worried about AI’s immediate harms?

They ran three large online experiments with over 10,000 participants in total, exposing people to different news narratives about AI: some focused on catastrophic risks, some on immediate societal impacts, and some on AI’s benefits.

The result? Even after reading about apocalyptic AI scenarios, people’s concern for immediate harms stayed strong. In fact, concerns like bias and misinformation consistently ranked higher than worries about existential catastrophe.

So, despite fears that “doomsday talk” might pull attention away from pressing issues, it doesn’t seem to work that way, at least not for the general public.

Their takeaway is clear: it’s not either/or. We can (and should) stay alert to the massive challenges AI creates today while thinking carefully about long-term risks.

If you’re interested, the full paper is open access and packed with thoughtful analysis:
👉 Existential risk narratives about AI do not distract from its immediate harms

Curious: How do you think we should balance immediate action and future foresight when it comes to AI?
