Why Social Media Make Us More Cynical Than Necessary

Today, social media have a fairly bad reputation. On Twitter (I persist), people speak of Bluecry instead of Bluesky. On Bluesky, everyone on Twitter is suspected of being right-wing. And Facebook? Who is even still there? And LinkedIn? Surely just self-promotion.

That intuition is understandable. Anyone who scrolls through the comment sections under news articles, where newspapers still allow them, will encounter hate, disinformation and gratuitous cynicism. At times, it feels as if this behaviour has become normal.

A recent article in PNAS Nexus offers an interesting correction, perhaps better described as a nuance. It does not claim that social media are harmless; you will see shortly that they are not. Instead, it shows that we systematically misjudge who is responsible for harmful behaviour online, and how many people are involved.

Angela Y. Lee and colleagues asked Americans how many users they believed post toxic comments or share false news. The answers were remarkably consistent. Respondents estimated that around 40 to 50 percent of users engage in such behaviour at least occasionally.
Pause for a moment before reading on. What percentage would you guess?


In reality, platform-level data tell a very different story. In most cases, it is a small minority, roughly 3 to 8 percent, that is responsible for the majority of this content. Perhaps you were close. Perhaps not.

This does not mean that the volume of toxic content is small. Quite the opposite. That small group is highly active and produces a disproportionate share of the cynicism and hostility we see. This is precisely where the confusion arises. We see a lot of problematic content and conclude that many people are problematic. Visibility is mistaken for representativeness.
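A quick back-of-the-envelope calculation shows how this illusion works. The numbers below are illustrative assumptions, not figures from the study: even if only 5 percent of users behave toxically, a high posting rate lets them produce more than half of what everyone sees.

```python
# Hypothetical illustration: a small, highly active minority can
# dominate the visible content. All numbers here are assumptions
# for the sake of the arithmetic, not data from the PNAS Nexus study.
toxic_users = 5        # 5 users out of a 100-user community
ordinary_users = 95

posts_per_toxic_user = 40     # assumed: the loud minority posts often
posts_per_ordinary_user = 2   # assumed: most people post rarely

toxic_posts = toxic_users * posts_per_toxic_user          # 5 * 40 = 200
ordinary_posts = ordinary_users * posts_per_ordinary_user # 95 * 2 = 190

share_toxic = toxic_posts / (toxic_posts + ordinary_posts)
print(f"{toxic_users}% of users produce {share_toxic:.0%} of the content")
```

Scrolling a feed, you sample posts, not people, so the 51 percent share of content is what registers, not the 5 percent share of users.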

This is a classic example of availability bias. We overestimate how common something is because examples of it come easily to mind. The same bias explains why people are more afraid of plane crashes than of what can go wrong on the drive to the airport.

What is interesting is that people are not particularly wrong about how toxic the content is. They are generally quite good at recognising what counts as harmful or boundary-crossing. The error comes in the next step. We attribute that behaviour to “most others”. And that mistake has consequences.

Those who overestimate how many people behave toxically online feel more negative, become more cynical about society, and are more inclined to believe that things are morally deteriorating. They also tend to think others actually want to see this kind of content, while most users do not. The authors describe this as a form of pluralistic ignorance. We underestimate how many others share our rejection of harmful behaviour. Perhaps Rutger Bregman is right after all when he says that most people are basically decent.

There is also a more hopeful finding. A short, factual correction already makes a difference. When participants were explicitly told that social media tend to amplify the voices of a loud minority, their pessimism about moral decline decreased, and their perception of what others value became more realistic. This does not solve everything. It did not, for instance, change deeper attitudes such as general trust in human nature. But it does show how fragile some of these beliefs are.

It is important to emphasise this, especially since I myself nearly fell into this trap after reading the press release. This research does not claim that social media “are not that bad”, or that harmful behaviour is not a serious problem. The harm remains real for those targeted and for public debate as a whole. What the research does show is that we add an extra layer of damage by drawing the wrong conclusions about each other. We start to think that everyone is like this when they are not.

Social media, then, do not offer a cross-section of society. They present an enlarged mirror of its loudest edge. The problem is not only what appears in that mirror, but what we infer from it.
