Why We Trash Studies We Don’t Agree With

Science isn’t meant to please us. It’s meant to explain how the world works, even when those insights make us uncomfortable. But what happens when scientific findings directly contradict our deepest beliefs? It turns out we rarely remain honest in our judgments: we simply don’t trust science we don’t like. In a large-scale study (N = 7,040), Cory Clark and colleagues showed how moral outrage undermines the way we reason about science.

The researchers repeatedly presented the same descriptions of scientific research, with one twist: in one version the political left performed better, in the other the right; in one version men proved to be better mentors, in the other women. The result? When the conclusion conflicted with a participant’s beliefs, their assessment of the research changed immediately. The methods were suddenly “unclear,” the research question “unanswerable” or even “morally inappropriate,” and the researchers “incompetent” or “manipulative.” The very same study was considered more credible when its conclusion was pleasing.

The authors speak of cognitive chicanery: a battery of mental defence mechanisms we use to dismiss undesirable science. These strategies are strikingly recognisable:

  • Motivated confusion: claiming that the study is “unclear” or “too complex,” when it’s actually the conclusion that’s objectionable. Not understanding it becomes an excuse for not having to think.
  • Motivated postmodernism: questioning the research question itself—“You can’t measure this, can you?”, “This isn’t something you should want to investigate with data”, as if science should restrain itself out of moral politeness.
  • Anecdote elevation: prioritising personal stories or lived experience over systematic data, especially when the data doesn’t point in the desired direction.
  • Contradictory reasoning: claiming in the same breath that the study is both “too ridiculous” and “old news,” or that the researchers are both stupid and cunning manipulators. Anything is fine, as long as it contradicts the results.
  • Rhetorical tricks: undermining arguments by exaggerating them (“so you’re saying all women are bad mentors?”), putting them in a bad light (“this is pseudoscience”), or getting personal (“these researchers are just sexists”).
  • Bureaucratic sabotage: advocating “more consultation,” “a commission,” or “better wording”, not out of genuine concern, but to delay or obstruct dissemination of the findings. (These strategies echo the CIA’s old sabotage manual.)

And then there are the classics: “the sample is too small,” “you need to know the context,” “the study is methodologically weak” — even though the same arguments are never applied to studies whose findings do feel right.

What makes this study so interesting is that it’s not about extremists, but about average citizens—and yes, even scientists. Morality proves to be a powerful filter, even for people who think they’re rational. The lesson? Anyone who takes the debate about science and policy seriously should also be mindful of the psychological reflexes that guide us. Because sometimes it’s not the research that’s flawed, but our willingness to engage in dialogue with it.

Abstract of the research:

We document a mutually reinforcing set of belief-system defenses—“cognitive chicanery”—that transform “morally wrong” scientific claims into “empirically wrong” claims. Five experiments (4 preregistered, N=7,040) show that when participants read identical abstracts that varied only in the sociomoral desirability of the conclusions, morally offended participants were likelier to (1) dismiss the writing as incomprehensible (motivated confusion); (2) deny the empirical status of the research question (motivated postmodernism), (3) endorse claims inspired by Schopenhauer’s Stratagems for always being right and the CIA’s strategies for citizen-saboteurs, and (4) endorse a set of contradictory complaints, including that sample sizes are too small and that anecdotes are more informative than data, that the researchers are both unintelligent and crafty manipulators, and that the findings are both preposterous and old news. These patterns are consistent with motivated cognition, in which individuals seize on easy strategies for neutralizing disturbing knowledge claims, minimizing the need to update beliefs. All strategies were activated at once, in a sort of “belief-system overkill,” that ensures avoidance of unfortunate epistemic discoveries. Future research should expand on this set of strategies and explore how their deployment may undermine the pursuit of knowledge.

One thought on “Why We Trash Studies We Don’t Agree With”

  1. I’m afraid we’re all routinely using comparable “cognitive chicanery” for other sources of information: media, friends and generative AI. So the conclusion could be “Because sometimes it’s not the information that’s flawed, but our willingness to engage in dialogue with it”.
