Think and Save the World

Why Certainty Feels Good and Why That Is Dangerous

The Brain as Prediction Machine

Karl Friston's free energy principle and the broader predictive processing framework describe the brain not as a passive receiver of sensory information but as an active prediction generator. The brain constantly generates predictions about what's about to happen — in perception, in cognition, in social situations — and then compares incoming data against those predictions. Errors (surprise, uncertainty, unpredicted information) trigger updating. Accurate predictions trigger something like reward.

This architecture is what makes the brain efficient. You don't consciously process every photon, every sound wave, every social signal from scratch. You have existing models. The models generate expectations. What's unexpected gets flagged and processed; what matches expectations gets passed through quickly.
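
To make the loop concrete, here is a toy sketch in Python. The delta rule below is a deliberately crude stand-in for the brain's actual machinery (the learning rate and values are arbitrary), but it captures the core dynamic: the model moves only in proportion to its surprise.

```python
# Toy sketch: prediction-error-driven updating via a simple delta rule.
# A crude stand-in for predictive processing, not a model of it.

def update(prediction: float, observation: float, learning_rate: float = 0.3) -> float:
    """Revise a prediction after one observation."""
    error = observation - prediction           # "surprise": mismatch with expectation
    return prediction + learning_rate * error  # move only in proportion to the error

prediction = 0.5                               # initial model: no idea either way
for observation in [1.0, 1.0, 1.0, 0.0, 1.0]:
    prediction = update(prediction, observation)
    print(f"saw {observation:.0f} -> now predicting {prediction:.2f}")

# Confirming data (small error) barely changes the model;
# disconfirming data (large error) forces a larger revision.
```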

The reward associated with accurate prediction is real and measurable. Studies using fMRI have shown activation in the ventral striatum — a core part of the dopaminergic reward system — when predictions are confirmed. Certainty, which is the subjective feeling of having a highly confident prediction, activates these systems. This is what produces the pleasure of knowing. It's why explanations feel satisfying. It's why a good conspiracy theory, which explains everything in a unified narrative, feels more compelling than the messy, uncertain reality.

The problem: the reward system doesn't distinguish between accurate predictions and confident ones. You get the reward for certainty regardless of whether the certain belief is true. The feeling of knowing is not a signal of accuracy. It's a signal that your brain's predictive model has landed on an answer with high confidence. That confidence might be well-earned or it might be confabulation. The feeling is the same.

What Certainty Does to Evidence Processing

Research on motivated reasoning (see Ziva Kunda, Peter Ditto, and colleagues) shows that prior certainty about a conclusion shapes how new evidence gets processed. Specifically:

Asymmetric scrutiny: Evidence that confirms an existing belief gets what researchers call "can I believe this?" processing: a low bar, fast acceptance. Evidence that disconfirms an existing belief gets "must I believe this?" processing: a high bar and an active search for reasons to reject it. Evidence of the same quality is treated differently depending on which conclusion it supports.

Selective exposure: People prefer to consume information that confirms their beliefs, partly because confirming information feels better and partly because information that challenges beliefs requires more cognitive work to process. Recommendation algorithms exploit this preference; it is the engine of filter bubbles.

Memory consistency bias: When you're certain about something, you're also more likely to remember past information in ways that support that belief. If your view of someone changes from positive to negative, you start to remember their past behavior differently — things that were once neutral or positive get reinterpreted as warning signs you should have seen. The certainty about the present reaches back in time and rewrites the evidence.

Together, these mechanisms create a cognitive environment in which certain beliefs are largely self-sealing. They shape what information comes in, how it's processed, and how past information is remembered. This is not stupidity or stubbornness. It's architecture.

The Backfire Effect: What the Research Actually Says

The backfire effect — as popularized — claimed that correcting factual misinformation causes people to believe it more strongly. Brendan Nyhan and Jason Reifler's 2010 paper produced this finding in specific domains (political beliefs about WMDs in Iraq, tax cuts' effects on revenue). It spread through social media and journalism as a general law of human psychology.

The problem: subsequent large-scale replications have found the effect to be weak, unreliable, and domain-specific. A 2019 study by Thomas Wood and Ethan Porter tested corrections across 52 issues with more than 10,000 participants and found little evidence of systematic backfire effects. What they found instead was that corrections generally work: people update their factual beliefs when given accurate information. But attitude change lags behind factual updating, because the factual belief was entangled with identity and tribal affiliation.

This distinction matters. The pop version of the backfire effect suggests that correcting people is futile or counterproductive. The more accurate version suggests that factual corrections work but that the relationship between facts and identity is where resistance lives. You can update someone's stated knowledge of a fact and see no change in their behavior or attitude, because the belief was never just about the fact.

The implication for your own mind: watch the entanglement. When a belief is entangled with your identity — when being wrong about it means you were a fool, or that your tribe was wrong, or that the person who originally told you was untrustworthy — it becomes load-bearing in ways that pure factual correction can't reach. Untangling the belief from the identity requires something prior to evidence: a willingness to be the kind of person who can be wrong.

Dunning-Kruger and Its Companions

The Dunning-Kruger effect — the finding that people with low competence in a domain tend to overestimate their competence — is often cited as evidence that incompetent people are irrationally overconfident. The actual Dunning-Kruger research (Kruger and Dunning, 1999) is more interesting and more disturbing. People with low competence overestimate themselves because they lack the metacognitive skills to recognize their own errors. You need to know enough to know what good performance looks like. Without that knowledge, you have no measure against which to judge yourself.

The flip side — less discussed — is that highly competent people tend to slightly underestimate their relative competence, because they assume tasks that are easy for them are easy for others. Neither end of the competence spectrum produces accurate self-assessment.

What this means: certainty and competence don't correlate the way we'd hope. The most certain person in the room is not the most likely to be right. Domain expertise reduces overconfidence in the specific domain but doesn't generalize — an expert economist might be as poorly calibrated as anyone else about geopolitics or relationships.

The Philip Tetlock Work

Philip Tetlock's long-running forecasting research, summarized in Superforecasting, offers the most rigorous evidence for what calibrated thinking actually looks like in practice.

Tetlock tracked the predictions of thousands of people over years, including domain experts, political commentators, and ordinary people. He found that most experts performed barely better than chance on long-range geopolitical predictions. More interestingly, he identified a subset of forecasters — "superforecasters" — who dramatically outperformed both experts and other laypeople, not because they knew more, but because of how they held beliefs.

The superforecasters' characteristics:

- They expressed beliefs in probabilistic terms rather than binary ones (not "this will happen" but "I think there's a 70% chance").
- They updated regularly as new information came in, rather than anchoring on initial predictions.
- They were actively open to being wrong and treated being wrong as information, not as threat.
- They maintained a genuine "what would change my mind?" stance.
- They were less impressed by their own expertise and more curious about disconfirming evidence.

The key finding for this article: the superforecasters were not more certain than others. They were less certain, and more precise about exactly how uncertain they were. They had traded the pleasure of confident conclusions for the accuracy of calibrated ones — and the accuracy was the thing that actually made their thinking valuable.
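
Tetlock's tournaments scored forecasters with the Brier score, which is why hedged, probabilistic beliefs outperform confident assertions on paper as well as in principle. A minimal sketch follows; the forecasts and outcomes here are invented for illustration.

```python
# Minimal Brier score: mean squared distance between the probability you
# stated and what actually happened. Lower is better. Numbers are invented.

def brier(forecasts: list[float], outcomes: list[int]) -> float:
    """Average of (forecast - outcome)^2 over all questions."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes   = [1, 0, 1, 1]            # what actually happened (1 = it occurred)
confident  = [1.0, 1.0, 1.0, 1.0]    # "I know" on every question
calibrated = [0.8, 0.3, 0.7, 0.6]    # hedged, probabilistic beliefs

print(brier(confident, outcomes))    # 0.25  -- one confident miss is costly
print(brier(calibrated, outcomes))   # 0.095 -- less certain, more accurate
```

The scoring rule makes the trade explicit: a single confident miss costs more than consistent, honest hedging.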

Building Tolerance for Not Knowing

The phrase "I don't know" is one of the most intellectually honest things a person can say, and one of the most underused. Social pressure to have opinions, to demonstrate confidence, to not appear uninformed — all of this creates an environment where "I don't know" feels like weakness or incompetence.

It is neither. It is accurate reporting of your epistemic state. And it's the prerequisite for actually finding out.

Practices for building "I don't know" tolerance:

Probability practice: Start expressing beliefs as percentages rather than assertions. "I'm 75% sure" rather than "I know." This forces you to locate your actual confidence level, which is usually lower than the assertion implies, and it makes updating natural — you go from 75% to 85% as evidence accumulates, rather than from "know" to "know more," which is cognitively incoherent. (A worked sketch of one such update follows this list.)

The pre-mortem: Before committing to a conclusion, assume it's wrong. Spend five minutes generating every reason it might be wrong. This is Gary Klein's pre-mortem technique, and it exploits the same imaginative capacity that fuels motivated reasoning, but in the opposite direction. It doesn't eliminate certainty bias but it gives it a structured moment of challenge before the conclusion hardens.

Name your update condition: For any belief that matters, write down: "I would change my mind about this if..." If you can't complete that sentence, the belief is a commitment, not a conclusion. This doesn't mean it's wrong — but it means you're holding it more like an identity than like a hypothesis, and that's worth knowing.

Schedule uncertainty: Reserve time where you're explicitly not supposed to know the answer — exploratory reading, genuine dialogue with people who think differently, creative speculation. This decouples knowing from safety, and safety from certainty.
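
The sketch promised under the probability practice: a single Bayesian update, with hypothetical numbers, showing what moving from 75% to roughly 85% looks like as arithmetic rather than assertion.

```python
# One Bayesian update with hypothetical numbers: a 75% belief meeting one
# piece of moderately supportive evidence (twice as likely if the belief
# is true as if it is false) lands near 86%, not at "know".

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability given a prior and a likelihood ratio."""
    odds = prior / (1 - prior)                 # probability -> odds (0.75 -> 3:1)
    posterior_odds = odds * likelihood_ratio   # evidence multiplies the odds
    return posterior_odds / (1 + posterior_odds)

print(round(bayes_update(0.75, 2.0), 2))       # 0.86
```

Weak evidence (a likelihood ratio near 1) barely moves the number, which is the point: the update is proportional to the evidence, not to how badly you want the conclusion.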

Why This Matters at Scale

Certainty is a political resource. Movements, leaders, and ideologies that project total certainty are exploiting the neurological reward system of their audience. The person who explains everything — who has a complete, confident account of why the world is the way it is and who is responsible — triggers the relief of arriving. The alternative — genuine complexity, partial knowledge, uncertain causal chains — asks people to tolerate discomfort that most have never been trained to tolerate.

This is one of the mechanisms through which simple, authoritarian, conspiratorial worldviews spread. Not because they're more truthful — because they're more certain. And certainty feels like truth, even when it isn't.

Building personal tolerance for not-knowing is not just a cognitive hygiene practice. It is a form of resistance to one of the most reliable vectors of mass manipulation. The person who can sit comfortably with "I don't fully understand this yet" is much harder to radicalize than the person for whom uncertainty is intolerable.

Calibrated confidence is the goal. Not permanent doubt. Not endless hedging. Beliefs held with a strength proportional to the evidence, open to revision, and honest about their own limits. That's what thinking clearly looks like from the inside.
