Counterfactual Thinking: What If The Other Choice Had Been Made
The Brain's Editing Suite
The capacity to imagine alternatives to what actually happened is one of the more unusual features of human cognition. Most animals respond to outcomes. Humans respond to outcomes relative to alternatives they've constructed in their heads. That comparison — actual vs. imagined — is where a huge percentage of our emotional life takes place.
It's not pathological. It's structural. The same cognitive machinery that lets you feel regret about the past also lets you plan for the future. Counterfactual thinking and prospective thinking (imagining future scenarios) share the same mental infrastructure. You can't surgically remove one without damaging the other.
What you can do is get better at directing it.
Upward vs. Downward: The Fork in the Road
The foundational research here comes from Neal Roese, whose work in the 1990s and 2000s mapped the terrain of counterfactual thought. His central finding: the direction of a counterfactual determines its emotional and behavioral effect.
Upward counterfactuals mutate the past toward better outcomes. If only I'd studied more, I would have passed. These feel bad — they highlight the gap between what happened and what could have happened. But that sting serves a function: it motivates change. People who generate upward counterfactuals after failures tend to perform better in subsequent attempts.
Downward counterfactuals mutate the past toward worse outcomes. At least I didn't total the car. These feel better — they reframe the actual outcome as relatively fortunate. The emotional payoff is relief and gratitude. The behavioral payoff is often nothing, or worse, complacency.
This creates a practical problem: the emotional incentives push you toward downward counterfactuals (they feel better) while the learning incentives favor upward ones (they're more useful). Left to its own devices, your brain will often choose comfort over insight.
The skilled use of counterfactual thinking means overriding that default — not to suffer more, but to learn more.
The Mutation Principle: What Gets Changed
Researchers have found that counterfactual thinking doesn't alter past events randomly. It follows specific patterns — and understanding them reveals where you're being systematically biased.
Exceptional actions get mutated more than routine ones. If you took a new route to work and got into an accident, you'll think if only I'd taken my usual route much more readily than if you'd taken your normal route and had the same accident. This is the exceptionality effect — we hold unusual choices to higher account. The problem: this makes us systematically more likely to blame novel decisions, regardless of whether they were the actual causal factor.
The last action before an outcome gets blamed most. This is temporal proximity bias. If you made ten decisions that led to a bad outcome, the final one will receive disproportionate counterfactual attention, even if earlier decisions were more causally significant.
Actions get mutated more than inactions, but inactions are regretted more in the long run. In the short term, you regret what you did (an action that went wrong). Over years, you regret what you didn't do (the path not taken, the risk not taken). Daniel Kahneman and Amos Tversky documented the short-term side of this asymmetry; Thomas Gilovich and Victoria Medvec's work on the temporal pattern of regret mapped the long-term reversal. Long-term regret is dominated by sins of omission. This means most people's counterfactual thinking is chronically miscalibrated — overweighting recent actions, underweighting the paths they never tried.
Near-Misses: The Most Information-Dense Event You're Ignoring
A near-miss is an outcome that almost went a different way. You almost got the job. The accident almost happened. You almost made the sale.
Near-misses generate the most vivid counterfactual thinking because the alternative outcome is psychologically close. And that vividness carries enormous motivational power — power that often gets directed wrong.
In aviation safety, the near-miss is treated as nearly as informative as an actual crash. The sequence of events that almost caused catastrophe reveals the vulnerabilities in a system just as clearly as the sequence that did. The industry built entire reporting systems around near-misses (most notably the Aviation Safety Reporting System, funded by the FAA and administered by NASA as a neutral third party) precisely because they're information-rich and survivable.
In personal life, the near-miss gets filed under "lucky" and dropped. The near-bankruptcy. The relationship that almost fell apart. The health scare that turned out to be nothing. These events contain some of the most useful information about your actual risk exposure — and most people don't extract it.
The question to ask after every near-miss: What sequence of decisions created the conditions for this to almost happen? Not just the final moment, but the upstream choices.
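If it helps to make the extraction concrete, here is a minimal sketch of a structured near-miss log in Python. The field names and the example entry are illustrative assumptions, not any standard format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class NearMiss:
    """One near-miss event, forcing attention past the final trigger
    and onto the upstream decisions that created the exposure.
    (Illustrative structure, not a standard format.)"""
    what_almost_happened: str      # the alternative outcome that nearly occurred
    final_trigger: str             # the last-moment event (easy to over-blame)
    upstream_decisions: list[str]  # earlier choices that set up the conditions
    exposure_still_present: bool   # does the same vulnerability still exist?
    when: date = field(default_factory=date.today)

# Hypothetical entry: the final trigger draws the attention, but the
# upstream decisions carry most of the usable information.
entry = NearMiss(
    what_almost_happened="missed payroll by two days",
    final_trigger="largest client paid an invoice three weeks late",
    upstream_decisions=[
        "let one client grow to 60% of revenue",
        "kept under one month of cash in reserve",
        "signed a contract with no late-payment penalty",
    ],
    exposure_still_present=True,
)

for d in entry.upstream_decisions:
    print("upstream:", d)
```

The shape of the record is the point: one slot for the final moment, many slots for the upstream choices, and an explicit flag for whether the exposure is still live.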
The Olympic Medalist Problem
Victoria Medvec, Scott Madey, and Thomas Gilovich's 1995 study — one of the most cited in this field — analyzed the facial expressions and interviews of Olympic medalists. Bronze medalists, on average, expressed more happiness and satisfaction than silver medalists. The explanation: silver medalists were naturally comparing upward (so close to gold), while bronze medalists were comparing downward (at least I made the podium).
This is the counterfactual reference point in action. Your emotional response to any outcome depends heavily on which comparison scenario your mind constructs automatically. And that construction is not entirely under your control — it's shaped by the structure of the situation. The person who came second, by definition, is standing next to the person who came first. The reference point is forced on them.
In less structured situations, though, you have more say over which counterfactual gets constructed. The habit of constructing downward counterfactuals after every setback — "it could have been worse" — is emotionally protective but cognitively limiting. Use it deliberately, not reflexively.
Historical Counterfactuals: A Method Worth Borrowing
Professional historians have wrestled with counterfactual reasoning for decades — and the debate is illuminating.
One camp says counterfactuals are unscientific and shouldn't be in the discipline at all. History deals with what happened, not what might have happened. The other camp, represented by scholars like Niall Ferguson and Robert Cowley, argues that counterfactual analysis is unavoidable — every causal claim about history implies a counterfactual. To say "the assassination of Franz Ferdinand caused World War I" is to implicitly claim that without the assassination, the war would not (or would not then) have occurred.
What distinguishes good historical counterfactual analysis from fantasy:
It makes minimal changes. Good historical counterfactuals alter as little as possible from the actual record and then trace the realistic consequences. Bad ones change everything and project modern assumptions onto the altered past.
It focuses on decision nodes. The most informative historical counterfactuals identify the actual moments when outcomes were most contingent — where the scale was closest to balanced. These are the high-leverage points.
It respects structural constraints. Some outcomes were nearly overdetermined by structural forces (economic, demographic, technological). Good counterfactual analysis distinguishes where contingency actually had room to operate.
Apply this discipline to personal decisions. You're not asking "what if everything were different?" You're asking: "What was the minimal change that would have produced a meaningfully different outcome? And was that change within my control?"
That's the question that yields actionable learning.
The Practical Framework: Running Your Own After-Action Review
Military units do After-Action Reviews (AARs) after every significant operation. The structure is simple: What was supposed to happen? What actually happened? Why was there a difference? What do we do differently next time?
The AAR is a disciplined counterfactual process. Here's how to adapt it for personal decisions, with a rough code sketch after the steps:
Step 1: Reconstruct the decision environment. What did you know at the time? What were the constraints? What were your goals? This step matters because counterfactuals are most useful when they're realistic — alternatives available to the you who was there, not the you who now knows how it ended.
Step 2: Identify the key decision points. When did the trajectory get set? Usually there are two or three moments that mattered more than the rest. Find those.
Step 3: Generate upward counterfactuals for each. What different choice at that point would have produced a better outcome? Be specific. Not "I should have been smarter" but "I should have called the client before the proposal went out."
Step 4: Check for controllability. For each alternative you generated, was it actually available to you given your knowledge and capacity at the time? If not, drop it. If yes, keep it.
Step 5: Extract the rule. What general principle does this counterfactual suggest? "Check assumptions before finalizing commitments" is more useful than a one-off lesson tied to a single event.
Step 6: Time-cap the process. Thirty minutes is usually enough. After that, you're no longer learning — you're ruminating.
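Here is that rough sketch, assuming Python, with every name in it invented for illustration. It only mechanizes the filtering steps; the thinking in Steps 1 through 3 and the rule-writing in Step 5 stay manual:

```python
from dataclasses import dataclass

@dataclass
class Counterfactual:
    decision_point: str  # Step 2: a moment where the trajectory got set
    alternative: str     # Step 3: the specific better choice, not "be smarter"
    was_available: bool  # Step 4: knowable and doable at the time?

def run_aar(expected: str, actual: str,
            counterfactuals: list[Counterfactual],
            minutes_spent: int, time_cap: int = 30) -> list[str]:
    """Apply the controllability filter (Step 4) and the time cap (Step 6),
    then hand back prompts for rule extraction (Step 5)."""
    # Step 6: past the cap, you're ruminating, not reviewing.
    if minutes_spent > time_cap:
        return ["stop: time cap reached; switch to a different intervention"]

    prompts = [f"gap: expected '{expected}', got '{actual}'"]
    # Step 4: keep only alternatives actually available at the time.
    for cf in (c for c in counterfactuals if c.was_available):
        # Step 5: each surviving counterfactual should become a general rule.
        prompts.append(
            f"at '{cf.decision_point}', '{cf.alternative}' was available: "
            "what rule generalizes this?"
        )
    return prompts

# Hypothetical usage, with Steps 1-3 already done on paper:
for line in run_aar(
    expected="proposal accepted",
    actual="client rejected the proposal",
    counterfactuals=[
        Counterfactual("final week", "called the client before sending", True),
        Counterfactual("kickoff", "read the client's mind", False),  # dropped by Step 4
    ],
    minutes_spent=20,
):
    print(line)
```

The design choice worth copying is the order of the filters: alternatives that weren't available to the you-at-the-time get dropped before they can become rules, and the time cap fires before anything else runs.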
When Counterfactual Thinking Actually Hurts
There's a real failure mode here: the endless loop. Some people — particularly those prone to anxiety and depression — get caught in counterfactual thinking that generates suffering without generating insight. The thoughts run on repeat, each pass through the same regret producing diminishing information and accumulating emotional cost.
The diagnostic question: Is this producing new information? If each cycle of "what if I'd done X" is giving you genuinely new insight, keep going. If you're covering the same ground for the fifteenth time, you're not thinking — you're suffering, and you need a different intervention (movement, conversation, distraction, sleep) rather than more reflection.
Counterfactual thinking is a tool. Like any tool, it has a right use and a wrong use. The right use is bounded, specific, directional, and aimed at extracting transferable lessons. The wrong use is unbounded, vague, backward-looking, and aimed unconsciously at punishment.
The World-Stakes Version
Scale this up and you see why counterfactual thinking matters beyond individual decision-making.
Organizations that can't do disciplined after-action counterfactual analysis repeat their failures. They're so attached to the narrative of what they did and why it made sense that they can't construct the alternative scenarios that would reveal where they went wrong. Corporate collapses, government policy failures, military disasters — most of them have a common feature: the people involved were not running honest counterfactual analysis on their decisions.
The opposite of this is organizational cultures that actively construct alternative scenarios — pre-mortems, red teams, war games — precisely because they know forward-looking planning is systematically overconfident. These organizations are doing prospective counterfactual thinking: imagining the alternative futures before they happen, extracting the lessons in advance.
The person who masters counterfactual thinking — retrospective and prospective — is operating in a different league. They're not living in the past. They're using the past as raw material for sharper future decisions.
That's the actual skill. Not what-if as regret. What-if as education.