Philosophical Razors: Occam's, Hanlon's, Hitchens's, And Others
What Razors Actually Are
A philosophical razor is a principle of parsimony applied to a specific type of judgment. The word "razor" is a deliberate metaphor — it cuts away unnecessary material to leave the clean edge of a simpler, more defensible position.
They're epistemological tools. Not logical proofs, not empirical laws — heuristics. And like all heuristics, their value lies entirely in knowing when to apply them and when the situation falls outside their domain.
There's a temptation to treat razors as trump cards. People do this with Occam's Razor especially — wielding it as if "simpler" automatically means "true," which is not what the principle says. The razor is a prior, a tiebreaker, a default preference. It doesn't determine truth; it guides your initial hypothesis.
Occam's Razor — The Original
William of Ockham (c. 1287-1347) was a Franciscan friar and logician arguing against unnecessarily elaborated metaphysical entities in scholastic philosophy. The principle that's now attributed to him — entia non sunt multiplicanda praeter necessitatem (entities are not to be multiplied beyond necessity) — is actually a later reformulation, but the spirit is his.
The scientific formulation is: among competing hypotheses that explain the same evidence equally well, prefer the one with the fewest assumptions. This became foundational to scientific method — not because simpler is always right, but because simpler is cheaper to test first and harder to accidentally make unfalsifiable through added complexity.
There's deep mathematical backing for this intuition. Bayesian probability theory formalizes it: simpler hypotheses have higher prior probability because they make fewer assumptions, each of which might be wrong. If two hypotheses are equally consistent with the data, the simpler one needs fewer things to go right in order to be true.
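The "fewer things to go right" arithmetic can be sketched as a toy calculation. The 0.8 per-assumption probability below is an illustrative assumption, not anything from probability theory itself:

```python
# Toy sketch of the Bayesian reading of Occam's Razor: if a hypothesis
# rests on n independent assumptions, each holding with probability p,
# its prior probability of being entirely correct shrinks with n.
def prior(n_assumptions: int, p_each: float = 0.8) -> float:
    """Prior that all n independent assumptions hold simultaneously."""
    return p_each ** n_assumptions

# A 2-assumption hypothesis starts ahead of a 5-assumption rival,
# before any evidence is considered.
print(round(prior(2), 2))  # 0.64
print(round(prior(5), 5))  # 0.32768
```

Evidence can still overturn this head start; the point is only that the complex hypothesis begins the race behind.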
Where it fails:
- Complex phenomena sometimes have complex causes. The immune system is not well explained by simple models.
- "Simpler" can be culturally biased: what seems simple to you depends on your background knowledge and assumptions.
- In social and political analysis, the simplest explanation is often incompetence rather than coordination. This is sometimes right (Hanlon's Razor) and sometimes a dangerous underestimate of deliberate coordination.
- The history of science is full of cases where the "simpler" theory was wrong: phlogiston theory was arguably simpler than oxidation chemistry in its day.
Best applied when: You're choosing between competing explanations and they have roughly equal evidential support. Use it to set your prior, not to settle the debate.
Hanlon's Razor — The Charity Principle
"Never attribute to malice that which is adequately explained by stupidity" is often attributed to Robert J. Hanlon, who submitted it to a Murphy's Law joke book in 1980. Variants appear in Goethe and others before that.
The deeper principle here is related to Occam: malice is a complex hypothesis. It requires that someone had harmful intent, the capacity to execute it, and chose to. Incompetence or carelessness requires only that someone didn't know better or didn't think carefully enough. Since incompetence and carelessness are vastly more common in human affairs than deliberate malice, they should be the default explanation.
More precisely, Hanlon's Razor is an application of base rates. The base rate of malicious actors is lower than the base rate of careless or incompetent ones. So conditional on something going wrong, your posterior should favor carelessness.
This is not moral naivety. It's accurate probability estimation.
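That base-rate argument can be made concrete with Bayes' rule. Every number below is an illustrative assumption (the text gives no figures); the structure, not the values, is the point:

```python
def p_malice_given_harm(base_malice=0.02, base_careless=0.30,
                        harm_if_malice=0.9, harm_if_careless=0.2):
    """P(malice | harm) when malice and carelessness are the two
    candidate causes. All inputs are made-up illustrative rates."""
    malice = base_malice * harm_if_malice        # joint: malicious AND caused harm
    careless = base_careless * harm_if_careless  # joint: careless AND caused harm
    return malice / (malice + careless)

# Even though a malicious actor is far more likely to cause harm,
# the low base rate keeps the posterior on malice well under 50%.
print(round(p_malice_given_harm(), 2))  # 0.23
```

This is the razor's whole content: the likelihood favors malice, but the prior favors carelessness, and with these numbers the prior wins.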
Where it fails:
- Institutions can be malicious even when no individual in them is. Systems produce outcomes that none of the individuals intended, which is neither malice nor stupidity; it's emergent harm.
- Repeated, targeted harm is evidence that should move malice up in your probability estimates. The first time, assume carelessness. By the fifth time with the same target, you should update.
- Hanlon's Razor can become a way to protect powerful people from accountability. "They didn't mean to" does not mean "they should not be held responsible."
- Some actors genuinely are malicious. Treating all harm as incompetence makes you exploitable.
Best applied when: You're first interpreting a harmful event or behavior and haven't yet had time to gather more evidence. It prevents knee-jerk escalation and paranoia. It does not excuse you from eventually looking at the pattern.
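The "update on repetition" point is just sequential Bayesian updating. With the same made-up likelihoods as before, a Hanlon-style low starting estimate flips decisively after a handful of repeated incidents against the same target:

```python
def update(p_malice, harm_if_malice=0.9, harm_if_careless=0.2):
    """One Bayes update on the malice hypothesis after another harmful
    incident hits the same target (likelihoods are assumed values)."""
    num = p_malice * harm_if_malice
    return num / (num + (1 - p_malice) * harm_if_careless)

p = 0.05  # Hanlon's default: malice is initially unlikely
for incident in range(1, 6):
    p = update(p)
    print(incident, round(p, 2))
# prints:
# 1 0.19
# 2 0.52
# 3 0.83
# 4 0.96
# 5 0.99
```

The razor sets the starting point at 0.05; the evidence does the rest. Refusing to run the updates is the failure mode the bullet list warns about.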
Hitchens's Razor — The Burden of Proof
"What can be asserted without evidence can be dismissed without evidence." Christopher Hitchens used this repeatedly in debates about religion and the supernatural, but the principle applies broadly to any epistemic claim.
The philosophical grounding here is the principle of burden of proof — onus probandi. In logic and rhetoric, the burden of proof lies with the person making a positive claim. You don't have to disprove that there's a dragon in your garage; your neighbor has to prove there is one.
This matters practically because debate is often structured to exploit this asymmetry. Someone makes a claim without evidence, waits for you to rebut it, and then claims your silence or inability to disprove is support for their position. This is a rhetorical trick, not an argument. Hitchens's Razor names the trick and prevents you from accepting its premise.
The corollary is that you should hold your own assertions to the same standard. If you're making claims, you need evidence, not just assertion. The razor cuts both directions.
Where it fails:
- Some important claims are difficult to evidence formally but still carry epistemic weight. A doctor who tells you "based on clinical experience, this treatment doesn't work well in your case" doesn't have a randomized controlled trial for every judgment call. You shouldn't dismiss them.
- The razor can be used to dismiss claims from people with direct experience who lack formal documentation. People who report workplace discrimination, abuse, or systemic harm often can't produce formal evidence. "You have no evidence" is sometimes exactly the power dynamic being complained about.
- High-stakes asymmetry matters. If the claim, if true, would be catastrophic and the cost of precaution is low, you may be justified in taking it seriously before the evidence arrives. Pascal's-wager logic applies in practical risk management.
Best applied when: Someone is demanding you rebut an unfounded assertion, or when you're allocating cognitive resources to which claims deserve serious engagement. Save your intellectual bandwidth for claims with evidential support.
Newton's Flaming Laser Sword
Named in a 2004 essay by philosopher Mike Alder, who attributed the sentiment (if not the name) to Newton: "If it cannot be settled by experiment, it is not worth debating." Alder's version is sharper: questions that are empirically undecidable are not questions — they're verbal games.
This is the scientific positivist position pushed to its limit. It's a useful corrective against endless philosophical navel-gazing that produces no traction on reality. It explains why scientists often have low patience for certain types of philosophical argument — not out of ignorance, but out of trained preference for tractable problems.
Where it fails:
- Ethics cannot be settled by experiment, but ethical questions matter enormously.
- Aesthetics cannot be resolved by experiment, but questions of beauty and meaning are real.
- Some of the most important questions humans face, about value, purpose, and justice, are empirically undecidable. Ruling them out of court doesn't make them less important; it just means you've decided to ignore them.
Best applied when: You're evaluating whether a debate is productive or is just semantic sparring. If the participants genuinely cannot agree on what evidence would settle the question, Newton's Flaming Laser Sword suggests something has gone wrong in how the question is being posed.
The Duck Test
"If it walks like a duck and quacks like a duck, it probably is a duck." This principle encourages pattern recognition over elaborate alternative hypotheses. When all observable signs point to one conclusion, trust them before inventing a more exotic alternative.
This is common sense formalized. In practical reasoning, over-sophistication is often a failure mode — constructing elaborate alternative explanations to avoid acknowledging what the evidence suggests. The Duck Test says: start with the obvious.
Where it fails:
- Mimicry exists. Some things look like ducks on purpose. Fraud, manipulation, and deception all work by creating duck-like surface appearances.
- Pattern recognition is fast but sometimes wrong. You can match a pattern that isn't there, especially when you're looking for confirming evidence of a hypothesis you already hold.
- Category errors: the test assumes you have the right categories. A platypus is not a duck, and early European naturalists thought the first reports of it were fraudulent because it matched no clean category they had.
Best applied when: You're trying to avoid motivated overthinking — when every indicator points one direction and you're tempted to invent reasons not to believe it.
Using Razors Without Being Captured by Them
The failure mode shared by all razors is mechanical application — treating a heuristic like a rule. Heuristics work because they're right often enough. Rules are expected to work always. The moment you treat a razor as a rule, you stop thinking and start reaching for a shortcut.
The skill is knowing which razor applies when. Some practical guidelines:
Stack them. A hypothesis that survives Occam's Razor (simplest explanation), Hanlon's Razor (not malice), and Hitchens's Razor (evidence-supported) is in much better shape than one that survives only one. Use them in combination.
Know the adversarial uses. Each razor has a manipulation version. "You have no evidence" (misused Hitchens) is used to dismiss legitimate testimony. "It's probably incompetence" (misused Hanlon) is used to protect bad actors. "You're overcomplicating it" (misused Occam) is used to dismiss legitimate complexity. Watch for razors being used to cut conversation short rather than to cut toward clarity.
Apply them to your own thinking first. Most people reach for razors as weapons against others' arguments. The more honest use is directed inward: Am I making claims without evidence? Am I assuming malice when carelessness would explain it? Am I preferring the more complex hypothesis because the simpler one is uncomfortable?
Remember that razors lower complexity — they don't eliminate uncertainty. The simpler hypothesis is not necessarily true. The first explanation isn't the final explanation. The razor just tells you where to start looking.
The goal is faster, cleaner thinking that doesn't overload itself with unnecessary entities, paranoid hypotheses, or unfounded claims. That goal is right. Razors help. But they help best when you're the one holding them, not when they're holding you.