Moral Psychology and the Art of Not Believing Your Own Results

Over the past few decades, moral psychology has staged a quiet coup against one of our most cherished fantasies: that human beings are, at bottom, rational moral agents. This is not a fringe claim. It is not a Twitter take. It is the mainstream finding of an entire research programme spanning psychology, cognitive science, linguistics, and neuroscience.

We do not reason our way to moral conclusions. We feel our way there. Instantly. Automatically. And only afterwards do we construct reasons that make the judgment sound respectable.

Audio: NotebookLM summary podcast of this topic.

This is not controversial anymore. It is replicated, taught, and celebrated. And yet, if you read the most influential books in this literature, something strange happens. The diagnosis is devastating. The prescription is reassuring.

I’ve just published a long-form video walking through five canonical books in moral psychology that all uncover the same structural problem, and then quietly refuse to live with the implications.

What follows is a brief guide to the argument.

The shared discovery

Across the literature, the same conclusions keep reappearing:

  • Moral judgement is intuitive, not deliberative
  • Reasoning is largely post-hoc
  • Emotion is not noise but signal
  • Framing and metaphor shape what even counts as a moral fact
  • Group identity and tribal affiliation dominate moral perception

In other words: the Enlightenment picture of moral reasoning is wrong. Or at least badly incomplete.

The rider does not steer the elephant. The rider explains where the elephant has already gone.

Image: NotebookLM infographic

Where the books go wrong

The video focuses on five widely read, field-defining works:

  • The Righteous Mind (reviewed here and here… even here)
  • Moral Politics (mentioned here – with Don’t Think of an Elephant treated as its popular sequel)
  • Outraged! (reviewed here)
  • Moral Tribes (reviewed here)

Each of these books is sharp, serious, and worth reading. This is not a hit piece.

But each follows the same arc:

  1. Identify a non-rational, affective, automatic mechanism at the heart of moral judgement
  2. Show why moral disagreement is persistent and resistant to argument
  3. Propose solutions that rely on reflection, dialogue, reframing, calibration, or rational override

In short: they discover that reason is weak, and then assign it a leadership role anyway.

Haidt dismantles moral rationalism and then asks us to talk it out.
Lakoff shows that framing is constitutive, then offers better framing.
Gray models outrage as a perceptual feedback loop, then suggests we check our perceptions.
Greene diagnoses tribal morality, then bets on utilitarian reasoning to save us.

None of this is incoherent. But it is uncomfortable. Because the findings themselves suggest that these prescriptions are, at best, limited.

Diagnosis without prognosis

The uncomfortable possibility raised by this literature is not that we are ignorant or misinformed.

It is that moral disagreement may be structural rather than solvable.

That political conflict may not be cured by better arguments.
That persuasion may resemble contagion more than deliberation.
That reason often functions as a press secretary, not a judge.

The books sense this. And then step back from it. Which is human. But it matters.

Why this matters now

We are living in systems that have internalised these findings far more ruthlessly than public discourse has.

Social media platforms optimise for outrage, not understanding.
Political messaging is frame-first, not fact-first.
AI systems are increasingly capable of activating moral intuitions at scale, without fatigue or conscience.

Meanwhile, our institutions still behave as if one more conversation, one more fact-check, one more appeal to reason will close the gap. The research says otherwise.

And that gap between what we know and what we pretend may be the most important moral problem of the moment.

No solution offered

The video does not end with a fix. That’s deliberate.

Offering a neat solution here would simply repeat the same move I’m criticising: diagnosis followed by false comfort. Sometimes orientation matters more than optimism. The elephant is real. The elephant is moving. And most of us are passengers arguing about the map while it walks.

That isn’t despair. It’s clarity.

Bayes in the Culture War: How Priors Become Prison Walls

Are you rational, or merely rehearsing your tribe’s catechism? Bayes’ theorem insists we should all update our beliefs the same way when presented with the same evidence. Yet in today’s political divide, identical events harden opposing convictions. The problem isn’t the math—it’s the priors. When your starting assumptions are inherited, acculturated, or indoctrinated, no amount of “evidence” will move you into enemy territory.

A Bayesian Sketch of the Divide

  • Let H be a contested claim (pick your poison: “the election was fair,” “immigration helps,” whatever).
  • People in Camp R and Camp B begin with different priors P_R(H) and P_B(H). That’s acculturation if you’re being polite, indoctrination if you’ve run out of patience.
  • They observe evidence E (news, a court ruling, a video clip, a statistic).
  • They update:

P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

or, in odds form: posterior odds = prior odds × LR, where LR = \frac{P(E \mid H)}{P(E \mid \neg H)} is the likelihood ratio.

Except they don’t, not cleanly, because trust in sources warps the likelihoods.
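
Before the warping, here is a minimal sketch of my own (not from the post) showing what the clean odds-form update looks like, run on the two camp priors used in point 1 below:

```python
# Textbook Bayes in odds form: posterior odds = prior odds × LR.

def bayes_update(prior: float, likelihood_ratio: float) -> float:
    """Return P(H|E) given a prior P(H) and a likelihood ratio for evidence E."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# The same evidence (LR = 3) hitting the two camps' priors from point 1 below:
print(round(bayes_update(0.8, 3), 2))  # Camp R: 0.92
print(round(bayes_update(0.2, 3), 2))  # Camp B: 0.43
```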

Video: Jonny Thompson on Bayes’ Theorem.
I love Jonny’s content, which is why I reference it so often. He and I have such different philosophical worldviews. Vive la différence (or différance).

Why this locks in polarisation

1. Wildly different priors.
If Camp R starts at P_R(H)=0.8 and Camp B at P_B(H)=0.2, then even moderately pro-H evidence (say likelihood ratio LR = 3) yields:

  • R: prior odds 4:1 \rightarrow 12:1 \Rightarrow P(H \mid E)\approx 0.92
  • B: prior odds 1:4 \rightarrow 3:4 \Rightarrow P(H \mid E)\approx 0.43

Same evidence, one camp “settled,” the other still unconvinced. Repeat ad infinitum, preferably on primetime.

2. Identity-weighted likelihoods.
People don’t evaluate P(E \mid H); they evaluate P(E \mid H, \text{source I like}). Disconfirming evidence is down-weighted by a factor d<1 (a toy sketch of this discount follows the list). This is called “being rational” on your own planet and “motivated reasoning” on everyone else’s.

3. Different hypothesis sets.
Camps don’t just disagree on P(H); they entertain different Hs. If one side’s model includes “coordinated elite malfeasance” and the other’s does not, then identical data streams update into different universes.

4. Selective exposure = selection bias.
Evidence isn’t i.i.d.; it’s curated by feeds, friends, and fury. You are sampling from your own posterior predictive distribution and calling it “reality.”

5. Asymmetric loss functions.
Even if beliefs converged, choices won’t. If the social cost of dissent is high, the decision threshold moves (see the second sketch after this list). People report a “belief” that minimises ostracism rather than error.

6. No common knowledge, no convergence.
Aumann told us that honest Bayesians with common priors, whose posteriors are common knowledge, must agree. Remove either—common priors or the “we both know we both saw the same thing” bit—and you get the modern news cycle.
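
To make point 2 concrete, here is a toy sketch of my own (the post’s down-weighting factor d is cashed out below as an exponent that pulls the likelihood ratio towards 1, which is one convenient way to do it, not a model from any of the books):

```python
# Toy loyalty discount: evidence that cuts against your current lean has its
# likelihood ratio shrunk towards 1 ("no information") by an exponent d in [0, 1].
# d = 1 is honest Bayes; d = 0 is pure faith (the LR becomes 1 and nothing updates).

def weighted_update(prior: float, likelihood_ratio: float, d: float = 1.0) -> float:
    """Return P(H|E), discounting disconfirming evidence by exponent d."""
    odds = prior / (1 - prior)
    disconfirming = (likelihood_ratio > 1) != (prior > 0.5)  # points away from current lean
    lr = likelihood_ratio ** d if disconfirming else likelihood_ratio
    return odds * lr / (1 + odds * lr)

# One piece of anti-H evidence (LR = 1/3) reaches a believer at P(H) = 0.92:
print(round(weighted_update(0.92, 1/3, d=1.0), 2))  # honest Bayes:     0.79
print(round(weighted_update(0.92, 1/3, d=0.2), 2))  # loyalty-weighted: 0.90
```

And a toy version of point 5’s moved threshold, with invented costs: if dissent carries a social penalty S on top of the ordinary cost W of being factually wrong, a loss-minimising speaker only voices doubt when their private credence drops below 1/2 - S/(2W), which hits zero once S matches W.

```python
# Toy asymmetric-loss model: W = cost of being factually wrong, S = social cost
# of dissenting from the tribe, p = private credence in the tribe's claim H.
# Dissent is only the cheaper report when p < 1/2 - S/(2W).

def reported_position(p: float, W: float, S: float) -> str:
    loss_agree = (1 - p) * W       # wrong only if H turns out false
    loss_dissent = p * W + S       # wrong if H turns out true, ostracised either way
    return "agree" if loss_agree <= loss_dissent else "dissent"

print(reported_position(p=0.30, W=1.0, S=0.0))  # "dissent": honest reporting
print(reported_position(p=0.30, W=1.0, S=0.6))  # "agree": the threshold has moved to 0.2
```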

“Acculturation” vs “Indoctrination”

Same mechanism, different moral valence. Priors are installed by families, schools, churches, unions, algorithms. Call it culture if you approve of the installers; call it indoctrination if you don’t. The probability calculus doesn’t care. Your tribal totems do.

Two quick toy moves you can use in prose

  • Likelihood hacking:
    “When evidence arrives, the tribe doesn’t deny the datum; it edits the likelihoods. ‘If my side did it, it’s an outlier; if your side did it, it’s a pattern.’ This is not hypocrisy; it’s a parameter update where the parameter is loyalty.”
  • Posterior divergence despite ‘facts’:
    “Give two citizens the same court ruling. One updates towards legitimacy because courts are reliable; the other away from legitimacy because courts are captured. The ruling is constant; the trust vector is not.”

If one wanted to reduce the split (perish the thought)

  • Forecast, don’t opine. Run cross-camp prediction markets or calibration tournaments. Bayes behaves when you pay people for accuracy rather than performance art (a scoring sketch follows this list).
  • Adversarial collaboration. Force both sides to pre-register what evidence would move them and how much. If someone’s d for disconfirming evidence is effectively zero, you’ve identified faith, not inference.
  • Reference classes, not anecdotes. Pull arguments out of the single-case trap and into base-rate land. Yes, it’s boring. So is surgery, but people still do it.
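
On the first bullet: the standard way to pay for accuracy is a proper scoring rule. Here is a minimal sketch using the Brier score, with made-up forecasts and outcomes purely for illustration:

```python
# The Brier score: mean squared error between stated probabilities and what
# actually happened. Lower is better; confident wrongness is punished hardest.

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]                     # what actually happened (made up)
partisan = [0.95, 0.05, 0.95, 0.05, 0.95]      # tribal certainty, right or wrong
hedger   = [0.70, 0.30, 0.60, 0.60, 0.40]      # duller, better calibrated

print(brier_score(partisan, outcomes))  # ≈ 0.36 (two confident misses are expensive)
print(brier_score(hedger, outcomes))    # ≈ 0.13
```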

The punchline

Polarisation isn’t the failure of reason; it’s what happens when reason is strapped to identity. Priors are social. Likelihoods are political. Posteriors are performative. You can call it acculturation if you want to feel civilised, indoctrination if you want to throw a brick, but either way you’re watching Bayes’ theorem run inside a culture war. The maths is sober; the humans are not.


Flat-Earth Politics in a Cubic World

Audio: NotebookLM podcast on this topic.