Moral Psychology and the Art of Not Believing Your Own Results

Over the past few decades, moral psychology has staged a quiet coup against one of our most cherished fantasies: that human beings are, at bottom, rational moral agents. This is not a fringe claim. It is not a Twitter take. It is the mainstream finding of an entire research programme spanning psychology, cognitive science, linguistics, and neuroscience.

We do not reason our way to moral conclusions. We feel our way there. Instantly. Automatically. And only afterwards do we construct reasons that make the judgment sound respectable.

Audio: NotebookLM summary podcast of this topic.

This is not controversial anymore. It is replicated, taught, and celebrated. And yet, if you read the most influential books in this literature, something strange happens. The diagnosis is devastating. The prescription is reassuring.

I’ve just published a long-form video walking through five canonical books in moral psychology that all uncover the same structural problem, and then quietly refuse to live with the implications.

What follows is a brief guide to the argument.

The shared discovery

Across the literature, the same conclusions keep reappearing:

  • Moral judgement is intuitive, not deliberative
  • Reasoning is largely post-hoc
  • Emotion is not noise but signal
  • Framing and metaphor shape what even counts as a moral fact
  • Group identity and tribal affiliation dominate moral perception

In other words: the Enlightenment picture of moral reasoning is wrong. Or at least badly incomplete.

The rider does not steer the elephant. The rider explains where the elephant has already gone.

Image: NotebookLM infographic

Where the books go wrong

The video focuses on five widely read, field-defining works:

  • The Righteous Mind (reviewed here and here… even here)
  • Moral Politics (mentioned here – with Don’t Think of an Elephant treated as its popular sequel)
  • Outraged! (reviewed here)
  • Moral Tribes (reviewed here)

Each of these books is sharp, serious, and worth reading. This is not a hit piece.

But each follows the same arc:

  1. Identify a non-rational, affective, automatic mechanism at the heart of moral judgement
  2. Show why moral disagreement is persistent and resistant to argument
  3. Propose solutions that rely on reflection, dialogue, reframing, calibration, or rational override

In short: they discover that reason is weak, and then assign it a leadership role anyway.

Haidt dismantles moral rationalism and then asks us to talk it out.
Lakoff shows that framing is constitutive, then offers better framing.
Gray models outrage as a perceptual feedback loop, then suggests we check our perceptions.
Greene diagnoses tribal morality, then bets on utilitarian reasoning to save us.

None of this is incoherent. But it is uncomfortable. Because the findings themselves suggest that these prescriptions are, at best, limited.

Diagnosis without prognosis

The uncomfortable possibility raised by this literature is not that we are ignorant or misinformed.

It is that moral disagreement may be structural rather than solvable.

That political conflict may not be cured by better arguments.
That persuasion may resemble contagion more than deliberation.
That reason often functions as a press secretary, not a judge.

The books sense this. And then step back from it. Which is human. But it matters.

Why this matters now

We are living in systems that have internalised these findings far more ruthlessly than public discourse has.

Social media platforms optimise for outrage, not understanding.
Political messaging is frame-first, not fact-first.
AI systems are increasingly capable of activating moral intuitions at scale, without fatigue or conscience.

Meanwhile, our institutions still behave as if one more conversation, one more fact-check, one more appeal to reason will close the gap. The research says otherwise.

And that gap between what we know and what we pretend to believe may be the most important moral problem of the moment.

No solution offered

The video does not end with a fix. That’s deliberate.

Offering a neat solution here would simply repeat the same move I’m criticising: diagnosis followed by false comfort. Sometimes orientation matters more than optimism. The elephant is real. The elephant is moving. And most of us are passengers arguing about the map while it walks.

That isn’t despair. It’s clarity.

Intuition and Reason

I’ve been cycling through The Righteous Mind and Moral Tribes, by Jonathan Haidt and Joshua Greene respectively. These blokes are social psychologists and moral philosophers. I started each of these books with the preconception that I would neither like nor agree with the content. As for liking, I suppose that’s a silly distinction better captured by whether or not I agree: that with which I don’t agree, I don’t like.

This said, I like the style of both of the authors, and I am finding the material to be less contentious than I first thought. I can already envisage myself agreeing with much of the substance but waiting to disagree with the conclusions.

Although I committed myself to documenting The Righteous Mind in situ, I have been listening to the audiobook whilst driving and getting ahead of myself, so I’ll have to rewind and retread in order to do this. In fact, the reason I switched back to Greene’s Moral Tribes was so I wouldn’t progress even further into Haidt’s work.

I am writing this post to acknowledge this. I’d also like to document that I don’t believe that humans are good reasoners, a claim both Haidt and Greene take to be generally true. Humans are post hoc rationalisers, which is to say that they make up their minds and then construct a narrative to justify the position. Haidt uses the analogy of an elephant and its rider, and he asserts that humans might more accurately be described as groupish than selfish. Certainly not shellfish. Greene notes that people have been shown to concede self-interest to political party interest, which helps to explain how people continually and predictably vote against their own interests. This also supports my position that democracy is a horrible form of government. Of course, Haidt would argue that this proves his point: people tend to adopt facts that support their perspective and diminish or disregard those that don’t.

Haidt suggests that reason is overvalued, but then he proposes intuition as a better alternative. I agree with him that reason is overvalued, and for the same reasons (no pun intended) that he does. But it doesn’t follow that intuition is (1) better, (2) significantly better, or (3) good enough for (a) long-term viability or (b) grasping complexity.

I am not immune to this any more than anyone else. I recall Kahneman writing in Thinking, Fast and Slow that even though he is well aware of cognitive biases and fallacies, he can’t escape them either. When I used to teach undergraduate economics, I’d give some sort of policy assignment. As a preamble, I’d instruct the students that, without exception, all policy decisions have pros and cons. In their submissions, they’d need to gather both supporting and detracting arguments and then articulate why one position should be adopted over another. Minimally, I’d expect at least three pros and three cons.

The students would almost invariably complain about how difficult it was to imagine a counter-position. Even when they’d include some, these were usually weak tea. Oftentimes, the students already shared the same perspective, so they couldn’t even grasp the opposing side until we debriefed after the assignments had been graded. Although I do recall instances where students would admit that they hadn’t considered this or that opposing view, I can’t recall a case where a position was flipped after hearing new evidence (not that flipping positions was my intention). People engage instead in escalation of commitment, doubling down on existing beliefs and generating defensive, sometimes tortuous, arguments to support their positions.