Rational Ghosts: Why Enlightenment Democracy Was Built to Fail

We are governed by phantoms. Not the fun kind that rattle chains in castles, but Enlightenment rational ghosts – imaginary citizens who were supposed to be dispassionate, consistent, and perfectly informed. They never lived, but they still haunt our constitutions and television pundits. Every time some talking head declares “the people have spoken”, what they really mean is that the ghosts are back on stage.

👉 Full essay: Rational Ghosts: Why Enlightenment Democracy Was Built to Fail

The conceit was simple: build politics as if it were an engineering problem. Set the rules right, and stability follows. The trouble is that the material – actual people – wasn’t blueprint-friendly. Madison admitted faction was “sown in the nature of man”, Rousseau agonised over the “general will”, and Condorcet managed to trip over his own math. They saw the cracks even while laying the foundation. Then they shrugged and built anyway.

Audio: NotebookLM podcast on this topic.

The rational ghosts were tidy. Real humans are not. Our brains run on shortcuts: motivated reasoning, availability cascades, confirmation bias, Dunning–Kruger. We don’t deliberate; we improvise excuses. Education doesn’t fix it – it just arms us with better rationalisations. Media doesn’t fix it either – it corrals our biases into profitable outrage. The Enlightenment drafted for angels; what it got was apes with smartphones.

Even if the ghosts had shown up, the math betrayed them. Arrow proved that no voting system can translate preferences without distortion. McKelvey showed that whoever controls the sequence of votes controls the outcome. The “will of the people” is less an oracle than a Ouija board, and you can always see whose hand is pushing the planchette.
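McKelvey's agenda-control point can be seen in miniature with the textbook Condorcet cycle (the three voter profiles below are the classic example, not from the essay): majority preference is intransitive, so whoever orders the pairwise votes picks the winner. A minimal sketch:

```python
from itertools import permutations

# Textbook Condorcet cycle: majority preference is intransitive
# (A beats B, B beats C, C beats A), so no option is a stable winner.
ballots = [("A", "B", "C"),   # voter 1: A > B > C
           ("B", "C", "A"),   # voter 2: B > C > A
           ("C", "A", "B")]   # voter 3: C > A > B

def pairwise_winner(x, y):
    """Option preferred by a majority of ballots (lower index = ranked higher)."""
    votes_for_x = sum(b.index(x) < b.index(y) for b in ballots)
    return x if votes_for_x * 2 > len(ballots) else y

def agenda_winner(agenda):
    """Vote pairwise in the given order; each winner faces the next option."""
    champion = agenda[0]
    for challenger in agenda[1:]:
        champion = pairwise_winner(champion, challenger)
    return champion

# Every option can be made the "will of the people" by reordering the votes.
for agenda in permutations("ABC"):
    print("".join(agenda), "->", agenda_winner(agenda))
```

Running all six agendas shows each of A, B, and C winning under some ordering: the hand on the planchette is the agenda-setter's.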

Scale finishes the job. Dunbar gave us 150 people as the human limit of meaningful community. Beyond that, trust decays into myth. Benedict Anderson called nations “imagined communities”, but social media has shattered the illusion. The national conversation is now a million algorithmic Dunbars, each convinced they alone are the real people.

Audio: This is a longer (40-minute) NotebookLM podcast on the essay itself.

Why did democracy limp along for two centuries if it was this haunted? Because it was on life-support. Growth, war, and civic myth covered the cracks. External enemies, national rituals, and propaganda made dysfunction look like consensus. It wasn’t design; it was borrowed capital. That capital has run out.

Cue the panic. The defences roll in: Churchill said democracy was the “least bad” system (he didn’t, but whatever). Voters self-correct. Education will fix it. It’s only an American problem. And if you don’t like it, what – authoritarianism? These are less arguments than incantations, muttered to keep the ghosts from noticing the creaks in the floorboards.

The real task isn’t to chant louder. It’s to stop pretending ghosts exist. Try subsidiarity: smaller-scale politics humans can actually grasp. Try deliberation: citizens’ assemblies show ordinary people can think, when not reduced to a soundbite. Try sortition: if elections are distorted by design, maybe roll the dice instead. Try polycentric governance: let overlapping authorities handle mismatch instead of hammering “one will”. None of these are perfect. They’re just less haunted.

Enlightenment democracy was built to fail because it was built for rational ghosts. The ghosts never lived. The floorboards are creaking. The task is ours: build institutions for the living, before the house collapses under its own myths.

The Argument in Skeleton Form

Beneath the prose, the critique of Enlightenment democracy reduces to a syllogism: a foundation that assumes rational citizens collides with psychological bias, mathematical impossibility, and sociological limits; the outcome is a double failure – corrupted inputs and incoherent outputs – masked only by temporary props.

Figure: Logical skeleton of “Rational Ghosts: Why Enlightenment Democracy Was Built to Fail.” For the complete essay, with sources and elaboration, see the open-access preprint on Zenodo: https://doi.org/10.5281/zenodo.17250225

Bayes in the Culture War: How Priors Become Prison Walls

Are you rational, or merely rehearsing your tribe’s catechism? Bayes’ theorem insists we should all update our beliefs the same way when presented with the same evidence. Yet in today’s political divide, identical events harden opposing convictions. The problem isn’t the math—it’s the priors. When your starting assumptions are inherited, acculturated, or indoctrinated, no amount of “evidence” will move you into enemy territory.

A Bayesian Sketch of the Divide

  • Let H be a contested claim (pick your poison: “the election was fair,” “immigration helps,” whatever).
  • People in Camp R and Camp B begin with different priors P_R(H) and P_B(H). That’s acculturation if you’re being polite, indoctrination if you’ve run out of patience.
  • They observe evidence E (news, a court ruling, a video clip, a statistic).
  • They update:

posterior odds = likelihood ratio × prior odds, i.e. \frac{P(H \mid E)}{P(\lnot H \mid E)} = \frac{P(E \mid H)}{P(E \mid \lnot H)} \times \frac{P(H)}{P(\lnot H)}, where P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

Except they don’t, not cleanly, because trust in sources warps the likelihoods.
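In probability form, the clean, trust-free update is a few lines. A minimal sketch with made-up likelihoods (0.75 and 0.25, i.e. a likelihood ratio of 3):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem with P(E) expanded by total probability:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)(1 - P(H))]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Evidence three times likelier under H than under ~H (likelihood ratio 3):
print(posterior(0.5, 0.75, 0.25))  # agnostic prior 0.5 -> posterior 0.75
```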

Video: Jonny Thompson on Bayes’ Theorem.
I love Jonny’s content, which is why I reference it so often. He and I have such different philosophical worldviews. Vive la différence (or différance).

Why this locks in polarisation

1. Wildly different priors.
If Camp R starts at P_R(H)=0.8 and Camp B at P_B(H)=0.2, then even moderately pro-H evidence (say likelihood ratio LR = 3) yields:

  • R: prior odds 4:1 \rightarrow 12:1 \Rightarrow P(H \mid E)\approx 0.92
  • B: prior odds 1:4 \rightarrow 3:4 \Rightarrow P(H \mid E)\approx 0.43

Same evidence, one camp “settled,” the other still unconvinced. Repeat ad infinitum, preferably on primetime.
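Those two posteriors are easy to check in odds form (a few lines of arithmetic, using the same illustrative numbers):

```python
def update_odds(prior_p, likelihood_ratio):
    """Odds-form Bayes: posterior odds = likelihood ratio x prior odds,
    converted back to a probability."""
    post_odds = likelihood_ratio * prior_p / (1 - prior_p)
    return post_odds / (1 + post_odds)

# Same evidence (likelihood ratio 3), different priors:
print(round(update_odds(0.8, 3), 2))  # Camp R: 0.92
print(round(update_odds(0.2, 3), 2))  # Camp B: 0.43
```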

2. Identity-weighted likelihoods.
People don’t evaluate P(E \mid H); they evaluate P(E \mid H, \text{source I like}). Disconfirming evidence is down-weighted by a factor d<1. This is called “being rational” on your own planet and “motivated reasoning” on everyone else’s.
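One way to model that down-weighting (my formulation, not a standard one): shrink the log-likelihood ratio of evidence from a distrusted source by the trust factor d, so d = 1 is an honest Bayesian and d = 0 ignores the source entirely.

```python
import math

def update(prior_p, lr, d=1.0):
    """Odds-form update where the log-likelihood ratio of the evidence
    is shrunk by a trust factor d in [0, 1]: d = 1 honest Bayesian,
    d = 0 the source is ignored entirely."""
    effective_lr = math.exp(d * math.log(lr))  # i.e. lr ** d
    odds = (prior_p / (1 - prior_p)) * effective_lr
    return odds / (1 + odds)

# Disconfirming evidence (LR = 1/3) hits a believer at 0.8:
print(update(0.8, 1/3, d=1.0))  # trusted source: drops to ~0.57
print(update(0.8, 1/3, d=0.2))  # distrusted source: barely moves, ~0.76
```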

3. Different hypothesis sets.
Camps don’t just disagree on P(H); they entertain different Hs. If one side’s model includes “coordinated elite malfeasance” and the other’s does not, then identical data streams update into different universes.

4. Selective exposure = selection bias.
Evidence isn’t i.i.d.; it’s curated by feeds, friends, and fury. You are sampling from your own posterior predictive distribution and calling it “reality.”
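The selection-bias point can be simulated: give two honest updaters the same rule and the same agnostic prior but differently curated streams, and they drift to opposite certainties. A toy sketch (the feed probabilities and likelihood ratios are invented for the demo):

```python
import random

random.seed(0)  # deterministic demo

def run_feed(prior_p, p_show_pro, steps=200, lr_pro=2.0, lr_con=0.5):
    """Honest odds-form Bayesian updating on a *curated* stream:
    with probability p_show_pro the feed surfaces pro-H evidence
    (likelihood ratio lr_pro), otherwise anti-H evidence (lr_con)."""
    odds = prior_p / (1 - prior_p)
    for _ in range(steps):
        odds *= lr_pro if random.random() < p_show_pro else lr_con
    return odds / (1 + odds)

# Two honest updaters, identical rule, identical prior, different feeds.
print(run_feed(0.5, p_show_pro=0.7))  # mostly-pro feed: posterior near 1
print(run_feed(0.5, p_show_pro=0.3))  # mostly-anti feed: posterior near 0
```

No motivated reasoning is needed here; the curation alone does the polarising.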

5. Asymmetric loss functions.
Even if beliefs converged, choices won’t. If the social cost of dissent is high, the decision threshold moves. People report a “belief” that minimises ostracism rather than error.

6. No common knowledge, no convergence.
Aumann told us that honest Bayesians with common priors, whose posteriors are common knowledge, cannot agree to disagree. Remove either—the common priors or the “we both know we both saw the same thing” bit—and you get the modern news cycle.

“Acculturation” vs “Indoctrination”

Same mechanism, different moral valence. Priors are installed by families, schools, churches, unions, algorithms. Call it culture if you approve of the installers; call it indoctrination if you don’t. The probability calculus doesn’t care. Your tribal totems do.

Two quick toy moves you can use in prose

  • Likelihood hacking:
    “When evidence arrives, the tribe doesn’t deny the datum; it edits the likelihoods. ‘If my side did it, it’s an outlier; if your side did it, it’s a pattern.’ This is not hypocrisy; it’s a parameter update where the parameter is loyalty.”
  • Posterior divergence despite ‘facts’:
    “Give two citizens the same court ruling. One updates towards legitimacy because courts are reliable; the other away from legitimacy because courts are captured. The ruling is constant; the trust vector is not.”

If one wanted to reduce the split (perish the thought)

  • Forecast, don’t opine. Run cross-camp prediction markets or calibration tournaments. Bayes behaves when you pay people for accuracy rather than performance art.
  • Adversarial collaboration. Force both sides to pre-register what evidence would move them and how much. If someone’s d for disconfirming evidence is effectively zero, you’ve identified faith, not inference.
  • Reference classes, not anecdotes. Pull arguments out of the single-case trap and into base-rate land. Yes, it’s boring. So is surgery, but people still do it.
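The “pay for accuracy” idea usually means a proper scoring rule such as the Brier score, which punishes confident wrongness harder than honest hedging. A minimal sketch with invented forecasts:

```python
def brier(forecasts, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes.
    Lower is better; the score is minimised by reporting honest beliefs."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes   = [1, 0, 1, 1, 0]              # what actually happened
calibrated = [0.8, 0.3, 0.7, 0.9, 0.2]    # hedged, honest forecasts
tribal     = [1.0, 0.0, 0.0, 1.0, 0.0]    # performative certainty
print(brier(calibrated, outcomes))  # ~0.054
print(brier(tribal, outcomes))      # 0.2, one confident miss costs plenty
```

One loudly wrong call outweighs a run of modest, well-calibrated ones, which is exactly the incentive structure primetime lacks.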

The punchline

Polarisation isn’t the failure of reason; it’s what happens when reason is strapped to identity. Priors are social. Likelihoods are political. Posteriors are performative. You can call it acculturation if you want to feel civilised, indoctrination if you want to throw a brick, but either way you’re watching Bayes’ theorem run inside a culture war. The maths is sober; the humans are not.