Bayes in the Culture War: How Priors Become Prison Walls

Are you rational, or merely rehearsing your tribe’s catechism? Bayes’ theorem insists we should all update our beliefs the same way when presented with the same evidence. Yet in today’s political divide, identical events harden opposing convictions. The problem isn’t the math—it’s the priors. When your starting assumptions are inherited, acculturated, or indoctrinated, no amount of “evidence” will move you into enemy territory.

A Bayesian Sketch of the Divide

  • Let H be a contested claim (pick your poison: “the election was fair,” “immigration helps,” whatever).
  • People in Camp R and Camp B begin with different priors P_R(H) and P_B(H). That’s acculturation if you’re being polite, indoctrination if you’ve run out of patience.
  • They observe evidence E (news, a court ruling, a video clip, a statistic).
  • They update:

P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} \qquad\text{or, in odds form:}\qquad \text{posterior odds} = \text{prior odds} \times \frac{P(E \mid H)}{P(E \mid \neg H)}

Except they don’t, not cleanly, because trust in sources warps the likelihoods.

Video: Jonny Thompson on Bayes’ Theorem.
I love Jonny’s content, which is why I reference it so often. He and I have such different philosophical worldviews. Vive la différence (or différance).

Why this locks in polarisation

1. Wildly different priors.
If Camp R starts at P_R(H)=0.8 and Camp B at P_B(H)=0.2, then even moderately pro-H evidence (say likelihood ratio LR = 3) yields:

  • R: prior odds 4:1 \rightarrow 12:1 \Rightarrow P(H \mid E)\approx 0.92
  • B: prior odds 1:4 \rightarrow 3:4 \Rightarrow P(H \mid E)\approx 0.43

Same evidence, one camp “settled,” the other still unconvinced. Repeat ad infinitum, preferably on primetime.
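A minimal sketch of that arithmetic in Python (the priors and the likelihood ratio of 3 are just the illustrative numbers from above, not data):

```python
def update(prior, likelihood_ratio):
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Same evidence (LR = 3), different starting points.
print(update(0.8, 3))  # Camp R: ~0.92 -- "settled"
print(update(0.2, 3))  # Camp B: ~0.43 -- still unconvinced
```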

2. Identity-weighted likelihoods.
People don’t evaluate P(E \mid H); they evaluate P(E \mid H, \text{source I like}). Disconfirming evidence is down-weighted by a factor d<1. This is called “being rational” on your own planet and “motivated reasoning” on everyone else’s.
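One crude way to model that discount (a sketch; tempering the likelihood ratio by an exponent d is my illustrative assumption, not a standard result): shrink the evidence toward neutrality before updating.

```python
def identity_weighted_update(prior, likelihood_ratio, d):
    """Down-weight evidence by pulling its likelihood ratio toward 1.
    d = 1 is honest Bayes; d = 0 ignores the evidence entirely."""
    effective_lr = likelihood_ratio ** d  # illustrative choice, not canonical
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * effective_lr
    return posterior_odds / (1 + posterior_odds)

# Disconfirming evidence (LR = 1/3 against H) barely moves a loyal believer.
print(identity_weighted_update(0.8, 1/3, d=1.0))  # ~0.57: honest update
print(identity_weighted_update(0.8, 1/3, d=0.2))  # ~0.76: the "my side" discount
```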

3. Different hypothesis sets.
Camps don’t just disagree on P(H); they entertain different Hs. If one side’s model includes “coordinated elite malfeasance” and the other’s does not, then identical data streams update into different universes.

4. Selective exposure = selection bias.
Evidence isn’t i.i.d.; it’s curated by feeds, friends, and fury. You are sampling from your own posterior predictive distribution and calling it “reality.”
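A toy simulation of that selection effect (a sketch with made-up parameters: evidence arrives as likelihood ratios, but the feed decides what fraction of it favours H):

```python
import random

def drift(prior, n_items=1000, pro_share=0.8, seed=0):
    """Honest Bayesian updating on a curated stream: a pro_share fraction of
    items favours H (LR = 2), the rest disfavours it (LR = 1/2)."""
    rng = random.Random(seed)
    odds = prior / (1 - prior)
    for _ in range(n_items):
        lr = 2.0 if rng.random() < pro_share else 0.5
        odds *= lr
    return odds / (1 + odds)

print(drift(0.5, pro_share=0.8))  # feed curated for H: belief -> ~1
print(drift(0.5, pro_share=0.2))  # feed curated against H: belief -> ~0
```

The point is not the particular numbers but that a biased sampling rule, fed into perfectly honest updating, still sends two identical agents to opposite certainties.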

5. Asymmetric loss functions.
Even if beliefs converge, choices won't. If the social cost of dissent is high, the decision threshold moves. People report a “belief” that minimises ostracism rather than error.

6. No common knowledge, no convergence.
Aumann told us that honest Bayesians with a common prior, whose posteriors are common knowledge, must agree. Remove either—the common prior or the “we both know we both saw the same thing” bit—and you get the modern news cycle.
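For the record, the result being invoked, stated informally (Aumann, 1976):

\text{If both agents share a prior } P \text{ and the posteriors } P(H \mid \mathcal{I}_1) \text{ and } P(H \mid \mathcal{I}_2) \text{ are common knowledge, then } P(H \mid \mathcal{I}_1) = P(H \mid \mathcal{I}_2).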

“Acculturation” vs “Indoctrination”

Same mechanism, different moral valence. Priors are installed by families, schools, churches, unions, algorithms. Call it culture if you approve of the installers; call it indoctrination if you don’t. The probability calculus doesn’t care. Your tribal totems do.

Two quick toy moves you can use in prose

  • Likelihood hacking:
    “When evidence arrives, the tribe doesn’t deny the datum; it edits the likelihoods. ‘If my side did it, it’s an outlier; if your side did it, it’s a pattern.’ This is not hypocrisy; it’s a parameter update where the parameter is loyalty.”
  • Posterior divergence despite ‘facts’:
    “Give two citizens the same court ruling. One updates towards legitimacy because courts are reliable; the other away from legitimacy because courts are captured. The ruling is constant; the trust vector is not.”

If one wanted to reduce the split (perish the thought)

  • Forecast, don’t opine. Run cross-camp prediction markets or calibration tournaments. Bayes behaves when you pay people for accuracy rather than performance art.
  • Adversarial collaboration. Force both sides to pre-register what evidence would move them and how much. If someone’s d for disconfirming evidence is effectively zero, you’ve identified faith, not inference.
  • Reference classes, not anecdotes. Pull arguments out of the single-case trap and into base-rate land. Yes, it’s boring. So is surgery, but people still do it.

The punchline

Polarisation isn’t the failure of reason; it’s what happens when reason is strapped to identity. Priors are social. Likelihoods are political. Posteriors are performative. You can call it acculturation if you want to feel civilised, indoctrination if you want to throw a brick, but either way you’re watching Bayes’ theorem run inside a culture war. The maths is sober; the humans are not.


What’s Probability?

The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.

Audio: NotebookLM podcast on this topic.

Let us survey the battlefield:

1. Classical Probability (Laplacean Determinism in a Tuxedo)

This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
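In symbols, the classical rule is simply

P(A) = \frac{\text{favourable outcomes}}{\text{possible outcomes}} = \frac{|A|}{|\Omega|}, \quad \text{with every outcome in } \Omega \text{ assumed equally likely}

and that “equally likely” is the circularity the paragraph above complains about.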

2. Frequentist Probability (The Empiricist’s Fantasy)

Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
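A finite-length sketch of the frequentist picture in Python (a fair-coin simulation; the “limit” is, of course, the part you can never actually run):

```python
import random

def relative_frequency(n_trials, seed=42):
    """Relative frequency of heads after n_trials simulated fair-coin tosses."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))  # wobbles toward 0.5, never "arrives"
```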

3. Bayesian Probability (Subjectivity Dressed as Rigour)

Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
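A sketch of the “separate realities” point (Python; the Beta-Binomial setup and the 20:2 versus 2:20 priors are arbitrary illustrative choices): two agents watch the same coin, stay far apart on little shared data, and are only dragged together by a long common record.

```python
def posterior_mean(prior_a, prior_b, heads, tails):
    """Beta(prior_a, prior_b) prior updated on coin-flip data -> posterior mean."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

for heads, tails in [(3, 2), (60, 40), (6000, 4000)]:
    optimist = posterior_mean(20, 2, heads, tails)  # prior mean ~0.91
    sceptic = posterior_mean(2, 20, heads, tails)   # prior mean ~0.09
    print(heads + tails, round(optimist, 3), round(sceptic, 3))
```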

4. Propensity Interpretation (The Ontology of Maybes)

Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.

5. Logical Probability (A Sober Attempt at Rationality)

Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.

6. Quantum Probability (Schrödinger’s Definition)

In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
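The rule in question (the Born rule), for the record:

\text{If } |\psi\rangle = \sum_i c_i\,|i\rangle \text{ in an orthonormal basis of outcomes, then } P(\text{outcome } i) = |c_i|^2 = |\langle i \mid \psi \rangle|^2.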

Current Status: War of Attrition

There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.

In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.

And so the probability of this contest being resolved any time soon?
About zero.
Or one.
Depending on your interpretation.