Legibility and Ontology


These two words qualify as my words of the month: legibility and ontology.

I’ve been using them as lenses.

I picked up legibility from James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, which is really a book about how well-intentioned schemes fail once reality is forced to become administrable. Ontology is an older philosophical workhorse, usually paired with epistemology, but I’m using it here in a looser, more pragmatic sense.

When I write, I write through lenses. Everyone does. Writing requires a point of view, even when we pretend otherwise.

In this post, I want to talk more informally about my recent essay, Grammatical Failure. I usually summarise my work elsewhere, but here I want to think out loud about it, particularly in relation to social ontology and epistemology. I won’t linger on definitions. They’re a search away. But a little framing helps.

Ontology, roughly: how reality is parsed.

Epistemology: how knowledge is justified within that parsing.

Audio: NotebookLM summary of this post.

Much of my recent work sits downstream of thinkers like Thomas Sowell, George Lakoff, Jonathan Haidt, Kurt Gray, and Joshua Greene. Despite their differences, they converge on a shared insight: human cognition is largely motivated preverbally. As a philosopher of language, I find my interest sharpening at precisely that pre-language layer.

I explored this in earlier work, including a diptych titled The Grammar of Impasse: Conceptual Exhaustion and Causal Mislocation. Writing is how I gel these ideas. There are several related pieces still in the pipeline.

When I talk about grammar, I don’t mean Saussure or Chomsky. I mean something deeper: the ontological substrate beneath belief. Grammar, in this sense, is how reality gets parsed before beliefs ever form. It filters what can count as real, salient, or intelligible.

Let’s use a deliberately simplified example.

Imagine two ontological orientations. Call them Ont-C and Ont-L. This isn’t to say there are only two, but much of Western political discourse collapses into a binary anyway.

Ont-C tends to experience people as inherently bad, dangerous, or morally suspect. Ont-L tends to experience people as inherently good or at least corrigible. These aren’t opinions in the usual sense. They sit beneath belief, closer to affect and moral orientation.

Now consider retributive justice, setting aside the fact that justice itself is a thick concept.

From Ont-C, punishment teaches a lesson. It deters. It disciplines. From Ont-L, punishment without rehabilitation looks cruel or counterproductive, and the transgression itself may be read as downstream of systemic injustice.

Each position can acknowledge exceptions. Ont-L knows there are genuinely broken people. Ont-C knows there are saints. But those are edge cases, not defaults.

Now ask Ont-C and Ont-L to design a criminal justice system together. The result will feel intolerable to both. Too lenient. Too harsh. The disagreement isn’t over policy details. It’s over how reality is carved up in the first place.

And this is only one dimension.

Add others. Bring in Ont-V and Ont-M if you like, for vegan and meat-based ontologies. Suddenly, you have Ont-CV, Ont-CM, Ont-LV, and Ont-LM. Then add class, religion, gender, authority, harm, and whatever. Intersectionality stops looking like a solution and starts looking like a combinatorial explosion.
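The arithmetic behind that explosion is easy to make literal (a throwaway sketch; the axis list is my own invention):

```python
# Each binary ontological axis doubles the number of possible Ont- types.
axes = ["C/L", "V/M", "class", "religion", "gender", "authority", "harm"]
print(2 ** len(axes))  # 7 binary axes already give 128 combinations
```

And that is with every axis flattened to a crude binary.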

The Ont-Vs can share a meal, so long as they don’t talk politics.

The structure isn’t just unstable. It was never stable to begin with. We imagine foundations because legibility demands them.

Grammatical Failure is an attempt to explain why this instability isn’t a bug in liberal epistemology but a structural feature. The grammar does the sorting long before deliberation begins.

More on that soon.


In any case, once you start applying this ontological lens to other supposedly intractable disputes, you quickly realise that their intractability is not accidental.

Take abortion.

If we view the issue through the lenses of Ont-A (anti-abortion) and Ont-Ch (maternal choice), we might as well be peering through Ont-Oil and Ont-Water. The disagreement does not occur at the level of policy preferences or competing values. It occurs at the level of what counts as morally salient in the first place.

There is no middle ground here. No middle path. No synthesis waiting to be negotiated into existence.

That is not because the participants lack goodwill, intelligence, or empathy. It is because the ontological primitives are incommensurate. Each side experiences the other not as mistaken but as unintelligible.

We can will compromise all we like. The grammar does not comply.

Contemporary discourse often insists otherwise. It tells us that better arguments, clearer framing, or more dialogue will eventually produce convergence. From this perspective, that insistence is not hopeful. It is confused. It mistakes a grammatical fracture for a deliberative failure.

Consider other polemical topics and you will notice the same interplay.

Bayes in the Culture War: How Priors Become Prison Walls


Are you rational, or merely rehearsing your tribe’s catechism? Bayes’ theorem insists we should all update our beliefs the same way when presented with the same evidence. Yet in today’s political divide, identical events harden opposing convictions. The problem isn’t the math—it’s the priors. When your starting assumptions are inherited, acculturated, or indoctrinated, no amount of “evidence” will move you into enemy territory.

A Bayesian Sketch of the Divide

  • Let H be a contested claim (pick your poison: “the election was fair,” “immigration helps,” whatever).
  • People in Camp R and Camp B begin with different priors P_R(H) and P_B(H). That’s acculturation if you’re being polite, indoctrination if you’ve run out of patience.
  • They observe evidence E (news, a court ruling, a video clip, a statistic).
  • They update:

posterior odds = prior odds × likelihood ratio:

\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}, \qquad \text{equivalently } P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

Except they don’t, not cleanly, because trust in sources warps the likelihoods.

Video: Jonny Thompson on Bayes’ Theorem.
I love Jonny’s content, which is why I reference it so often. He and I have such different philosophical worldviews. Vive la différence (or différance).

Why this locks in polarisation

1. Wildly different priors.
If Camp R starts at P_R(H)=0.8 and Camp B at P_B(H)=0.2, then even moderately pro-H evidence (say likelihood ratio LR = 3) yields:

  • R: prior odds 4:1 \rightarrow 12:1 \Rightarrow P(H \mid E)\approx 0.92
  • B: prior odds 1:4 \rightarrow 3:4 \Rightarrow P(H \mid E)\approx 0.43

Same evidence, one camp “settled,” the other still unconvinced. Repeat ad infinitum, preferably on primetime.
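The toy numbers are easy to check in a few lines (a sketch; `update` is a hypothetical helper, not anything from the essay):

```python
def update(prior: float, lr: float) -> float:
    """Posterior via the odds form: posterior odds = prior odds * LR."""
    post_odds = prior / (1 - prior) * lr
    return post_odds / (1 + post_odds)

# Same likelihood ratio (LR = 3), different priors.
camp_r = update(0.8, 3)  # 4:1 -> 12:1, i.e. ~0.92
camp_b = update(0.2, 3)  # 1:4 -> 3:4,  i.e. ~0.43
print(round(camp_r, 2), round(camp_b, 2))  # 0.92 0.43
```

Feed both camps a steady diet of LR = 3 evidence and they do eventually converge; the divide in practice comes from the mechanisms below, which stop the likelihood ratios from being shared in the first place.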

2. Identity-weighted likelihoods.
People don’t evaluate P(E \mid H); they evaluate P(E \mid H, \text{source I like}). Disconfirming evidence is down-weighted by a factor d<1. This is called “being rational” on your own planet and “motivated reasoning” on everyone else’s.
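One way to model that discount (my own illustrative formulation, not a standard model): shrink the log-likelihood ratio of disconfirming evidence by the factor d.

```python
import math

def biased_update(prior: float, lr: float, d: float) -> float:
    """Update where evidence *against* the current lean is discounted.

    d in (0, 1] scales the log-likelihood ratio of disconfirming
    evidence; d = 1 is honest Bayes, d -> 0 is immovable faith.
    (Illustrative toy model only.)
    """
    leans_pro = prior > 0.5
    disconfirming = (lr < 1) == leans_pro  # pushes against the lean
    log_lr = math.log(lr)
    if disconfirming:
        log_lr *= d
    odds = prior / (1 - prior) * math.exp(log_lr)
    return odds / (1 + odds)

# Anti-H evidence (LR = 1/3) barely moves a believer when d is small.
print(biased_update(0.8, 1 / 3, 1.0))  # honest Bayes: drops to ~0.57
print(biased_update(0.8, 1 / 3, 0.1))  # motivated: stays near ~0.78
```

Set d close to zero and no volume of disconfirmation ever crosses the aisle.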

3. Different hypothesis sets.
Camps don’t just disagree on P(H); they entertain different Hs. If one side’s model includes “coordinated elite malfeasance” and the other’s does not, then identical data streams update into different universes.

4. Selective exposure = selection bias.
Evidence isn’t i.i.d.; it’s curated by feeds, friends, and fury. You are sampling from your own posterior predictive distribution and calling it “reality.”
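A deterministic caricature of that feedback loop (my toy model, not a claim about any real feed): suppose the feed serves pro-H items (LR = 2) with probability equal to your current belief p, and anti-H items (LR = 1/2) otherwise. The expected log-odds drift per round is (2p - 1) * log 2, so anyone off-centre spirals outward.

```python
import math

def feed_drift(p0: float, rounds: int) -> float:
    """Iterate the *expected* log-odds drift under a feed that mirrors
    the agent's current belief: pro-H evidence (LR = 2) with
    probability p, anti-H (LR = 1/2) otherwise. (Toy model.)"""
    log_odds = math.log(p0 / (1 - p0))
    for _ in range(rounds):
        p = 1 / (1 + math.exp(-log_odds))
        log_odds += (2 * p - 1) * math.log(2)
    return 1 / (1 + math.exp(-log_odds))

# Two agents, identical mechanism, opposite starting beliefs:
print(feed_drift(0.8, 20))  # climbs towards certainty
print(feed_drift(0.2, 20))  # sinks towards zero
```

Same update rule, same world; the curation alone manufactures two certainties.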

5. Asymmetric loss functions.
Even if beliefs converged, choices won’t. If the social cost of dissent is high, the decision threshold moves. People report a “belief” that minimises ostracism rather than error.
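Textbook decision theory makes the threshold shift concrete (a sketch; the cost numbers are invented): assert H only when P(H) exceeds c_fa / (c_fa + c_fd), where c_fa is the cost of wrongly asserting and c_fd of wrongly denying.

```python
def report_threshold(cost_false_assert: float,
                     cost_false_deny: float) -> float:
    """Bayes-optimal threshold for asserting H: speak up iff
    P(H) > c_fa / (c_fa + c_fd)."""
    return cost_false_assert / (cost_false_assert + cost_false_deny)

# Symmetric error costs: speak your mind above 0.5.
print(report_threshold(1, 1))  # 0.5
# Dissent is socially expensive: wrongly asserting H costs 9x more,
# so you only voice it once you are more than 90% sure.
print(report_threshold(9, 1))  # 0.9
```

The belief need not change at all for the reported belief to change completely.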

6. No common knowledge, no convergence.
Aumann told us that honest Bayesians with common priors, whose posteriors are common knowledge, cannot agree to disagree. Remove either—common priors or the “we both know we both saw the same thing” bit—and you get the modern news cycle.

“Acculturation” vs “Indoctrination”

Same mechanism, different moral valence. Priors are installed by families, schools, churches, unions, algorithms. Call it culture if you approve of the installers; call it indoctrination if you don’t. The probability calculus doesn’t care. Your tribal totems do.

Two quick toy moves you can use in prose

  • Likelihood hacking:
    “When evidence arrives, the tribe doesn’t deny the datum; it edits the likelihoods. ‘If my side did it, it’s an outlier; if your side did it, it’s a pattern.’ This is not hypocrisy; it’s a parameter update where the parameter is loyalty.”
  • Posterior divergence despite ‘facts’:
    “Give two citizens the same court ruling. One updates towards legitimacy because courts are reliable; the other away from legitimacy because courts are captured. The ruling is constant; the trust vector is not.”

If one wanted to reduce the split (perish the thought)

  • Forecast, don’t opine. Run cross-camp prediction markets or calibration tournaments. Bayes behaves when you pay people for accuracy rather than performance art.
  • Adversarial collaboration. Force both sides to pre-register what evidence would move them and how much. If someone’s d for disconfirming evidence is effectively zero, you’ve identified faith, not inference.
  • Reference classes, not anecdotes. Pull arguments out of the single-case trap and into base-rate land. Yes, it’s boring. So is surgery, but people still do it.

The punchline

Polarisation isn’t the failure of reason; it’s what happens when reason is strapped to identity. Priors are social. Likelihoods are political. Posteriors are performative. You can call it acculturation if you want to feel civilised, indoctrination if you want to throw a brick, but either way you’re watching Bayes’ theorem run inside a culture war. The maths is sober; the humans are not.