Reality Happens Once. Facts Happen Many Times.

I want to clarify my recent post, The Trouble with Facts. I realise that I was speaking to one non-trivial class of facts, but there is more than one. We argue about facts as if the word named a single, stable thing. It doesn’t. It names a family of very different things, quietly grouped together by habit, convenience, and institutional need. Most disputes about facts go nowhere, not because one side is irrational, but because the word itself is doing covert work. We slide between meanings without noticing, then act surprised when disagreement follows. This piece is an attempt to slow that slide.

Audio: NotebookLM summary podcast of this topic.

Polysemy We Notice, Polysemy We Don’t

We are comfortable with ambiguity when it is obvious. A bank can be a financial institution or the edge of a river. A bat can be an animal or a piece of sports equipment. Context resolves these instantly. No one feels existentially threatened by the ambiguity.

Fact is different. The word is polysemous in a way that is both subtle and consequential. Its meanings sit close enough to bleed into one another, allowing certainty from one sense to be smuggled into another without detection. Calling something a fact does not merely describe it. It confers authority. It signals that questioning should stop. That is why this ambiguity matters.

Different Kinds of Facts

Before critiquing facts, we need to sort them.

1. Event-facts (brute, world-facing)
As mentioned previously, these concern what happens in the world, independent of observation.

  • A car collides with a tree.
  • Momentum changes.
  • Metal deforms.

These events occur whether or not anyone notices them. They are ontologically robust and epistemically inaccessible. No one ever encounters them directly. We only ever encounter traces.

2. Indexical or performative facts (trivial, self-reporting)
“I am typing.”

I am doing this now – though that ‘now’ may not be relevant when you read this. This is a fact, but a very thin one. Its authority comes from the coincidence of saying and doing. It requires no reconstruction, no inference, no institutional validation. These facts are easy because they do almost no work.

3. Retrospective personal facts (memory-mediated)
“I was typing.”

This may be relevant now, at least relative to the typing of this particular post. Still a fact, but weaker. Memory enters. Narrative compression enters. Selectivity enters. The same activity now carries a different epistemic status purely because time has passed.

4. Prospective statements (modal, not yet facts)
“I will be typing.”

This is not yet a fact. It may never come to be one. It is an intention or prediction that may or may not be realised. Future-tense claims are often treated as incipient facts, but this is a category error with real consequences.

5. Institutional facts (designated, procedural)
“The court finds…”
“The report concludes…”

These are facts by designation. They are not discovered so much as selected, formalised, and stabilised so that systems can act. They rarely rise to the level of brute event-facts, so the legal system tends to generate facts in name only – FINO, if I am being cute.

All of these are called ‘facts’. They are not interchangeable. The trouble begins when certainty migrates illicitly from trivial or institutional facts into brute event-facts, and we pretend nothing happened in the transfer.

One Motor Vehicle

Reconsider the deliberately simple case: A motor vehicle collides with a tree. Trees are immobile, so we can rule out the tree colliding with the car.

Ontologically, something happened. Reality did not hesitate. But even here, no one has direct access to the event itself.

The driver does not enjoy privileged access. They experience shock, adrenaline, attentional narrowing, selective memory, post hoc rationalisation, perhaps a concussion. Already several layers intervene before language even arrives.

A rough schema looks like this:

event → sensory registration → cognitive framing → linguistic encoding → social validation

Ontology concerns what happens.
Epistemology concerns how anything becomes assertable.

Modern thinking collapses the second into the first and calls the result the facts.

People speak of “hard facts” as if hardness transfers from objects to propositions by proximity. It doesn’t. The tree is solid. The fact is an artefact assembled from observation, inference, convention, and agreement.
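
For anyone who prefers the schema in more mechanical dress, here is a minimal, purely illustrative Python sketch of the chain above. Every stage name, key, and string in it is invented for the example; it is not a model of perception or cognition, only a reminder that the thing we end up calling a fact is the output of every stage it passed through, not the event itself.

```python
# A purely illustrative sketch of the mediation schema:
#   event -> sensory registration -> cognitive framing -> linguistic encoding -> social validation
# All stage names, keys, and strings here are invented for this example.

def sensory_registration(event):
    # Only some aspects of the event register at all; the rest never enters the chain.
    noticed = {"impact", "sound", "location"}
    return {k: v for k, v in event.items() if k in noticed}

def cognitive_framing(registered):
    # What registered is compressed into a narrative frame, with its own selectivity.
    return {"frame": "a collision happened", "details": registered}

def linguistic_encoding(framed):
    # The frame is squeezed into a sentence, losing still more.
    return f"A car hit a tree ({framed['frame']})."

def social_validation(statement):
    # An institution accepts, rewords, or rejects the statement before it counts as 'the facts'.
    return {"assertable_fact": statement, "status": "accepted into the report"}

# The event itself has more to it than any later stage retains.
event = {
    "impact": "high",
    "sound": "crunch",
    "location": "roadside",
    "momentum_change": "never measured",
    "metal_deformation": "never measured",
}

fact = social_validation(linguistic_encoding(cognitive_framing(sensory_registration(event))))
print(fact)  # Several transformations removed from the event it purports to report.
```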

And so it goes…

Why the Confusion Persists

When someone responds, “But isn’t it a fact that I read this?”, the answer is yes. A different kind of fact.

The error lies not in affirming facts, but in failing to distinguish them. The word fact allows certainty to migrate across categories unnoticed, from trivial self-reports to brute world-events, and from institutional verdicts to metaphysical claims. That migration is doing the work.

Conclusion

Clarifying types of facts does not weaken truth. It prevents us from laundering certainty where it does not belong.

Facts exist. Events occur. But they do not arrive unmediated, innocent, or singular.

Reality happens once. Facts happen many times.

The mistake was never that facts are unreal. It was believing they were all the same kind of thing.

Language Insufficiency Hypothesis: Structural Limits of Language

1–2 minutes

I share a summary of Chapter 2 of A Language Insufficiency Hypothesis.

Video: Language Insufficiency Hypothesis: Structural Limits of Language

Not much to add. The video is under 8 minutes long – or just read the book. The podcast provides a different perspective.

Let me know what you think – there or here.

Audio: NotebookLM summary podcast of Chapter 2.

I also discussed Chapter 1: The Genealogy of Language Failure if you missed it.

Justice as a House of Cards

4–6 minutes

How retribution stays upright by not being examined

There is a persistent belief that our hardest disagreements are merely technical. If we could stop posturing, define our terms, and agree on the facts, consensus would emerge. This belief survives because it works extremely well for birds and tables.

It fails spectacularly for justice.

Audio: NotebookLM summary podcast of this topic.

The Language Insufficiency Hypothesis (LIH) isn’t especially interested in whether people disagree. It’s interested in how disagreement behaves under clarification. With concrete terms, clarification narrows reference. With contested ones, it often fractures it. The more you specify, the more ontologies appear.

Justice is the canonical case.

Retributive justice is often presented as the sober, adult conclusion. Not emotional. Not ideological. Just what must be done. In practice, it is a delicately balanced structure built out of other delicately balanced structures. Pull one term away and people grow uneasy. Pull a second and you’re accused of moral relativism. Pull a third and someone mentions cavemen.

Let’s do some light demolition. I created a set of 17 Magic: The Gathering-themed cards to illustrate various concepts. Below are a few. A few more may appear over time.

Card One: Choice

Image: MTG: Choice – Enchantment

The argument begins innocently enough:

They chose to do it.

But “choice” here is not an empirical description. It’s a stipulation. It doesn’t mean “a decision occurred in a nervous system under constraints.” It means a metaphysically clean fork in the road. Free of coercion, history, wiring, luck, trauma, incentives, or context.

That kind of choice is not discovered. It is assumed.

Pointing out that choices are shaped, bounded, and path-dependent does not refine the term. It destabilises it. Because if choice isn’t clean, then something else must do the moral work.

Enter the next card.

Card Two: Agency

Image: MTG: Agency – Creature – Illusion

Agency is wheeled in to stabilise choice. We are reassured that humans are agents in a morally relevant sense, and therefore choice “counts”.

Counts for what, exactly, is rarely specified.

Under scrutiny, “agency” quietly oscillates between three incompatible roles:

  • a descriptive claim: humans initiate actions
  • a normative claim: humans may be blamed
  • a metaphysical claim: humans are the right kind of cause

These are not the same thing. Treating them as interchangeable is not philosophical rigour. It’s semantic laundering.

But agency is emotionally expensive to question, so the discussion moves on briskly.

Card Three: Responsibility

Image: MTG: Responsibility – Enchantment – Curse

Responsibility is where the emotional payload arrives.

To say someone is “responsible” sounds administrative, even boring. In practice, it’s a moral verdict wearing a clipboard.

Watch the slide:

  • causal responsibility
  • role responsibility
  • moral responsibility
  • legal responsibility

One word. Almost no shared criteria.

By the time punishment enters the picture, “responsibility” has quietly become something else entirely: the moral right to retaliate without guilt.

At which point someone will say the magic word.

Card Four: Desert

Image: MTG: Desert – Instant

Desert is the most mystical card in the deck.

Nothing observable changes when someone “deserves” punishment. No new facts appear. No mechanism activates. What happens instead is that a moral permission slip is issued.

Desert is not found in the world. It is declared.

And it only works if you already accept a very particular ontology:

  • robust agency
  • contra-causal choice
  • a universe in which moral bookkeeping makes sense

Remove any one of these and desert collapses into what it always was: a story we tell to make anger feel principled.

Which brings us, finally, to the banner term.

Card Five: Justice

Image: MTG: Justice – Enchantment

At this point, justice is invoked as if it were an independent standard hovering serenely above the wreckage.

It isn’t.

“Justice” here does not resolve disagreement. It names it.

Retributive justice and consequentialist justice are not rival policies. They are rival ontologies. One presumes moral balance sheets attached to persons. The other presumes systems, incentives, prevention, and harm minimisation.

Both use the word justice.

That is not convergence. That is polysemy with a body count.

Why clarification fails here

This is where LIH earns its keep.

With invariants, adding detail narrows meaning. With terms like justice, choice, responsibility, or desert, adding detail exposes incompatible background assumptions. The disagreement does not shrink. It bifurcates.

This is why calls to “focus on the facts” miss the point. Facts do not adjudicate between ontologies. They merely instantiate them. If agency itself is suspect, arguments for retribution do not fail empirically. They fail upstream. They become non sequiturs.

This is also why Marx remains unforgivable to some.
“From each according to his ability, to each according to his need” isn’t a policy tweak. It presupposes a different moral universe. No amount of clarification will make it palatable to someone operating in a merit-desert ontology.

The uncomfortable conclusion

The problem is not that we use contested terms. We cannot avoid them.

The problem is assuming they behave like tables.

Retributive justice survives not because it is inevitable, but because its supporting terms are treated as settled when they are anything but. Each card looks sturdy in isolation. Together, they form a structure that only stands if you agree not to pull too hard.

LIH doesn’t tell you which ontology to adopt.

It tells you why the argument never ends.

And why, if someone insists the issue is “just semantic”, they’re either confused – or holding the deck.

What the LIH Is Not About (And Why This Still Needs Saying)

3–5 minutes

As the publication date of A Language Insufficiency Hypothesis (LIH) draws nearer, I feel it’s a good time to promote it (obviously) and to introduce some of the problems it uncovers – including common misperceptions I’ve already heard. Through this feedback, I now understand some of the underlying structural limitations that I hadn’t considered, but this only strengthens my position. As I state at the start of the book, the LIH isn’t a cast-in-stone artefact. Other discoveries will inevitably be made. For now, consider it a way to think about the deficiencies of language, around which remediation strategies can be developed.

Audio: NotebookLM summary podcast of this content.

Let’s clear the undergrowth first. The Language Insufficiency Hypothesis is not concerned with everyday ambiguity, garden-variety polysemy, or the sort of misunderstandings that vanish the moment someone bothers to supply five seconds of context. That terrain is already well-mapped, thoroughly fenced, and frankly dull.

Take the classic sort of example wheeled out whenever someone wants to sound clever without doing much work:

She made a huge basket.

Video: a woman making a large basket

If you’re a basketweaver, you picture an absurdly large basket and quietly question the maker’s life choices. If you’re watching basketball, you hear ‘score’. If you’re anywhere near the context in which the sentence was uttered, the meaning is obvious. If it isn’t, the repair cost is trivial. Add context, move on, live your life.

Language did not fail here. It merely waited for its coat. This is not the sort of thing the LIH loses sleep over.

The Groucho Marx Defence, or: Syntax Is Not the Problem

Logicians and armchair philosophers love to reach for jokes like Groucho Marx’s immortal line:

One morning I shot an elephant in my pajamas. How he got in my pajamas, I don’t know.

Video: A man and elephant in pyjamas (no sound)

Yes, very funny. Yes, the sentence allows for a syntactic misreading. No, nobody actually believes the elephant was lounging about in striped silk. The humour works precisely because the “wrong” parse is momentarily entertained and instantly rejected.

Again, language is not insufficient here. It’s mischievous. There’s a difference.

If the LIH were worried about this sort of thing, its ambitions would be indistinguishable from an undergraduate logic textbook with better branding.

Banks, Rivers, and the Myth of Constant Confusion

Likewise, when someone in a city says, ‘I went to the bank’, no sane listener imagines them strolling along a riverbank, unless they are already knee-deep in pastoral fantasy or French tourism brochures. Context does the heavy lifting. It almost always does.

Video: Rare footage of me trying to withdraw funds at my bank (no sound)

This is not a crisis of meaning. This is language functioning exactly as advertised.

Where the Trouble Actually Starts: Contestables

The LIH begins where these tidy examples stop being helpful. It concerns itself with Contestables: terms like truth, freedom, justice, fairness, harm, equality. Words that look stable, behave politely in sentences, and then detonate the moment you ask two people what they actually mean by them. These are not ambiguous in the casual sense. They are structurally contested.

In political, moral, and cultural contexts, different groups use the same word to gesture at fundamentally incompatible conceptual frameworks, all while assuming a shared understanding that does not exist. The conversation proceeds as if there were common ground, when in fact there is only overlap in spelling.

That’s why attempts to ‘define’ these terms so often collapse into accusation:

That’s not what freedom means.
That’s not real justice.
You’re redefining truth.

No, the definitions were never shared in the first place. The disagreement was smuggled in with the noun.

‘Just Ignore the Word’ Is Not a Rescue

A common response at this point is to suggest that we simply bypass the troublesome term and discuss the concrete features each party associates with it. Fine. Sensible. Often productive. But notice what this manoeuvre concedes. It does not save the term. It abandons it.

If meaningful discussion can only proceed once the word is set aside and replaced with a list of clarifications, constraints, examples, and exclusions, then the word has already failed at its primary job: conveying shared meaning. This is precisely the point the LIH is making.

The insufficiency is not that language is vague, or flexible, or context-sensitive. It’s that beyond a certain level of conceptual complexity, language becomes a confidence trick. It gives us the feeling of agreement without the substance, the appearance of communication without the transaction.

At that point, words don’t merely underperform. They mislead.

Meet the Language Insufficiency GPT

1–2 minutes

In anticipation of the publication of A Language Insufficiency Hypothesis in January 2026, I’ve created a Language Insufficiency GPT.

Today I’m launching something designed to exploit a familiar failure mode with forensic precision:
👉 https://chatgpt.com/g/g-694018a9bbc88191a8360d65a530e50c-language-insufficiency-gpt

Naturally, it will make more sense alongside the book. But it may still provide a bit of entertainment – and mild discomfort – in the meantime.

tl;dr: Language is generally presumed to be stable. Words mean what you think they mean, right? A table is a table. A bird is a bird. Polysemy aside, these are solid, dependable units.

Then we arrive at freedom, justice, truth, and an entire panoply of unstable candidates. And let’s not even pretend qualia are behaving themselves.

So when someone says ‘truth’, ‘free speech’, or ‘IQ’, you may suddenly realise you’ve been arguing with a cardboard cut-out wearing your own assumptions. That isn’t just interpersonal mischief. It’s language doing exactly what it was designed to do: letting you glide over the hard problems while sounding perfectly reasonable.

Audio: Short NotebookLM summary of this page content*
Video: Legacy video explaining some features of the LIH.

If that sounds banal, you’ve already fallen for the trap.

Give it a try – or wait until you’ve digested the book. Not literally, unless you’re short on fibre.

Cheers.

* As I’ve cited previously, the quality of NotebookLM varies – usually in predictable directions. This one does well enough, but it doesn’t have enough context to get the story right (because it was only drawing from this page rather than from a fuller accounting of the LIH). Its trailing comment reveals that it doesn’t grasp that “new words” don’t solve the problem.

Earlier, it suggests that language is intentionally vague. This is not an assertion I make. You can read some of the earlier incarnations, or you can wait for it to be published.

How Not to Interpret MEOW GPT

3–4 minutes

A NotebookLM Cautionary Tale for the Philosophically Curious

Every so often, the universe gives you a gift. Not the good kind, like an unexpected bottle of Shiraz, but the other kind – the ‘teachable moment’ wrapped in a small tragedy. In this case, a perfectly innocent run of MEOW GPT (my Mediated Encounter Ontology engine) was fed into NotebookLM to generate a pseudo-podcast. And NotebookLM, bless its little algorithmic heart, proceeded to demonstrate every classic mistake people make when confronting a relational ontology.

Audio: The misinterpretation of MEOW GPT: On Progress by NotebookLM that spawned this post.

It’s perfect. I couldn’t have scripted a better example of How Not To Read MEOW GPT if I’d hired a team of Enlightenment rationalists on retainer.

So consider this your public service announcement – and a guide for anyone experimenting with MEOW GPT at home, preferably while sitting down and not holding onto any cherished metaphysical delusions.

Video: Surreal Light through a Prism Clip for no particular reason (No sound)

Mistake 1: Treating a Thick Concept as a Single Glorious Thing

NotebookLM began, earnestly, by trying to uncover the ‘inner architecture of honour’, as if it were a cathedral with blueprints lying around.

This is the central error:

There is no inner architecture to uncover. There are only patterns – drifting, contested, historically mangled patterns – that happen to share a word. If you start with ‘What is honour?’, you’ve already fallen down the stairs.

Mistake 2: Rebuilding Essence From the T0–T3 Layers

MEOW GPT gives you biological (T0), cognitive (T1), linguistic (T2), and institutional/technical (T3) mediation because that’s how constraints emerge. NotebookLM, meanwhile, reconstructed these as ‘layers’ of the same virtue – as if honour were a three-storey moral townhouse with a loft conversion.

No. The tiers are co-emergent constraints, not components of a moral particle.
If your conclusion looks like a metaphysical onion, you’ve misread the recipe.

Mistake 3: Sneaking Virtue Ethics in Through the Fire Exit

NotebookLM kept returning to:

  • an ‘internal compass’
  • a ‘core record of the self’
  • a ‘lifelong ledger’
  • a ‘deep personal architecture’

At this point we might as well carve Aristotle’s name into the hull.

MEOW’s stance is simple: the self is not a marble statue – it’s an ongoing social, cognitive, and technical scandal. Treating honour as a personality trait is just the old moral ontology with a new hairstyle.

Mistake 4: Treating Polysemy as Noise, Not Evidence

NotebookLM acknowledged the differing uses of ‘honour’, but always with the implication that beneath the variations lies one pure moral essence. This is backwards. The ambiguity is the point. The polysemy isn’t messy data; it’s the signature of conceptual drift.

If you treat ambiguity as a problem to be ironed out, you’ve missed half the LIH and all of the MEOW.

Mistake 5: Turning MEOW Into a TED Talk

The podcast tried to wrap things up by contrasting honour’s “deep internal permanence” with the ephemerality of digital rating systems.

It’s cute, but it’s still modernist comfort-food. MEOW does not mourn for the ‘permanent self’. It doesn’t recognise such a creature. And digital honour doesn’t ‘replace’ the old patterns; it aggressively rewrites the honour-economy into algorithmic form. If your conclusion sounds like ‘ancient virtue meets modern technology’, that’s TED, not MEOW.

So How Should You Interpret MEOW GPT?

A short cheat-sheet for anyone experimenting at home:

  1. There is no essence.
    Concepts like honour, truth, integrity, and justice are drift-patterns, not objects.
  2. The tiers describe mediation, not ingredients.
    They’re co-emergent pressures, not building blocks.
  3. Thick terms lie to you.
    Their apparent unity is linguistic camouflage.
  4. Ambiguity is structural.
    If the term looks fuzzy, that’s because the world is fuzzy there.
  5. If a concept feels granite-solid, you’re standing on conceptual quicksand.
    (Sorry.)

A Friendly Warning Label

Warning:
If you believe thick moral concepts have single, universal meanings, MEOW GPT may cause temporary metaphysical discomfort.
Consult your ontological physician if symptoms persist.