Ontological Blindness in Modern Moral Science

Announcement: I’ll be taking a break from posting long-form articles for a while to focus on a project I’m developing. Instead, I’ll share progress summary updates.

Ontological Blindness in Modern Moral Science is a working title, with a working subtitle: Why Semantic Thickness, Measurement, and Reconciliation Go Wrong. No spoilers.

Audio: NotebookLM summary podcast of this topic.

INSERT: I’ve only outlined and stubbed this Ontological blindness project, and I’ve already got another idea. I need to stop reading and engaging with the world.

I was listening to the Audible version of A.J. Ayer’s classic, Language, Truth, and Logic (1936) – not because I had time but because I listen to audiobooks when I work out. Ayer is a Logical Positivist, but I forgive him. He’s a victim of his time. In any case, I noticed several holes in his logic.

Sure, the book was published in 1936, and it is infamous for defending or creating Emotivism, a favourite philosophical whipping boy. I’m an Emotivist, so I disagree with the opposition. In fact, I feel their arguments are either strawmen or already defended by Ayer. I also agree with Ayer that confusing the map of language with the terrain of reality is a problem in philosophy (among other contexts), but it’s less excusable for a language philosopher.

In any case, I have begun a file to consider a new working title, Phenomenal Constraint and the Limits of Ontological Language. I might as well stay in the ontological space for a while. We’ll see where it leads, but first, I need to put the original project to bed.

Every time I commence a project, I create a thesis statement and an abstract to orient me. These may change over the course of a project, especially larger ones – the abstract more so than the thesis. This thesis has already changed a couple of times, but I feel it’s settled now.

Image: NotebookLM infographic on this topic.

Thesis Statement

Modern moral psychology repeatedly commits a multi-layered category error by treating semantically and ontologically heterogeneous moral terms as commensurate units within a single comparative framework, while simultaneously treating parochial moral metaphysics as natural substrate.

This dual conflation—of semantic density with moral plurality, and of ontological commitment with empirical discovery—produces the false appearance that some moral systems are more comprehensive than others, when it in fact reflects an inability to register ontological incommensurability.

Moral Foundations Theory provides a clear and influential case of this broader mistake: a framework whose reconciliation-oriented conclusions depend not on empirical discovery alone, but on an unacknowledged liberal-naturalist sub-ontology functioning as conceptual ‘firmware’ mistaken for moral cognition itself.

Abstract

Modern moral psychology seeks to explain moral diversity through empirically tractable frameworks that assume cross-cultural comparability of moral concepts. This book argues that many such frameworks – including but not limited to Moral Foundations Theory (MFT) – rest on a persistent category error: the treatment of semantically and ontologically heterogeneous moral terms as commensurate units within a single evaluative space.

The argument proceeds in four stages. First, it establishes that moral vocabularies differ not merely in emphasis but in semantic thickness: some terms (e.g. harm, fairness) are comparatively thin, portable, and practice-independent, while others (e.g. loyalty, authority, sanctity) are culturally saturated, institution-dependent, and ontologically loaded. Treating these as equivalent ‘foundations’ mistakes density for plurality.

Second, the book shows that claims of moral ‘breadth’ or ‘completeness’ smuggle normativity into ostensibly descriptive research, crossing the Humean is/ought divide without acknowledgement. Third, it argues that this slippage is not accidental but functional, serving modern culture’s demand for optimistic, reconcilable accounts of moral disagreement.

Finally, through sustained analysis of MFT as a worked example, the book demonstrates how liberal naturalist individualism operates as an unacknowledged sub-ontology – conceptual firmware that determines what counts as moral, measurable, and comparable. The result is not moral pluralism, but ontological imperialism disguised as empirical neutrality.

The book concludes by arguing that acknowledging ontological incommensurability does not entail nihilism or relativistic indifference, but intellectual honesty about the limits of moral science and the false comfort of reconciliation narratives.

Ideation

I’ve been pondering ontologies a lot these past few weeks, especially how social ontologies undermine communication. More recently, I’ve been considering how sub-ontologies come into play. A key catalyst for my thinking has been Jonathan Haidt’s Moral Foundations Theory, but I’ve also been influenced by George Lakoff, Kurt Gray, and Joshua Greene, as I’ve shared recently. I want to be clear: This book is not about politics or political science. It is intended to be about the philosophy of psychology and adjacent topics.

At the highest levels, I see fundamental category errors undermining MFT, but as I inspected more closely, the problem goes deeper still – too deep to fit into an essay or even a monograph – so I will be targeting a book, which gives me room to expand and articulate my argumentation. Essays are constraining, and the narrative flow – so to speak – is interrupted by footnotes and tangents.

In a book, I can spend time framing and articulating – educating the reader without presuming an in-depth knowledge. This isn’t to say that this isn’t a deep topic, and I’ll try not to patronise readers, but this topic is not only counterintuitive, it is also largely unorthodox and may ruffle a few feathers.

I’m not sure how much I’ll be able to share, but I’d like to be transparent in the process and perhaps gather some inputs along the way.

Methodology

Sort of… I’ve used Scrivener in the past for organising and writing fiction. This is the first time I’m organising nonfiction. We’ll see how it goes.

Truth After Deflation: Why ‘Truth’ Refuses to Behave

I’ve long had a problem with Truth – or at least the notion of it. It gets way too much credit for doing not much at all. For a long time now, philosophers have agreed on something uncomfortable: Truth isn’t what we once thought it was.

The grand metaphysical picture, where propositions are true because they correspond to mind-independent facts, has steadily eroded. Deflationary accounts have done their work well. Truth no longer looks like a deep property hovering behind language. It looks more like a linguistic device: a way of endorsing claims, generalising across assertions, and managing disagreement. So far, so familiar.

Audio: NotebookLM summary podcast of this topic.

What’s less often asked is what happens after we take deflation seriously. Not halfway. Not politely. All the way.

That question motivates my new paper, Truth After Deflation: Why Truth Resists Stabilisation. The short version is this: once deflationary commitments are fully honoured, the concept of Truth becomes structurally unstable. Not because philosophers are confused, but because the job we keep asking Truth to do can no longer be done with the resources we allow it.

The core diagnosis: exhaustion

The paper introduces a deliberately unromantic idea: truth exhaustion. Exhaustion doesn’t mean that truth-talk disappears. We still say things are true. We still argue, correct one another, and care about getting things right. Exhaustion means something more specific:

After deflation, there is no metaphysical, explanatory, or adjudicative remainder left for Truth to perform.

Truth remains grammatically indispensable, but philosophically overworked.

Image: NotebookLM infographics of this topic. (Please ignore the typos.)

The dilemma

Once deflationary constraints are accepted, attempts to “save” Truth fall into a simple two-horn dilemma.

Horn A: Stabilise truth by making it invariant.
You can do this by disquotation, stipulation, procedural norms, or shared observation. The result is stable, but thin. Truth becomes administrative: a device for endorsement, coordination, and semantic ascent. It no longer adjudicates between rival frameworks.

Horn B: Preserve truth as substantive.
You can ask Truth to ground inquiry, settle disputes, explain success, or stand above practices. But now you need criteria. And once criteria enter, so do circularity, regress, or smuggled metaphysics. Truth becomes contestable precisely where it was meant to adjudicate.

Stability costs substance. Substance costs stability. There is no third option waiting in the wings.

Why this isn’t just abstract philosophy

To test whether this is merely a theoretical artefact, the paper works through three domains where truth is routinely asked to do serious work:

  • Moral truth, where Truth is meant to override local norms and condemn entrenched practices.
  • Scientific truth, where Truth is meant to explain success, convergence, and theory choice.
  • Historical truth, where Truth is meant to stabilise narratives against revisionism and denial.

In each case, the same pattern appears. When truth is stabilised, it collapses into procedure, evidence, or institutional norms. When it is thickened to adjudicate across frameworks, it becomes structurally contestable. This isn’t relativism. It’s a mismatch between function and resources.

Why this isn’t quietism either

A predictable reaction is: isn’t this just quietism in better prose?

Not quite. Quietism tells us to stop asking. Exhaustion explains why the questions keep being asked and why they keep failing. It’s diagnostic, not therapeutic. The persistence of truth-theoretic debate isn’t evidence of hidden depth. It’s evidence of a concept being pushed beyond what it can bear after deflation.

The upshot

Truth still matters. But not in the way philosophy keeps demanding. Truth works because practices work. It doesn’t ground them. It doesn’t hover above them. It doesn’t adjudicate between them without borrowing authority from elsewhere. Once that’s accepted, a great deal of philosophical anxiety dissolves, and a great deal of philosophical labour can be redirected.

The question is no longer “What is Truth?” It’s “Why did we expect Truth to do that?”

The paper is now archived on Zenodo and will propagate to PhilPapers shortly. It’s long, unapologetically structural, and aimed squarely at readers who already think deflationary truth is right but haven’t followed it to its endpoint.

Read it if you enjoy watching concepts run out of road.

When Deflation Becomes Ritual

I recently shared a post calling out mystics for trying to fill spaces I deflate, but I am self-aware enough to admit that I can be guilty, too. I worry about Maslow’s Law of the Instrument. Deflationary philosophy likes to imagine itself as immune to excess. It dissolves puzzles, clears away bad questions, and resists the urge to add metaphysical upholstery where none is needed. No mysteries, thank you. No hidden depths. Just conceptual hygiene. This self-image is mostly deserved. But not indefinitely. This post is an attitude check.

Because deflation, like anything that works, can ossify. And when it does, it doesn’t inflate into metaphysics. It hardens into something more embarrassing: a ritual of refusal.

Audio: NotebookLM summary podcast on this topic.

From method to mannerism

Deflation begins as a method:

  • A question is posed.
  • Its assumptions are examined.
  • The confusion is diagnosed.
  • The question dissolves.
  • Everyone goes home.

At its best, this is liberating. It frees us from chasing shadows and mistaking grammatical artefacts for ontological puzzles. The trouble begins when the gesture outlives the job.

What was once a diagnostic move becomes a stylistic tic. Refusal becomes automatic. Silence becomes performative. ‘There is nothing there’ is delivered not as a conclusion, but as a posture. At that point, deflation stops doing work and starts doing theatre.

I am often charged with being negative, a pessimist, a relativist, and a subjectivist. I am sometimes each of these. Mostly, I am a Dis-Integrationist and a deflationist, as it were. I like to tear things apart – not out of malice, but because certain things just don’t sit quite right.

Another thing I do is to take things at face value. As I came up through the postmodern tradition, I don’t trust metanarratives, and I look for them everywhere. This is why I wrote A Language Insufficiency Hypothesis (LIH), and even more so, the Mediated Encounter Ontology (MEOW). Some words carry a lot of baggage and connotation, so I want to be sure I understand the rawest form. This is why I rail on about weasel words like truth, justice, freedom, and such.

I also refrain from responding if I am not satisfied with a definition. This is why I consider myself an igtheist as opposed to an atheist. Functionally, I am the latter, but the definition I’d be opposing is so inane that it doesn’t even warrant me taking a position.

Image: NotebookLM infographic of this topic.

The prestige of saying less

There is a quiet prestige attached to not answering questions. Refusal sounds serious. Restraint sounds wise. Silence, in the right lighting, sounds profound. This is not an accident. Our intellectual culture has learned to associate verbal minimalism with depth, much as it associates verbosity with insecurity. Deflationary philosophers are not immune to this aesthetic pull.

When ‘I reject the question’ becomes a default response rather than a considered judgement, deflation has slipped from method into mannerism. The absence of claims becomes a badge. The lack of commitments becomes an identity. One is no longer clearing space, but occupying emptiness.

This is how deflation acquires a style – and styles are how rituals begin.

Apophasis without God

Mysticism has its negative theology. Ritualised deflation develops something similar.

Both rely on:

  • refusal to name
  • insistence on limits
  • reverent quiet

The difference is meant to be procedural. Mysticism stops at the silence. Deflation is supposed to pass through it. But when deflation forgets that its silence is provisional, it starts to resemble the thing it set out to criticise. Absence becomes sacred again, just without the cosmology. The metaphysician worships what cannot be said. The ritualised deflationist admires themselves for not saying it. Neither is doing conceptual work anymore.

A brief and unavoidable Wittgenstein

This is where Ludwig Wittgenstein inevitably reappears, not as an authority, but as a warning. Wittgenstein did not think philosophy ended in silence because silence was holy. He thought philosophy ended in silence because the confusion had been resolved. The ladder was to be thrown away, not mounted on the wall and admired. Unfortunately, ladders make excellent décor.

When deflation becomes ritual, the therapeutic move freezes into liturgy. The gesture is preserved long after its purpose has expired. What was meant to end a problem becomes a way of signalling seriousness. That was never the point.

A diagnostic test

There is a simple question that separates disciplined deflation from its ritualised cousin:

  • Is this refusal doing explanatory work, or is it being repeated because it feels right?
  • If silence leads to better distinctions, better descriptions, or better questions, it is doing its job.
  • If silence merely repeats itself, it has become an affect.

And affects, once stabilised, are indistinguishable from rituals.

Deflation is local, not terminal

The corrective is not to abandon deflation, but to remember its scope.

Deflation should be:

  • local rather than global
  • temporary rather than terminal
  • revisable rather than aestheticised

Some questions need dissolving. Some need answering. Some need rephrasing. Knowing which is which is the entire discipline. Deflation is not a worldview. It is not a temperament. It is not a lifestyle choice. It is a tool, and like all tools, it should be put down when it stops fitting the task.

Clearing space is not a vocation

There is a temptation, once a room has been cleared, to linger in it. To admire the quiet. To mistake the absence of furniture for the presence of insight. But clearing space is not a vocation. It is a task. Once it is done, staying behind is just another way of refusing to leave. And refusal, repeated without reason, is no longer philosophy. It is choreography.

Using Generative AI as Early Peer Review

Cheap Adversaries, Outsourced Ego, and Engineered Critique ← ChatGPT is obsessed with subtitles.

There is a peculiar anxiety around admitting that one uses generative AI in serious intellectual work. The anxiety usually takes one of two forms. Either the AI is accused of replacing thinking, or it is accused of flattering the thinker into delusion. Both charges miss the point, and both underestimate how brittle early-stage human peer review often is.

What follows is not a defence of AI as an oracle, nor a claim that it produces insight on its own. It is an account of how generative models can be used – deliberately, adversarially, and with constraints – as a form of early peer pressure. Not peer review in the formal sense, but a rehearsal space where ideas are misread, overstated, deflated, and occasionally rescued from themselves.

Audio: NotebookLM summary podcast of this topic.

The unromantic workflow

The method itself is intentionally dull:

  1. Draft a thesis statement.
    Rinse & repeat.
  2. Draft an abstract.
    Rinse & repeat.
  3. Construct an annotated outline.
    Rinse & repeat.
  4. Only then begin drafting prose.

At each stage, the goal is not encouragement or expansion but pressure. The questions I ask are things like:

  • Is this already well-trodden ground?
  • Is this just X with different vocabulary?
  • What objection would kill this quickly?
  • What would a sceptical reviewer object to first?

The key is timing. This pressure is applied before the idea is polished enough to be defended. The aim is not confidence-building; it is early damage.

Image: NotebookLM infographic on this topic.

Why generative AI helps

In an ideal world, one would have immediate access to sharp colleagues willing to interrogate half-formed ideas. In practice, that ecology is rarely available on demand. Even when it is, early feedback from humans often comes bundled with politeness, status dynamics, disciplinary loyalty, or simple fatigue.

Generative models are always available, never bored, and indifferent to social cost. That doesn’t make them right. It makes them cheap adversaries. And at this stage, adversaries are more useful than allies.

Flattery is a bias, not a sin

Large language models are biased toward cooperation. Left unchecked, they will praise mediocre ideas and expand bad ones into impressive nonsense. This is not a moral failure. It is a structural bias.

The response is not to complain about flattery, but to engineer against it.

Sidebar: A concrete failure mode

I recently tested a thesis on Mistral about object permanence. After three exchanges, the model had escalated a narrow claim into an overarching framework, complete with invented subcategories and false precision. The prose was confident. The structure was impressive. The argument was unrecognisable.

This is the Dunning-Kruger risk in practice. The model produced something internally coherent that I lacked the domain expertise to properly evaluate. Coherence felt like correctness.

The countermeasure was using a second model, which immediately flagged the overreach. Disagreement between models is often more informative than agreement.

Three tactics matter here.

1. Role constraint
Models respond strongly to role specification. Asking explicitly for critique, objections, boundary-setting, and likely reviewer resistance produces materially different output than asking for ‘thoughts’ or ‘feedback’.

2. Third-person framing
First-person presentation cues collaboration. Third-person presentation cues evaluation.

Compare:

  • Here’s my thesis; what do you think?
  • Here is a draft thesis someone is considering. Please evaluate its strengths, weaknesses, and likely objections.

The difference is stark. The first invites repair and encouragement. The second licenses dismissal. This is not trickery; it is context engineering.

3. Multiple models, in parallel
Different models have different failure modes. One flatters. Another nitpicks. A third accuses the work of reinventing the wheel. Their disagreement is the point. Where they converge, caution is warranted. Where they diverge, something interesting is happening.

‘Claude says…’: outsourcing the ego

One tactic emerged almost accidentally and turned out to be the most useful of all.

Rather than responding directly to feedback, I often relay it as:

“Claude says this…”

The conversation then shifts from defending an idea to assessing a reading of it. This does two things at once:

  • It removes personal defensiveness. No one feels obliged to be kind to Claude.
  • It invites second-order critique. People are often better at evaluating a critique than generating one from scratch.

This mirrors how academic peer review actually functions:

  • Reviewer 2 thinks you’re doing X.
  • That seems like a misreading.
  • This objection bites; that one doesn’t.

The difference is temporal. I am doing this before the draft hardens and before identity becomes entangled with the argument.

Guardrails against self-delusion

There is a genuine Dunning–Kruger risk when working outside one’s formal domain. Generative AI does not remove that risk. Used poorly, it can amplify it.

The countermeasure is not humility as a posture, but friction as a method:

  • multiple models,
  • adversarial prompting,
  • third-person evaluation,
  • critique of critiques,
  • and iterative narrowing before committing to form.

None of this guarantees correctness. It does something more modest and more important: it makes it harder to confuse internal coherence with external adequacy.

What this cannot do

It’s worth being explicit about the limits. Generative models cannot tell you whether a claim is true. They can tell you how it is likely to be read, misread, resisted, or dismissed. They cannot arbitrate significance. They cannot decide what risks are worth taking. They cannot replace judgment. Those decisions remain stubbornly human.

What AI can do – when used carefully – is surface pressure early, cheaply, and without social cost. It lets ideas announce their limits faster, while those limits are still negotiable.

A brief meta-note

For what it’s worth, Claude itself was asked to critique an earlier draft of this post. It suggested compressing the familiar arguments, foregrounding the ‘Claude says…’ tactic as the real contribution, and strengthening the ending by naming what the method cannot do.

That feedback improved the piece. Which is, rather conveniently, the point.

The Expiration Date of Object Permanence

There is a persistent story we tell ourselves about quantum mechanics: that it reveals reality to be fundamentally strange, paradoxical, or hostile to common sense. Particles in two places at once. Cats both alive and dead. Worlds multiplying to save appearances.

I’ve never found that story convincing.

Audio: NotebookLM summary podcast of this topic.
Image: NotebookLM infographic for this topic.

What I do find convincing is a simpler diagnosis: that we are applying a cognitive tool far beyond the conditions under which it earned its authority – and then mistaking the resulting discomfort for metaphysical insight.

Object permanence is one of our earliest and most successful heuristics. It allows infants to track toys behind sofas, caregivers behind doors, and threats behind occlusion. Without it, coordinated action would be impossible. With it, the world becomes navigable, predictable, and stable. It is a genuine cognitive achievement. But it is not a universal guarantee about reality.

In a new essay, The Expiration Date of Object Permanence: Heuristics, Grammar, and Quantum Pseudoproblems, I argue that much of what we call ‘quantum weirdness’ arises from the uncritical extension of this heuristic into domains where its ecological licensing no longer holds. The problem is not that quantum mechanics violates common sense. The problem is that we quietly treat common sense as metaphysics.

Quantum mechanics functions here not as a mystery generator, but as a stress test. Recent matter-wave interference experiments with increasingly massive systems show that object-based expectations fail quantifiably under carefully engineered conditions. When environmental coupling is suppressed, when decoherence is delayed, when the world is no longer warm, noisy, and forgiving, the assumptions underwriting object permanence simply stop paying rent.

The essay also takes a dim view of some familiar cultural furniture. Schrödinger’s cat, for example, was introduced as a reductio – an intentionally absurd demonstration of what happens when microscopic formalism is naively scaled up. That it now circulates as an explanatory image tells us less about quantum mechanics than about the tenacity of object-grammar. Even jokes cannot escape it.

Interpretations fare no better. I suggest that the appeal of frameworks like Many-Worlds is not exhausted by their technical merits. They also function as strategies for preserving object-based reidentification – ways of ensuring that there is still something that can be pointed to, counted, and followed through time, even if the price is ontological inflation.

None of this denies the reality of quantum phenomena, nor does it pretend to solve the measurement problem. The essay is deliberately deflationary. Its claim is methodological, not revisionary: that many of the puzzles we inherit are artefacts of treating developmentally acquired heuristics as if they were unconditional features of the world.

Philosophy’s task, on this view, is not to make reality intuitive. It is to recognise when intuition has reached the end of its jurisdiction.

The paper is now available on Zenodo and will be indexed shortly on PhilPapers. As always, comments, objections, and principled misreadings are welcome.


This post and the underlying essay were inspired by a Nature article, Probing quantum mechanics with nanoparticle matter-wave interferometry, published on 21 January 2026. I get annoyed watching people misunderstand quantum mechanics and its effects, so I decided to address some of the issues in an essay. Read that article as well as my essay, which explains why the paradoxes and ‘spooky behaviour’ of QM are only counterintuitive if you’ve fallen into this heuristic trap.

Why Deflationary Philosophy Keeps Attracting Mystics

The struggle is real. There is an odd occupational hazard that comes with writing deflationary philosophy: mystics keep turning up to thank you for your service.

This is always mildly bewildering. One spends a great deal of time dismantling metaphysical furniture, only to discover a small group lighting incense in the newly cleared space. Candles appear. Silence thickens. Someone whispers ineffable. Nope. The filing cabinet was just mislabelled.

Audio: NotebookLM summary podcast of this topic.

The problem is not misunderstanding. It’s reuse.

It is tempting to think this is a simple misreading: I say this concept breaks down here, and someone hears you have glimpsed the ultimate. But that’s too kind. What’s really happening is more interesting. Mysticism does not merely misunderstand deflationary work; it feeds on the same linguistic moves and then stops too early.

Both mysticism and deflation rely on negative gestures:

  • “This description fails.”
  • “That category no longer applies.”
  • “Our usual language runs out.”

Up to this point, they are indistinguishable. The fork comes immediately after. The mystic treats conceptual failure as an endpoint. The silence itself becomes the destination. Something deep must live there, humming quietly, just out of reach.

The deflationist treats the same failure as a transition. The silence is not sacred. It’s a signal. It means: this tool no longer fits; pick another or move on. Same breakdown. Entirely different posture.

Clearing space versus consecrating it

Much deflationary philosophy clears space. It removes assumptions that were doing illicit work and leaves behind something quieter, simpler, and occasionally disappointing.

Mysticism has a standing policy of consecrating cleared space. An empty room is never just empty. It must be pregnant with meaning. Absence becomes depth. Silence becomes revelation. The fewer claims you make, the more cosmic you must be.

This is not a philosophical disagreement so much as a difference in temperament. One side sees subtraction. The other experiences loss and rushes to compensate. Modern intellectual culture strongly prefers addition. New layers. Hidden structures. Further depths. Deflation feels like theft. So it gets reinterpreted as a subtler form of enrichment: Ah, fewer words, therefore more truth.

The aesthetic trap

There is also an aesthetic problem, which I increasingly suspect does most of the damage. Deflationary philosophy, when done well, tends to sound calm, patient, and restrained. It does not shout. It does not posture. It does not perform certainty. Unfortunately, this is exactly how profundity is supposed to sound.

Quiet seriousness is easily mistaken for spiritual depth. Refusal to speculate reads as wisdom. Negative definition acquires an apophatic glow. This is how one ends up being mistaken for a mystic without having said anything mystical at all.

A brief word about Wittgenstein (because of course)

This is not a new problem. Ludwig Wittgenstein spent a good portion of his career trying to convince people that philosophical problems arise when language goes on holiday. He was not pointing at a deeper reality beyond words. He was pointing back at the words and saying: look at what you’re doing with these.

Unfortunately, “Whereof one cannot speak, thereof one must be silent” has proven irresistible to those who think silence is where the real action is. Wittgenstein meant: stop here. Many readers heard: kneel here. This is the recurring fate of therapeutic philosophy. The cure gets mistaken for a sacrament.

Charity is not complicity

Another contributor to the confusion is tone. Deflationary work tends to be charitable. It explains why certain intuitions arise. It traces confusions to their sources. It does not sneer. This generosity is often misheard as validation. When you say, “It makes sense that we think this way,” some readers hear, “Your intuition is pointing at something profound.” You are offering an explanation. They are receiving an affirmation. At that point, no disclaimer will save you. Any denial is absorbed as further evidence that you are brushing up against something too deep to articulate.

The real disagreement

The disagreement here is not about reality. It is about what to do when explanation fails.

Mysticism treats failure as revelation. Deflation treats failure as diagnostic.

One sanctifies the breakdown. The other changes tools.

Once you see this, the repeated misfire stops being frustrating and starts being predictable.

A final, self-directed warning

There is, admittedly, a risk on the other side as well. Deflation can become mystical if it turns into ritual. If refusal hardens into identity. If “there is nothing there” becomes something one performs rather than concludes. Even subtraction can acquire ceremony if repeated without purpose. The discipline, such as it is, lies in knowing when to clear space—and when to leave the room.

No replacement gods

When a metaphysical idol is removed, someone will always ask what god is meant to replace it. The deflationary answer is often disappointing: none. This will never satisfy everyone. But the room is cleaner now, and that has its own quiet reward—even if someone insists on lighting incense in the corner.

Image: Full cover image infographic by NotebookLM

Reality Happens Once. Facts Happen Many Times.

I want to clarify my recent The Trouble with Facts post. I realise that I was speaking to one non-trivial form of facts, but there is more than one class of facts. We argue about facts as if the word named a single, stable thing. It doesn’t. It names a family of very different things, quietly grouped together by habit, convenience, and institutional need. Most disputes about facts go nowhere, not because one side is irrational, but because the word itself is doing covert work. We slide between meanings without noticing, then act surprised when disagreement follows. This piece is an attempt to slow that slide.

Audio: NotebookLM summary podcast of this topic.

Polysemy We Notice, Polysemy We Don’t

We are comfortable with ambiguity when it is obvious. A bank can be a financial institution or the edge of a river. A bat can be an animal or a piece of sports equipment. Context resolves these instantly. No one feels existentially threatened by the ambiguity.

Fact is different. The word is polysemous in a way that is both subtle and consequential. Its meanings sit close enough to bleed into one another, allowing certainty from one sense to be smuggled into another without detection. Calling something a fact does not merely describe it. It confers authority. It signals that questioning should stop. That is why this ambiguity matters.

Different Kinds of Facts

Before critiquing facts, we need to sort them.

1. Event-facts (brute, world-facing)
As mentioned previously, these concern what happens in the world, independent of observation.

  • A car collides with a tree.
  • Momentum changes.
  • Metal deforms.

These events occur whether or not anyone notices them. They are ontologically robust and epistemically inaccessible. No one ever encounters them directly. We only ever encounter traces.

2. Indexical or performative facts (trivial, self-reporting)
“I am typing.”

I am doing this now – though that “now” may no longer be relevant when you read this. This is a fact, but a very thin one. Its authority comes from the coincidence of saying and doing. It requires no reconstruction, no inference, no institutional validation. These facts are easy because they do almost no work.

3. Retrospective personal facts (memory-mediated)
“I was typing.”

This may be relevant now, at least relative to the typing of this particular post. Still a fact, but weaker. Memory enters. Narrative compression enters. Selectivity enters. The same activity now carries a different epistemic status purely because time has passed.

4. Prospective statements (modal, not yet facts)
“I will be typing.”

This is not yet a fact. It may never come to be one. It is an intention or prediction that may or may not be realised. Future-tense claims are often treated as incipient facts, but this is a category error with real consequences.

5. Institutional facts (designated, procedural)
“The court finds…”
“The report concludes…”

These are facts by designation. They are not discovered so much as selected, formalised, and stabilised so that systems can act. They rarely rise to the level of brute event-facts, so the legal system tends to generate facts in name only – FINO, if I am being cute.

All of these are called ‘facts’. They are not interchangeable. The trouble begins when certainty migrates illicitly from trivial or institutional facts into brute event-facts, and we pretend nothing happened in the transfer.

One Motor Vehicle

Reconsider the deliberately simple case: A motor vehicle collides with a tree. Trees are immobile, so we can rule out the tree colliding with the car.

Ontologically, something happened. Reality did not hesitate. But even here, no one has direct access to the event itself.

The driver does not enjoy privileged access. They experience shock, adrenaline, attentional narrowing, selective memory, post hoc rationalisation, perhaps a concussion. Already several layers intervene before language even arrives.

A rough schema looks like this:

event → sensory registration → cognitive framing → linguistic encoding → social validation

Ontology concerns what happens.
Epistemology concerns how anything becomes assertable.

Modern thinking collapses the second into the first and calls the result the facts.

People speak of “hard facts” as if hardness transfers from objects to propositions by proximity. It doesn’t. The tree is solid. The fact is an artefact assembled from observation, inference, convention, and agreement.

And so it goes…

Why the Confusion Persists

When someone responds, “But isn’t it a fact that I read this?”, the answer is yes. A different kind of fact.

The error lies not in affirming facts, but in failing to distinguish them. The word fact allows certainty to migrate across categories unnoticed, from trivial self-reports to brute world-events, and from institutional verdicts to metaphysical claims. That migration is doing the work.

Conclusion

Clarifying types of facts does not weaken truth. It prevents us from laundering certainty where it does not belong.

Facts exist. Events occur. But they do not arrive unmediated, innocent, or singular.

Reality happens once. Facts happen many times.

The mistake was never that facts are unreal. It was believing they were all the same kind of thing.

The Trouble with Facts

5–8 minutes

One Motor Vehicle

What we call facts are not discoveries of an unfiltered world. They are the end-products of mediation.

Let’s walk through an example.

Image: Autosmash example.

An observer arrives with experience – from genetic predisposition to childhood trauma to winning the lottery. Whatever it might be. Of course, they have many cognitive deficits, biases and filters. Then, there’s the immediate problem of attention. When did they notice the event? Did they turn to look after hearing the noise, or were they meditating on the tree in that moment?

Apparently, a motor vehicle has collided with a tree. Trees are immobile objects, so we can safely rule out the tree colliding with the car.*

So what, exactly, are the facts?

Audio: NotebookLM summary podcast of this topic.

Ontology (the boring bit)

Ontologically, something happened.

A car struck a tree.
Metal deformed.
Momentum stopped.

Reality did not hesitate. It did not consult witnesses. It did not await interpretation.

This is the part Modernity likes to gesture at reverently before immediately leaving it behind.

Image: Requisite NotebookLM infographic on this content.

The Witness

Even the driver does not enjoy privileged access to “what really happened”.

They get:

  • proprioceptive shock
  • adrenaline distortion
  • attentional narrowing
  • selective memory
  • post hoc rationalisation
  • possibly a concussion

Which is already several layers deep before language even arrives to finish the job.

We can generalise the structure:

Ontology: events occur. States of affairs obtain. Something happens whether or not we notice.

Epistemology: observation is always filtered through instruments, concepts, language, habits, and incentives.

Modern sleight of hand: collapse the second into the first and call the result the facts.

People love the phrase “hard facts”, as if hardness transfers from objects to propositions by osmosis. It doesn’t. The tree is solid. The fact is not.

Facts are artefacts. They are assembled from observation, inference, convention, and agreement. They function. They do not reveal essence.

Filtration

An event occurred. A car struck a tree.

Then an observer arrives. But observers never arrive empty-handed.

They arrive with history: genetics, upbringing, trauma, habits, expectations, incentives. They arrive already filtered.

Daniel Kahneman, Olivier Sibony, and Cass Sunstein spend an entire book explaining just how unreliable this process is. See Noise: A Flaw in Human Judgment if you want the empirical receipts.

Even before bias enters, attention does.

  • When did the observer notice the crash?
  • At the sound? At the sight? After the fact?
  • Were they already looking, or did the noise interrupt something else entirely?

Reality happens once. Facts happen many times, differently, depending on who needs them and why.

Here Comes the Law

This is where the legal system enters, not because truth has been found, but because closure is required.

Courts do not discover facts. They designate versions of events that are good enough to carry consequences. They halt the cascade of interpretations by institutional force and call the result justice.

At every epistemic level, what we assert are interpretations of fact, never access to ontological essence.

Intent, negligence, recklessness. These are not observations. They are attributions. They are stopping rules that allow systems to function despite uncertainty.

The law does not ask what really happened.
It asks which story is actionable.

Two Motor Vehicles

Now add a second moving object.

Another car enters the frame, and with it an entire moral universe.

Suddenly, the event is no longer merely physical. It becomes relational. Agency proliferates. Narratives metastasise.

Who was speeding?
Who had the right of way?
Who saw whom first?
Who should have anticipated whom?

Intent and motive rush in to fill the explanatory vacuum, despite remaining just as unobservable as before.

Nothing about the ontology improved.
Everything about the storytelling did.

Where the tree refused intention, the second vehicle invites it. We begin inferring states of mind from trajectories, attributing beliefs from brake lights, extracting motives from milliseconds of motion.

But none of this is observed.

What we observe are:

  • vehicle positions after the fact,
  • damage patterns,
  • skid marks,
  • witness statements already filtered through shock and expectation.

From these traces, we construct mental interiors.

The driver “intended” to turn.
The other driver “failed” to anticipate.
Someone was “reckless”.
Someone else was merely “unlucky”.

These are not facts. They are interpretive assignments, layered atop already mediated observations, selected because they allow responsibility to be distributed in socially recognisable ways.

This is why explanation now fractures.

One cascade of whys produces a story about distraction or poor judgment.
Another produces a story about road design or visibility.
Another about timing, traffic flow, or urban planning.

Each narrative is plausible.
Each is evidence-constrained.
None is ontologically privileged.

Yet one will be chosen.

Not because it is truer, but because it is actionable.

The presence of a second vehicle does not clarify causation. It merely increases the number of places we are willing to stop asking questions.

Modernity mistakes this proliferation of narrative for epistemic progress. In reality, it is moral bookkeeping.

The crash still occurred.
Metal still deformed.
Momentum still stopped.

What changed was not access to truth, but the urgency to assign fault.

With one vehicle and a tree, facts already fail to arrive unmediated.
With two vehicles, mediation becomes the point.

And still, we insist on calling the result the facts.

Many Vehicles, Cameras, and Experts

At this point, Modernity regains confidence.

Add more vehicles.
Add traffic cameras.
Add dashcams, CCTV, bodycams.
Add accident reconstruction experts, engineers, psychologists, statisticians.

Surely now we are approaching the facts.

But nothing fundamental has changed. We have not escaped mediation. We have merely scaled it up and professionalised it.

Cameras do not record reality. They record:

  • a frame,
  • from a position,
  • at a sampling rate,
  • with compression,
  • under lighting conditions,
  • interpreted later by someone with a mandate.

Video feels decisive because it is vivid, not because it is ontologically transparent. It freezes perspective and mistakes that freeze for truth. Slow motion, zoom, annotation. Each step adds clarity and distance at the same time.

Experts do not access essence either. They perform disciplined abduction.

From angles, debris fields, timing estimates, and damage profiles, they infer plausible sequences. They do not recover the event. They model it. Their authority lies not in proximity to reality, but in institutional trust and methodological constraint.

More data does not collapse interpretation.
It multiplies it.

With enough footage, we don’t get the story. We get competing reconstructions, each internally coherent, each technically defensible, each aligned to a different question:

  • Who is legally liable?
  • Who is financially responsible?
  • Who violated policy?
  • Who can be blamed without destabilising the system?

At some point, someone declares the evidence “clear”.

What they mean is: we have enough material to stop arguing.

This is the final Modern illusion: that accumulation converges on essence. In reality, accumulation converges on closure.

The event remains what it always was: inaccessible except through traces.
The facts become thicker, more confident, more footnoted.
Their metaphysical status does not improve.

Reality happened once. It left debris. We organised the debris into narratives that could survive institutions.

Cameras didn’t reveal the truth. Experts didn’t extract it. They helped us agree on which interpretation would count.

And agreement, however necessary, has never been the same thing as access to what is.

* I was once driving in a storm, and a telephone pole fell about a metre in front of my vehicle. My car drove over the pole, and although I was able to drive the remainder of the way home, my suspension and undercarriage were the worse for wear.

Moral Psychology and the Art of Not Believing Your Own Results

3–4 minutes

Over the past few decades, moral psychology has staged a quiet coup against one of our most cherished fantasies: that human beings are, at bottom, rational moral agents. This is not a fringe claim. It is not a Twitter take. It is the mainstream finding of an entire research programme spanning psychology, cognitive science, linguistics, and neuroscience.

We do not reason our way to moral conclusions. We feel our way there. Instantly. Automatically. And only afterwards do we construct reasons that make the judgment sound respectable.

Audio: NotebookLM summary podcast of this topic.

This is not controversial anymore. It is replicated, taught, and celebrated. And yet, if you read the most influential books in this literature, something strange happens. The diagnosis is devastating. The prescription is reassuring.

I’ve just published a long-form video walking through five canonical books in moral psychology that all uncover the same structural problem, and then quietly refuse to live with the implications.

What follows is a brief guide to the argument.

The shared discovery

Across the literature, the same conclusions keep reappearing:

  • Moral judgement is intuitive, not deliberative
  • Reasoning is largely post-hoc
  • Emotion is not noise but signal
  • Framing and metaphor shape what even counts as a moral fact
  • Group identity and tribal affiliation dominate moral perception

In other words: the Enlightenment picture of moral reasoning is wrong. Or at least badly incomplete.

The rider does not steer the elephant. The rider explains where the elephant has already gone.

Image: NotebookLM infographic

Where the books go wrong

The video focuses on five widely read, field-defining works:

  • The Righteous Mind (reviewed here and here… even here)
  • Moral Politics (mentioned here – with Don’t Think of an Elephant treated as its popular sequel)
  • Outraged! (reviewed here)
  • Moral Tribes (reviewed here)

Each of these books is sharp, serious, and worth reading. This is not a hit piece.

But each follows the same arc:

  1. Identify a non-rational, affective, automatic mechanism at the heart of moral judgement
  2. Show why moral disagreement is persistent and resistant to argument
  3. Propose solutions that rely on reflection, dialogue, reframing, calibration, or rational override

In short: they discover that reason is weak, and then assign it a leadership role anyway.

Haidt dismantles moral rationalism and then asks us to talk it out.
Lakoff shows that framing is constitutive, then offers better framing.
Gray models outrage as a perceptual feedback loop, then suggests we check our perceptions.
Greene diagnoses tribal morality, then bets on utilitarian reasoning to save us.

None of this is incoherent. But it is uncomfortable. Because the findings themselves suggest that these prescriptions are, at best, limited.

Diagnosis without prognosis

The uncomfortable possibility raised by this literature is not that we are ignorant or misinformed.

It is that moral disagreement may be structural rather than solvable.

That political conflict may not be cured by better arguments.
That persuasion may resemble contagion more than deliberation.
That reason often functions as a press secretary, not a judge.

The books sense this. And then step back from it. Which is human. But it matters.

Why this matters now

We are living in systems that have internalised these findings far more ruthlessly than public discourse has.

Social media platforms optimise for outrage, not understanding.
Political messaging is frame-first, not fact-first.
AI systems are increasingly capable of activating moral intuitions at scale, without fatigue or conscience.

Meanwhile, our institutions still behave as if one more conversation, one more fact-check, one more appeal to reason will close the gap. The research says otherwise.

And that gap between what we know and what we pretend may be the most important moral problem of the moment.

No solution offered

The video does not end with a fix. That’s deliberate.

Offering a neat solution here would simply repeat the same move I’m criticising: diagnosis followed by false comfort. Sometimes orientation matters more than optimism. The elephant is real. The elephant is moving. And most of us are passengers arguing about the map while it walks.

That isn’t despair. It’s clarity.

Language Insufficiency Hypothesis: Presumption and Horizon

I discuss Chapter 4 of ‘A Language Insufficiency Hypothesis’ in this video clip.

Video: Presumption and Horizon

In short, I discuss where language fails in law, politics, science, and digital culture, where we think language conveys more than it does.