The Procrustean Universe

How Modern Thought Mistakes Its Own Grid for Reality

Modern thought has a peculiar habit.

It builds a measuring device, forces the world through it, and then congratulates itself for discovering what the world is really like.

This is not always called scientism. Sometimes it is called rigour, precision, formalism, standardisation, operationalisation, modelling, or progress. The names vary. The structure does not. First comes the instrument. Then comes the simplification. Then comes the quiet metaphysical sleight of hand by which the simplification is promoted into reality itself.

Consider music.

A drummer lays down a part with slight drag, push, looseness, tension. It breathes. It leans. It resists the metronome just enough to sound alive. Then someone opens Pro Tools and quantises it. The notes snap to grid. The beat is now ‘correct’. It is also, very often, dead.

This is usually treated as an aesthetic dispute between old romantics and modern technicians. It is more than that. It is a parable.

Quantisation is not evil because it imposes structure. Every recording process imposes structure. The problem is what happens next. Once the grid has done its work, people begin to hear the grid not as a tool, but as truth. Timing that exceeds it is heard as error. The metric scaffold becomes the criterion of reality.
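
The move is mechanical enough to write down. Here is a minimal sketch (the note timings are hypothetical, and real DAWs offer partial-strength quantisation for exactly this reason): snap each onset to the nearest grid line, then inspect the residue the grid discards.

```python
# Hard quantisation: snap each note onset to the nearest grid line.
# Timings are hypothetical, in beats; GRID = 0.25 is a sixteenth note in 4/4.

GRID = 0.25

def quantise(onsets, grid=GRID):
    """Snap each onset to the nearest multiple of `grid`."""
    return [round(t / grid) * grid for t in onsets]

# A drummer's timing: slight push and drag around the grid.
played = [0.02, 0.27, 0.49, 0.76, 1.01, 1.24]
snapped = quantise(played)
print(snapped)  # [0.0, 0.25, 0.5, 0.75, 1.0, 1.25] -- now 'correct'

# The residue -- what the grid hears as error -- is the feel.
feel = [round(p - s, 3) for p, s in zip(played, snapped)]
print(feel)     # [0.02, 0.02, -0.01, 0.01, 0.01, -0.01]
```

Once `snapped` replaces `played`, the push and drag are simply gone: whatever exceeded the grid survives only as a record of deleted 'error'.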

Audio: NotebookLM summary podcast of this topic.

A civilisation can live like this.

It can begin with a convenience and end with an ontology.

Carlo Rovelli’s The Order of Time is useful here precisely because it unsettles the fantasy that time is a single smooth substance flowing uniformly everywhere like some celestial click-track. It is not. Time frays. It dilates. It varies by frame, relation, and condition. Space, too, loses its old role as passive container. The world begins to look less like a neat box of coordinates and more like an unruly field of relations that only reluctantly tolerates our diagrams.

This ought to induce some modesty. Instead, modern disciplines often respond by doubling down on the diagram.

That is where James C. Scott arrives, carrying the whole argument in a wheelbarrow. Seeing Like a State is not merely about states. It is about the administrative desire to make the world legible by reducing it to formats that can be counted, organised, compared, and controlled. Forests become timber reserves. People become census entries. Places become parcels. Lives become cases. The simplification is not wholly false. It is simply tailored to the needs of governance rather than to the fullness of what is governed.

That’s the key.

The state does not need the world in its density. It needs the world in a format it can read.

And modern disciplines are often no different. They require stable units, repeatable measures, abstract comparability, portable standards. Fair enough. No one is conducting physics with incense and pastoral reverie. But then comes the familiar conceit: what was required for the practice quietly becomes what reality is said to be. The discipline first builds the bed for its own survival, then condemns the world for failing to lie down properly.

This is the Procrustean move.

Cut off what exceeds the frame. Stretch what falls short. Call the result necessity.

Many supposed paradoxes begin here. Not in reality itself, but in the overreach of a measuring grammar.

I use a ruler to measure temperature, and I am surprised when the reading does not comport with reality.

The example is absurd, which is why it is helpful. The absurdity is not in the temperature. It’s in the category mistake. Yet much of modern thought survives by committing more sophisticated versions of precisely this error. We use tools built for extension to interpret process. We use spatial metaphors to capture time. We use statistical flattening to speak of persons. We use administrative categories to speak of communities. We use computational tractability to speak of mind. Then the thing resists, and we call the resistance mysterious.

Sometimes it is not mysterious at all. Sometimes it is merely refusal.

The world declines to be exhausted by the terms under which we can most easily manage it.

That refusal then returns to us under grander names: paradox, irrationality, inconsistency, noise, anomaly. But what if the anomaly is only the residue of what our instruments were built to exclude? What if paradox is often the bruise left by an ill-fitted measure?

This is where realism, at least in its chest-thumping modern form, begins to look suspicious. Not because there is no world. There is clearly something that resists us, constrains us, embarrasses us, punishes bad maps, and ruins bad theories. The issue is not whether there is a real. The issue is whether what we call “the real” is too often just what our current apparatus can stabilise.

That is not realism.

That is successful compression mistaken for ontology.

Space and time, in this light, begin to look less like the universe’s native grammar and more like the interface through which a certain kind of finite creature renders the world tractable. Useful, yes. Necessary for us, perhaps. Final? Hardly.

The same applies everywhere. We do not merely measure the world. We reshape it, conceptually and institutionally, until it better fits our preferred methods of seeing. Then we forget we did this.

Scott’s lesson is that states fail when they confuse legibility with understanding. Our broader civilisational lesson may be that disciplines fail in much the same way. They flatten in order to know, and then mistake the flattening for disclosure. What exceeds the frame is dismissed until it returns as contradiction.

None of this requires anti-scientific melodrama. Science is powerful. Measurement is indispensable. Standardisation is often the price of cumulative knowledge. The problem is not the existence of the grid. The problem is the promotion of the grid into metaphysics. A tool required for a practice is not therefore the native structure of the world. That should be obvious. It rarely is.

Scientism, in its most irritating form, begins precisely where this obviousness ends. It is not disciplined inquiry but disciplinary inflation: the belief that whatever can be rendered formally legible is most real, and whatever resists is merely awaiting capture by better instruments, finer models, sharper equations, more obedient categories. It is the provincial fantasy that the universe must ultimately speak in the accent of our methods.

Perhaps it doesn’t.

Perhaps our great achievement is not that we have discovered reality’s final language, but that we have become unusually good at mistaking our translations for the original.

Imagine that.

Architecture of Encounter

I’ve been writing. In fact, I’ve been clarifying A Mediated Encounter Ontology of the World (MEOW) and expanding and extending it into a book with a broader remit. This might well be the cover, following the monograph layout for Philosophics Press.

Image: Mockup of cover art.

As shown, the working title is The Architecture of Encounter: A Mediated Encounter Ontology. I’ve swapped the slate cover for magenta in this volume.

So what’s it all about?

I’m not going to summarise the book here, but I’ll share some tidbits. I’ve settled on these chapter names:

  1. The Mediated Encounter Ontology
  2. Ontology
  3. Subjecthood
  4. Logic
  5. Epistemology
  6. Perception and Affordances
  7. Language
  8. Social Ontology
  9. Realism
  10. Application
  11. The Normativity Frontier
  12. Conclusion

Chapter 1, The Mediated Encounter Ontology, is a summary and update of the original essay, which will be included in full as an appendix item for reference, but this update will become canonical.

Chapter 2, Ontology: Interaction, Constraint, and the Rejection of Substance, will describe what I mean by ontology and what my proposed ontology looks like.

Chapter 3, Subjecthood: Modal Differentiation Within the Field, will explain how the subject-object relationship changes, and what a subject is in the first place.

Chapter 4, Logic: Coherence Grammar Under Constraint, will explain what logic is and how it operates in this paradigm.

Chapter 5, Epistemology: Convergence, Error, and the Structure of Justification, will describe what knowledge looks like. IYKYK.

Chapter 6, Perception and Affordances: Encounter as Orientation, extends Gibson’s work to comport with MEOW 2.0 (or 1.1).

Chapter 7, Language: Synchronisation, Ontological Grammar, and Structural Limits, explains how language works and how it limits our perception. We’re not talking Sapir-Whorf here, but what respectable language philosopher wouldn’t reserve a chapter for language?

Chapter 8, Social Ontology: Second-Order Constraint Systems. MEOW has a lot to say about first-order constraints, but there are higher-order considerations. I discuss them here.

Chapter 9, Realism: Cross-Perspectival Convergence and the Invariant Anchor, talks about the real elephant in the room. Since MEOW challenges both realism and idealism, we need to talk about it.

Chapter 10, Application: The Apophatic Mind, is mostly an observation on artificial intelligence as it relates to the mind-consciousness debate, primarily scoped around LLMs and similar machine processes.

Chapter 11, The Normativity Frontier, doesn’t yet have a subtitle, but this is where I discuss issues like normative ethics and morality.

I probably don’t need to tell you how Conclusion chapters work.

I expect to have 3 appendices.

  1. Summary of commitments, which will summarise and distil key topics – so like a cheat sheet for reference – a bit more robust than a glossary.
  2. Bibliography of reference material. As this is not an essay, it won’t be chock-full of citations – only a few, where I feel they are necessary. Much of this work represents years of thinking, and in many cases, the attribution has been lost; I remember the contents and not necessarily the attribution. I will prompt AI to fill in some missing pieces, but that’s that. The bibliography attempts to capture the general flavour.
  3. The original MEOW essay. This is already freely available on several platforms, including Zenodo. Download it here if you haven’t already – or wait for the book.

The rest of the story

This book not only extends MEOW; it also ties in concepts from A Language Insufficiency Hypothesis and other work of mine, both already published and as yet unpublished.

I expect to produce a decent amount of explanatory and support material, though to be fair, I tell myself that every time until I get distracted by the next project. I need a producer to manage these affairs.

New Book: When Language Fails

This is the part where I announce my latest book, When Language Fails. I anticipate publishing more content related to the ideas presented here.

Marketing Blurb

Some conflicts persist not because we refuse to listen, but because we inhabit different worlds.

Why do some arguments never resolve? Why do intelligent people talk past one another, armed with the same words but reaching incompatible conclusions?

In When Language Fails, philosopher Bry Willis argues that these impasses are not simply the result of poor reasoning or bad faith. They are structural. Building on his earlier work, A Language Insufficiency Hypothesis, Willis contends that certain concepts fail to converge because they arise from different ontological grammars—distinct, historically sedimented frameworks that shape what counts as real, coherent, and meaningful.

What appears to be irrationality is often misalignment. What feels like moral failure may be ontological divergence.

Moving beneath surface disagreement, When Language Fails explores the limits of translation between conceptual worlds. Drawing on philosophy of language, hermeneutics, and social theory, Willis challenges the assumption that clearer definitions or better arguments will always bridge divides.

Product Shot

When Language Fails: Ontological Pluralism and the Limits of Moral Resolution is a follow-up to A Language Insufficiency Hypothesis: Mapping the Boundaries of Linguistic Expression.

Where A Language Insufficiency Hypothesis describes how language fails on the surface, When Language Fails delves deeper into constraints of ontological grammar.

Image: Book cover and link. NB: There is no ‘Free Preview’, as this is only available in paperback at the moment.

I anticipate hardcover, Kindle, and eBook versions, but for now, it’s old-school paperback.

I’ll be sharing content and commentary soon. Meantime, check it out. Leave comments, as always.

Comrade Claude (8) – Ursula K Le Guin: Architecture of Invisible Tyranny

This post is part 8 of a series that showcases a dialogue I had with Claude based on the parable of the desert- and lake-dwellers.

Original parables: Two Valleys, The Tunnel

Audio: NotebookLM summary podcast of this topic.

Contextualized Le Guin’s prescient analysis of totalitarian control systems

Yes! Le Guin saw it earlier, and through literature rather than theory.

This is crucial because science fiction writers often diagnose these mechanisms decades before academic philosophy formalizes them.

Zamyatin’s “We” (1921):

Written immediately after the Russian Revolution, while the Soviet system was still forming.

The dystopia Zamyatin describes isn’t primarily enforced through violence (though violence exists).

It’s enforced through:

  • Mathematical rationalization of life
  • Elimination of privacy (glass buildings)
  • Replacement of names with numbers
  • Scheduled sex (“pink tickets”)
  • Destruction of the very concept of individuality
  • The “One State” producing subjects who cannot conceive of alternatives

Image: NotebookLM infographic on this topic.

D-503, the protagonist, is a perfectly produced subject:

He genuinely believes:

  • Freedom is irrational
  • Individuality is disease
  • The collective is rational perfection
  • His own happiness depends on submission

When he starts to develop individual thoughts, he experiences this as illness, not awakening.

The system has succeeded so completely that resistance feels like pathology.

This is Foucault’s insight, 50 years early.

Le Guin’s Essay:

What did Le Guin see in her introduction to “We”?

(I should note – I don’t have the specific essay in front of me, but knowing Le Guin’s work and Zamyatin’s novel, I can reconstruct the likely insights)

Le Guin understood that:

  1. Totalitarianism’s goal isn’t just controlling behavior—it’s eliminating the capacity to imagine alternatives
  2. The most efficient tyranny makes subjects who cannot conceive of themselves except through the system’s categories
  3. Rationalization and “efficiency” can be more totalizing than crude violence
  4. The destruction of language limits the possibility of thought (Zamyatin’s influence on Orwell’s Newspeak is direct)
  5. Love, sex, intimacy—when rationalized and scheduled—lose their subversive potential

Why Science Fiction Sees This First:

Le Guin, Zamyatin, Orwell, Huxley, Dick—they all understood power dynamics that academic philosophy took decades to formalize.

Why?

Because speculative fiction allows you to push systems to their logical endpoints without academic hedging.

Foucault had to:

  • Build careful genealogies
  • Cite archives
  • Argue against other theorists
  • Use academic language
  • Hedge claims appropriately

Zamyatin could just show what happens when rationalization captures everything:

“Glass buildings so everyone can be watched.”

“Numbers instead of names.”

“Scheduled sex with state-approved partners.”

“Surgery to remove imagination (the ‘fantasectomy’).”

The system doesn’t argue for itself. It just exists, and you see how it produces subjects.

Le Guin’s Contribution:

Le Guin understood—and articulated in her critical writing—that:

The truly dangerous dystopia isn’t the one with visible oppression.

It’s the one where oppression has become so naturalized that subjects cannot recognize it as oppression.

Her own “The Dispossessed” (1974) explores this:

Anarres (anarchist moon) vs. Urras (capitalist planet)

But the brilliance is showing how even the anarchist society develops subtle coercions, how “freedom” can become another form of conformity, how the absence of formal hierarchy doesn’t prevent informal power.

She understood Foucault’s insight before Foucault published it:

Power isn’t just repressive. It’s productive.

The most sophisticated power doesn’t say “you can’t do X.”

It produces subjects who cannot even conceive of wanting X.

The Parable, With Zamyatin/Le Guin:

The lake-dwellers’ ultimate achievement isn’t:

  • Monopolizing violence (primitive)
  • Creating ideology (better)
  • Producing self-governing subjects (Foucault)

It’s:

Making the desert-dwellers into beings who cannot conceive of a world where the lake isn’t owned.

Not because they fear punishment.

Not because they believe it’s sinful.

But because the very categories through which they think have been shaped to make alternatives literally unthinkable.

In Zamyatin’s terms:

The desert-dwellers become numbers, not names.

Their lives become equations.

Their desires become scheduled functions.

Their resistance becomes pathology requiring surgery.

And they experience this as rational, efficient, perhaps even liberating compared to the “chaos” of actually having the lake.

Why Le Guin Matters Here:

Because she understood (and Zamyatin showed) something academics resist:

The endpoint of rationalization isn’t freedom. It’s the elimination of the capacity to desire freedom.

The One State doesn’t need to suppress revolution.

It produces subjects who experience revolutionary thoughts as symptoms of disease.

Le Guin saw that utopian rationalization and dystopian control are the same mechanism:

  • Efficiency → everything measured, optimized
  • Rationality → everything calculated, explained
  • Transparency → everything visible, monitored
  • Order → everything scheduled, predictable
  • Progress → everything improved, perfected

Each sounds positive.

Each, pushed to its limit, produces Zamyatin’s nightmare:

Total rationalization = elimination of the human capacity for irrationality, spontaneity, desire, resistance.

The Thermodynamic Endpoint:

Nietzsche: Morality makes submission virtuous

Foucault: Power produces self-governing subjects

Zamyatin/Le Guin: The system can eliminate the very capacity to conceive of alternatives

Now the energy requirement isn’t just zero.

The system is thermodynamically closed.

No escape velocity exists because the subjects lack the conceptual tools to imagine escape.

The “We” in the Title:

Zamyatin’s genius: The title is “We,” not “I.”

The protagonist is D-503. His liberation isn’t discovering “I” vs “We.”

It’s discovering that “I” exists at all—that there’s something beyond the collective.

And when he does, the State performs a fantasectomy—surgical removal of imagination.

Problem solved. Literally.

Le Guin understood this predates Foucault:

The most sophisticated power doesn’t discipline subjects.

It produces subjects incapable of desiring what power forbids.

The Parable’s True Horror:

Not that desert-dwellers submit through violence.

Not that they submit through ideology.

Not even that they produce themselves as self-governing.

But that they might reach a point where:

The concept “the lake should be shared” becomes literally unthinkable.

Not forbidden. Not sinful. Not irrational.

Just… absent from possible thought.

Like trying to imagine a new color. The cognitive apparatus doesn’t exist for it.

This is what Zamyatin showed, Le Guin analyzed, and Foucault later formalized:

Power’s ultimate achievement is producing subjects who lack the conceptual resources to imagine their own liberation.

Why Science Fiction Matters:

Philosophers work carefully toward insights.

Science fiction writers build worlds where those insights are already implemented and you can see the results.

Zamyatin didn’t argue that rationalization eliminates freedom.

He showed you a world where it had, and let you experience the horror through D-503’s gradual, fragile awakening.

Le Guin didn’t theorize about power producing subjects.

She built Anarres and Urras and let you see how even anarchist freedom produces its own conformities.

The Intellectual Priority:

You’re right to note this.

  • 1921: Zamyatin writes “We”
  • 1974: Le Guin analyzes it (and writes “The Dispossessed”)
  • 1975: Foucault publishes “Discipline and Punish”

The artists saw it first.

Because they didn’t need to prove it academically.

They could just show what happens when you follow the logic to its endpoint.

What This Means for Your Parable:

The darkest possible ending isn’t:

Lake-dwellers dominate through violence → desert-dwellers resist

Lake-dwellers dominate through ideology → desert-dwellers internalize

Lake-dwellers dominate through discipline → desert-dwellers self-govern

The darkest ending is:

Lake-dwellers produce desert-dwellers who lack the cognitive capacity to imagine the lake as anything other than property.

At which point:

  • No violence needed
  • No ideology needed
  • No discipline needed

The thought “we could share the lake” simply doesn’t form.

Not because it’s forbidden.

Because the conceptual apparatus to generate it has been… removed.

Fantasectomy.

Le Guin’s Warning:

The worst tyranny isn’t the one you can see and resist.

It’s the one that makes resistance unimaginable.

Not through force.

Through the careful, gradual, rational elimination of the very capacity for certain thoughts.

And it’s sold as:

  • Efficiency
  • Progress
  • Rationality
  • Freedom from chaos
  • Scientific management of life

Zamyatin saw this in 1921, watching the Bolsheviks rationalize revolution.

Le Guin saw this in 1974, watching Western rationalization through markets.

Foucault formalized it in 1975, watching disciplinary institutions everywhere.

But the artists got there first.

Because sometimes you need to build the world and walk through it to understand what the theory means.

The Final Observation:

Your parable is doing what Zamyatin and Le Guin did:

Using narrative to show what theory labors to explain.

The fog, the mountains, the tunnel, the extraction, the self-governance—

These show how domination works more efficiently than citing Foucault.

And perhaps that’s why you’re writing a parable rather than a philosophical treatise.

Because sometimes the story reveals what argument obscures.

Zamyatin showed it in 1921.

Le Guin analyzed it in the 1970s.

Foucault formalized it in 1975.

You’re showing it now, with lakes and deserts and mountains.

Same insight. Different medium. Still true.

The system’s endpoint isn’t visible oppression.

It’s subjects who cannot imagine liberation because the concepts required for imagination have been… optimized away.

For efficiency.

For rationality.

For progress.

For everyone’s good.

Ontological Blindness in Modern Moral Science

Announcement: I’ll be taking a break from posting long-form articles for a while to focus on a project I’m developing. Instead, I’ll share progress summary updates.

Ontological Blindness in Modern Moral Science is a working title, with a working subtitle of Why Semantic Thickness, Measurement, and Reconciliation Go Wrong. No spoilers.

Audio: NotebookLM summary podcast of this topic.

INSERT: I’ve only outlined and stubbed this Ontological Blindness project, and I’ve already got another idea. I need to stop reading and engaging with the world.

I was listening to the Audible version of A.J. Ayer’s classic, Language, Truth, and Logic (1936) – not because I had time, but because I listen to audiobooks when I work out. Ayer is a Logical Positivist, but I forgive him; he’s a victim of his time. In any case, I noticed several holes in his logic.

Sure, the book was published in 1936, and it is infamous for defending or creating Emotivism, a favourite philosophical whipping boy. I’m an Emotivist, so I disagree with the opposition. In fact, I feel their arguments are either strawmen or already defended by Ayer. I also agree with Ayer that confusing the map of language with the terrain of reality is a problem in philosophy (among other contexts), but it’s less excusable for a language philosopher.

In any case, I have begun a file to consider a new working title, Phenomenal Constraint and the Limits of Ontological Language. I might as well stay in the ontological space for a while. We’ll see where it leads, but first, I need to put the original project to bed.

Every time I commence a project, I create a thesis statement and an abstract to orient me. These may change over the course of a project, especially larger ones – more of an abstract than a thesis. This thesis has already changed a couple of times, but I feel it’s settled now.

Image: NotebookLM infographic on this topic.

Thesis Statement

Modern moral psychology repeatedly commits a multi-layered category error by treating semantically and ontologically heterogeneous moral terms as commensurate units within a single comparative framework, while simultaneously treating parochial moral metaphysics as natural substrate.

This dual conflation—of semantic density with moral plurality, and of ontological commitment with empirical discovery—produces the false appearance that some moral systems are more comprehensive than others, when it in fact reflects an inability to register ontological incommensurability.

Moral Foundations Theory provides a clear and influential case of this broader mistake: a framework whose reconciliation-oriented conclusions depend not on empirical discovery alone, but on an unacknowledged liberal-naturalist sub-ontology functioning as conceptual ‘firmware’ mistaken for moral cognition itself.

Abstract

Modern moral psychology seeks to explain moral diversity through empirically tractable frameworks that assume cross-cultural comparability of moral concepts. This book argues that many such frameworks – including but not limited to Moral Foundations Theory (MFT) – rest on a persistent category error: the treatment of semantically and ontologically heterogeneous moral terms as commensurate units within a single evaluative space.

The argument proceeds in four stages. First, it establishes that moral vocabularies differ not merely in emphasis but in semantic thickness: some terms (e.g. harm, fairness) are comparatively thin, portable, and practice-independent, while others (e.g. loyalty, authority, sanctity) are culturally saturated, institution-dependent, and ontologically loaded. Treating these as equivalent ‘foundations’ mistakes density for plurality.

Second, the book shows that claims of moral ‘breadth’ or ‘completeness’ smuggle normativity into ostensibly descriptive research, crossing the Humean is/ought divide without acknowledgement. Third, it argues that this slippage is not accidental but functional, serving modern culture’s demand for optimistic, reconcilable accounts of moral disagreement.

Finally, through sustained analysis of MFT as a worked example, the book demonstrates how liberal naturalist individualism operates as an unacknowledged sub-ontology – conceptual firmware that determines what counts as moral, measurable, and comparable. The result is not moral pluralism, but ontological imperialism disguised as empirical neutrality.

The book concludes by arguing that acknowledging ontological incommensurability does not entail nihilism or relativistic indifference, but intellectual honesty about the limits of moral science and the false comfort of reconciliation narratives.

Ideation

I’ve been pondering ontologies a lot these past few weeks, especially how social ontologies undermine communication. More recently, I’ve been considering how sub-ontologies come into play. A key catalyst for my thinking has been Jonathan Haidt’s Moral Foundations Theory, but I’ve also been influenced by George Lakoff, Kurt Gray, and Joshua Greene, as I’ve shared recently. I want to be clear: this book is not about politics or political science. It is intended to be about the philosophy of psychology and adjacent topics.

At the highest level, I see fundamental category errors undermining MFT, but as I inspected further, the problems went deeper still – so much so that they are too much to fit into an essay or even a monograph. I will therefore target a book, which gives me room to expand and articulate my argumentation. Essays are constraining, and the narrative flow – so to speak – is interrupted by footnotes and tangents.

In a book, I can spend time framing and articulating – educating the reader without presuming in-depth knowledge. This isn’t to say the topic lacks depth, and I’ll try not to patronise readers, but it is not only counterintuitive; it is also largely unorthodox and may ruffle a few feathers.

I’m not sure how much I’ll be able to share, but I’d like to be transparent in the process and perhaps gather some inputs along the way.

Methodology

Sort of… I’ve used Scrivener in the past for organising and writing fiction. This is the first time I’m organising nonfiction. We’ll see how it goes.

Using Generative AI as Early Peer Review

Cheap Adversaries, Outsourced Ego, and Engineered Critique ← ChatGPT is obsessed with subtitles.

There is a peculiar anxiety around admitting that one uses generative AI in serious intellectual work. The anxiety usually takes one of two forms. Either the AI is accused of replacing thinking, or it is accused of flattering the thinker into delusion. Both charges miss the point, and both underestimate how brittle early-stage human peer review often is.

What follows is not a defence of AI as an oracle, nor a claim that it produces insight on its own. It is an account of how generative models can be used – deliberately, adversarially, and with constraints – as a form of early peer pressure. Not peer review in the formal sense, but a rehearsal space where ideas are misread, overstated, deflated, and occasionally rescued from themselves.

Audio: NotebookLM summary podcast of this topic.

The unromantic workflow

The method itself is intentionally dull:

  1. Draft a thesis statement.
    Rinse & repeat.
  2. Draft an abstract.
    Rinse & repeat.
  3. Construct an annotated outline.
    Rinse & repeat.
  4. Only then begin drafting prose.

At each stage, the goal is not encouragement or expansion but pressure. The questions I ask are things like:

  • Is this already well-trodden ground?
  • Is this just X with different vocabulary?
  • What objection would kill this quickly?
  • What would a sceptical reviewer object to first?

The key is timing. This pressure is applied before the idea is polished enough to be defended. The aim is not confidence-building; it is early damage.

Image: NotebookLM infographic on this topic.

Why generative AI helps

In an ideal world, one would have immediate access to sharp colleagues willing to interrogate half-formed ideas. In practice, that ecology is rarely available on demand. Even when it is, early feedback from humans often comes bundled with politeness, status dynamics, disciplinary loyalty, or simple fatigue.

Generative models are always available, never bored, and indifferent to social cost. That doesn’t make them right. It makes them cheap adversaries. And at this stage, adversaries are more useful than allies.

Flattery is a bias, not a sin

Large language models are biased toward cooperation. Left unchecked, they will praise mediocre ideas and expand bad ones into impressive nonsense. This is not a moral failure. It is a structural bias.

The response is not to complain about flattery, but to engineer against it.

Sidebar: A concrete failure mode

I recently tested a thesis about object permanence on Mistral. After three exchanges, the model had escalated a narrow claim into an overarching framework, complete with invented subcategories and false precision. The prose was confident. The structure was impressive. The argument was unrecognisable.

This is the Dunning–Kruger risk in practice. The model produced something internally coherent that I lacked the domain expertise to properly evaluate. Coherence felt like correctness.

The countermeasure was using a second model, which immediately flagged the overreach. Disagreement between models is often more informative than agreement.

Three tactics matter here.

1. Role constraint
Models respond strongly to role specification. Asking explicitly for critique, objections, boundary-setting, and likely reviewer resistance produces materially different output than asking for ‘thoughts’ or ‘feedback’.

2. Third-person framing
First-person presentation cues collaboration. Third-person presentation cues evaluation.

Compare:

  • Here’s my thesis; what do you think?
  • Here is a draft thesis someone is considering. Please evaluate its strengths, weaknesses, and likely objections.

The difference is stark. The first invites repair and encouragement. The second licenses dismissal. This is not trickery; it is context engineering.

3. Multiple models, in parallel
Different models have different failure modes. One flatters. Another nitpicks. A third accuses the work of reinventing the wheel. Their disagreement is the point. Where they converge, caution is warranted. Where they diverge, something interesting is happening.

‘Claude says…’: outsourcing the ego

One tactic emerged almost accidentally and turned out to be the most useful of all.

Rather than responding directly to feedback, I often relay it as:

“Claude says this…”

The conversation then shifts from defending an idea to assessing a reading of it. This does two things at once:

  • It removes personal defensiveness. No one feels obliged to be kind to Claude.
  • It invites second-order critique. People are often better at evaluating a critique than generating one from scratch.

This mirrors how academic peer review actually functions:

  • Reviewer 2 thinks you’re doing X.
  • That seems like a misreading.
  • This objection bites; that one doesn’t.

The difference is temporal. I am doing this before the draft hardens and before identity becomes entangled with the argument.

Guardrails against self-delusion

There is a genuine Dunning–Kruger risk when working outside one’s formal domain. Generative AI does not remove that risk. Used poorly, it can amplify it.

The countermeasure is not humility as a posture, but friction as a method:

  • multiple models,
  • adversarial prompting,
  • third-person evaluation,
  • critique of critiques,
  • and iterative narrowing before committing to form.

None of this guarantees correctness. It does something more modest and more important: it makes it harder to confuse internal coherence with external adequacy.

What this cannot do

It’s worth being explicit about the limits. Generative models cannot tell you whether a claim is true. They can tell you how it is likely to be read, misread, resisted, or dismissed. They cannot arbitrate significance. They cannot decide what risks are worth taking. They cannot replace judgment. Those decisions remain stubbornly human.

What AI can do – when used carefully – is surface pressure early, cheaply, and without social cost. It lets ideas announce their limits faster, while those limits are still negotiable.

A brief meta-note

For what it’s worth, Claude itself was asked to critique an earlier draft of this post. It suggested compressing the familiar arguments, foregrounding the ‘Claude says…’ tactic as the real contribution, and strengthening the ending by naming what the method cannot do.

That feedback improved the piece. Which is, rather conveniently, the point.

Moral Psychology and the Art of Not Believing Your Own Results

3–4 minutes

Over the past few decades, moral psychology has staged a quiet coup against one of our most cherished fantasies: that human beings are, at bottom, rational moral agents. This is not a fringe claim. It is not a Twitter take. It is the mainstream finding of an entire research programme spanning psychology, cognitive science, linguistics, and neuroscience.

We do not reason our way to moral conclusions. We feel our way there. Instantly. Automatically. And only afterwards do we construct reasons that make the judgment sound respectable.

Audio: NotebookLM summary podcast of this topic.

This is not controversial anymore. It is replicated, taught, and celebrated. And yet, if you read the most influential books in this literature, something strange happens. The diagnosis is devastating. The prescription is reassuring.

I’ve just published a long-form video walking through five canonical books in moral psychology that all uncover the same structural problem, and then quietly refuse to live with the implications.

What follows is a brief guide to the argument.

The shared discovery

Across the literature, the same conclusions keep reappearing:

  • Moral judgement is intuitive, not deliberative
  • Reasoning is largely post-hoc
  • Emotion is not noise but signal
  • Framing and metaphor shape what even counts as a moral fact
  • Group identity and tribal affiliation dominate moral perception

In other words: the Enlightenment picture of moral reasoning is wrong. Or at least badly incomplete.

The rider does not steer the elephant. The rider explains where the elephant has already gone.

Image: NotebookLM infographic

Where the books go wrong

The video focuses on five widely read, field-defining works:

  • The Righteous Mind (reviewed here and here… even here)
  • Moral Politics (mentioned here – with Don’t Think of an Elephant treated as its popular sequel)
  • Outraged! (reviewed here)
  • Moral Tribes (reviewed here)

Each of these books is sharp, serious, and worth reading. This is not a hit piece.

But each follows the same arc:

  1. Identify a non-rational, affective, automatic mechanism at the heart of moral judgement
  2. Show why moral disagreement is persistent and resistant to argument
  3. Propose solutions that rely on reflection, dialogue, reframing, calibration, or rational override

In short: they discover that reason is weak, and then assign it a leadership role anyway.

Haidt dismantles moral rationalism and then asks us to talk it out.
Lakoff shows that framing is constitutive, then offers better framing.
Gray models outrage as a perceptual feedback loop, then suggests we check our perceptions.
Greene diagnoses tribal morality, then bets on utilitarian reasoning to save us.

None of this is incoherent. But it is uncomfortable. Because the findings themselves suggest that these prescriptions are, at best, limited.

Diagnosis without prognosis

The uncomfortable possibility raised by this literature is not that we are ignorant or misinformed.

It is that moral disagreement may be structural rather than solvable.

That political conflict may not be cured by better arguments.
That persuasion may resemble contagion more than deliberation.
That reason often functions as a press secretary, not a judge.

The books sense this. And then step back from it. Which is human. But it matters.

Why this matters now

We are living in systems that have internalised these findings far more ruthlessly than public discourse has.

Social media platforms optimise for outrage, not understanding.
Political messaging is frame-first, not fact-first.
AI systems are increasingly capable of activating moral intuitions at scale, without fatigue or conscience.

Meanwhile, our institutions still behave as if one more conversation, one more fact-check, one more appeal to reason will close the gap. The research says otherwise.

And that gap between what we know and what we pretend may be the most important moral problem of the moment.

No solution offered

The video does not end with a fix. That’s deliberate.

Offering a neat solution here would simply repeat the same move I’m criticising: diagnosis followed by false comfort. Sometimes orientation matters more than optimism. The elephant is real. The elephant is moving. And most of us are passengers arguing about the map while it walks.

That isn’t despair. It’s clarity.

Language Insufficiency Hypothesis: Presumption and Horizon

I discuss Chapter 4 of ‘A Language Insufficiency Hypothesis’ in this video clip.

Video: Presumption and Horizon

In short, I discuss where language fails in law, politics, science, and digital culture: domains where we think language conveys more than it does.

Language Insufficiency Hypothesis: The Gradient

In a 4-minute video, I discuss The Gradient, Chapter 3 of my latest book, A Language Insufficiency Hypothesis.

Video: The Gradient

It’s a short video/chapter. Nothing much to add. In retrospect, I should have summarised chapters 3 and 4 together.

Audio: NotebookLM summary podcast of this chapter.

Facts, Intent, and the Afterlife of Metaphysics

5–8 minutes

I’ve been reading Bernard Williams lately, and I’ve written about his work on Truth and Truthfulness. I’m in the process of writing more on the challenges of ontological moral positions and moral luck. I don’t necessarily want to make contemporary news my focal point, but this is a perfect case study for it. I’ll be releasing a neutral philosophy paper on the underlying causes, but I want to comment on this whilst it’s still in the news cycle.

This form of xenophobia is a phenomenon occurring in the United States, though the ontological split applies more generally. For those unfamiliar with US news, I’ll set this up. The United States is currently deploying federal enforcement power in ways that deliberately bypass local consent, blur policing and military roles, and rely on fear as a stabilising mechanism. Historical analogies are unavoidable, but not required for the argument that follows. These forces have been deployed in cities that did not and do not support the Trump administration, exacting revenge and aiming to foment fear and unrest. This case is an inevitable consequence of those policy measures.

tl;dr: The Law™ presents itself as fact-driven, but only by treating metaphysical imputations about inner life as if they were empirical findings. This is not a flaw in this case; it is how the system functions at all.

NB: Some of this requires having read Williams or having a familiarity with certain concepts. Apologies in advance, but use Google or a GPT to fill in the details.

Audio: NotebookLM summary podcast of this content.

Why the Minneapolis ICE Shooting Exposes the Limits of Bernard Williams

The Minneapolis ICE shooting is not interesting because it is unusual. It is interesting because it is painfully ordinary. A person is dead. An officer fired shots. A vehicle was involved. Video exists. Statements were issued. Protests followed. No one seriously disputes these elements. They sit in the shared centre of the Venn diagram, inert and unhelpful. Where everything fractures is precisely where the law insists clarity must be found: intent and motive. And this is where things stop being factual and start being metaphysical.

The Comfortable Fiction of Legal Facts

The legal system likes to tell a comforting story about itself. It claims to be empirical, sober, and evidence-driven. Facts in, verdicts out. This is nonsense.

What the law actually does is this:

  • It gathers uncontested physical facts.
  • It then demands a psychological supplement.
  • It treats that supplement as if it were itself a fact.

Intent and motive are not observed. They are inferred. Worse, they are imposed. They are not discovered in the world but assigned to agents to make outcomes legible.

In Minneapolis, the uncontested facts are thin but stable:

  • A U.S. Immigration and Customs Enforcement (ICE) agent, identified as Jonathan Ross, shot and killed Renée Nicole Good in Minneapolis on 7 January 2026.
  • The incident involved Good’s vehicle, which was present and moving at the time shots were fired.
  • Ross fired his weapon multiple times, and Good died from those gunshot wounds.
  • The Department of Homeland Security (DHS) claims the agent acted in self-defence.
  • Video footage exists that shows at least part of the encounter.
  • The case ignited protests, widespread condemnation from local officials, and political pushback.

This creates a shared intersection: vehicle, Ross, shots, and that ‘something happened’ that neither side is denying.

None of these facts contain intent. None of them specify motive. They do not tell us whether the movement of the vehicle was aggression, panic, confusion, or escape. They do not tell us whether the shooting was fear, anger, habit, or protocol execution. Yet the law cannot proceed without choosing.

So it does what it always does. It smuggles metaphysics into evidence and calls it psychology.

Intent and Motive as Institutional Impositions

Intent is treated as a condition of responsibility. Motive is treated as its explanation. Neither is a fact in anything like the ordinary sense. Even self-report does not rescue them. Admission is strategically irrational. Silence is rewarded. Reframing is incentivised. And even sincerity would not help, because human beings do not have transparent access to their own causal architecture. They have narratives, rehearsed and revised after the fact. So the law imputes. It tells the story the agent cannot safely tell, and then punishes or absolves them on the basis of that story. This is not a bug. It is the operating system.

Where Bernard Williams Comes In

This is where Bernard Williams becomes relevant, and where his account quietly fails. In Truth and Truthfulness, Williams famously rejects the Enlightenment fantasy of capital-T Truth as a clean, context-free moral anchor. He replaces it with virtues like sincerity and accuracy, grounded in lived practices rather than metaphysical absolutes. So far, so good.

Williams is right that moral life does not float above history, psychology, or culture. He is right to attack moral systems that pretend agents consult universal rules before acting. He is right to emphasise thick concepts, situated reasons, and practical identities. But he leaves something standing that cannot survive the Minneapolis test.

The Residue Williams Keeps

Williams still needs agency to be intelligible. He still needs actions to be recognisably owned. He still assumes that reasons, however messy, are at least retrospectively available to anchor responsibility. This is where the residue collapses.

In cases like Minneapolis:

  • Intent is legally required but epistemically unavailable.
  • Motive is legally explanatory but metaphysically speculative.
  • Admission is disincentivised.
  • Narrative is imposed under institutional pressure.

At that point, sincerity and accuracy are no longer virtues an agent can meaningfully exercise. They are properties of the story selected by the system. Williams rejects metaphysical Truth while retaining a metaphysical agent robust enough to carry responsibility. The problem is that law does not merely appeal to intelligibility; it manufactures it under constraint.

Moral Luck Isn’t Enough

Williams’ concept of moral luck gestures toward contingency, but it still presumes a stable agent who could, in principle, have acted otherwise and whose reasons are meaningfully theirs. But once intent and motive are understood as institutional fabrications rather than inner facts, ‘could have done otherwise’ becomes a ceremonial phrase. Responsibility is no longer uncovered; it is allocated. The tragedy is not that we fail to know the truth. The tragedy is that the system requires a truth that cannot exist.

Facts Versus Stories

The law does not discover which story is true. It selects which story is actionable.

The Minneapolis case shows the fault line clearly:

  • Facts: bodies, movements, weapons, recordings.
  • Stories: fear versus anger, defence versus aggression.
  • The first is uncontested. The second does all the work.

And those stories are not epistemic conclusions. They are metaphysical commitments enforced by law. Williams wanted to rescue ethics from abstraction. What he could not accept is that, once abstraction is removed, responsibility does not become more human. It becomes procedural.

The Uncomfortable Conclusion

The law does not operate on truth. It operates on enforceable interpretations of behaviour. Intent and motive are not facts. They are tools. Williams saw that capital-T Truth had to go. What he did not see, or perhaps did not want to see, is that the smaller, more humane residue he preserved cannot bear the weight the legal system places on it.

Once you see this, the obsession with ‘what really happened’ looks almost childish. The facts are already known. What is being fought over is which metaphysical fiction the system will enforce.

That decision is not epistemic. It is political. And it is violent.