Legibility and Ontology

These two words qualify as my words of the month: legibility and ontology.

I’ve been using them as lenses.

I picked up legibility from James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, which is really a book about how well-intentioned schemes fail once reality is forced to become administrable. Ontology is an older philosophical workhorse, usually paired with epistemology, but I’m using it here in a looser, more pragmatic sense.

When I write, I write through lenses. Everyone does. Writing requires a point of view, even when we pretend otherwise.

In this post, I want to talk more informally about my recent essay, Grammatical Failure. I usually summarise my work elsewhere, but here I want to think out loud about it, particularly in relation to social ontology and epistemology. I won’t linger on definitions. They’re a search away. But a little framing helps.

Ontology, roughly: how reality is parsed.

Epistemology: how knowledge is justified within that parsing.

Audio: NotebookLM summary of this post.

Much of my recent work sits downstream of thinkers like Thomas Sowell, George Lakoff, Jonathan Haidt, Kurt Gray, and Joshua Greene. Despite their differences, they converge on a shared insight: human cognition is largely motivated preverbally. As a philosopher of language, I find my interest sharpening at exactly that pre-language layer.

I explored this in earlier work, including a diptych titled The Grammar of Impasse: Conceptual Exhaustion and Causal Mislocation. Writing is how I gel these ideas. There are several related pieces still in the pipeline.

When I talk about grammar, I don’t mean Saussure or Chomsky. I mean something deeper: the ontological substrate beneath belief. Grammar, in this sense, is how reality gets parsed before beliefs ever form. It filters what can count as real, salient, or intelligible.

Let’s use a deliberately simplified example.

Imagine two ontological orientations. Call them Ont-C and Ont-L. This isn’t to say there are only two, but much of Western political discourse collapses into a binary anyway.

Ont-C tends to experience people as inherently bad, dangerous, or morally suspect. Ont-L tends to experience people as inherently good or at least corrigible. These aren’t opinions in the usual sense. They sit beneath belief, closer to affect and moral orientation.

Now consider retributive justice, setting aside the fact that justice itself is a thick concept.

From Ont-C, punishment teaches a lesson. It deters. It disciplines. From Ont-L, punishment without rehabilitation looks cruel or counterproductive, and the transgression itself may be read as downstream of systemic injustice.

Each position can acknowledge exceptions. Ont-L knows there are genuinely broken people. Ont-C knows there are saints. But those are edge cases, not defaults.

Now ask Ont-C and Ont-L to design a criminal justice system together. The result will feel intolerable to both. Too lenient. Too harsh. The disagreement isn’t over policy details. It’s over how reality is carved up in the first place.

And this is only one dimension.

Add others. Bring in Ont-V and Ont-M if you like, for vegan and meat-based ontologies. Suddenly, you have Ont-CV, Ont-CM, Ont-LV, and Ont-LM. Then add class, religion, gender, authority, harm, and whatever. Intersectionality stops looking like a solution and starts looking like a combinatorial explosion.
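The arithmetic behind that explosion is worth making explicit: with n binary dimensions, you get 2^n composite ontologies, and every added dimension doubles the count. A minimal sketch of the growth (the dimension names and labels are illustrative, not drawn from the essay):

```python
# Illustrative only: each ontological dimension is reduced to a binary
# choice, and composite orientations are their Cartesian product.
from itertools import product

dimensions = {
    "disposition": ["C", "L"],   # Ont-C vs Ont-L
    "diet": ["V", "M"],          # Ont-V vs Ont-M
    "class": ["Hi", "Lo"],       # hypothetical labels
    "religion": ["R", "S"],      # hypothetical labels
}

# Build every composite label, e.g. "Ont-CVHiR".
orientations = ["Ont-" + "".join(combo)
                for combo in product(*dimensions.values())]
print(len(orientations))  # 2^4 = 16 composite ontologies
```

Four binary dimensions already yield sixteen mutually distinct orientations; ten would yield 1,024. The point is not the code but the doubling.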

The Ont-Vs can share a meal, so long as they don’t talk politics.

The structure isn’t just unstable. It was never stable to begin with. We imagine foundations because legibility demands them.

Grammatical Failure is an attempt to explain why this instability isn’t a bug in liberal epistemology but a structural feature. The grammar does the sorting long before deliberation begins.

More on that soon.


In any case, once you start applying this ontological lens to other supposedly intractable disputes, you quickly realise that their intractability is not accidental.

Take abortion.

If we view the issue through the lenses of Ont-A (anti-abortion) and Ont-Ch (maternal choice) – Ont-C is already taken – we might as well be peering through Ont-Oil and Ont-Water. The disagreement does not occur at the level of policy preferences or competing values. It occurs at the level of what counts as morally salient in the first place.

There is no middle ground here. No middle path. No synthesis waiting to be negotiated into existence.

That is not because the participants lack goodwill, intelligence, or empathy. It is because the ontological primitives are incommensurate. Each side experiences the other not as mistaken but as unintelligible.

We can will compromise all we like. The grammar does not comply.

Contemporary discourse often insists otherwise. It tells us that better arguments, clearer framing, or more dialogue will eventually converge. From this perspective, that insistence is not hopeful. It is confused. It mistakes a grammatical fracture for a deliberative failure.

Consider other polemical topics and you will notice the same interplay.

Why “Just Think Critically” Keeps Failing

The paper is now available on Zenodo.

I’ve been wittering on about social ontological positions and legibility for a few months now. I’ve been writing a book and several essays, but this is the first to be published. In it, I counter not only Ranalli – not the man, but the belief he adopts – but also Thomas Sowell, George Lakoff, Jonathan Haidt, Kurt Gray, and Joshua Greene. (Counter might be a little harsh; I agree with their conclusions, but I remain on the path they stray from.)

Audio: NotebookLM summary of the essay: Grammatical Failure

There is a strange faith circulating in contemporary culture: the belief that disagreement persists because someone, somewhere, hasn’t been taught how to think properly.

The prescription is always the same. Teach critical thinking. Encourage openness. Expose people to alternatives. If they would only slow down, examine the evidence, and reflect honestly, the right conclusions would present themselves.

When this doesn’t work, the explanation is equally ready to hand. The person must be biased. Indoctrinated. Captured by ideology. Reason-resistant.

What’s rarely considered is a simpler possibility: nothing has gone wrong.

Most of our public arguments assume that we are all operating inside the same conceptual space, disagreeing only about how to populate it. We imagine a shared menu of reasons, facts, and values, from which different people select poorly. On that picture, better reasoning should fix things.

What if what counts as a ‘reason’, what qualifies as ‘evidence’, or what even registers as a meaningful alternative is already structured differently before any deliberation begins?

At that point, telling someone to ‘think critically’ is like asking them to optimise a system they cannot see, using criteria they do not recognise. The instruction is not offensive. It’s unintelligible. This is why so many contemporary disputes feel immune to argument. Not merely heated, but strangely orthogonal. You aren’t rebutted so much as translated into something else entirely: naïve, immoral, dangerous, unserious. And you do the same in return.

Liberal epistemology has a neat explanation for this. It treats these failures as agent-level defects: insufficient openness, motivated reasoning, epistemic irresponsibility. The problem is always how people reason. The argument of Grammatical Failure is that this diagnosis is systematically misplaced. The real constraint, in many cases, lies upstream of reasoning itself. It lies in the semantic frameworks that determine what can count as a reason in the first place. When those frameworks diverge, deliberation doesn’t fail heroically. It fails grammatically.

This doesn’t mean people lack agency. It means agency operates within a grammar, not over it. We choose, revise, and reflect inside spaces of intelligibility we did not author. Asking deliberation to rewrite its own conditions is like asking a sentence to revise its own syntax mid-utterance. The result is a familiar pathology. Disagreement across frameworks is redescribed as epistemic vice. Category rejection is mistaken for weak endorsement. Indoctrination becomes a label we apply whenever persuasion fails. Not because anyone is lying, but because our diagnostic tools cannot represent what they are encountering.

The paper itself is not a manifesto or a programme. It doesn’t tell you what to believe, how to educate, or which politics to adopt. It does something more modest and more uncomfortable. It draws a boundary around what liberal epistemology can coherently explain – and shows what happens when that boundary is ignored.

Sometimes the problem isn’t that people won’t think.

It’s that they are already thinking in a grammar that your advice cannot reach.

Why So Serious?

Yes, I am still focusing on writing my ontology papers, but I still come up for air. Over lunch, I found this: Jonny Thomson showcasing Judge Coleridge: The Duty. Watch it.

Video: Philosophy Minis: Judge Coleridge: The Duty

This really got my hamster wheel cranking. In fact, it gave me another essay idea mired in formal logic. Yuck, I know.

My brief post here is to share this and ask why I don’t share ‘positive’ posts. Pretty much everything is critical. For one, it’s how my brain works. For two, I don’t really know.

When I see something, I instantly want to tear it apart, not for the sake of malice but because my mind registers it as WTAF?

In short, the judge says that one cannot privilege one’s own life over others. Of course, this got my hamster on steroids, considering the implication: does this invalidate self-defence? Wouldn’t it? 🧐

The answer is yes – but only if Law were tethered to Morality, which it isn’t. This will be my essay. Who knows when I’ll have time to write it? Please, stand by. Cheers.

What are your thoughts? Maybe I’ll share this as a video response on YouTube and TikTok. Time will tell – and it evidently heals all wounds.

Ontological Blindness in Modern Moral Science

Announcement: I’ll be taking a break from posting long-form articles for a while to focus on a project I’m developing. Instead, I’ll share progress summary updates.

Ontological Blindness in Modern Moral Science is a working title, with a working subtitle of Why Semantic Thickness, Measurement, and Reconciliation Go Wrong. No spoilers.

Audio: NotebookLM summary podcast of this topic.

INSERT: I’ve only outlined and stubbed this Ontological blindness project, and I’ve already got another idea. I need to stop reading and engaging with the world.

I was listening to the Audible version of A.J. Ayer’s classic, Language, Truth, and Logic (1936) – not because I had time but because I listen to audiobooks when I work out. Ayer is a Logical Positivist, but I forgive him. He’s a victim of his time. In any case, I noticed several holes in his logic.

Sure, the book was published in 1936, and it is infamous for defending or creating Emotivism, a favourite philosophical whipping boy. I’m an Emotivist, so I disagree with the opposition. In fact, I feel their arguments are either strawmen or already defended by Ayer. I also agree with Ayer that confusing the map of language with the terrain of reality is a problem in philosophy (among other contexts), but it’s less excusable for a language philosopher.

In any case, I have begun a file to consider a new working title, Phenomenal Constraint and the Limits of Ontological Language. I might as well stay in the ontological space for a while. We’ll see where it leads, but first, I need to put the original project to bed.

Every time I commence a project, I create a thesis statement and an abstract to orient me. These may change over the course of a project, especially larger ones – the abstract more so than the thesis. This thesis has already changed a couple of times, but I feel it’s settled now.

Image: NotebookLM infographic on this topic.

Thesis Statement

Modern moral psychology repeatedly commits a multi-layered category error by treating semantically and ontologically heterogeneous moral terms as commensurate units within a single comparative framework, while simultaneously treating parochial moral metaphysics as natural substrate.

This dual conflation—of semantic density with moral plurality, and of ontological commitment with empirical discovery—produces the false appearance that some moral systems are more comprehensive than others, when it in fact reflects an inability to register ontological incommensurability.

Moral Foundations Theory provides a clear and influential case of this broader mistake: a framework whose reconciliation-oriented conclusions depend not on empirical discovery alone, but on an unacknowledged liberal-naturalist sub-ontology functioning as conceptual ‘firmware’ mistaken for moral cognition itself.

Abstract

Modern moral psychology seeks to explain moral diversity through empirically tractable frameworks that assume cross-cultural comparability of moral concepts. This book argues that many such frameworks – including but not limited to Moral Foundations Theory (MFT) – rest on a persistent category error: the treatment of semantically and ontologically heterogeneous moral terms as commensurate units within a single evaluative space.

The argument proceeds in four stages. First, it establishes that moral vocabularies differ not merely in emphasis but in semantic thickness: some terms (e.g. harm, fairness) are comparatively thin, portable, and practice-independent, while others (e.g. loyalty, authority, sanctity) are culturally saturated, institution-dependent, and ontologically loaded. Treating these as equivalent ‘foundations’ mistakes density for plurality.

Second, the book shows that claims of moral ‘breadth’ or ‘completeness’ smuggle normativity into ostensibly descriptive research, crossing the Humean is/ought divide without acknowledgement. Third, it argues that this slippage is not accidental but functional, serving modern culture’s demand for optimistic, reconcilable accounts of moral disagreement.

Finally, through sustained analysis of MFT as a worked example, the book demonstrates how liberal naturalist individualism operates as an unacknowledged sub-ontology – conceptual firmware that determines what counts as moral, measurable, and comparable. The result is not moral pluralism, but ontological imperialism disguised as empirical neutrality.

The book concludes by arguing that acknowledging ontological incommensurability does not entail nihilism or relativistic indifference, but intellectual honesty about the limits of moral science and the false comfort of reconciliation narratives.

Ideation

I’ve been pondering ontologies a lot these past few weeks, especially how social ontologies undermine communication. More recently, I’ve been considering how sub-ontologies come into play. A key catalyst for my thinking has been Jonathan Haidt’s Moral Foundations Theory, but I’ve also been influenced by George Lakoff, Kurt Gray, and Joshua Greene, as I’ve shared recently. I want to be clear: This book is not about politics or political science. It is intended to be about the philosophy of psychology and adjacent topics.

At the highest levels, I see fundamental category errors undermining MFT, but as I inspected further, the problem goes deeper still – too deep to fit into an essay or even a monograph. So I will be targeting a book, which gives me room to expand and articulate my argumentation. Essays are constraining, and the narrative flow – so to speak – is interrupted by footnotes and tangents.

In a book, I can spend time framing and articulating – educating the reader without presuming in-depth knowledge. I’ll try not to patronise readers, but this topic is not only deep and counterintuitive; it is also largely unorthodox and may ruffle a few feathers.

I’m not sure how much I’ll be able to share, but I’d like to be transparent in the process and perhaps gather some inputs along the way.

Methodology

Sort of… I’ve used Scrivener in the past for organising and writing fiction. This is the first time I’m organising nonfiction. We’ll see how it goes.

Reality Happens Once. Facts Happen Many Times.

I want to clarify my recent The Trouble with Facts post. I realise that I was speaking to one non-trivial form of facts, but there is more than one class of facts. We argue about facts as if the word named a single, stable thing. It doesn’t. It names a family of very different things, quietly grouped together by habit, convenience, and institutional need. Most disputes about facts go nowhere, not because one side is irrational, but because the word itself is doing covert work. We slide between meanings without noticing, then act surprised when disagreement follows. This piece is an attempt to slow that slide.

Audio: NotebookLM summary podcast of this topic.

Polysemy We Notice, Polysemy We Don’t

We are comfortable with ambiguity when it is obvious. A bank can be a financial institution or the edge of a river. A bat can be an animal or a piece of sports equipment. Context resolves these instantly. No one feels existentially threatened by the ambiguity.

Fact is different. The word is polysemous in a way that is both subtle and consequential. Its meanings sit close enough to bleed into one another, allowing certainty from one sense to be smuggled into another without detection. Calling something a fact does not merely describe it. It confers authority. It signals that questioning should stop. That is why this ambiguity matters.

Different Kinds of Facts

Before critiquing facts, we need to sort them.

1. Event-facts (brute, world-facing)
As mentioned previously, these concern what happens in the world, independent of observation.

  • A car collides with a tree.
  • Momentum changes.
  • Metal deforms.

These events occur whether or not anyone notices them. They are ontologically robust and epistemically inaccessible. No one ever encounters them directly. We only ever encounter traces.

2. Indexical or performative facts (trivial, self-reporting)
“I am typing.”

I am doing this now – though that now may not hold by the time you read this. This is a fact, but a very thin one. Its authority comes from the coincidence of saying and doing. It requires no reconstruction, no inference, no institutional validation. These facts are easy because they do almost no work.

3. Retrospective personal facts (memory-mediated)
“I was typing.”

This may be relevant now, at least relative to the typing of this particular post. Still a fact, but weaker. Memory enters. Narrative compression enters. Selectivity enters. The same activity now carries a different epistemic status purely because time has passed.

4. Prospective statements (modal, not yet facts)
“I will be typing.”

This is not yet a fact. It may never come to be one. It is an intention or prediction that may or may not be realised. Future-tense claims are often treated as incipient facts, but this is a category error with real consequences.

5. Institutional facts (designated, procedural)
“The court finds…”
“The report concludes…”

These are facts by designation. They are not discovered so much as selected, formalised, and stabilised so that systems can act. They rarely rise to the level of event-facts, so the legal system tends to generate facts in name only – FINO, if I am being cute.

All of these are called ‘facts’. They are not interchangeable. The trouble begins when certainty migrates illicitly from trivial or institutional facts into brute event-facts, and we pretend nothing happened in the transfer.

One Motor Vehicle

Reconsider the deliberately simple case: A motor vehicle collides with a tree. Trees are immobile, so we can rule out the tree colliding with the car.

Ontologically, something happened. Reality did not hesitate. But even here, no one has direct access to the event itself.

The driver does not enjoy privileged access. They experience shock, adrenaline, attentional narrowing, selective memory, post hoc rationalisation, perhaps a concussion. Already several layers intervene before language even arrives.

A rough schema looks like this:

event → sensory registration → cognitive framing → linguistic encoding → social validation

Ontology concerns what happens.
Epistemology concerns how anything becomes assertable.

Modern thinking collapses the second into the first and calls the result the facts.

People speak of “hard facts” as if hardness transfers from objects to propositions by proximity. It doesn’t. The tree is solid. The fact is an artefact assembled from observation, inference, convention, and agreement.

And so it goes…

Why the Confusion Persists

When someone responds, “But isn’t it a fact that I read this?”, the answer is yes. A different kind of fact.

The error lies not in affirming facts, but in failing to distinguish them. The word fact allows certainty to migrate across categories unnoticed, from trivial self-reports to brute world-events, and from institutional verdicts to metaphysical claims. That migration is doing the work.

Conclusion

Clarifying types of facts does not weaken truth. It prevents us from laundering certainty where it does not belong.

Facts exist. Events occur. But they do not arrive unmediated, innocent, or singular.

Reality happens once. Facts happen many times.

The mistake was never that facts are unreal. It was believing they were all the same kind of thing.

The Trouble with Facts

One Motor Vehicle

What we call facts are not discoveries of an unfiltered world. They are the end-products of mediation.

Let’s walk through an example.

Image: Autosmash example.

An observer arrives with experience – from genetic predisposition to childhood trauma to winning the lottery. Whatever it might be. Of course, they have many cognitive deficits, biases and filters. Then, there’s the immediate problem of attention. When did they notice the event? Did they turn to look after hearing the noise, or were they meditating on the tree in that moment?

Apparently, a motor vehicle has collided with a tree. Trees are immobile objects, so we can safely rule out the tree colliding with the car.*

So what, exactly, are the facts?

Audio: NotebookLM summary podcast of this topic.

Ontology (the boring bit)

Ontologically, something happened.

A car struck a tree.
Metal deformed.
Momentum stopped.

Reality did not hesitate. It did not consult witnesses. It did not await interpretation.

This is the part Modernity likes to gesture at reverently before immediately leaving it behind.

Image: Requisite NotebookLM infographic on this content.

The Witness

Even the driver does not enjoy privileged access to “what really happened”.

They get:

  • proprioceptive shock
  • adrenaline distortion
  • attentional narrowing
  • selective memory
  • post hoc rationalisation
  • possibly a concussion

Which is already several layers deep before language even arrives to finish the job.

We can generalise the structure:

Ontology: events occur. States of affairs obtain. Something happens whether or not we notice.

Epistemology: observation is always filtered through instruments, concepts, language, habits, and incentives.

Modern sleight of hand: collapse the second into the first and call the result the facts.

People love the phrase “hard facts”, as if hardness transfers from objects to propositions by osmosis. It doesn’t. The tree is solid. The fact is not.

Facts are artefacts. They are assembled from observation, inference, convention, and agreement. They function. They do not reveal essence.

Filtration

An event occurred. A car struck a tree.

Then an observer arrives. But observers never arrive empty-handed.

They arrive with history: genetics, upbringing, trauma, habits, expectations, incentives. They arrive already filtered.

Daniel Kahneman, Olivier Sibony, and Cass Sunstein spend an entire book explaining just how unreliable this process is. See Noise: A Flaw in Human Judgment if you want the empirical receipts.

Even before bias enters, attention does. When did the observer notice the crash? At the sound? At the sight? After the fact? Were they already looking, or did the noise interrupt something else entirely?

Reality happens once. Facts happen many times, differently, depending on who needs them and why.

Here Comes the Law

This is where the legal system enters, not because truth has been found, but because closure is required.

Courts do not discover facts. They designate versions of events that are good enough to carry consequences. They halt the cascade of interpretations by institutional force and call the result justice.

At every epistemic level, what we assert are interpretations of fact, never access to ontological essence.

Intent, negligence, recklessness. These are not observations. They are attributions. They are stopping rules that allow systems to function despite uncertainty.

The law does not ask what really happened.
It asks which story is actionable.

Two Motor Vehicles

Now add a second moving object.

Another car enters the frame, and with it an entire moral universe.

Suddenly, the event is no longer merely physical. It becomes relational. Agency proliferates. Narratives metastasise.

Who was speeding?
Who had the right of way?
Who saw whom first?
Who should have anticipated whom?

Intent and motive rush in to fill the explanatory vacuum, despite remaining just as unobservable as before.

Nothing about the ontology improved.
Everything about the storytelling did.

Where the tree refused intention, the second vehicle invites it. We begin inferring states of mind from trajectories, attributing beliefs from brake lights, extracting motives from milliseconds of motion.

But none of this is observed.

What we observe are:

  • vehicle positions after the fact,
  • damage patterns,
  • skid marks,
  • witness statements already filtered through shock and expectation.

From these traces, we construct mental interiors.

The driver “intended” to turn.
The other driver “failed” to anticipate.
Someone was “reckless”.
Someone else was merely “unlucky”.

These are not facts. They are interpretive assignments, layered atop already mediated observations, selected because they allow responsibility to be distributed in socially recognisable ways.

This is why explanation now fractures.

One cascade of whys produces a story about distraction or poor judgment.
Another produces a story about road design or visibility.
Another about timing, traffic flow, or urban planning.

Each narrative is plausible.
Each is evidence-constrained.
None is ontologically privileged.

Yet one will be chosen.

Not because it is truer, but because it is actionable.

The presence of a second vehicle does not clarify causation. It merely increases the number of places we are willing to stop asking questions.

Modernity mistakes this proliferation of narrative for epistemic progress. In reality, it is moral bookkeeping.

The crash still occurred.
Metal still deformed.
Momentum still stopped.

What changed was not access to truth, but the urgency to assign fault.

With one vehicle and a tree, facts already fail to arrive unmediated.
With two vehicles, mediation becomes the point.

And still, we insist on calling the result the facts.

Many Vehicles, Cameras, and Experts

At this point, Modernity regains confidence.

Add more vehicles.
Add traffic cameras.
Add dashcams, CCTV, bodycams.
Add accident reconstruction experts, engineers, psychologists, statisticians.

Surely now we are approaching the facts.

But nothing fundamental has changed. We have not escaped mediation. We have merely scaled it up and professionalised it.

Cameras do not record reality. They record:

  • a frame,
  • from a position,
  • at a sampling rate,
  • with compression,
  • under lighting conditions,
  • interpreted later by someone with a mandate.

Video feels decisive because it is vivid, not because it is ontologically transparent. It freezes perspective and mistakes that freeze for truth. Slow motion, zoom, annotation. Each step adds clarity and distance at the same time.

Experts do not access essence either. They perform disciplined abduction.

From angles, debris fields, timing estimates, and damage profiles, they infer plausible sequences. They do not recover the event. They model it. Their authority lies not in proximity to reality, but in institutional trust and methodological constraint.

More data does not collapse interpretation.
It multiplies it.

With enough footage, we don’t get the story. We get competing reconstructions, each internally coherent, each technically defensible, each aligned to a different question:

  • Who is legally liable?
  • Who is financially responsible?
  • Who violated policy?
  • Who can be blamed without destabilising the system?

At some point, someone declares the evidence “clear”.

What they mean is: we have enough material to stop arguing.

This is the final Modern illusion: that accumulation converges on essence. In reality, accumulation converges on closure.

The event remains what it always was: inaccessible except through traces.
The facts become thicker, more confident, more footnoted.
Their metaphysical status does not improve.

Reality happened once. It left debris. We organised the debris into narratives that could survive institutions.

Cameras didn’t reveal the truth. Experts didn’t extract it. They helped us agree on which interpretation would count.

And agreement, however necessary, has never been the same thing as access to what is.

* I was once driving in a storm, and a telephone pole fell about a metre in front of my vehicle. My car drove over the pole, and although I was able to drive the remainder of the way home, my suspension and undercarriage were the worse for wear.

Qualified Subjectivism

I Am a Qualified Subjectivist. No, That Does Not Mean ‘Anything Goes’.

Make no mistake: I am a subjectivist. A qualified one. Not that kind of qualified – the qualification matters, but it’s rarely the part anyone listens to.

Image: Not that kind…

Here is the unglamorous starting point: all human encounters with the world are mediated. There is no raw feed. No unfiltered access. No metaphysical lead running directly from ‘reality’ into the human mind. Every encounter is processed through bodies, nervous systems, cultures, languages, technologies, institutions, and histories.

Audio: NotebookLM summary podcast of this content – See addendum below.

Whilst I discuss the specific architecture of this mediation at length in this preprint, here I will keep it simple.

If you are human, you do not encounter reality as such. You encounter it as processed. This is not controversial. What is controversial is admitting the obvious consequence: the subject is the final arbiter.

Image: NotebookLM Infographic of Qualified Subjectivism

The Subject Is the Final Arbiter

Every account of truth, reality, meaning, value, or fact is ultimately adjudicated by a subject. Not because subjects are sovereign gods, but because there is literally no other place adjudication can occur.

Who, exactly, do critics imagine is doing the adjudicating instead? A neutral tribunal floating outside experience? A cosmic referee with a clipboard? A universal consciousness we all forgot to log into?

This does not mean that truth is ‘whatever I feel like’. It means that truth-claims only ever arrive through a subject, even when they are heavily constrained by the world. And constraint matters. Reality pushes back. Environments resist. Bodies fail. Gravity does not care about your personal narrative.

Why This Is Not Solipsism

Solipsism says: only my mind exists. That is not my claim. My claim is almost boring by comparison: subjects are situated, not sovereign.

We are shaped by environments we did not choose and histories we did not write. Mediation does not eliminate reality; it is how reality arrives. Your beliefs are not free-floating inventions; they are formed under biological, social, and material pressure. Two people can be exposed to the same event and encounter it differently because the encounter is not the event itself – it is the event as mediated through a particular orientation.

Why Objectivity Keeps Sneaking Back In

At this point, someone usually says: ‘But surely some things are objectively true.’

Yes. And those truths are still encountered subjectively. The mistake is thinking that objectivity requires a ‘view from nowhere’. It doesn’t. It requires stability across mediations, not the elimination of mediation altogether. We treat some claims as objective because they hold up under variation, while others fracture immediately. But in all cases, the encounter still happens somewhere, to someone.

The Real Source of the Panic

The real anxiety here is not philosophical. It’s moral and political. People are terrified that if we give up the fantasy of unmediated access to universal truth, then legitimacy collapses and ‘anything goes’.

This is a category error born of wishful thinking. What actually collapses is the hope that semantic convergence is guaranteed. Once you accept that mediation is unavoidable, you are forced to confront a harder reality: disagreement is often structural, not corrigible. Language does not fail because nothing is true. Language fails because too much is true, incompatibly.

So Yes, I Am a Qualified Subjectivist

Interpretation only ever occurs through subjects. Subjects are always mediated. Mediation is always constrained. And constraint does not guarantee convergence.

That is the position. It is not radical, fashionable, or comforting. It is simply what remains once you stop pretending there is a god’s-eye view quietly underwriting your arguments. Discomfort is simply a reliable indicator that a fantasy has been disturbed.

Audio: NotebookLM summary of this Geworfenheit addendum

If all this sounds suspiciously familiar, that’s because it is. Heidegger had a word for it: Geworfenheit – usually translated as thrownness.

The idea is simple, and deeply irritating to anyone still hoping for a clean start. You do not enter the world as a neutral observer. You are thrown into it: into a body, a language, a culture, a history, a set of institutions, a moment you did not choose. You do not begin from nowhere and then acquire a perspective. You begin already situated, already oriented, already implicated.

This is not a poetic flourish. It is a structural claim about human existence.

Image: Another NotebookLM infographic for the fun of it.

What my qualified subjectivism insists on – without Heidegger’s ontological theatre – is the same basic constraint: there is no view from nowhere because there is no nowhere to stand. The subject does not float above mediation; the subject is constituted by it. Thrownness is not an accident to be corrected by better theory. It is the condition under which any theorising occurs at all.

Seen this way, the demand for pure objectivity starts to look less like a philosophical ideal and more like nostalgia for an impossible innocence. A wish to rewind existence to a point before bodies, languages, power, and history got involved. That point never existed.

Geworfenheit matters here because it dissolves the caricature that subjectivism is about arbitrary choice. Being thrown is the opposite of choosing freely. It is constraint before reflection. Orientation before argument. Salience before reasons. You do not decide what matters from a neutral menu; what matters shows up already weighted, already charged, already resistant.

This is why appeals to “just be objective” always ring hollow. Objectivity does not mean escaping thrownness. It means achieving relative stability within it. Some claims hold across many thrown positions. Others fracture immediately. That distinction matters. But none of it happens outside mediation.

So when I say the subject is the final arbiter, I am not crowning the subject king of reality. I am pointing out the obvious: adjudication happens somewhere, to someone, from within a situation they did not author. Thrownness guarantees that there is no cosmic referee waiting to overrule the encounter.

If that makes you uncomfortable, good. It should. Discomfort is often just the sensation of a fantasy losing its grip.

Facts, Intent, and the Afterlife of Metaphysics

5–8 minutes

I’ve been reading Bernard Williams lately, and I’ve written about his work on Truth and Truthfulness. I’m in the process of writing more on the challenges of ontological moral positions and moral luck. I don’t necessarily want to make contemporary news my focal point, but this is a perfect case study for it. I’ll be releasing a neutral philosophy paper on the underlying causes, but I want to comment on this whilst it’s still in the news cycle.

This form of xenophobia is currently unfolding in the United States, though the ontological split applies more generally. For those unfamiliar with US news, I’ll set this up. The United States is currently deploying federal enforcement power in ways that deliberately bypass local consent, blur policing and military roles, and rely on fear as a stabilising mechanism. Historical analogies are unavoidable, but not required for the argument that follows. These forces have been deployed in cities that did not and do not support the Trump administration, so the deployments read as revenge, an attempt to foment fear and unrest. This case is an inevitable conclusion of those policy measures.

tl;dr: The Law™ presents itself as fact-driven, but only by treating metaphysical imputations about inner life as if they were empirical findings. This is not a flaw peculiar to this case; it is how the system functions at all.

NB: Some of this requires having read Williams or having a familiarity with certain concepts. Apologies in advance, but use Google or a GPT to fill in the details.

Audio: NotebookLM summary podcast of this content.

Why the Minneapolis ICE Shooting Exposes the Limits of Bernard Williams

The Minneapolis ICE shooting is not interesting because it is unusual. It is interesting because it is painfully ordinary. A person is dead. An officer fired shots. A vehicle was involved. Video exists. Statements were issued. Protests followed. No one seriously disputes these elements. They sit in the shared centre of the Venn diagram, inert and unhelpful. Where everything fractures is precisely where the law insists clarity must be found: intent and motive. And this is where things stop being factual and start being metaphysical.

The Comfortable Fiction of Legal Facts

The legal system likes to tell a comforting story about itself. It claims to be empirical, sober, and evidence-driven. Facts in, verdicts out. This is nonsense.

What the law actually does is this:

  • It gathers uncontested physical facts.
  • It then demands a psychological supplement.
  • It treats that supplement as if it were itself a fact.

Intent and motive are not observed. They are inferred. Worse, they are imposed. They are not discovered in the world but assigned to agents to make outcomes legible.

In Minneapolis, the uncontested facts are thin but stable:

  • A U.S. Immigration and Customs Enforcement (ICE) agent, identified as Jonathan Ross, shot and killed Renée Nicole Good in Minneapolis on 7 January 2026.
  • The incident involved Good’s vehicle, which was present and moving at the time shots were fired.
  • Ross fired his weapon multiple times, and Good died from those gunshot wounds.
  • The Department of Homeland Security (DHS) claims the agent acted in self-defence.
  • Video footage exists that shows at least part of the encounter.
  • The case ignited protests, widespread condemnation from local officials, and political pushback.

This creates a shared intersection: vehicle, Ross, shots, and that ‘something happened’ that neither side is denying.

None of these facts contain intent. None of them specify motive. They do not tell us whether the movement of the vehicle was aggression, panic, confusion, or escape. They do not tell us whether the shooting was fear, anger, habit, or protocol execution. Yet the law cannot proceed without choosing.

So it does what it always does. It smuggles metaphysics into evidence and calls it psychology.

Intent and Motive as Institutional Impositions

Intent is treated as a condition of responsibility. Motive is treated as its explanation. Neither is a fact in anything like the ordinary sense. Even self-report does not rescue them. Admission is strategically irrational. Silence is rewarded. Reframing is incentivised. And even sincerity would not help, because human beings do not have transparent access to their own causal architecture. They have narratives, rehearsed and revised after the fact. So the law imputes. It tells the story the agent cannot safely tell, and then punishes or absolves them on the basis of that story. This is not a bug. It is the operating system.

Where Bernard Williams Comes In

This is where Bernard Williams becomes relevant, and where his account quietly fails. In Truth and Truthfulness, Williams famously rejects the Enlightenment fantasy of capital-T Truth as a clean, context-free moral anchor. He replaces it with virtues like sincerity and accuracy, grounded in lived practices rather than metaphysical absolutes. So far, so good.

Williams is right that moral life does not float above history, psychology, or culture. He is right to attack moral systems that pretend agents consult universal rules before acting. He is right to emphasise thick concepts, situated reasons, and practical identities. But he leaves something standing that cannot survive the Minneapolis test.

The Residue Williams Keeps

Williams still needs agency to be intelligible. He still needs actions to be recognisably owned. He still assumes that reasons, however messy, are at least retrospectively available to anchor responsibility. This is where the residue collapses.

In cases like Minneapolis:

  • Intent is legally required but epistemically unavailable.
  • Motive is legally explanatory but metaphysically speculative.
  • Admission is disincentivised.
  • Narrative is imposed under institutional pressure.

At that point, sincerity and accuracy are no longer virtues an agent can meaningfully exercise. They are properties of the story selected by the system. Williams rejects metaphysical Truth while retaining a metaphysical agent robust enough to carry responsibility. The problem is that law does not merely appeal to intelligibility; it manufactures it under constraint.

Moral Luck Isn’t Enough

Williams’ concept of moral luck gestures toward contingency, but it still presumes a stable agent who could, in principle, have acted otherwise and whose reasons are meaningfully theirs. But once intent and motive are understood as institutional fabrications rather than inner facts, ‘could have done otherwise’ becomes a ceremonial phrase. Responsibility is no longer uncovered; it is allocated. The tragedy is not that we fail to know the truth. The tragedy is that the system requires a truth that cannot exist.

Facts Versus Stories

The law does not discover which story is true. It selects which story is actionable.

The Minneapolis case shows the fault line clearly:

  • Facts: bodies, movements, weapons, recordings.
  • Stories: fear versus anger, defence versus aggression.
  • The first is uncontested. The second does all the work.

And those stories are not epistemic conclusions. They are metaphysical commitments enforced by law. Williams wanted to rescue ethics from abstraction. What he could not accept is that, once abstraction is removed, responsibility does not become more human. It becomes procedural.

The Uncomfortable Conclusion

The law does not operate on truth. It operates on enforceable interpretations of behaviour. Intent and motive are not facts. They are tools. Williams saw that capital-T Truth had to go. What he did not see, or perhaps did not want to see, is that the smaller, more humane residue he preserved cannot bear the weight the legal system places on it.

Once you see this, the obsession with ‘what really happened’ looks almost childish. The facts are already known. What is being fought over is which metaphysical fiction the system will enforce.

That decision is not epistemic. It is political. And it is violent.

When Language Fails – Abstract

3–4 minutes

I set aside some time to design the front cover of my next book. I’m excited to share this – but that’s always the case. It’s substantially complete. In fact, it sidelined another book, also substantially complete, though the content here may force me to change the other one. It should be ready for February. I’m sharing the current state of the abstract below.

This book is meant to be an academic monograph, whilst the other, with the working title The Competency Paradox, is more of a polemic.

As I mentioned in another post, it builds upon and reorients the works of George Lakoff, Jonathan Haidt, Kurt Gray, and Joshua Greene. I’ve already revised and extended Gallie’s essentially contested concepts in A Language Insufficiency Hypothesis in the form of Contestables, but I lean on them again here.

Abstract

Contemporary moral and political discourse is marked by a peculiar frustration: disputes persist even after factual clarification, legal process, and good-faith argumentation have been exhausted. Competing parties frequently agree on what happened, acknowledge that harm occurred, and yet remain irreconcilably divided over whether justice has been served. This persistence is routinely attributed to misinformation, bad faith, or affective polarisation. Such diagnoses are comforting. They are also often wrong.

This paper advances a different claim. Certain conflicts are not primarily epistemic or semantic in nature, but ontological. They arise from incompatible orientations that structure how agents register salience, threat, authority, autonomy, and legitimacy. These orientations are genealogically shaped through enculturation, institutions, and languaged traditions, yet operationally they function prior to linguistic articulation: salience fires before reasons are narrated. Moral vocabulary enters downstream, tasked with reconciling commitments that were never shared.

From this perspective, the instability of concepts such as justice is not the primary problem but a symptom. Justice belongs to a class of Contestables (in Gallie’s sense): action-authorising terms that appear determinate while remaining untethered from shared reference under ontological plurality. Appeals to clearer definitions, better process, or shared values therefore misfire. They presume a common ontological ground that does not, in fact, exist.

When institutions are nevertheless required to act, they cannot adjudicate between ontologies. They can only select. Courts, juries, regulatory bodies, and enforcement agencies collapse plural interpretations into a single outcome. That outcome is necessarily experienced as legitimate by those whose orientation it instantiates, and as injustice by those whose orientation it negates. No procedural refinement can eliminate this asymmetry. At best, procedure dampens variance, distributes loss, and increases tolerability.

Crucially, the selection itself is constrained but underdetermined. Even within formal structures, human judgment, discretion, mood, confidence, fear, and narrative framing play a decisive role. Following Keynes, this irreducible contingency may be described as animal spirits. In formal terms, institutional outcomes are sampled from a constrained space of possibilities, but the reaction topology remains structurally predictable regardless of which branch is taken.

The consequence is stark but clarifying: outrage is not evidence that a system has failed to deliver justice; it is evidence that plural ontological orientations have been forced through a single decision point. Where semantic reconciliation is structurally unavailable, exogenous power is the dominant near-term mediator. Power does not resolve the conflict; it pauses it and stabilises meaning sufficiently for coordination to continue.

This analysis does not deny the reality of harm, the importance of law, or the necessity of institutions. Nor does it lapse into nihilism or indifference. Rather, it reframes the problem. In ontologically plural environments, the task is not moral convergence but maintenance: containing collision, resisting premature coherence, and designing institutions that minimise catastrophic failure rather than promising final resolution.

The argument developed here predates any particular event. Its value lies precisely in its predictive capacity. Given plural ontologies, untethered contestables, and institutions that must act, the pattern of reaction is invariant. The surface details change; the structure does not.

What follows is not a proposal for reconciliation. It is a diagnosis of why reconciliation is so often a category error, and why pretending otherwise is making things worse.

A Language Insufficiency Hypothesis

1–2 minutes

Almost a decade in the making, this book explains why more time, more effort, and more detail do not reliably improve certain forms of communication. Beyond a point, returns diminish sharply. In some domains, they collapse altogether.

The manuscript focuses on English, but the hypothesis has already been extended to French (published separately), and I am continuing work on other ontological barriers. If you’re interested in testing or extending the framework in your own language, feel free to get in touch.

Also available in a clothbound edition at Barnes & Noble.

Over the coming weeks, I’ll be unpacking aspects of the Language Insufficiency Hypothesis in more depth here. The book’s role is deliberately limited: it defines the problem, establishes the structure, and offers grounding examples. The real work happens in the consequences.

For now, the important thing is simple: the book is finally available.