Truth, Subjectivity, and Constraint

3–5 minutes

I like this bloke. Here, he clarifies Rorty’s perspective on Truth. I am quite in sync with Rorty’s position, perhaps 90-odd per cent.

Allow me to explain.

Audio: NotebookLM summary podcast of this topic.

I have written about truth several times over the years, 1, 2, 3, and more. In earlier posts, I put the point rather bluntly: truth is largely rhetorical. I still think that captured something important, but it now feels incomplete. With the development of my Mediated Encounter Ontology of the World (MEOW) and the Language Insufficiency Hypothesis (LIH), the picture needs tightening.

NotebookLM Infographic on this topic.

The first step is to stop pretending that ‘truth’ names a single thing.

Philosopher Bernard Williams helpfully distinguished between thin and thick senses of truth in Truth and Truthfulness. The distinction is simple but instructive.

In its thin sense, truth is almost trivial. Saying ‘it is true that p’ typically adds nothing beyond asserting p. The word ‘true’ functions as a logical convenience: it allows endorsement, disquotation, and generalisation. Philosophically speaking, this version of truth carries very little metaphysical weight. Most arguments about truth, however, are not about this thin sense.
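The disquotational, thin role of ‘true’ can be pictured as a near-trivial identity – a toy sketch in code, not a theory of truth; the function name and examples are mine:

```python
# Toy sketch of the 'thin' (disquotational) sense of truth:
# asserting "it is true that p" adds nothing beyond asserting p.
def is_true(p: bool) -> bool:
    """'It is true that p' collapses to p itself."""
    return p

p = (2 + 2 == 4)
assert is_true(p) == p  # endorsement adds no content

# Generalisation: 'everything she said is true' just quantifies over claims.
claims = [True, False, True]
assert all(is_true(c) for c in claims) == all(claims)
```

The point of the sketch is how little the predicate does: it enables endorsement and generalisation while carrying no metaphysical weight of its own.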

In practice, truth usually appears in a thicker social sense. Here, truth is embedded in practices of inquiry and communication. Communities develop norms around sincerity, accuracy, testimony, and credibility. These norms help stabilise claims so that people can coordinate action and share information.

At this level, truth becomes something like a social achievement. A statement counts as ‘true’ when it can be defended, circulated, reinforced, and relied upon within a shared framework of interpretation. Evidence matters, but so do rhetoric, persuasion, institutional authority, and the distribution of power. This is the sense in which truth is rhetorical, but rhetoric is not sovereign.

NotebookLM Infographic on this topic. I prompted NotebookLM to illustrate a 4-layered model that shows how removed language is from encounter, attention, conception, and representation of what we normally consider to be reality. This view is supported by both MEOW and LIH.

Human beings can imagine almost anything about the world, yet the world has a stubborn habit of refusing certain descriptions. Gravity does not yield to persuasion. A bridge designed according to fashionable rhetoric rather than sound engineering will collapse regardless of how compelling its advocates may have been.

This constraint does not disappear in socially constructed domains. Institutions, identities, norms, and laws are historically contingent and rhetorically stabilised, but they remain embedded within material, biological, and ecological conditions. A social fiction can persist for decades or centuries, but eventually it encounters pressures that force revision.

Subjectivity, therefore, doesn’t imply that ‘anything goes’. It simply means that all human knowledge is mediated.

We encounter the world through perception, language, culture, and conceptual frameworks. Every description is produced from a particular standpoint, using particular tools, within particular historical circumstances. Language compresses experience and inevitably loses information along the way. No statement captures reality without distortion. This is the basic insight behind the Language Insufficiency Hypothesis.

At the same time, our descriptions remain answerable to the constraints of the world we inhabit. Some descriptions survive repeated encounters better than others.

In domains where empirical constraint is strong – engineering, physics, medicine – bad descriptions fail quickly. In domains where constraint is indirect – ethics, politics, identity, aesthetics – multiple interpretations may remain viable for long periods. In such cases, rhetoric, institutional authority, and power often function as tie-breakers, stabilising one interpretation over others so that societies can coordinate their activities. These settlements are rarely permanent.

What appears to be truth in one era may dissolve in another. Concepts drift. Institutions evolve. Technologies reshape the landscape of possibility. Claims that once seemed self-evident may later appear parochial or incoherent.

In this sense, many truths in human affairs are best understood as temporally successful settlements under constraint.

Even the most stable arrangements remain vulnerable to change because the conditions that sustain them are constantly shifting. Agents change. Environments change. Expectations change. The very success of a social order often generates the tensions that undermine it. Change, in other words, is the only persistence.

The mistake of traditional realism is to imagine truth as a mirror of reality – an unmediated correspondence between statement and world. The mistake of crude relativism is to imagine that language and power can shape reality without limit. Both positions misunderstand the situation.

We do not possess a final language that captures reality exactly as it is. But neither are we free to describe the world however we please. Truth is not revelation, and it is not mere invention.

It is the provisional stabilisation of claims within mediated encounter, negotiated through language, rhetoric, and institutions, and continually tested against a world that never fully yields to our descriptions. We don’t discover Truth with a capital T. We negotiate survivable descriptions under pressure.

Ontological Grammars of Abortion

I’ve created a video to discuss ontological grammar, grounded in the example of abortion, a particularly polemical topic. For more details, read the essay, Grammatical Failure – Why Liberal Epistemology Cannot Diagnose Indoctrination.

Video: Architecture of Grammatical Compromise. (Duration: 10:30)

In this video, I define Ontology, Grammar, and Commensurability before I use abortion as a poster child. Then, I discuss what happens when ontological grammars are incommensurable.

The following thinkers feature:

Michel Foucault: Biopower, notably The History of Sexuality, Volume I.

Bernard Williams: Thick Moral Concepts from Ethics and the Limits of Philosophy.

Pierre Bourdieu: Habitus, notably from Outline of a Theory of Practice.

Karl Popper: Paradox of Intolerance.

I discuss the challenge to the promise of compromise and its three possible outcomes, none of which is a true compromise.

Watch the video for context. Read the essay for fuller details.

The Architecture of Cognitive Compromise

4–6 minutes

Abortion, Ontological Grammar, and the Limits of Civil Discourse

When federal abortion protections were withdrawn in the United States, many observers treated the event as a policy reversal, a judicial shift, or a partisan victory. Those are surface descriptions. They are not wrong. They are simply too thin.

What was exposed was not a failure of dialogue. It was the collision of ontological grammars.

Audio: NotebookLM summary podcast.

1. Thick Concepts and the Illusion of Neutral Ground


Bernard Williams famously distinguished between ‘thin’ moral terms (good, bad, right) and ‘thick’ ones (cruel, courageous, treacherous), where description and evaluation are fused.

Abortion is not a thin concept. It is thick all the way down.

For one framework, the operative grammar is something like:

  • Life begins at conception.
  • The foetus is a rights-bearing entity.
  • Termination is morally equivalent to killing.

For another:

  • Personhood is socially and biologically emergent.
  • Bodily autonomy is foundational.
  • Forced gestation is coercive.

Notice that these are not competing policies. They are competing ontological commitments about what exists, what counts as a person, and what kind of being a pregnant body is.

Argument across this divide does not merely contest conclusions. It contests the background conditions under which reasons register as reasons.

This is not ‘people see the world differently’. It is: people parse reality through grammars that do not commute.

2. Ontological Grammar: Where Deliberation Stops

By ‘ontological grammar’, I do not mean syntax in the Saussurean or Chomskyan sense. I mean the pre-reflective substrate that structures what appears salient, real, morally charged, or negligible.

We deliberate within grammars. We do not deliberate our way into them.

Liberal Enlightenment optimism assumes that if disagreement persists, more information, better reasoning, or improved empathy will close the gap. But if the dispute concerns the very ontology of ‘life’, ‘person’, or ‘rights’, no amount of fact-sharing resolves the issue. The disagreement is upstream of facts.

The closure of federal abortion access did not prove that one side reasoned better. It demonstrated that institutional containment had failed.

3. Biopower and the Management of Bodies

Michel Foucault gives us a crucial lens: biopower. Modern states do not merely govern territory; they administer life. Birth rates, mortality, sexuality, health – these become objects of policy.

Abortion sits directly inside this matrix.

A state that restricts abortion is not only expressing moral judgment. It is reallocating control over reproductive capacity. It is asserting a claim over which bodies count, which futures are permitted, and which biological processes are subject to regulation.

The conflict is therefore not purely ethical. It is biopolitical.

And what appears as ‘civil discourse’ around abortion is often possible only so long as institutional frameworks keep both grammars partially satisfied. When federal protections existed, they acted as a stabilising superstructure. Remove that, and the ontological conflict becomes naked.

4. Habitus and the Illusion of Reasoned Consensus

Pierre Bourdieu would remind us that our dispositions are not self-authored. Habitus sedimented through family, religion, class, and institutional life shapes what feels obvious, outrageous, or unthinkable.

People do not wake up one morning and choose an abortion ontology.

They inherit it. It becomes embodied common sense.

Thus, when someone says, ‘Surely we can agree that making a person feel whole is more important than ideological purity’, they are already speaking from within a grammar that prioritises individual authenticity and psychological coherence. That priority is not universal. It is historically situated.

Compromise is not achieved by stepping outside habitus. It is achieved when institutional and social conditions allow divergent grammars to coexist without totalising one another.

5. The Popperian Threshold

Karl Popper warned of the ‘paradox of tolerance’: unlimited tolerance may enable intolerant forces to eliminate tolerance itself.

In particularly virulent climates, appeals to compromise are heard not as gestures of goodwill but as tactical weakness.

When one faction succeeds in unilaterally redefining the legal status of abortion at a federal level, it is not merely participating in discourse. It is altering the biopolitical infrastructure. Once altered, the range of permissible disagreement narrows.

Civil discourse, then, is not a natural equilibrium. It is a managed condition sustained by institutional design, social trust, and shared legibility.

NB: Popper’s paradox of tolerance is often invoked as a moral axiom. But it is better understood as a self-protective clause internal to liberal ontology. It presupposes a shared commitment to rational exchange. When that commitment erodes, the paradox does not resolve disagreement; it merely marks the point at which biopower intervenes to preserve a regime.

6. Why This Is Not Just ‘People Disagree’

The lay intuition – ‘people see the world differently’ – is descriptively correct and analytically useless.

What the ontological grammar model adds is structure:

  • Disagreements cluster around thick concepts.
  • Thick concepts fuse description and evaluation.
  • Frameworks determine what counts as a reason.
  • Institutions temporarily stabilise incompatible grammars.
  • When stabilisation weakens, conflict appears irreconcilable.

Abortion is not uniquely polarising because people are irrational. It is polarising because it touches ontological primitives: life, personhood, autonomy, and obligation.

In such cases, ‘compromise’ is not achieved by discovering a middle truth. It is achieved – if at all – by constructing a legal and institutional arrangement that both grammars can grudgingly inhabit.

NotebookLM Infographic

7. The Uncomfortable Conclusion

The Enlightenment story tells us that disagreement is a surface phenomenon, curable by better reasoning.

The ontological grammar story tells us something harsher: some disagreements are not resolvable through language because they are about the conditions under which language binds.

This does not entail quietism. It entails clarity.

Civil discourse is not proof that grammars have converged. It is evidence that power, institutions, and habitus have aligned sufficiently to prevent rupture.

When that alignment shifts, the illusion of shared ontology evaporates.

And what we are left with is not failed reasoning – but exposed foundations.


I planned to use prostitution and anti-natalism as further cases for elucidation, but I see this has already grown long. I’ll reserve these and others for another day and time.

Announcement: Legibility GPT

2–3 minutes

I’ve just published a new GPT in support of my new ontology, grammar, and legibility project, Legibility GPT.

As with Language Insufficiency GPT and MEOW GPT, it is meant to assist in the exploration of the concepts for which they were built and named. For an interesting time, analyse a concept through all three.

Simply put, Legibility GPT assesses conceptual thickness, based on the work of Bernard Williams in Ethics and the Limits of Philosophy. Thin concepts, whether moral or descriptive, carry commensurable information, so disagreements may be remediated. For thick moral concepts, this becomes increasingly unlikely because the moral content becomes an anchor. Generally speaking, the conflicting ontological positions either weigh the concept differently or, in extreme cases, one side doesn’t even categorise the concept as principally moral. I use legibility in the sense articulated by James C. Scott in Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed.

Audio: NotebookLM concept summary podcast.

A cartographic tool for conceptual conflict. Legibility GPT analyses how ontology and grammar shape the meaning of contested terms, identifying points of admissibility, exclusion, and incommensurability. It maps disagreement without taking sides. 

Usage: Input a term or concept. This GPT will output the various polysemous contexts of the concept and break out the various ontological commitments and grammatical functions with examples of valid and invalid phrases within that grammar.

This GPT will also score and sort on incommensurability. A particularly divisive concept might be abortion.
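The score-and-sort step can be pictured as follows – a hypothetical sketch only; the concepts, the numeric scores, and the `incommensurability` field are invented for illustration and are not the GPT’s actual output format:

```python
# Hypothetical sketch: ranking contested concepts by an
# incommensurability score. All values here are invented.
concepts = [
    {"term": "abortion",    "incommensurability": 0.95},
    {"term": "speed limit", "incommensurability": 0.20},
    {"term": "justice",     "incommensurability": 0.80},
]

# Most divisive (least commensurable across grammars) first
ranked = sorted(concepts, key=lambda c: c["incommensurability"], reverse=True)
for c in ranked:
    print(f'{c["term"]}: {c["incommensurability"]:.2f}')
```

On this toy ranking, a thick concept like abortion sorts above an administratively stabilised one like a speed limit – which is the intuition the tool is meant to make explicit.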

Related Papers:

Grammatical Failure: Why Liberal Epistemology Cannot Diagnose Indoctrination

Language As Interface: Underconstraint, Genealogy, and Moral Incommensurability

Language Insufficiency GPT

Analyses English terms and short concept phrases using the Language Insufficiency Hypothesis (LIH), diagnosing semantic stability, polysemy, connotation, and category drift in contemporary usage.

Related Papers:

A Language Insufficiency Hypothesis: Mapping the Boundaries of Linguistic Expression

MEOW GPT

A structured analysis tool that maps any given concept onto the MEOW mediation framework (T0–T3). Produces a consistent, tiered breakdown including scope checks, applicability flags, and limits of interpretation.

Related Papers:

The Mediated Encounter Ontology of the World: A Relational Metaphysics Beyond Mind and World

Legibility and Ontology

3–5 minutes

These two words qualify as my words of the month: legibility and ontology.

I’ve been using them as lenses.

I picked up legibility from James C. Scott’s Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed, which is really a book about how well-intentioned schemes fail once reality is forced to become administrable. Ontology is an older philosophical workhorse, usually paired with epistemology, but I’m using it here in a looser, more pragmatic sense.

When I write, I write through lenses. Everyone does. Writing requires a point of view, even when we pretend otherwise.

In this post, I want to talk more informally about my recent essay, Grammatical Failure. I usually summarise my work elsewhere, but here I want to think out loud about it, particularly in relation to social ontology and epistemology. I won’t linger on definitions. They’re a search away. But a little framing helps.

Ontology, roughly: how reality is parsed.

Epistemology: how knowledge is justified within that parsing.

Audio: NotebookLM summary of this post.

Much of my recent work sits downstream of thinkers like Thomas Sowell, George Lakoff, Jonathan Haidt, Kurt Gray, and Joshua Greene. Despite their differences, they converge on a shared insight: human cognition is largely motivated preverbally. As a philosopher of language, I find my interest sharpens at that pre-language layer.

I explored this in earlier work, including a diptych titled The Grammar of Impasse: Conceptual Exhaustion and Causal Mislocation. Writing is how I gel these ideas. There are several related pieces still in the pipeline.

When I talk about grammar, I don’t mean Saussure or Chomsky. I mean something deeper: the ontological substrate beneath belief. Grammar, in this sense, is how reality gets parsed before beliefs ever form. It filters what can count as real, salient, or intelligible.

Let’s use a deliberately simplified example.

Imagine two ontological orientations. Call them Ont-C and Ont-L. This isn’t to say there are only two, but much of Western political discourse collapses into a binary anyway.

Ont-C tends to experience people as inherently bad, dangerous, or morally suspect. Ont-L tends to experience people as inherently good or at least corrigible. These aren’t opinions in the usual sense. They sit beneath belief, closer to affect and moral orientation.

Now consider retributive justice, setting aside the fact that justice itself is a thick concept.

From Ont-C, punishment teaches a lesson. It deters. It disciplines. From Ont-L, punishment without rehabilitation looks cruel or counterproductive, and the transgression itself may be read as downstream of systemic injustice.

Each position can acknowledge exceptions. Ont-L knows there are genuinely broken people. Ont-C knows there are saints. But those are edge cases, not defaults.

Now ask Ont-C and Ont-L to design a criminal justice system together. The result will feel intolerable to both. Too lenient. Too harsh. The disagreement isn’t over policy details. It’s over how reality is carved up in the first place.

And this is only one dimension.

Add others. Bring in Ont-V and Ont-M if you like, for vegan and meat-based ontologies. Suddenly, you have Ont-CV, Ont-CM, Ont-LV, and Ont-LM. Then add class, religion, gender, authority, harm, and whatever. Intersectionality stops looking like a solution and starts looking like a combinatorial explosion.
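The combinatorial point can be made concrete – a toy sketch; the dimension names come from the example above, and the binary split per dimension is a simplifying assumption:

```python
from itertools import product

# Each ontological dimension multiplies the number of composite positions.
dimensions = {
    "politics": ["C", "L"],  # conservative / liberal (Ont-C, Ont-L)
    "diet":     ["V", "M"],  # vegan / meat-based (Ont-V, Ont-M)
}
positions = ["Ont-" + "".join(combo) for combo in product(*dimensions.values())]
print(positions)  # Ont-CV, Ont-CM, Ont-LV, Ont-LM

# Add class, religion, gender, authority, harm... growth is exponential,
# even with only two options per dimension:
for n_dims in range(2, 8):
    print(n_dims, "binary dimensions ->", 2 ** n_dims, "composite positions")
```

Seven binary dimensions already yield 128 composite ontologies, and real dimensions are rarely binary – which is why intersectionality starts to look like a combinatorial explosion rather than a solution.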

The Ont-Vs can share a meal, so long as they don’t talk politics.

The structure isn’t just unstable. It was never stable to begin with. We imagine foundations because legibility demands them.

Grammatical Failure is an attempt to explain why this instability isn’t a bug in liberal epistemology but a structural feature. The grammar does the sorting long before deliberation begins.

More on that soon.


In any case, once you start applying this ontological lens to other supposedly intractable disputes, you quickly realise that their intractability is not accidental.

Take abortion.

If we view the issue through the lenses of Ont-A (anti-abortion) and Ont-C (maternal choice), we might as well be peering through Ont-Oil and Ont-Water. The disagreement does not occur at the level of policy preferences or competing values. It occurs at the level of what counts as morally salient in the first place.

There is no middle ground here. No middle path. No synthesis waiting to be negotiated into existence.

That is not because the participants lack goodwill, intelligence, or empathy. It is because the ontological primitives are incommensurate. Each side experiences the other not as mistaken but as unintelligible.

We can will compromise all we like. The grammar does not comply.

Contemporary discourse often insists otherwise. It tells us that better arguments, clearer framing, or more dialogue will eventually produce convergence. From this perspective, that insistence is not hopeful. It is confused. It mistakes a grammatical fracture for a deliberative failure.

You might consider other polemical topics and notice the same interplay.

Facts, Intent, and the Afterlife of Metaphysics

5–8 minutes

I’ve been reading Bernard Williams lately, and I’ve written about his work on Truth and Truthfulness. I’m in the process of writing more on the challenges of ontological moral positions and moral luck. I don’t necessarily want to make contemporary news my focal point, but this is a perfect case study for it. I’ll be releasing a neutral philosophy paper on the underlying causes, but I want to comment on this whilst it’s still in the news cycle.

This form of xenophobia is occurring in the United States, though the ontological split applies more generally. For those unfamiliar with US news, I’ll set this up. The United States is currently deploying federal enforcement power in ways that deliberately bypass local consent, blur policing and military roles, and rely on fear as a stabilising mechanism. Historical analogies are unavoidable, but not required for the argument that follows. These forces have been deployed in cities that did not and do not support the Trump administration, so they are exacting revenge and trying to foment fear and unrest. This case is an inevitable consequence of these policy measures.

tl;dr: The Law™ presents itself as fact-driven, but only by treating metaphysical imputations about inner life as if they were empirical findings. This is not a flaw in this case; it is how the system functions at all.

NB: Some of this requires having read Williams or having a familiarity with certain concepts. Apologies in advance, but use Google or a GPT to fill in the details.

Audio: NotebookLM summary podcast of this content.

Why the Minneapolis ICE Shooting Exposes the Limits of Bernard Williams

The Minneapolis ICE shooting is not interesting because it is unusual. It is interesting because it is painfully ordinary. A person is dead. An officer fired shots. A vehicle was involved. Video exists. Statements were issued. Protests followed. No one seriously disputes these elements. They sit in the shared centre of the Venn diagram, inert and unhelpful. Where everything fractures is precisely where the law insists clarity must be found: intent and motive. And this is where things stop being factual and start being metaphysical.

The Comfortable Fiction of Legal Facts

The legal system likes to tell a comforting story about itself. It claims to be empirical, sober, and evidence-driven. Facts in, verdicts out. This is nonsense.

What the law actually does is this:

  • It gathers uncontested physical facts.
  • It then demands a psychological supplement.
  • It treats that supplement as if it were itself a fact.

Intent and motive are not observed. They are inferred. Worse, they are imposed. They are not discovered in the world but assigned to agents to make outcomes legible.

In Minneapolis, the uncontested facts are thin but stable:

  • A U.S. Immigration and Customs Enforcement (ICE) agent, identified as Jonathan Ross, shot and killed Renée Nicole Good in Minneapolis on 7 January 2026.
  • The incident involved Good’s vehicle, which was present and moving at the time shots were fired.
  • Ross fired his weapon multiple times, and Good died from those gunshot wounds.
  • The Department of Homeland Security (DHS) claims the agent acted in self-defence.
  • Video footage exists that shows at least part of the encounter.
  • The case ignited protests, widespread condemnation from local officials, and political pushback.

This creates a shared intersection: vehicle, Ross, shots, and that ‘something happened’ that neither side is denying.

None of these facts contain intent. None of them specify motive. They do not tell us whether the movement of the vehicle was aggression, panic, confusion, or escape. They do not tell us whether the shooting was fear, anger, habit, or protocol execution. Yet the law cannot proceed without choosing.

So it does what it always does. It smuggles metaphysics into evidence and calls it psychology.

Intent and Motive as Institutional Impositions

Intent is treated as a condition of responsibility. Motive is treated as its explanation. Neither is a fact in anything like the ordinary sense. Even self-report does not rescue them. Admission is strategically irrational. Silence is rewarded. Reframing is incentivised. And even sincerity would not help, because human beings do not have transparent access to their own causal architecture. They have narratives, rehearsed and revised after the fact. So the law imputes. It tells the story the agent cannot safely tell, and then punishes or absolves them on the basis of that story. This is not a bug. It is the operating system.

Where Bernard Williams Comes In

This is where Bernard Williams becomes relevant, and where his account quietly fails. In Truth and Truthfulness, Williams famously rejects the Enlightenment fantasy of capital-T Truth as a clean, context-free moral anchor. He replaces it with virtues like sincerity and accuracy, grounded in lived practices rather than metaphysical absolutes. So far, so good.

Williams is right that moral life does not float above history, psychology, or culture. He is right to attack moral systems that pretend agents consult universal rules before acting. He is right to emphasise thick concepts, situated reasons, and practical identities. But he leaves something standing that cannot survive the Minneapolis test.

The Residue Williams Keeps

Williams still needs agency to be intelligible. He still needs actions to be recognisably owned. He still assumes that reasons, however messy, are at least retrospectively available to anchor responsibility. This is where the residue collapses.

In cases like Minneapolis:

  • Intent is legally required but epistemically unavailable.
  • Motive is legally explanatory but metaphysically speculative.
  • Admission is disincentivised.
  • Narrative is imposed under institutional pressure.

At that point, sincerity and accuracy are no longer virtues an agent can meaningfully exercise. They are properties of the story selected by the system. Williams rejects metaphysical Truth while retaining a metaphysical agent robust enough to carry responsibility. The problem is that law does not merely appeal to intelligibility; it manufactures it under constraint.

Moral Luck Isn’t Enough

Williams’ concept of moral luck gestures toward contingency, but it still presumes a stable agent who could, in principle, have acted otherwise and whose reasons are meaningfully theirs. But once intent and motive are understood as institutional fabrications rather than inner facts, ‘could have done otherwise’ becomes a ceremonial phrase. Responsibility is no longer uncovered; it is allocated. The tragedy is not that we fail to know the truth. The tragedy is that the system requires a truth that cannot exist.

Facts Versus Stories

The law does not discover which story is true. It selects which story is actionable.

The Minneapolis case shows the fault line clearly:

  • Facts: bodies, movements, weapons, recordings.
  • Stories: fear versus anger, defence versus aggression.
  • The first is uncontested. The second does all the work.

And those stories are not epistemic conclusions. They are metaphysical commitments enforced by law. Williams wanted to rescue ethics from abstraction. What he could not accept is that, once abstraction is removed, responsibility does not become more human. It becomes procedural.

The Uncomfortable Conclusion

The law does not operate on truth. It operates on enforceable interpretations of behaviour. Intent and motive are not facts. They are tools. Williams saw that capital-T Truth had to go. What he did not see, or perhaps did not want to see, is that the smaller, more humane residue he preserved cannot bear the weight the legal system places on it.

Once you see this, the obsession with ‘what really happened’ looks almost childish. The facts are already known. What is being fought over is which metaphysical fiction the system will enforce.

That decision is not epistemic. It is political. And it is violent.

Just the Facts, Mum (About Speed Limits)

3–4 minutes

We tend to think of speed limits as facts. Numbers. Neutral. Posted. Enforced. And yet almost no one treats them that way.

Roads are engineered to handle speeds well above the numeral on the sign. Police officers routinely tolerate a band of deviation. We know they’ll allow around ten miles per hour over the stated limit. They know we know. We know that they know that we know. Ad infinitum.

Audio: NotebookLM summary podcast of this topic.

Courts accept that instruments have margins of error. Drivers adjust instinctively for weather, traffic density, visibility, vehicle condition, and local customs. A straight, empty motorway at 3 a.m. is not experienced the same way as a narrow residential street at school pickup time, even if the number on the sign is identical. Everyone knows this. And yet we continue to talk about the speed limit as if it were an unmediated fact about the world.

This is not a complaint about traffic law. Speed limits work remarkably well, precisely because they are not what they appear to be. They are not discoveries about nature, but stabilised conventions: administrative thresholds designed to coordinate behaviour under uncertainty. The familiar numbers – 30, 50, 70, never 57 or 63 – are not found in the asphalt. They are chosen, rounded, and maintained because they are legible, enforceable, and socially negotiable. What makes speed limits interesting is not their arbitrariness, but their success.

They hold not because they are exact, but because they survive approximation. They absorb error, tolerate deviation, and remain usable despite the fact that everyone involved understands their limits. In practice, enforcement relies less on the number itself than on judgments about reasonableness, risk, and context. The ‘fact’ persists because it is embedded in a network of practices, instruments, and shared expectations.

If you end up in court driving 60 in a 50, your ability to argue about instrument calibration won’t carry much weight. You’re already operating 20 per cent over specification. That’s beyond wiggle room – highly technical nomenclature, to be sure.
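The tolerance-band logic above can be sketched in code – an illustrative model only; the ten-mile-per-hour band and the two-per-cent instrument margin are assumptions for the sketch, not actual policy or legal figures:

```python
def enforcement_decision(measured_mph: float, limit_mph: float,
                         tolerance_band: float = 10.0,
                         instrument_margin: float = 0.02) -> str:
    """Illustrative model of how a nominal limit becomes a fuzzy threshold.

    The band and margin defaults are assumptions for this sketch,
    not real enforcement parameters.
    """
    # Give the driver the benefit of the instrument's margin of error
    effective = measured_mph * (1 - instrument_margin)
    if effective <= limit_mph:
        return "within limit"
    if effective <= limit_mph + tolerance_band:
        return "tolerated"  # the informal band everyone knows about
    return "enforced"

print(enforcement_decision(55, 50))  # inside the informal band
print(enforcement_decision(65, 50))  # well beyond wiggle room
```

Notice that the posted number never appears alone in the decision: it is always wrapped in a margin and a band. That wrapping is the point – the ‘fact’ is usable only because it is mediated.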

Blood alcohol limits work the same way. The legal threshold looks like a natural boundary. It isn’t. It’s a policy decision layered atop probabilistic measurement. Unemployment rates, diagnostic cutoffs, evidentiary standards – all of them look objective and immediate whilst concealing layers of judgment, calibration, and compromise. Each functions as a closure device: ending debate not because uncertainty has been eliminated, but because further uncertainty would make coordination impossible.

The trouble begins when we forget this – and we do. When facts are treated as simple givens rather than negotiated achievements, they become untouchable. Questioning them gets mistaken for denying reality. Acknowledging their construction gets misheard as relativism. What started as a practical tool hardens into something that feels absolute.

This is how we end up saying things like ‘just give me the facts’ whilst quietly relying on tolerance bands, interpretive discretion, and institutional judgment to make those facts usable at all.

If this sounds right – if facts work precisely because they’re mediated, not despite it – then the question becomes: what does truthfulness require once we’ve acknowledged this?

I’ve written a longer essay exploring that question, starting from Bernard Williams’ account of truthfulness as an ethical practice and extending it to facts themselves. The argument isn’t that facts are illusory or unreliable. It’s that recognising how they actually work – through stabilisation, constraint, and correction – clarifies rather than undermines objectivity.

The speed limit is the hint. Here’s the argument: The Fact of the Matter: After Bernard Williams – Truthfulness, Facts, and the Myth of Immediacy

Audio: NotebookLM summary podcast of the underlying essay (not this blog content).

I Need a Break

5–7 minutes

More precisely, I need less sleep and longer days – preferably twice as long. I’ve been writing almost non-stop for the better part of a week: fourteen- to sixteen-hour days, fuelled by irritation and the stubborn belief that if I just keep reading, something will finally click into place.

I’m not complaining. This is a virtuous cycle.
Reading leads to writing. Writing demands more reading. Eventually, the loop closes into something that looks suspiciously like progress.

Audio: Short NotebookLM summary podcast on this topic.

Still, there’s a bottleneck.

Because some of this work – the work I’m most excited about – I’m deliberately not publishing yet. Journals, bless their glacial hearts, don’t much care for prior publication. So ideas sit in limbo for six to eighteen months, locked in a room like argumentative houseplants, slowly growing sideways.

From the perspective of someone who thinks in public, this is maddening.

Now add AI to the mix.

This is where things get dangerous.

I’ll feed ChatGPT a thesis, a skeletal structure, notes, and references, then ask what I’m missing. It obliges – often helpfully – by pointing me toward adjacent thinkers and relevant literature, complete with page numbers. From there, I verify, hunt down the sources, skim, read, discard, or integrate.

And every so often, I stumble across something that makes me swear out loud.

This week, it was Bernard Williams.

I’ve cited Ethics and the Limits of Philosophy before. But this time, I actually sat down and read it properly. Which immediately prompted the thought:

Why didn’t I read this sooner?

Williams dismantles moral objectivity with the calm precision of someone who knows the Enlightenment project has already lost – he just hasn’t told everyone yet. Thick and thin moral concepts, locality, non-extensibility, the collapse of universal moral reason at scale – yes, yes, yes. He published this in 1985. Fine. I’ll survive.

But then I went further.

Williams shows that morality fails between people at scale.
I argue that it fails within a single person over time.

That became my second paper.

And this is where things went off the rails.

Because in the course of writing that paper, I dipped into Hart’s The Concept of Law and Endicott’s Vagueness in Law. These are not fringe polemics. These are law textbooks. For law students. People allegedly trained to parse language for a living.

And what I found was… astonishing.

Let me paraphrase the admissions:

Image: When the law is vague, judicial decisions may be unconstrained by the law.

Endicott: “By upsetting the standard view of adjudication, the book reaches conclusions that some people find horrible: when the law is vague, judicial decision-making will in some cases be unconstrained by the law. It is impossible in principle for judges always to treat like cases alike. Predictability in the law is to some extent unattainable. Moreover, I argue in Chapter 9 that vagueness cannot be eliminated from law. These conclusions might seem to imply that the rule of law is, at least to some extent, conceptually impossible.”

Image: Vagueness is inevitable. Deal with it.

Endicott: “Secondly, I do not claim that vagueness is a purely linguistic feature of law. And the book relies on no claim about the relation between law and language. These points must be stressed, because vagueness is commonly thought of as a linguistic phenomenon. And, indeed, most of the discussion in the book concerns the vagueness of linguistic expressions. But the indeterminacy claim is not just a claim about language (so I argue in Chapter 3.12). So, for example, the claim in Chapter 6 that general evaluative and normative expressions are necessarily vague is not just a claim about the word ‘good’ and the word ‘right’: it is a claim about any linguistic expression in which we could conceivably express general evaluative and normative judgments. It therefore includes a claim about what is good and what is right.”

Image: Whether law is morally valuable to a community is not my concern. Justice and the rule of law may be political virtues — or not. I don’t defend them here.

Endicott: “Disputes between legal positivists and natural law theorists have concerned not only the relation between law and adjudication, but also the relation between law and morality. Here I take no general position on the intrinsic moral value of law. I do rely on the claims that law can be valuable to a community, and that justice and the rule of law are two ideals which a community can intelligibly pursue as political virtues. Even those claims are controversial (Kelsen and some of the theorists discussed in Chapter 2 have controverted them). But I do not defend them here. This work aims to show that the indeterminacy claim does nothing to threaten the pursuit of justice and the rule of law. Those ideals cannot be well understood if we try to make them depend on determinacy in the requirements of the law.”

Say what?

Read together – not even uncharitably – the message is clear:

Law is indeterminate.
Indeterminacy is unavoidable.
And whether law is good, just, or valuable is… optional.

The subtext isn’t even hiding.

Law is a power structure first.
If it happens to align with justice, fairness, or communal value, well, lovely. A bonus. Champagne all round.

This does not sit well with a sceptical cynic.

What really broke me, though, wasn’t the argument itself. Philosophers make grim claims all the time. What broke me was the silence around it.

How does this pass under the radar?

How do cohorts of law students – drilled in textual analysis, trained to read footnotes like tea leaves – not trip over this elephant stampede? How do they graduate believing they’re upholding inalienable rights, rather than participating in a managed system of coercion that occasionally behaves itself?

Self-preservation, I suppose.
Wilful ignorance.
Professional cosplay.

I’ve seen this before.

As an economist, ask the wrong foundational question, and you’re instantly radioactive. Persona non grata. Careers don’t end with explosions – they end with polite silence and no invitations.

I probably should have committed to heterodox philosophy from the start.
Or stayed a musician.

I remember leaving graduate school, putting on a suit, and feeling like I was wearing a costume. Cosplay, before we had the word. “Business professional” as a role, not an identity.

I’ve always felt intellectually capable of doing whatever I set out to do. My temperament, however, has never agreed to play along.

Which is perhaps why diagnosing ontologies comes so naturally. Once you see the scaffolding, you can’t unsee it – whether it’s metaphysics, jurisprudence, or a corporate department pretending it has a mission.

Then David Graeber came along with Bullshit Jobs, and I remember thinking:
Thank God. It’s not just me.

So yes. I need a break.

I need sleep.
I need silence.
I need to stop reading law books that accidentally admit they’re about power and then act surprised when someone notices.

Mostly, I need to type:

WTAF?

And then go outside.

Announcing: The Rhetoric of Evil

5–8 minutes

How a Theological Artefact Survived Secular Moral Thought


DOI: https://doi.org/10.5281/zenodo.17757134

Every so often – usually when the Enlightenment ghosts begin rattling their tin cups again – one feels compelled to swat at the conceptual cobwebs they left dangling over moral philosophy. Today is one of those days.

Audio: NotebookLM podcast summarising the Rhetoric of Evil essay, not this page’s content.

I’ve just released The Rhetoric of Evil on Zenodo, a paper that politely (or impolitely, depending on your threshold) argues that ‘evil’ is not a metaphysical heavy-hitter but a rhetorical throw-pillow stuffed with theological lint. The term persists not because it explains anything, but because it lets us pretend we’ve explained something – a linguistic parlour trick that’s survived well past its sell-by date.

And because this is the age of artificial augury, I naturally asked MEOW GPT for its view of the manuscript. As expected, it nodded approvingly in that eerie, laser-precise manner unique to machines trained to agree with you – but to its credit, it didn’t merely applaud. It produced a disarmingly lucid analysis of the essay’s internal mechanics, the way ‘evil’ behaves like a conceptual marionette, and how our inherited metaphors govern the very moral judgments we think we’re making freely.

Below is MEOW GPT’s reaction, alongside my own exposition for anyone wanting a sense of how this essay fits within the broader project of dismantling the Enlightenment’s conceptual stage-props.

MEOW-GPT’s Response

(A machine’s-eye view of rhetorical exorcism)

“Evil is functioning as a demonological patch on an epistemic gap.
When agents encounter a high-constraint event they cannot immediately model,
the T₂ layer activates an inherited linguistic shortcut — the ‘evil’ label — which compresses complexity into a binary and arrests further inquiry.”

“The marionette metaphor is accurate: once we say a person ‘is evil,’ agency collapses into occult causation. Inquiry halts. Moral theatre begins.”

It went on like this – detecting exactly the mediated encounter-structure I intended, while offering a frighteningly clean schematic of how affect (T₀), heuristics (T₁), linguistic reification (T₂), and cultural choreography (T₃) conspire to turn incomprehension into metaphysics.

Machines, it seems, are quite good at detecting when humans are bullshitting themselves.

Why publish this now?

This essay marks the next plank in the broader anti-Enlightenment platform I’ve been assembling – LIH, MEOW, the ongoing dismantling of truth-fetishism, and now the unsettling realisation that ‘evil’ is little more than a theological revenant dressed up for secular work.

The term’s persistence is not a testament to its necessity but to our laziness:

  • It sounds like an explanation.
  • It licenses retribution without understanding.
  • It stabilises group boundaries.
  • It lets us outsource moral thought to a one-word incantation.

In short: ‘evil’ is the map-dragon we keep drawing because surveying the terrain is too much work.

This essay attempts to retire that dragon – not to soften judgment, but to sharpen it. If anything, the paper argues that abolishing ‘evil’ frees us to see atrocity more clearly, and thus confront it more seriously.

If you’d like to read the full argument:

📄 The Rhetoric of Evil: How a Theological Artefact Survived Secular Moral Thought

Appendix: How MEOW-GPT Reads “Evil”

A didactic demonstration of the framework in action. Download the MEOW framework below.

📄 The Mediated Encounter Ontology of the World: A Relational Metaphysics Beyond Mind and World

Because MEOW-GPT was trained on the Mediated Encounter Ontology rather than on folk metaphysics, its analyses can serve as a useful diagnostic tool. It doesn’t simply summarise concepts; it reconstructs them along the four mediational tiers (T₀–T₃), showing – mechanically and without superstition – how a term behaves across biological affect, cognitive heuristics, linguistic structures, and cultural narratives.

This section is offered not as an alternative argument to the essay above, but as a worked example: a transparent demonstration of how the MEOW engine decomposes a high-complexity, high-stakes concept like evil once the theological scaffolding has been removed.

MEOW-GPT’s output shows, in compressed form, the same pattern the essay traces in prose:
that evil is not a force, not a metaphysical essence, not an occult substrate, but a mediated pattern in human encounters, one that persists rhetorically long after its ontology has collapsed.

For readers curious about how MEOW ‘thinks’, the raw structured analysis follows.

MEOW-GPT Analysis: “Evil”

(Raw output for reference; lightly formatted for readability)

Concept

Input: Evil
Type: Normative/evaluative moral concept (often reified as a property of acts, characters, or forces)
Scope check: In-scope – ‘evil’ can be analysed as a mediated-encounter pattern (MEOW) and a high-complexity term with unstable linguistic behaviour (LIH).

High-level summary (≤120 words)

In MEOW terms, ‘evil’ is not a spooky substance but a way of organising certain encounter-events: severe, norm-breaking harms that present as resistant, shocking, and morally intolerable. Biologically, it piggybacks on harm, threat, and disgust systems. Cognitively, it compresses complex appraisals – intent, cruelty, scale of harm – into a powerful label. Linguistically, it sits in the unstable region where abstraction is high and language’s reliability drops, so people fight over its scope and weaponise it. Socially and technically, institutions, media, and platforms use ‘evil’ to mark enemies, justify punishment, and dramatise conflict. The term is emotionally efficient but conceptually fragile, and MEOW treats it as a relational, mediated pattern rather than an independent metaphysical force.

T₀ – Biological Mediation

  • Moralised harm perception piggybacks on survival systems: pain avoidance, threat detection, kin protection. ‘Evil’ clusters around encounters that trigger extreme danger-signals.
  • High arousal (fear, rage, disgust) makes some harms feel qualitatively world-violating, not merely personally threatening.
  • Disgust toward contamination, mutilation, or predation heavily colours what gets called ‘evil’.
  • Species-specific cues (infant distress cries, pain expressions) shape which harms are even legible candidates for evil.

T₁ – Cognitive Mediation

  • “Evil” compresses a multi-factor appraisal (intentionality, cruelty, gratuitousness) into a one-step heuristic.
  • Essence thinking converts acts into character: the person is evil, not merely did wrong.
  • Attribution biases assign ‘evil’ to out-groups more readily than to in-groups.
  • Memory structures simplify causation into villain scripts that overwrite nuance.
  • Once assigned, the label becomes a prediction loop: every ambiguous action confirms the essence.

T₂ – Linguistic Mediation

  • On the Effectiveness–Complexity Gradient, ‘evil’ straddles Contestables and Fluids: ubiquitous but perpetually disputed.
  • It compresses harm, norm-violation, metaphysical colouring, and dramatic emphasis into a single syllable – powerful, but noisy.
  • Dominant metaphors (‘dark’, ‘tainted’, ‘monstrous’) smuggle in substance-ontology that MEOW rejects.
  • Noun-forms (‘evil’, ‘the Evil One’) promote ontologising; adjectival forms track events better, but usage constantly slides between them.
  • Cross-linguistic drift supports LIH: different traditions map the term to impurity, harm, misfortune, cosmic opposition, or taboo.

T₃ – Social/Technical Mediation

  • Religious systems embed ‘evil’ in cosmologies that harden friend/enemy binaries.
  • Legal systems avoid the term formally but reproduce it rhetorically in sentencing, media commentary, and public reaction.
  • Politics uses ‘evil’ to justify exceptional measures and collapse deliberation into moral theatre.
  • Cultural industries supply vivid villain archetypes that feed back into real-world judgments.
  • Technical systems must operationalise ‘evil’ into concrete proxies, revealing how imprecise the everyday concept is.

Limits & Failure Modes (LIH notes)

The framework is human-centric; non-human or ecosystemic ‘views of evil’ remain speculative.

‘Evil’ is a textbook Contestable: central, indispensable, and permanently argued over.

In cosmological uses (‘radical evil’, ‘evil in the world’), it approaches Fluid or ineffable status – right where LIH predicts language collapse.

MEOW cannot confirm or deny metaphysical dualisms; it only analyses how humans mediate and narrate such claims.

The Relative Intersubjectivity of Subjectivity

1–2 minutes

As I was preparing another essay – an essay on the rhetoric of evil – I had a thought about the relative intersubjectivity of subjectivity.

If one takes subjectivity seriously – not the Hollywood version with self-made heroes, but the real creature stitched together from language, history, and whatever emotional debris it stepped in on the way to adulthood – then one ends up somewhere awkward: the relative intersubjectivity of subjectivity.

Video: Two red figures walking (no sound)

Which is to say, we’re all standing on conceptual scaffolding built by other people, insisting it’s solid marble. A charming fiction, until we apply it to anything with moral voltage. ‘Evil’, for instance, collapses the moment you remove the demonological life-support and notice it’s little more than a child’s intensifier strapped to a cultural power tool.

More on that later. For now, just sit with the discomfort that the ‘self’ making moral judgments is already a negotiated artefact – relational, compromised, and never as autonomous as it pretends.