Ontological Blindness in Modern Moral Science

Announcement: I’ll be taking a break from posting long-form articles for a while to focus on a project I’m developing. Instead, I’ll share progress summary updates.

Ontological Blindness in Modern Moral Science is a working title, with the working subtitle Why Semantic Thickness, Measurement, and Reconciliation Go Wrong. No spoilers.

Audio: NotebookLM summary podcast of this topic.

INSERT: I’ve only outlined and stubbed this Ontological blindness project, and I’ve already got another idea. I need to stop reading and engaging with the world.

I was listening to the Audible version of A.J. Ayer’s classic, Language, Truth, and Logic (1936) – not because I had time but because I listen to audiobooks when I work out. Ayer is a Logical Positivist, but I forgive him. He’s a victim of his time. In any case, I noticed several holes in his logic.

Sure, the book was published in 1936, and it is infamous for defending or creating Emotivism, a favourite philosophical whipping boy. I’m an Emotivist, so I disagree with the opposition. In fact, I feel their arguments are either strawmen or already defended by Ayer. I also agree with Ayer that confusing the map of language with the terrain of reality is a problem in philosophy (among other contexts), but it’s less excusable for a language philosopher.

In any case, I have begun a file to consider a new working title, Phenomenal Constraint and the Limits of Ontological Language. I might as well stay in the ontological space for a while. We’ll see where it leads, but first, I need to put the original project to bed.

Every time I commence a project, I create a thesis statement and an abstract to orient me. These may change over the course of a project, especially larger ones – the abstract more so than the thesis. This thesis has already changed a couple of times, but I feel it’s settled now.

Image: NotebookLM infographic on this topic.

Thesis Statement

Modern moral psychology repeatedly commits a multi-layered category error by treating semantically and ontologically heterogeneous moral terms as commensurate units within a single comparative framework, while simultaneously treating parochial moral metaphysics as natural substrate.

This dual conflation—of semantic density with moral plurality, and of ontological commitment with empirical discovery—produces the false appearance that some moral systems are more comprehensive than others, when it in fact reflects an inability to register ontological incommensurability.

Moral Foundations Theory provides a clear and influential case of this broader mistake: a framework whose reconciliation-oriented conclusions depend not on empirical discovery alone, but on an unacknowledged liberal-naturalist sub-ontology functioning as conceptual ‘firmware’ mistaken for moral cognition itself.

Abstract

Modern moral psychology seeks to explain moral diversity through empirically tractable frameworks that assume cross-cultural comparability of moral concepts. This book argues that many such frameworks – including but not limited to Moral Foundations Theory (MFT) – rest on a persistent category error: the treatment of semantically and ontologically heterogeneous moral terms as commensurate units within a single evaluative space.

The argument proceeds in four stages. First, it establishes that moral vocabularies differ not merely in emphasis but in semantic thickness: some terms (e.g. harm, fairness) are comparatively thin, portable, and practice-independent, while others (e.g. loyalty, authority, sanctity) are culturally saturated, institution-dependent, and ontologically loaded. Treating these as equivalent ‘foundations’ mistakes density for plurality.

Second, the book shows that claims of moral ‘breadth’ or ‘completeness’ smuggle normativity into ostensibly descriptive research, crossing the Humean is/ought divide without acknowledgement. Third, it argues that this slippage is not accidental but functional, serving modern culture’s demand for optimistic, reconcilable accounts of moral disagreement.

Finally, through sustained analysis of MFT as a worked example, the book demonstrates how liberal naturalist individualism operates as an unacknowledged sub-ontology – conceptual firmware that determines what counts as moral, measurable, and comparable. The result is not moral pluralism, but ontological imperialism disguised as empirical neutrality.

The book concludes by arguing that acknowledging ontological incommensurability does not entail nihilism or relativistic indifference, but intellectual honesty about the limits of moral science and the false comfort of reconciliation narratives.

Ideation

I’ve been pondering ontologies a lot these past few weeks, especially how social ontologies undermine communication. More recently, I’ve been considering how sub-ontologies come into play. A key catalyst for my thinking has been Jonathan Haidt’s Moral Foundations Theory, but I’ve also been influenced by George Lakoff, Kurt Gray, and Joshua Greene, as I’ve shared recently. I want to be clear: This book is not about politics or political science. It is about the philosophy of psychology and adjacent topics.

At the highest levels, I see fundamental category errors undermining MFT, but as I inspected further, the problem goes deeper still – too deep to fit into an essay or even a monograph – so I will be targeting a book, which gives me room to expand and articulate my argumentation. Essays are constraining, and the narrative flow – so to speak – is interrupted by footnotes and tangents.

In a book, I can spend time framing and articulating – educating the reader without presuming an in-depth knowledge. This isn’t to say the topic isn’t deep, and I’ll try not to patronise readers, but it is not only counterintuitive; it is also largely unorthodox and may ruffle a few feathers.

I’m not sure how much I’ll be able to share, but I’d like to be transparent in the process and perhaps gather some inputs along the way.

Methodology

Sort of… I’ve used Scrivener in the past for organising and writing fiction. This is the first time I’m organising nonfiction. We’ll see how it goes.

Using Generative AI as Early Peer Review

Cheap Adversaries, Outsourced Ego, and Engineered Critique ← ChatGPT is obsessed with subtitles.

There is a peculiar anxiety around admitting that one uses generative AI in serious intellectual work. The anxiety usually takes one of two forms. Either the AI is accused of replacing thinking, or it is accused of flattering the thinker into delusion. Both charges miss the point, and both underestimate how brittle early-stage human peer review often is.

What follows is not a defence of AI as an oracle, nor a claim that it produces insight on its own. It is an account of how generative models can be used – deliberately, adversarially, and with constraints – as a form of early peer pressure. Not peer review in the formal sense, but a rehearsal space where ideas are misread, overstated, deflated, and occasionally rescued from themselves.

Audio: NotebookLM summary podcast of this topic.

The unromantic workflow

The method itself is intentionally dull:

  1. Draft a thesis statement.
    Rinse & repeat.
  2. Draft an abstract.
    Rinse & repeat.
  3. Construct an annotated outline.
    Rinse & repeat.
  4. Only then begin drafting prose.

At each stage, the goal is not encouragement or expansion but pressure. The questions I ask are things like:

  • Is this already well-trodden ground?
  • Is this just X with different vocabulary?
  • What objection would kill this quickly?
  • What would a sceptical reviewer object to first?

The key is timing. This pressure is applied before the idea is polished enough to be defended. The aim is not confidence-building; it is early damage.

Image: NotebookLM infographic on this topic.

Why generative AI helps

In an ideal world, one would have immediate access to sharp colleagues willing to interrogate half-formed ideas. In practice, that ecology is rarely available on demand. Even when it is, early feedback from humans often comes bundled with politeness, status dynamics, disciplinary loyalty, or simple fatigue.

Generative models are always available, never bored, and indifferent to social cost. That doesn’t make them right. It makes them cheap adversaries. And at this stage, adversaries are more useful than allies.

Flattery is a bias, not a sin

Large language models are biased toward cooperation. Left unchecked, they will praise mediocre ideas and expand bad ones into impressive nonsense. This is not a moral failure. It is a structural bias.

The response is not to complain about flattery, but to engineer against it.

Sidebar: A concrete failure mode

I recently tested a thesis about object permanence on Mistral. After three exchanges, the model had escalated a narrow claim into an overarching framework, complete with invented subcategories and false precision. The prose was confident. The structure was impressive. The argument was unrecognisable.

This is the Dunning–Kruger risk in practice. The model produced something internally coherent that I lacked the domain expertise to properly evaluate. Coherence felt like correctness.

The countermeasure was using a second model, which immediately flagged the overreach. Disagreement between models is often more informative than agreement.

Three tactics matter here.

1. Role constraint
Models respond strongly to role specification. Asking explicitly for critique, objections, boundary-setting, and likely reviewer resistance produces materially different output than asking for ‘thoughts’ or ‘feedback’.

2. Third-person framing
First-person presentation cues collaboration. Third-person presentation cues evaluation.

Compare:

  • Here’s my thesis; what do you think?
  • Here is a draft thesis someone is considering. Please evaluate its strengths, weaknesses, and likely objections.

The difference is stark. The first invites repair and encouragement. The second licenses dismissal. This is not trickery; it is context engineering.

3. Multiple models, in parallel
Different models have different failure modes. One flatters. Another nitpicks. A third accuses the work of reinventing the wheel. Their disagreement is the point. Where they converge, caution is warranted. Where they diverge, something interesting is happening.

‘Claude says…’: outsourcing the ego

One tactic emerged almost accidentally and turned out to be the most useful of all.

Rather than responding directly to feedback, I often relay it as:

“Claude says this…”

The conversation then shifts from defending an idea to assessing a reading of it. This does two things at once:

  • It removes personal defensiveness. No one feels obliged to be kind to Claude.
  • It invites second-order critique. People are often better at evaluating a critique than generating one from scratch.

This mirrors how academic peer review actually functions:

  • Reviewer 2 thinks you’re doing X.
  • That seems like a misreading.
  • This objection bites; that one doesn’t.

The difference is temporal. I am doing this before the draft hardens and before identity becomes entangled with the argument.

Guardrails against self-delusion

There is a genuine Dunning–Kruger risk when working outside one’s formal domain. Generative AI does not remove that risk. Used poorly, it can amplify it.

The countermeasure is not humility as a posture, but friction as a method:

  • multiple models,
  • adversarial prompting,
  • third-person evaluation,
  • critique of critiques,
  • and iterative narrowing before committing to form.

None of this guarantees correctness. It does something more modest and more important: it makes it harder to confuse internal coherence with external adequacy.

What this cannot do

It’s worth being explicit about the limits. Generative models cannot tell you whether a claim is true. They can tell you how it is likely to be read, misread, resisted, or dismissed. They cannot arbitrate significance. They cannot decide what risks are worth taking. They cannot replace judgment. Those decisions remain stubbornly human.

What AI can do – when used carefully – is surface pressure early, cheaply, and without social cost. It lets ideas announce their limits faster, while those limits are still negotiable.

A brief meta-note

For what it’s worth, Claude itself was asked to critique an earlier draft of this post. It suggested compressing the familiar arguments, foregrounding the ‘Claude says…’ tactic as the real contribution, and strengthening the ending by naming what the method cannot do.

That feedback improved the piece. Which is, rather conveniently, the point.

Moral Psychology and the Art of Not Believing Your Own Results

Over the past few decades, moral psychology has staged a quiet coup against one of our most cherished fantasies: that human beings are, at bottom, rational moral agents. This is not a fringe claim. It is not a Twitter take. It is the mainstream finding of an entire research programme spanning psychology, cognitive science, linguistics, and neuroscience.

We do not reason our way to moral conclusions. We feel our way there. Instantly. Automatically. And only afterwards do we construct reasons that make the judgment sound respectable.

Audio: NotebookLM summary podcast of this topic.

This is not controversial anymore. It is replicated, taught, and celebrated. And yet, if you read the most influential books in this literature, something strange happens. The diagnosis is devastating. The prescription is reassuring.

I’ve just published a long-form video walking through five canonical books in moral psychology that all uncover the same structural problem, and then quietly refuse to live with the implications.

What follows is a brief guide to the argument.

The shared discovery

Across the literature, the same conclusions keep reappearing:

  • Moral judgement is intuitive, not deliberative
  • Reasoning is largely post-hoc
  • Emotion is not noise but signal
  • Framing and metaphor shape what even counts as a moral fact
  • Group identity and tribal affiliation dominate moral perception

In other words: the Enlightenment picture of moral reasoning is wrong. Or at least badly incomplete.

The rider does not steer the elephant. The rider explains where the elephant has already gone.

Image: NotebookLM infographic

Where the books go wrong

The video focuses on five widely read, field-defining works:

  • The Righteous Mind (reviewed here and here… even here)
  • Moral Politics (mentioned here – with Don’t Think of an Elephant treated as its popular sequel)
  • Outraged! (reviewed here)
  • Moral Tribes (reviewed here)

Each of these books is sharp, serious, and worth reading. This is not a hit piece.

But each follows the same arc:

  1. Identify a non-rational, affective, automatic mechanism at the heart of moral judgement
  2. Show why moral disagreement is persistent and resistant to argument
  3. Propose solutions that rely on reflection, dialogue, reframing, calibration, or rational override

In short: they discover that reason is weak, and then assign it a leadership role anyway.

Haidt dismantles moral rationalism and then asks us to talk it out.
Lakoff shows that framing is constitutive, then offers better framing.
Gray models outrage as a perceptual feedback loop, then suggests we check our perceptions.
Greene diagnoses tribal morality, then bets on utilitarian reasoning to save us.

None of this is incoherent. But it is uncomfortable. Because the findings themselves suggest that these prescriptions are, at best, limited.

Diagnosis without prognosis

The uncomfortable possibility raised by this literature is not that we are ignorant or misinformed.

It is that moral disagreement may be structural rather than solvable.

That political conflict may not be cured by better arguments.
That persuasion may resemble contagion more than deliberation.
That reason often functions as a press secretary, not a judge.

The books sense this. And then step back from it. Which is human. But it matters.

Why this matters now

We are living in systems that have internalised these findings far more ruthlessly than public discourse has.

Social media platforms optimise for outrage, not understanding.
Political messaging is frame-first, not fact-first.
AI systems are increasingly capable of activating moral intuitions at scale, without fatigue or conscience.

Meanwhile, our institutions still behave as if one more conversation, one more fact-check, one more appeal to reason will close the gap. The research says otherwise.

And that gap between what we know and what we pretend may be the most important moral problem of the moment.

No solution offered

The video does not end with a fix. That’s deliberate.

Offering a neat solution here would simply repeat the same move I’m criticising: diagnosis followed by false comfort. Sometimes orientation matters more than optimism. The elephant is real. The elephant is moving. And most of us are passengers arguing about the map while it walks.

That isn’t despair. It’s clarity.

Language Insufficiency Hypothesis: Presumption and Horizon

I discuss Chapter 4 of ‘A Language Insufficiency Hypothesis’ in this video clip.

Video: Presumption and Horizon

In short, I discuss where language fails in law, politics, science, and digital culture, where we think language conveys more than it does.

Language Insufficiency Hypothesis: The Gradient

In a 4-minute video, I discuss The Gradient, Chapter 3 of my latest book, A Language Insufficiency Hypothesis.

Video: The Gradient

It’s a short video/chapter. Nothing much to add. In retrospect, I should have summarised chapters 3 and 4 together.

Audio: NotebookLM summary podcast of this chapter.

Facts, Intent, and the Afterlife of Metaphysics

I’ve been reading Bernard Williams lately, and I’ve written about his work on Truth and Truthfulness. I’m in the process of writing more on the challenges of ontological moral positions and moral luck. I don’t necessarily want to make contemporary news my focal point, but this is a perfect case study for it. I’ll be releasing a neutral philosophy paper on the underlying causes, but I want to comment on this whilst it’s still in the news cycle.

This form of xenophobia is occurring in the United States, though the ontological split applies more generally. For those unfamiliar with US news, I’ll set this up. The United States is currently deploying federal enforcement power in ways that deliberately bypass local consent, blur policing and military roles, and rely on fear as a stabilising mechanism. Historical analogies are unavoidable, but not required for the argument that follows. These forces have been deployed in cities that did not and do not support the Trump administration, so they are exacting revenge and trying to foment fear and unrest. This case is an inevitable consequence of these policy measures.

tl;dr: The Law™ presents itself as fact-driven, but only by treating metaphysical imputations about inner life as if they were empirical findings. This is not a flaw in this case; it is how the system functions at all.

NB: Some of this requires having read Williams or having a familiarity with certain concepts. Apologies in advance, but use Google or a GPT to fill in the details.

Audio: NotebookLM summary podcast of this content.

Why the Minneapolis ICE Shooting Exposes the Limits of Bernard Williams

The Minneapolis ICE shooting is not interesting because it is unusual. It is interesting because it is painfully ordinary. A person is dead. An officer fired shots. A vehicle was involved. Video exists. Statements were issued. Protests followed. No one seriously disputes these elements. They sit in the shared centre of the Venn diagram, inert and unhelpful. Where everything fractures is precisely where the law insists clarity must be found: intent and motive. And this is where things stop being factual and start being metaphysical.

The Comfortable Fiction of Legal Facts

The legal system likes to tell a comforting story about itself. It claims to be empirical, sober, and evidence-driven. Facts in, verdicts out. This is nonsense.

What the law actually does is this:

  • It gathers uncontested physical facts.
  • It then demands a psychological supplement.
  • It treats that supplement as if it were itself a fact.

Intent and motive are not observed. They are inferred. Worse, they are imposed. They are not discovered in the world but assigned to agents to make outcomes legible.

In Minneapolis, the uncontested facts are thin but stable:

  • A U.S. Immigration and Customs Enforcement (ICE) agent, identified as Jonathan Ross, shot and killed Renée Nicole Good in Minneapolis on 7 January 2026.
  • The incident involved Good’s vehicle, which was present and moving at the time shots were fired.
  • Ross fired his weapon multiple times, and Good died from those gunshot wounds.
  • The Department of Homeland Security (DHS) claims the agent acted in self-defence.
  • Video footage exists that shows at least part of the encounter.
  • The case ignited protests, widespread condemnation from local officials, and political pushback.

This creates a shared intersection: vehicle, Ross, shots, and that ‘something happened’ that neither side is denying.

None of these facts contain intent. None of them specify motive. They do not tell us whether the movement of the vehicle was aggression, panic, confusion, or escape. They do not tell us whether the shooting was fear, anger, habit, or protocol execution. Yet the law cannot proceed without choosing.
So it does what it always does. It smuggles metaphysics into evidence and calls it psychology.

Intent and Motive as Institutional Impositions

Intent is treated as a condition of responsibility. Motive is treated as its explanation. Neither is a fact in anything like the ordinary sense. Even self-report does not rescue them. Admission is strategically irrational. Silence is rewarded. Reframing is incentivised. And even sincerity would not help, because human beings do not have transparent access to their own causal architecture. They have narratives, rehearsed and revised after the fact. So the law imputes. It tells the story the agent cannot safely tell, and then punishes or absolves them on the basis of that story. This is not a bug. It is the operating system.

Where Bernard Williams Comes In

This is where Bernard Williams becomes relevant, and where his account quietly fails. In Truth and Truthfulness, Williams famously rejects the Enlightenment fantasy of capital-T Truth as a clean, context-free moral anchor. He replaces it with virtues like sincerity and accuracy, grounded in lived practices rather than metaphysical absolutes. So far, so good.

Williams is right that moral life does not float above history, psychology, or culture. He is right to attack moral systems that pretend agents consult universal rules before acting. He is right to emphasise thick concepts, situated reasons, and practical identities. But he leaves something standing that cannot survive the Minneapolis test.

The Residue Williams Keeps

Williams still needs agency to be intelligible. He still needs actions to be recognisably owned. He still assumes that reasons, however messy, are at least retrospectively available to anchor responsibility. This is where the residue collapses.

In cases like Minneapolis:

  • Intent is legally required but epistemically unavailable.
  • Motive is legally explanatory but metaphysically speculative.
  • Admission is disincentivised.
  • Narrative is imposed under institutional pressure.

At that point, sincerity and accuracy are no longer virtues an agent can meaningfully exercise. They are properties of the story selected by the system. Williams rejects metaphysical Truth while retaining a metaphysical agent robust enough to carry responsibility. The problem is that law does not merely appeal to intelligibility; it manufactures it under constraint.

Moral Luck Isn’t Enough

Williams’ concept of moral luck gestures toward contingency, but it still presumes a stable agent who could, in principle, have acted otherwise and whose reasons are meaningfully theirs. But once intent and motive are understood as institutional fabrications rather than inner facts, ‘could have done otherwise’ becomes a ceremonial phrase. Responsibility is no longer uncovered; it is allocated. The tragedy is not that we fail to know the truth. The tragedy is that the system requires a truth that cannot exist.

Facts Versus Stories

The law does not discover which story is true. It selects which story is actionable.

The Minneapolis case shows the fault line clearly:

  • Facts: bodies, movements, weapons, recordings.
  • Stories: fear versus anger, defence versus aggression.
  • The first is uncontested. The second does all the work.

And those stories are not epistemic conclusions. They are metaphysical commitments enforced by law. Williams wanted to rescue ethics from abstraction. What he could not accept is that, once abstraction is removed, responsibility does not become more human. It becomes procedural.

The Uncomfortable Conclusion

The law does not operate on truth. It operates on enforceable interpretations of behaviour. Intent and motive are not facts. They are tools. Williams saw that capital-T Truth had to go. What he did not see, or perhaps did not want to see, is that the smaller, more humane residue he preserved cannot bear the weight the legal system places on it.

Once you see this, the obsession with ‘what really happened’ looks almost childish. The facts are already known. What is being fought over is which metaphysical fiction the system will enforce.

That decision is not epistemic. It is political. And it is violent.

Language Insufficiency Hypothesis: The Genealogy of Language Failure

I published A Language Insufficiency Hypothesis this month, and this is one of a series of videos summarising the content. In this segment, I’m discussing Chapter 1: A Genealogy of Insufficiency.

In this video, I touch on thinkers from Plato to Barthes and Foucault. Derrida gets no love, and I mention bounded rationality, but not Simon. I discuss Steven Pinker’s dissent in more detail in a later chapter.

Below, I’ve included some artefacts from the book.

Image: Chapter 1: Page 1
Image: Genealogy of Insufficiency: A Historical Trajectory
Image: Table of Contents

A Language Insufficiency Announcement

A Language Insufficiency Hypothesis is now available, and I am commencing a series of video content to support it.

Video: Language Insufficiency Hypothesis – Part 1 – The Basic Concepts (Duration: 6:44)

In this primer, I introduce the Language Effectiveness–Complexity Gradient and the nomenclature of the hypothesis: Invariants, Contestables, Fluids, and Ineffables.

In the next segment, I’ll discuss the Effectiveness and Presumed Effectiveness Horizons.

If you would like to support my work, consider purchasing one of my books. Leaving ratings and reviews helps appease the algorithm gods more than you know.

The book is available at Amazon, Barnes and Noble, and traditional booksellers.

ISBN (Hard Cover): 978-0-9710869-0-6
ISBN (Paperback): 978-0-9710869-4-4

US Library of Congress ID: LCCN: 2025927066

A Language Insufficiency Hypothesis

Almost a decade in the making, this book explains why more time, more effort, and more detail do not reliably improve certain forms of communication. Beyond a point, returns diminish sharply. In some domains, they collapse altogether.

The manuscript focuses on English, but the hypothesis has already been extended to French (published separately), and I am continuing work on other ontological barriers. If you’re interested in testing or extending the framework in your own language, feel free to get in touch.

Also available in a clothbound edition at Barnes & Noble.

Over the coming weeks, I’ll be unpacking aspects of the Language Insufficiency Hypothesis in more depth here. The book’s role is deliberately limited: it defines the problem, establishes the structure, and offers grounding examples. The real work happens in the consequences.

For now, the important thing is simple: the book is finally available.

James Talks Truth

I’ve read about 85 per cent of James by Percival Everett. I recommend it. On the surface, it is simply a very good story set in the narrative universe of Mark Twain’s Huck Finn and Tom Sawyer. I will avoid spoilers as best I can.

The novel is set in the antebellum American South. James and the others move through Missouri, a state that openly supported slavery, and at one point into Illinois, a state that officially opposed it but quietly failed to live up to its own rhetoric. Illinois, it turns out, is no safe haven. Ideology and practice, as ever, are on speaking terms only when it suits them.

Audio: Short NotebookLM summary podcast of this content.

This is not a book review. I may write one later for my Ridley Park site once I’ve finished the book. What interests me here are two philosophical tensions Everett stages with remarkable economy.

There are two characters who are Black but able to pass as white. One of them feels profound guilt about this. He was raised as a slave, escaped, and knows exactly what it means to be treated as Black because he has lived it. Passing feels like theft. Survival, perhaps, but theft all the same.

The other is more unsettled. He was raised as a white man and only later discovers that he is not, as the language goes, “pure-bred”. This revelation leaves him suspended between identities. Should he now accept a Black identity he has never inhabited, or continue to pass quietly, benefitting from a system that would destroy him if it knew?

James offers him advice that is as brutal as it is lucid:

“Belief has nothing to do with truth. Believe what you like. Believe I’m lying and move through the world as a white boy. Believe I’m telling the truth and move through the world as a white boy anyway. Either way, no difference.”

This is the philosophical nerve of the book.

Truth, Everett suggests, is indifferent to belief. Belief does not mediate justice. It does not reorganise power. It does not rewire how the world responds to your body. What matters is not what is true, nor even what is believed to be true, but how one is seen.

The world does not respond to essences. It responds to appearances.

Identity here is not an inner fact waiting to be acknowledged; it is a surface phenomenon enforced by institutions, habits, and violence. The truth can be known, spoken, even proven, and still change nothing. The social machine runs on perception, not ontology.

In James, Everett is not offering moral comfort. He is stripping away a modernist fantasy: that truth, once revealed, obliges the world to behave differently. It doesn’t. The world only cares what you look like while moving through it.

Truth, it turns out, is perfectly compatible with injustice.