When the Borrowed Ontology Gets a Driver’s Licence


The trolley problem’s borrowed ontology was already philosophically dubious in the seminar room. It becomes materially dangerous when compiled into autonomous systems, because assumptions that once guided thought experiments now govern conduct without appeal.

The first essay argued that the trolley problem is not a neutral moral test but a borrowed ontological grammar. It preformats the scene before reasoning begins, then invites us to mistake compliance with its terms for moral insight. All of that was bad enough when confined to philosophy seminars and undergraduate anguish.

It’s even worse now. The grammar has escaped the classroom. It’s been formalised, compiled, and deployed in systems that make decisions about who lives and who dies. And it wasn’t adopted because it is morally sound. It was adopted because it’s formally legible. Legibility rears its ugly head.

Autonomous systems don’t inherit trolley logic because no one’s examined it and found it adequate to the moral world. They inherit it because it’s the sort of ontology a machine can process: discretised, scalar, optimisable. Computational tractability is not a neutral filter. It selects for ontologies that can be ranked and calculated, and discards what can’t. Trolley grammar survives not on philosophical merit but on formatability. The philosophical problems didn’t get solved. They got encoded.


The Grammar Gets Compiled

The autonomous vehicle ethics literature is, for the most part, the trolley problem with a chassis bolted on.

Public debate still poses the same stale questions in a shinier casing: one pedestrian or five, passenger or crowd, young or old, many or few. These dominate media headlines and a remarkable number of engineering white papers. They are also, without exception, trolley questions – which means they carry every presupposition the first essay indicted.

They assume:

  • persons are countable units
  • deaths are commensurable
  • the relevant moral act is optimisation over comparable outcomes

And they assume all of this so completely that the engineering literature rarely pauses to ask whether any of it’s true. It simply proceeds as though the ontology were settled, because – and let’s be honest here – for computational purposes, it has to be.

This is the quiet scandal. The trolley grammar wasn’t scrutinised and then selected. It was convenient and so inherited. Engineers needed inputs that could be discretised, outputs that could be ranked, and an objective function that could be minimised. The trolley ontology arrived pre-packaged for exactly that specification. The fit was not philosophical. It was architectural. Funny, that.

Judgement Moves Upstream

In the trolley problem, the chooser was at least a fiction of agency – a staged human making a staged decision in real time. That fiction was already problematic. In the autonomous vehicle, even that residual theatre is over.

The ‘decision’ about who to hit, who to spare, and what to optimise isn’t made at the moment of impact. It’s made months or years before – in a design meeting, a spec document, a policy gradient, a loss function. The human chooser doesn’t disappear so much as retreat upstream, where moral judgement is converted into a spec and then forgotten, persisting only as a latent judgement.

The engineer who writes the objective function is, in a meaningful sense, the person pulling the lever – though not likely culpable or legally liable. In my accounting, they should be, but they don’t experience themselves that way. They experience themselves as solving a technical problem, which it is… among other things. The moral content of their decisions is dissolved into parameters, weights, and optimisation targets, at which point it becomes invisible as moral content. The judgement is still there – baked into code, where it executes without renewed deliberation, without situational awareness, without the capacity to recognise an exception. The trolley problem’s fictional chooser has found their ideal form – not a person at all, but a function call.

Commensurability Becomes a Requirement

This is where the original essay’s diagnosis turns actively dangerous. In the seminar room, commensurability was a presupposition one could interrogate, refuse, or argue against – one could insist that lives are not the sort of thing that submits to arithmetic – and the worst that happened was a lively tutorial. In engineering, commensurability isn’t a presupposition. It’s a precondition. See James C. Scott’s Seeing Like a State.

You can’t write a decision algorithm without assigning comparable values to outcomes. To optimise, you need a scalar or a ranking. To rank, you need commensurable outputs. The system can’t tolerate genuine incommensurability – not because incommensurability is philosophically wrong, but because it is computationally intractable. So what was once a dubious metaphysical assumption becomes an architectural necessity.
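The point can be made concrete with a deliberately crude sketch. Assume a hypothetical planner that must choose among candidate outcomes; everything below – the class, the weights, the feature names – is invented for illustration, not drawn from any real system. The structure, though, is unavoidable: before `min()` can run, every morally salient feature must be collapsed into one scalar.

```python
# Hypothetical sketch: an optimising planner must collapse outcomes
# into a single comparable number before it can choose between them.
from dataclasses import dataclass

@dataclass
class Outcome:
    expected_fatalities: float
    expected_injuries: float
    property_damage: float  # arbitrary monetary units

# The weights ARE the moral judgement, fixed long before any collision.
WEIGHTS = {"fatality": 1_000_000.0, "injury": 10_000.0, "property": 1.0}

def cost(o: Outcome) -> float:
    # Commensurability is enforced here: lives, injuries, and property
    # are forced onto one scale so that min() is well defined.
    return (WEIGHTS["fatality"] * o.expected_fatalities
            + WEIGHTS["injury"] * o.expected_injuries
            + WEIGHTS["property"] * o.property_damage)

def choose(candidates: list[Outcome]) -> Outcome:
    # The 'decision' is a function call over pre-ranked outcomes.
    return min(candidates, key=cost)
```

Note what the sketch cannot express: any harm that resists representation as a float simply cannot enter `candidates`. Incommensurability isn’t rejected by the system; it is unrepresentable to it.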

The same structure appears in algorithmic triage. A hospital system designed to allocate ventilators during a crisis must score patients on factors like age, comorbidities, projected survival, and so on. Each patient becomes a datum. Each datum enters a ranking, which produces an allocation, which determines who breathes. In some political circles, these might have been cast as ‘death panels’. Every step in that chain requires the commensurability that the trolley grammar simply assumed and that the first essay argued was never justified. The machine demands the ontology that the philosopher merely entertained.
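A triage allocator has the same shape. The scoring rule below is entirely hypothetical – invented factors and weights, not any real clinical protocol – but the chain it exhibits is the argument’s point: patient becomes score, score becomes rank, rank becomes allocation.

```python
# Hypothetical triage sketch: patients become scores, scores become
# ranks, ranks become allocations. Factors and weights are invented.
def survival_score(age: int, comorbidities: int) -> float:
    # Higher score = higher projected benefit from a ventilator.
    return max(0.0, 100.0 - 0.5 * age - 10.0 * comorbidities)

def allocate(patients: dict[str, tuple[int, int]], ventilators: int) -> list[str]:
    # Rank every patient on the single commensurable scale, then cut.
    ranked = sorted(patients,
                    key=lambda p: survival_score(*patients[p]),
                    reverse=True)
    return ranked[:ventilators]
```

Everything about a patient that isn’t `(age, comorbidities)` has already been excluded before `allocate` runs; the cut at `[:ventilators]` is where a ranking quietly becomes a decision about who breathes.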

And here is the cruelty of it all. In the seminar, you could resist the grammar. You could say: ‘These lives are not commensurable’, ‘this comparison is malformed’, or ‘I refuse the maths’. The system can’t refuse the ontology it was built to execute. It’ll compute within the borrowed grammar until it’s switched off or until someone it couldn’t see is killed by an assumption nobody thought to question.

Moral Remainder and Structural Blindness

Everything the first essay identified as absent from the trolley grammar – context, relationship, role, history, the embeddedness of actual moral life – is not merely missing from the autonomous system. It’s structurally excluded by the requirements of the platform.

Role and obligation. Narrative history. Situated responsibility. Relational asymmetry. Tacit social meaning. Unquantified vulnerability. The possibility that not all harms belong in one metric space, ad infinitum… None of these can be rendered as a tractable variable, and what can’t be rendered as a tractable variable isn’t weighed lightly – it isn’t weighed at all. Humans bask in the hubris of their purported ability to tame complexity, but their track record tells a different story.

My first essay noted that the trolley problem’s chooser was stripped of everything that makes moral life recognisably human. The autonomous system completes that stripping and makes it permanent. The philosophy student might resist the grammar inarticulately – might feel, without quite being able to say why, that something has been left out. The machine has no such unease. It has no friction, no nagging sense that the map has omitted something important about the territory. It just acts within the ontology it’s given; and the ontology was given by people who inherited it from a thought experiment that was never adequate in the first place. Compilation doesn’t merely omit moral texture; it excludes whatever can’t survive formalisation – another Procrustean bed. And unlike a bad philosophical argument, which can be refuted, published against, or simply ignored, a bad ontology compiled into infrastructure governs silently. It doesn’t announce its assumptions or invite dissent. It just administers – mini Eichmanns in waiting.

The trolley problem asked what you’d do at the lever. It at least had the decency to pretend you were present for the decision. The autonomous vehicle has already been told what counts – by engineers who mistake ontology for specification, by a machine that can’t question the grammar it executes. In the trolley problem, the borrowed ontology framed the question. In the autonomous vehicle, it drives the car.

Beep, beep.

Facts, Intent, and the Afterlife of Metaphysics


I’ve been reading Bernard Williams lately, and I’ve written about his work on Truth and Truthfulness. I’m in the process of writing more on the challenges of ontological moral positions and moral luck. I don’t necessarily want to make contemporary news my focal point, but this is a perfect case study for it. I’ll be releasing a neutral philosophy paper on the underlying causes, but I want to comment on this whilst it’s still in the news cycle.

This form of xenophobia is playing out in the United States, though the ontological split applies more generally. For those unfamiliar with US news, I’ll set this up. The United States is currently deploying federal enforcement power in ways that deliberately bypass local consent, blur policing and military roles, and rely on fear as a stabilising mechanism. Historical analogies are unavoidable, but not required for the argument that follows. These forces have been deployed in cities that did not and do not support the Trump administration, so they are exacting revenge and trying to foment fear and unrest. This case is an inevitable consequence of those policy measures.

tl;dr: The Law™ presents itself as fact-driven, but only by treating metaphysical imputations about inner life as if they were empirical findings. This is not a flaw in this case; it is how the system functions at all.

NB: Some of this requires having read Williams or having a familiarity with certain concepts. Apologies in advance, but use Google or a GPT to fill in the details.


Why the Minneapolis ICE Shooting Exposes the Limits of Bernard Williams

The Minneapolis ICE shooting is not interesting because it is unusual. It is interesting because it is painfully ordinary. A person is dead. An officer fired shots. A vehicle was involved. Video exists. Statements were issued. Protests followed. No one seriously disputes these elements. They sit in the shared centre of the Venn diagram, inert and unhelpful. Where everything fractures is precisely where the law insists clarity must be found: intent and motive. And this is where things stop being factual and start being metaphysical.

The Comfortable Fiction of Legal Facts

The legal system likes to tell a comforting story about itself. It claims to be empirical, sober, and evidence-driven. Facts in, verdicts out. This is nonsense.

What the law actually does is this:

  • It gathers uncontested physical facts.
  • It then demands a psychological supplement.
  • It treats that supplement as if it were itself a fact.

Intent and motive are not observed. They are inferred. Worse, they are imposed. They are not discovered in the world but assigned to agents to make outcomes legible.

In Minneapolis, the uncontested facts are thin but stable:

  • A U.S. Immigration and Customs Enforcement (ICE) agent, identified as Jonathan Ross, shot and killed Renée Nicole Good in Minneapolis on 7 January 2026.
  • The incident involved Good’s vehicle, which was present and moving at the time shots were fired.
  • Ross fired his weapon multiple times, and Good died from those gunshot wounds.
  • The Department of Homeland Security (DHS) claims the agent acted in self-defence.
  • Video footage exists that shows at least part of the encounter.
  • The case ignited protests, widespread condemnation from local officials, and political pushback.

This creates a shared intersection: vehicle, Ross, shots, and that ‘something happened’ that neither side is denying.

None of these facts contain intent. None of them specify motive. They do not tell us whether the movement of the vehicle was aggression, panic, confusion, or escape. They do not tell us whether the shooting was fear, anger, habit, or protocol execution. Yet the law cannot proceed without choosing.

So it does what it always does. It smuggles metaphysics into evidence and calls it psychology.

Intent and Motive as Institutional Impositions

Intent is treated as a condition of responsibility. Motive is treated as its explanation. Neither is a fact in anything like the ordinary sense. Even self-report does not rescue them. Admission is strategically irrational. Silence is rewarded. Reframing is incentivised. And even sincerity would not help, because human beings do not have transparent access to their own causal architecture. They have narratives, rehearsed and revised after the fact. So the law imputes. It tells the story the agent cannot safely tell, and then punishes or absolves them on the basis of that story. This is not a bug. It is the operating system.

Where Bernard Williams Comes In

This is where Bernard Williams becomes relevant, and where his account quietly fails. In Truth and Truthfulness, Williams famously rejects the Enlightenment fantasy of capital-T Truth as a clean, context-free moral anchor. He replaces it with virtues like sincerity and accuracy, grounded in lived practices rather than metaphysical absolutes. So far, so good.

Williams is right that moral life does not float above history, psychology, or culture. He is right to attack moral systems that pretend agents consult universal rules before acting. He is right to emphasise thick concepts, situated reasons, and practical identities. But he leaves something standing that cannot survive the Minneapolis test.

The Residue Williams Keeps

Williams still needs agency to be intelligible. He still needs actions to be recognisably owned. He still assumes that reasons, however messy, are at least retrospectively available to anchor responsibility. This is where the residue collapses.

In cases like Minneapolis:

  • Intent is legally required but epistemically unavailable.
  • Motive is legally explanatory but metaphysically speculative.
  • Admission is disincentivised.
  • Narrative is imposed under institutional pressure.

At that point, sincerity and accuracy are no longer virtues an agent can meaningfully exercise. They are properties of the story selected by the system. Williams rejects metaphysical Truth while retaining a metaphysical agent robust enough to carry responsibility. The problem is that law does not merely appeal to intelligibility; it manufactures it under constraint.

Moral Luck Isn’t Enough

Williams’ concept of moral luck gestures toward contingency, but it still presumes a stable agent who could, in principle, have acted otherwise and whose reasons are meaningfully theirs. But once intent and motive are understood as institutional fabrications rather than inner facts, ‘could have done otherwise’ becomes a ceremonial phrase. Responsibility is no longer uncovered; it is allocated. The tragedy is not that we fail to know the truth. The tragedy is that the system requires a truth that cannot exist.

Facts Versus Stories

The law does not discover which story is true. It selects which story is actionable.

The Minneapolis case shows the fault line clearly:

  • Facts: bodies, movements, weapons, recordings.
  • Stories: fear versus anger, defence versus aggression.
  • The first is uncontested. The second does all the work.

And those stories are not epistemic conclusions. They are metaphysical commitments enforced by law. Williams wanted to rescue ethics from abstraction. What he could not accept is that, once abstraction is removed, responsibility does not become more human. It becomes procedural.

The Uncomfortable Conclusion

The law does not operate on truth. It operates on enforceable interpretations of behaviour. Intent and motive are not facts. They are tools. Williams saw that capital-T Truth had to go. What he did not see, or perhaps did not want to see, is that the smaller, more humane residue he preserved cannot bear the weight the legal system places on it.

Once you see this, the obsession with ‘what really happened’ looks almost childish. The facts are already known. What is being fought over is which metaphysical fiction the system will enforce.

That decision is not epistemic. It is political. And it is violent.