When the Borrowed Ontology Gets a Driver’s Licence

The trolley problem’s borrowed ontology was already philosophically dubious in the seminar room. It becomes materially dangerous when compiled into autonomous systems, because assumptions that once guided thought experiments now govern conduct without appeal.

The first essay argued that the trolley problem is not a neutral moral test but a borrowed ontological grammar. It preformats the scene before reasoning begins, then invites us to mistake compliance with its terms for moral insight. All of that was bad enough when confined to philosophy seminars and undergraduate anguish.

It’s even worse now. The grammar has escaped the classroom. It’s been formalised, compiled, and deployed in systems that make decisions about who lives and who dies. And it wasn’t adopted because it is morally sound. It was adopted because it’s formally legible. Legibility rears its ugly head.

Autonomous systems don’t inherit trolley logic because anyone examined it and found it adequate to the moral world. They inherit it because it’s the sort of ontology a machine can process: discretised, scalar, optimisable. Computational tractability is not a neutral filter. It selects for ontologies that can be ranked and calculated, and discards what can’t. Trolley grammar survives not on merit but on formattability. The philosophical problems didn’t get solved. They got encoded.

The Grammar Gets Compiled

The autonomous vehicle ethics literature is, for the most part, the trolley problem with a chassis bolted on.

Public debate still poses the same stale questions in a shinier casing: one pedestrian or five, passenger or crowd, young or old, many or few. These dominate media headlines and a remarkable number of engineering white papers. They are also, without exception, trolley questions – which means they carry every presupposition the first essay indicted.

They assume:

  • persons are countable units
  • deaths are commensurable
  • the relevant moral act is optimisation over comparable outcomes

And they assume all of this so completely that the engineering literature rarely pauses to ask whether any of it’s true. It simply proceeds as though the ontology were settled, because – and let’s be honest here – for computational purposes, it has to be.

This is the quiet scandal. The trolley grammar wasn’t scrutinised and then selected. It was convenient and so inherited. Engineers needed inputs that could be discretised, outputs that could be ranked, and an objective function that could be minimised. The trolley ontology arrived pre-packaged for exactly that specification. The fit was not philosophical. It was architectural. Funny, that.

Judgement Moves Upstream

In the trolley problem, the chooser was at least a fiction of agency – a staged human making a staged decision in real time. That fiction was already problematic. In the autonomous vehicle, even that residual theatre is over.

The ‘decision’ about who to hit, who to spare, and what to optimise isn’t made at the moment of impact. It’s made months or years before – in a design meeting, a spec document, a policy gradient, a loss function. The human chooser doesn’t disappear so much as retreat upstream, where moral judgement is converted into a spec, then forgotten – persisting only as a latent judgement in the code.

The engineer who writes the objective function is, in a meaningful sense, the person pulling the lever – though not likely culpable or legally liable. In my accounting, they should be, but they don’t experience themselves that way. They experience themselves as solving a technical problem, which it is… among other things. The moral content of their decisions is dissolved into parameters, weights, and optimisation targets, at which point it becomes invisible as moral content. The judgement is still there – baked into code, where it executes without renewed deliberation, without situational awareness, without the capacity to recognise an exception. The trolley problem’s fictional chooser has found their ideal form – not a person at all, but a function call.

Commensurability Becomes a Requirement

This is where the original essay’s diagnosis turns actively dangerous. In the seminar room, commensurability was a presupposition one could interrogate, refuse, or argue against – one could insist that lives are not the sort of thing that submits to arithmetic, and the worst that happened was a lively tutorial. In engineering, commensurability isn’t a presupposition. It’s a precondition. See James C. Scott’s Seeing Like a State.

You can’t write a decision algorithm without assigning comparable values to outcomes. To optimise, you need a scalar or a ranking. To rank, you need commensurable outputs. The system can’t tolerate genuine incommensurability – not because incommensurability is philosophically wrong, but because it is computationally intractable. So what was once a dubious metaphysical assumption becomes an architectural necessity.
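To see how literal that necessity is, here is a minimal sketch – invented for illustration, resembling no actual vehicle stack, with hypothetical names and weights throughout – of what ‘optimisation over comparable outcomes’ structurally demands:

```python
# A toy cost function of the kind trolley-style AV ethics implies.
# Everything here is hypothetical; the point is structural: to
# optimise, every outcome must collapse to one comparable number.

from dataclasses import dataclass

@dataclass
class Outcome:
    expected_fatalities: float  # persons as countable units
    expected_injuries: float    # harms forced onto the same scale

def cost(o: Outcome) -> float:
    # The return type is the whole ontology: a single scalar.
    # Any consideration that can't be expressed as a float can't
    # influence the comparison below.
    return 10.0 * o.expected_fatalities + 1.0 * o.expected_injuries

def choose(outcomes: list[Outcome]) -> Outcome:
    # min() needs a total ordering; genuine incommensurability
    # would be a TypeError, so the architecture forbids it.
    return min(outcomes, key=cost)
```

The weights are the upstream judgement of the previous section, frozen at design time; once set, they execute without renewed deliberation.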

The same structure appears in algorithmic triage. A hospital system designed to allocate ventilators during a crisis must score patients on factors like age, comorbidities, projected survival, and so on. Each patient becomes a datum. Each datum enters a ranking, which produces an allocation, which determines who breathes – what some political circles would have cast as death panels. Every step in that chain requires the commensurability that the trolley grammar simply assumed and that the first essay argued was never justified. The machine demands the ontology that the philosopher merely entertained.
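The chain can be made schematic – the fields and coefficients below are invented, and no actual triage protocol is being quoted – to show how each step presupposes the last:

```python
# Score, rank, allocate: the triage pipeline in miniature.
# All fields and weights are illustrative, not clinical.

from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    age: int
    comorbidity_count: int
    projected_survival: float  # 0.0-1.0, itself a modelled guess

def score(p: Patient) -> float:
    # Each patient becomes a datum: whatever matters morally must
    # first survive translation into these three numbers.
    return (p.projected_survival
            - 0.01 * p.age
            - 0.05 * p.comorbidity_count)

def allocate(patients: list[Patient], ventilators: int) -> list[Patient]:
    # Ranking presupposes commensurability; without a common scalar,
    # the sort below has nothing to sort on.
    ranked = sorted(patients, key=score, reverse=True)
    return ranked[:ventilators]  # who breathes
```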

And here is the cruelty of it all. In the seminar, you could resist the grammar. You could say: ‘These lives are not commensurable’, ‘this comparison is malformed’, or ‘I refuse the maths’. The system can’t refuse the ontology it was built to execute. It’ll compute within the borrowed grammar until it’s switched off or until someone it couldn’t see is killed by an assumption nobody thought to question.

Moral Remainder and Structural Blindness

Everything the first essay identified as absent from the trolley grammar – context, relationship, role, history, the embeddedness of actual moral life – is not merely missing from the autonomous system. It’s structurally excluded by the requirements of the platform.

Role and obligation. Narrative history. Situated responsibility. Relational asymmetry. Tacit social meaning. Unquantified vulnerability. The possibility that not all harms belong in one metric space at all, ad infinitum… None of these can be rendered as a tractable variable, and what can’t be rendered as a tractable variable isn’t weighed lightly – it isn’t weighed at all. Humans bask in their hubris, the purported ability to tame complexity, but their track record tells a different story.
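The exclusion is mechanical, not attitudinal. In a sketch like the following – the feature schema is hypothetical – everything outside the feature list carries an implicit weight of exactly zero:

```python
# Anything absent from the schema cannot reach the sum, whatever
# its moral importance. Feature names are invented for illustration.

FEATURES = ("expected_fatalities", "expected_injuries")

def weigh(situation: dict) -> float:
    # Keys outside FEATURES are silently ignored: 'role', 'history',
    # 'relationship' never enter the calculation.
    return sum(situation.get(f, 0.0) for f in FEATURES)

weigh({"expected_fatalities": 1.0, "relationship": "your child"})
# == 1.0 -- the relationship contributed nothing, by construction.
```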

My first essay noted that the trolley problem’s chooser was stripped of everything that makes moral life recognisably human. The autonomous system completes that stripping and makes it permanent. The philosophy student might resist the grammar inarticulately – might feel, without quite being able to say why, that something has been left out. The machine has no such unease. It has no friction, no nagging sense that the map has omitted something important about the territory. It just acts within the ontology it’s given; and the ontology was given by people who inherited it from a thought experiment that was never adequate to begin with. Compilation doesn’t merely omit moral texture; it excludes whatever can’t survive formalisation – another Procrustean bed. And unlike a bad philosophical argument, which can be refuted, published against, or simply ignored, a bad ontology compiled into infrastructure governs silently. It doesn’t announce its assumptions or invite dissent. It just administers – mini Eichmanns in waiting.

The trolley problem asked what you’d do at the lever. It at least had the decency to pretend you were present for the decision. The autonomous vehicle has already been told what counts – by engineers who mistake ontology for specification, by a machine that can’t question the grammar it executes. In the trolley problem, the borrowed ontology framed the question. In the autonomous vehicle, it drives the car.

Beep, beep.

Unwilling Steelman, Part III

A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.

Manipulability as Disproof

If your will can be altered without your consent, was it ever truly yours?

“If a button on the outside of your skull can change your morality,
then where, exactly, is your autonomy hiding?”

We’ve heard it all before:

“Sure, I’m influenced — but at the end of the day, I choose.”
But what happens when that influence isn’t influence, but modulation?
What if your very sense of right and wrong — your willingness to forgive, to punish, to empathise — can be dialled like a radio station?

And what if you never know it’s happening?

Your Morality Is Neurochemical

Studies using Transcranial Magnetic Stimulation (TMS) and Transcranial Direct Current Stimulation (tDCS) have shown that moral judgments can be shifted by stimulating the dorsolateral prefrontal cortex (DLPFC).

  • Turn it up: the subject becomes more utilitarian.
  • Turn it down: the subject becomes more emotionally reactive.
  • They make different decisions in the exact same scenarios, depending on which neural pathway is dominant.

The kicker?

They always explain their choices as though they had made them deliberately.

There is no awareness of the manipulation.
Only a retrospective illusion of authorship.

A|B Testing the Soul

Let’s run a thought experiment.

Scenario A: You’re well-fed, calm, unprovoked.
Scenario B: You’re hungry, cortisol-spiked, primed with images of threat.

Same moral dilemma. Different choice.

Query both versions of you, and both will offer coherent post hoc justifications.
Neither suspects that their “will” was merely a biochemical condition in drag.

If both versions feel authentic, then neither can claim authority.

Your will is not sovereign.
It’s state-dependent.
And if it changes without your knowledge, it was never really yours to begin with.

Even the Observer Is a Variable

To make matters worse: the person judging your decision is just as susceptible.

An irritated observer sees you as difficult.
A relaxed one sees you as generous.
The same action — different verdict.

And yet both observers think they are the neutral party.
They are not.
They are chemically calibrated hallucinations, mistaking their reaction for objective truth.

You’re a Vending Machine, Not a Virtuoso

This isn’t metaphor. It’s architecture.

  • You input a stimulus.
  • The brain processes it using pre-loaded scripts, shaped by hormones, past trauma, fatigue, blood sugar, social context.
  • An output emerges.
  • Then the brain rationalises it, like a PR firm cleaning up after a CEO’s impulse tweet.

Reason follows emotion.
Emotion is involuntary.
Therefore, your reasoning is not yours. It’s a post-event explanation for something you didn’t choose to feel.
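The loop fits in a few lines. What follows is a toy model, not neuroscience – the state variables, thresholds, and canned justifications are all invented – but it captures the architecture being claimed:

```python
# Same dilemma, two physiological states, two outputs -- and one
# rationaliser that is equally confident either way. Illustrative only.

from dataclasses import dataclass

@dataclass
class State:
    cortisol: float     # stress level, arbitrary units
    blood_sugar: float  # arbitrary units

def decide(dilemma: str, s: State) -> str:
    # 'Pre-loaded script': the dilemma is held constant across runs,
    # so the output varies with state alone.
    stressed = s.cortisol > 0.7 or s.blood_sugar < 0.3
    return "punish" if stressed else "forgive"

def rationalise(choice: str) -> str:
    # The PR firm: a coherent justification exists for either output,
    # produced after the fact, blind to the state that caused it.
    return {
        "punish": "I chose accountability; actions have consequences.",
        "forgive": "I chose compassion; people deserve second chances.",
    }[choice]

for s in (State(cortisol=0.2, blood_sugar=0.8),   # Scenario A
          State(cortisol=0.9, blood_sugar=0.2)):  # Scenario B
    c = decide("the same moral dilemma", s)
    print(c, "-", rationalise(c))
# Different choice each time; total confidence both times.
```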

TL;DR: If It Can Be Tweaked, It’s Not Yours

  • Your “moral core” can be adjusted without your awareness.
  • You justify manipulated choices with total confidence.
  • Your assessors are equally chemically biased.
  • There is no neutral version of “you” — just shifting states with internal coherence.
  • And if your choices depend on state, and your state can be altered, then freedom is a costume worn by contingency.

📅 Coming Tomorrow

The Feedback Loop of False Selves

You are being judged — by others who are also compromised.