When the Borrowed Ontology Gets a Driver’s Licence

The trolley problem’s borrowed ontology was already philosophically dubious in the seminar room. It becomes materially dangerous when compiled into autonomous systems, because assumptions that once guided thought experiments now govern conduct without appeal.

The first essay argued that the trolley problem is not a neutral moral test but a borrowed ontological grammar. It preformats the scene before reasoning begins, then invites us to mistake compliance with its terms for moral insight. All of that was bad enough when confined to philosophy seminars and undergraduate anguish.

It’s even worse now. The grammar has escaped the classroom. It’s been formalised, compiled, and deployed in systems that make decisions about who lives and who dies. And it wasn’t adopted because it is morally sound. It was adopted because it’s formally legible. Legibility rears its ugly head.

Autonomous systems don’t inherit trolley logic because someone examined it and found it adequate to the moral world. They inherit it because it’s the sort of ontology a machine can process: discretised, scalar, optimisable. Computational tractability is not a neutral filter. It selects for ontologies that can be ranked and calculated, and discards what can’t. Trolley grammar survives not on philosophical merit but on formatability. The philosophical problems didn’t get solved. They got encoded.

The Grammar Gets Compiled

The autonomous vehicle ethics literature is, for the most part, the trolley problem with a chassis bolted on.

Public debate still poses the same stale questions in a shinier casing: one pedestrian or five, passenger or crowd, young or old, many or few. These dominate media headlines and a remarkable number of engineering white papers. They are also, without exception, trolley questions – which means they carry every presupposition the first essay indicted.

They assume:

  • persons are countable units
  • deaths are commensurable
  • the relevant moral act is optimisation over comparable outcomes

And they assume all of this so completely that the engineering literature rarely pauses to ask whether any of it is true. It simply proceeds as though the ontology were settled, because – and let’s be honest here – for computational purposes, it has to be.

This is the quiet scandal. The trolley grammar wasn’t scrutinised and then selected. It was convenient and so inherited. Engineers needed inputs that could be discretised, outputs that could be ranked, and an objective function that could be minimised. The trolley ontology arrived pre-packaged for exactly that specification. The fit was not philosophical. It was architectural. Funny, that.

Judgement Moves Upstream

In the trolley problem, the chooser was at least a fiction of agency – a staged human making a staged decision in real time. That fiction was already problematic. In the autonomous vehicle, even that residual theatre is over.

The ‘decision’ about who to hit, who to spare, and what to optimise isn’t made at the moment of impact. It’s made months or years before – in a design meeting, a spec document, a policy gradient, a loss function. The human chooser doesn’t disappear so much as retreat upstream, where moral judgement is converted into a specification and then forgotten, persisting only as a latent judgement in the code.

The engineer who writes the objective function is, in a meaningful sense, the person pulling the lever – though likely neither culpable nor legally liable. In my accounting, they should be, but they don’t experience themselves that way. They experience themselves as solving a technical problem, which it is… among other things. The moral content of their decisions is dissolved into parameters, weights, and optimisation targets, at which point it becomes invisible as moral content. The judgement is still there – baked into code, where it executes without renewed deliberation, without situational awareness, without the capacity to recognise an exception. The trolley problem’s fictional chooser has found its ideal form – not a person at all, but a function call.
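To make the point concrete, here is a minimal sketch in Python. The weights, names, and numbers are invented for illustration – no real vehicle planner is being quoted – but the structure is the point: the entire moral judgement lives in a dictionary literal that reads like any other tuning parameter.

```python
# Hypothetical cost weights: a valuation of harms, decided upstream
# in a design meeting. Nothing in the code marks this as a moral claim.
COST_WEIGHTS = {
    "pedestrian": 10.0,  # assumed ratio; someone, somewhere, chose it
    "passenger": 8.0,
    "property": 1.0,
}

def trajectory_cost(expected_harms: dict) -> float:
    """Collapse heterogeneous harms into one scalar the optimiser can rank."""
    return sum(COST_WEIGHTS[kind] * amount for kind, amount in expected_harms.items())

def choose(trajectories: dict) -> str:
    """The 'lever pull': an argmin over pre-commensurated costs."""
    return min(trajectories, key=lambda name: trajectory_cost(trajectories[name]))
```

Once compiled, the valuation executes without renewed deliberation: every call to `choose` replays a decision made months earlier, by someone who experienced it as parameter tuning.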

Commensurability Becomes a Requirement

This is where the original essay’s diagnosis turns actively dangerous. In the seminar room, commensurability was a presupposition one could interrogate, refuse, or argue against – lives are not the sort of thing that submits to arithmetic – and the worst that happened was a lively tutorial. In engineering, commensurability isn’t a presupposition. It’s a precondition. See James C. Scott’s Seeing Like a State.

You can’t write a decision algorithm without assigning comparable values to outcomes. To optimise, you need a scalar or a ranking. To rank, you need commensurable outputs. The system can’t tolerate genuine incommensurability – not because incommensurability is philosophically wrong, but because it is computationally intractable. So what was once a dubious metaphysical assumption becomes an architectural necessity.
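The precondition can be demonstrated in a few lines. In this hypothetical sketch (the `Outcome` type is invented for illustration), the optimiser’s most basic move – taking a minimum – presupposes a total order, and a value type that declines to supply one simply halts the computation.

```python
class Outcome:
    """An outcome that refuses commensuration: no __lt__, no ranking."""
    def __init__(self, description: str):
        self.description = description

outcomes = [Outcome("one stranger dies"), Outcome("five strangers die")]

try:
    best = min(outcomes)  # the optimiser's basic move: rank and pick
except TypeError:
    # Python cannot compare Outcomes without an ordering defined:
    # genuine incommensurability surfaces as a type error, not a moral insight.
    best = None
```

To make the code run at all, an engineer must define `__lt__` – that is, legislate commensurability into the type system before any ‘decision’ can be computed.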

The same structure appears in algorithmic triage. A hospital system designed to allocate ventilators during a crisis must score patients on factors like age, comorbidities, projected survival, and so on. Each patient becomes a datum. Each datum enters a ranking, which produces an allocation, which determines who breathes. In some political circles, these might have been cast as death panels. Every step in that chain requires the commensurability that the trolley grammar simply assumed and that the first essay argued was never justified. The machine demands the ontology that the philosopher merely entertained.
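A hypothetical scoring function makes that chain visible. The formula and coefficients below are invented for illustration – they do not reflect any actual triage protocol – but every step it performs is one the real systems must perform: coerce each clinical fact onto a single scale, then rank.

```python
def triage_score(age: int, comorbidities: int, survival_prob: float) -> float:
    """Collapse clinical facts into one number (illustrative formula only)."""
    return survival_prob * 100 - age * 0.5 - comorbidities * 5

# Each patient becomes a datum; each datum enters a ranking.
patients = {
    "A": triage_score(age=72, comorbidities=3, survival_prob=0.4),
    "B": triage_score(age=35, comorbidities=0, survival_prob=0.8),
}

# The ranking produces an allocation, which determines who breathes.
allocation = max(patients, key=patients.get)
```

Each coefficient is a commensuration decision; the allocation is whatever falls out of the arithmetic.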

And here is the cruelty of it all. In the seminar, you could resist the grammar. You could say: ‘These lives are not commensurable’, ‘this comparison is malformed’, or ‘I refuse the maths’. The system can’t refuse the ontology it was built to execute. It’ll compute within the borrowed grammar until it’s switched off or until someone it couldn’t see is killed by an assumption nobody thought to question.

Moral Remainder and Structural Blindness

Everything the first essay identified as absent from the trolley grammar – context, relationship, role, history, the embeddedness of actual moral life – is not merely missing from the autonomous system. It’s structurally excluded by the requirements of the platform.

Role and obligation. Narrative history. Situated responsibility. Relational asymmetry. Tacit social meaning. Unquantified vulnerability. The possibility that not all harms belong in one metric space, ad infinitum… None of these can be rendered as a tractable variable, and what can’t be rendered as a tractable variable isn’t weighed lightly – it isn’t weighed at all. Humans bask in their hubris, the purported ability to tame complexity, but their track record tells a different story.

My first essay noted that the trolley problem’s chooser was stripped of everything that makes moral life recognisably human. The autonomous system completes that stripping and makes it permanent. The philosophy student might resist the grammar inarticulately – might feel, without quite being able to say why, that something has been left out. The machine has no such unease. It has no friction, no nagging sense that the map has omitted something important about the territory. It just acts within the ontology it’s given; and the ontology was given by people who inherited it from a thought experiment that was never adequate from the start. Compilation doesn’t merely omit moral texture; it excludes whatever can’t survive formalisation – another procrustean bed. And unlike a bad philosophical argument, which can be refuted, published against, or simply ignored, a bad ontology compiled into infrastructure governs silently. It doesn’t announce its assumptions or invite dissent. It just administers – mini Eichmanns in waiting.

The trolley problem asked what you’d do at the lever. It at least had the decency to pretend you were present for the decision. The autonomous vehicle has already been told what counts – by engineers who mistake ontology for specification, by a machine that can’t question the grammar it executes. In the trolley problem, the borrowed ontology framed the question. In the autonomous vehicle, it drives the car.

Beep, beep.