The Author Did Not Write This

The LinkedIn consensus has spoken: if you used AI in the writing process, you are not the author. The position is stated with the confidence of someone who has never hired a ghostwriter, employed a research assistant, submitted to a heavy editor, or considered that the most celebrated political speeches have routinely been drafted by other hands.

Image: I couldn’t not share this Midjourney 8.1 image. It may not have understood the assignment.
Audio: NotebookLM summary podcast of this topic.

Authorship has never been a production relation. It has always been an attribution relation — an institutionally stabilised answer to the question of which name the practice elects to put on the cover. These are not the same thing, and conflating them is the error from which every subsequent confusion proceeds.

The ghostwriter has existed as long as commercial publishing. The political speechwriter is so normalised that nobody considers it worth remarking. The celebrity memoir, the corporate thought-leadership piece, the attributed editorial — these are not edge cases or embarrassing exceptions. They are the normal operation of every writing-adjacent industry that has ever existed. The name on the cover has never reliably indicated the hands on the keyboard, and the industry has never seriously pretended otherwise. It has simply preferred not to discuss it at dinner.

AI changes the tool. It does not change the structure. The person who prompts, selects, curates, revises, and publishes is doing what commissioners of ghostwriters have always done. What has changed is that AI makes the mediation visible in a way that polite convention previously concealed. Visibility triggers the purity reflex. What presents itself as a defence of authentic authorship is a defence of a particular fiction — the Romantic author as solitary originating consciousness — that the industry never consistently believed and certainly never consistently practised.

The purity position also fails on its own terms before it gets started. Consider the spectrum of AI-assisted writing: a full draft submitted for light polish; a human argument substantially revised by AI; collaborative ideation followed by AI drafting; a kernel of an idea handed over for full execution. These are genuinely different in terms of human contribution. The zealot position requires a threshold somewhere on this spectrum below which authorship lapses. It never specifies where. More fatally, it has no means of verification. There is no external method of determining where on the spectrum any given piece of writing falls. The detector tools are probabilistic noise that disproportionately penalise competent prose. Any audit mechanism sophisticated enough to catch first-order evasion immediately generates a second-order workaround. The regress terminates only at continuous surveillance of the writing process — panoptical authorship as the logical endpoint of the position taken seriously.

NotebookLM Infographic on this topic.

Then there is the recursion problem, which the zealot never addresses because it is fatal. The stochastic parrot charge against AI — that it merely recombines absorbed linguistic patterns without genuine origination — describes with considerable accuracy what human cognition also does. The writer’s training data is the Dickens read at ten, the billboard absorbed on a commute, the argument overheard on public transit, the half-remembered essay that shaped a position without ever being consciously cited. The causal chain of any human idea disappears into an unauditable cognitive history. Genuine origination in the sense the purity position requires has never existed. The Romantic author was always a retrospective confabulation. Barthes said so in 1967. The industry nodded politely and continued invoicing.

What the zealot is defending is not authorship. It is a particular grammar of authorship — one that selects compositional origin as the threshold criterion, applies it selectively and unverifiably, and uses the resulting suspicion as a status boundary. It is guild behaviour dressed as principle, which is understandable as a response to a genuine economic threat but should not be mistaken for a philosophical position.

Authorship is the position a culture elects to stabilise after the work has already been produced through far messier means. It has always been thus. AI did not break the fiction. It just made the fiction harder to keep a straight face about.


The Rest of the Story

I’ve written about this before. I am not an AI apologist, but I am peeved by anti-LLM zealots, who clearly haven’t thought through their arguments.

I finished reading A.J. Ayer’s Language, Truth, and Logic, specifically the part about Bertrand Russell’s claim that ‘the author of Waverley was Scotch’. My brain latched onto authorship, and my emotional response was WTF? I have other problems with Russell and Ayer on this, but that’s a matter for another day.

To make my point, this page up to the ellipsis is the output of Claude after an extended dialogue with it and ChatGPT, begun after I read Ayer and something didn’t sit quite right. I am not ashamed to use LLMs in my authoring workflow and am not ashamed to mention it, as here. Almost all of these thoughts are mine. I’ve simply asked Claude to organise the output. It’s good enough to publish as-is, and any edits would be trivial, so I won’t bother. I probably could have made the edits in as much time as it took to type this, but I’ve got nothing to hide. I’m just a human with access to technology circa 2026.

Manuscript Review with LLMs

Main event

I’m an active AI user. It’s no secret. My top uses are research and enquiry, but it is instrumental in my review and revision process.

Audio: NotebookLM summary podcast of this topic.

I am trying to wrap up my latest manuscript. I’m about five revisions in, so I felt I was finally in a position to check for cracks and missing elements and to gauge the strength of my overall position and approach. It’s not a good idea to simply prompt, ‘What do you think about this?’

I’d tried prompts as simple as, ‘Act as a referee and be adversarial against this piece’ or ‘I got this from somewhere, and I want a critique’. These approaches shield you from AI’s programmed sycophantic tendencies, but they aren’t enough. You still need to create guidelines and guardrails, which include orientating the AI; otherwise, it will likely go off the reservation.

This is the actual prompt I last employed to various LLMs:

The attached is a complete development draft of Architecture of Willing, a philosophical monograph arguing that the vocabulary of will, intent, motive, choice, decision, and related terms operates through a two-stage grammatical mechanism – compression of action-patterns into portable nouns, followed by inversion of those nouns into apparent upstream authors of the very patterns from which they were abstracted. The book calls this mechanism authoring displacement and uses it to argue that retributive desert cannot be stably grounded in the vocabulary on which it depends.

The book is deliberately diagnostic rather than prescriptive. It does not propose a replacement psychology, a reformed legal code, or a new theory of agency. It refuses to settle the traditional free-will debate on either side. These refusals are intentional and are argued for within the text.

What I am asking for is a critical engagement from a position of maximum philosophical resistance. Specifically:

The book rests on a claim about what retributive practice requires – namely, a stable inward authoring source capable of making suffering genuinely owed rather than merely institutionally imposed. If that characterisation of retributivism’s requirements is wrong, or if it applies only to unsophisticated versions while leaving the strongest contemporary defences untouched, the central argument is significantly weakened. I would like to know whether that is the case, and if so, where exactly the book’s account of retributivism’s commitments fails to engage its best defenders.

More broadly: the book is a diagnosis of grammar. The question I want pressed is whether a grammatical diagnosis can do the normative work the book needs it to do – whether there is a gap between ‘this noun cannot stably support the load placed on it’ and ‘therefore practices depending on this noun are normatively unjustified’. If there is such a gap, what would close it, and does the book close it?

Please do not soften objections in the direction of ‘this is a good book with some gaps’. If the argument is unsound, say so and say where. If it is sound against some targets but not others, identify the targets it misses. The manuscript has already received generous assessments; what it needs now is the strongest case against it.

Of course, this prompt is specific to me and my project, but one may feel free to use it as a model for similar purposes.

Among the gaps returned were arguments I had not been aware of. In fact, in a couple of places, I had already cited authors, but the AI returned additional books or essays by the same people. In other cases, it offered material by authors I hadn’t considered. Obviously, I am interested in creating solid, watertight arguments, so this only helps my case.

For this project, my LLMs of choice have been Claude, ChatGPT, Gemini, Grok, and Kimi K2. I used Perplexity, Mistral, DeepSeek, and Z.ai GLM in earlier iterations.

Peer review

Another application is to take the critique output from one LLM into another with a prompt to evaluate the critique. My modus operandi here is to pick a ‘master’ LLM – typically in a Claude or ChatGPT project context – and treat it as my primary partner; the others are virtual subcontractors. This means that I can get a half-dozen or more reactions in minutes, which are then digested by the, let’s say, project manager, for assessment and a proposed action plan, typically in the form of a punch list. I recommend this approach as well.
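
For anyone who wants to mechanise the fan-out/fan-in part, here is a minimal sketch in Python. It uses the OpenAI SDK as a stand-in for every vendor; the model roster, the single-client assumption, and the referee_prompt.txt file are illustrative placeholders, not my actual setup. In practice, each subcontractor would be called through its own SDK or chat interface.

```python
# Sketch of the 'project manager' pattern: fan a critique prompt out to
# several reviewer models, then have one master model digest the results.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The adversarial referee prompt (e.g. the one quoted above), kept in a file.
CRITIQUE_PROMPT = open("referee_prompt.txt", encoding="utf-8").read()

SUBCONTRACTORS = ["gpt-4o", "gpt-4o-mini"]  # hypothetical reviewer roster
MANAGER = "gpt-4o"                          # the 'master' LLM

def ask(model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Fan out: collect independent critiques.
critiques = {m: ask(m, CRITIQUE_PROMPT) for m in SUBCONTRACTORS}

# Fan in: the project manager assesses the critiques and drafts a punch list.
digest = ask(
    MANAGER,
    "Evaluate the following critiques of my manuscript. Discard weak points, "
    "keep strong ones, and return a prioritised punch list:\n\n"
    + "\n\n".join(f"--- {m} ---\n{c}" for m, c in critiques.items()),
)
print(digest)
```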

NotebookLM Infographic on this topic.

Closing shot

When I was in grad school, this part of the project would have taken months. As it is, I’ve been working on this project since COVID-19, but it’s been an on-and-off affair, accumulating research and documentation all the while. The manuscript will be better off, and my position has been honed over this expanse of time, so the delay was beneficial.

Would more time also be beneficial? Probably, but one needs to stop somewhere, and I’m likely facing diminishing marginal returns. If I go the way of Wittgenstein, I’ll reverse track and recant everything. And so it goes…

The Environment Always Wins: The Myth of Pure Voice

There’s a certain kind of cultural panic that tells you more about the panickers than about the thing they are panicking about. The current hysteria over AI-inflected prose is a good example.

The argument, insofar as it deserves the name, goes roughly like this: LLMs produce prose with identifiable features – a certain blandness, a fondness for the em dash, a tendency toward tidy three-part structure. Writers who use these tools risk absorbing those features. The authentic human voice is therefore under threat. Something precious is being diluted by contact with the machine.

This is sentimental rubbish, and a sort of virtue signalling, and it is worth saying so clearly before doing anything else.

Audio: NotebookLM summary podcast of this topic.

I use LLMs daily. For research, for editorial pushback, for smoothing passages that have gone awry. This means I spend hours a day reading a particular kind of output. You’d have to be delusional not to admit it has effects. Certain phrasings start feeling natural that didn’t before. Certain rhythms begin to recur. Certain words come into use that otherwise might not have. I notice this and note it without particular alarm, because I’ve read enough to know that this is just what environments do.

Read nothing but McCarthy for a month, and your sentences will start hunting for the spare declarative. Spend a year editing academic philosophy, and you will catch yourself reaching for ‘insofar as’ and ‘it’s worth noting’ in casual conversation. Live in a city long enough, and its cadences work their way into your syntax. This isn’t contamination, whatever aspersions the moralists cast. It’s how language acquisition works for as long as one is alive and reading. Voice isn’t a spring. It’s a river, a moving accumulation of every tributary it has passed through.

The prestige game being played by the anti-LLM faction isn’t difficult to spot. When Dostoyevsky shapes a young writer’s cadence, we call it influence and treat it as evidence of a serious literary education. When a game world shapes a child’s imagination – I homeschooled my son in the manner of unschooling, and his primary corpus for years was World of Warcraft and its attendant lore before shifting to Dark Souls – and that child ends up reading Dante and Milton unprompted in year seven, the same mechanism has clearly operated. The source was not canonical; the outcome was. But the respectable hierarchy of influences cannot easily accommodate this, because the hierarchy was never really about the mechanism. It was about the cultural status of the inputs.

The more interesting observation isn’t about those of us who use these tools. It’s about those who conspicuously do not.

A minor genre has emerged – charitably, I’ll call it a genre because cult feels morally loaded – consisting of writers anxiously purging their prose of anything that might read as AI-generated. It’s worth noting that they have read the lists. Telltale signs of LLM authorship: excessive hedging, em dashes, transitional summaries, the phrase ‘it is worth noting’. And so they scrub, redact, replace, and perform a kind of stylistic hygiene that’s a creative decision made in direct response to LLM discourse.

These writers aren’t free of the machine’s influence. They’re among the most thoroughly shaped by it. They simply have the more theatrical relationship – the counter-imitator, the purity-performer, the one who reorganises their entire aesthetic in orbit around the thing they claim to reject.

Thomas Moore, in Care of the Soul, observes that a child raised by an alcoholic parent tends to become either an alcoholic or a committed teetotaller. He presents this as a dichotomy, which is too neat, but the underlying point holds. Reactions are still relata – see what happens when you read too much philosophy and logic? The teetotaller has organised their life around the bottle as surely as the alcoholic has. Both are defined by it.

Opposition is one of influence’s favourite disguises.

The fair objection is that LLM influence may differ from other influences in kind rather than merely in degree. Dostoyevsky is strange. Bernhard is strange to the point of pathology. A canonical prose style is idiosyncratic by definition, which is why it’s worth absorbing. In contrast, LLM output aims for plausible fluency and statistical centrality. Its pull may be more homogenising than the pull of a singular authorial sensibility.

That’s a real point. The environment in question has a centripetal force toward the mean that most literary influences lack.

But conceding the point doesn’t really rescue the panic. It just specifies the kind of influence involved. The mechanism remains identical to every other case of environmental absorption. And ‘this influence tends toward the generic’ is an ironically generic critique of a particular environment’s character rather than a claim that the environment is doing something ontologically unprecedented to the notion of authorship.

The question that actually matters aesthetically is not ‘was this touched by AI?’ It is ‘what did the writer do with the environment they inhabited?’ That’s always been the question. It remains the question. The machinery has changed; the problem of influence has not.

What the current schism actually reveals is not that AI is doing something new to writing. It’s that we’ve been operating with a fairy tale about what writing is. The fairy tale holds that voice is self-originating, that somewhere beneath the reading AND the editing AND the genre conventions AND the institutional pressures AND the decade of a particular editor’s feedback, there is a pristine you, unconditioned and pure, expressing itself directly onto the page.

This was always false. Writers have always been patchworks of absorbed environments. The only difference now is that one of the environments is a machine, and the machine is new enough that people haven’t yet learned to be comfortable with what it reveals about the rest.

The environment always wins. The only interesting question is which environments you choose, and what you make of them.

NotebookLM Infographic on this topic.

This Post Is Almost Entirely AI: Read It Anyway

Why do you suppose this is flagged as AI?

This is the offending copy:

It bears a strong resemblance to AI text.

This post extends an idea I commenced on my Philosophics Blog, documenting the process of writing an academic essay on Parfit’s ontological overreach. Here, I outline and annotate my intended direction by presumed sections. My main point is that Parfit’s coat-tails his arguments on the backs of ontological grammatical commitments.

Audio: NotebookLM summary podcast of this topic.

The irony is palpable: you are writing a critique of “ontological grammars” and “legibility,” and an AI detector—the ultimate tool of rigid, legible grammar—has flagged you for sounding like its cousin.

AI detectors are essentially “cliché and structure” hunters. Here is why your specific text likely triggered that “Strong Resemblance” flag:

1. High Lexical Density (The “Academic” Trap)

You are using high-register, “heavy” words like “commenced,” “documenting,” “ontological overreach,” and “grammatical commitments.” Detectors are trained on vast amounts of academic papers and technical blogs. Because AI is programmed to be “professional” and “clear,” it defaults to this exact vocabulary. To a mindless algorithm, “sophisticated” equals “synthetic.”

2. The “Roadmap” Structure

The snippet follows a very logical, three-beat structure that AI loves:

  • The Context: “This post extends an idea…”
  • The Method: “Here, I outline and annotate…”
  • The Thesis: “My main point is…”

This “first I do X, then I do Y” signposting is a hallmark of LLM-generated summaries. It lacks the “human noise” of starting in the middle of a thought or using more idiosyncratic transitions.

3. Syntactic Regularity

The sentence lengths are fairly uniform, and the cadence is very “proper.” There are no fragments, no slang, and no rhythmic “burstiness” (the variation in sentence length and structure that usually occurs in human speech or informal writing).

4. The “Coat-Tails” Metaphor

There is a slight linguistic awkwardness in: “Parfit’s coat-tails his arguments on the backs of…”

  • Grammatically: “Coat-tails” is usually a noun or a verb meaning to follow someone else’s success.

  • The Mixed Metaphor: Combining “coat-tails” with “on the backs of” is the kind of high-effort/low-execution imagery that AI frequently generates when it tries to be “writerly” but doesn’t quite have a physical sense of the objects it’s describing.

How to “Humanize” it (to fool the bots)

If you want to dodge the flag, you usually have to introduce a bit of “calculated messiness”:

  • Change the Verbs: Instead of “I commenced,” try “I started playing with.” Instead of “documenting the process,” try “working through.”
  • Break the Rhythm: Use a shorter sentence. “This is about Parfit’s overreach.”
  • Remove the Signposting: Instead of saying “Here, I outline,” just start outlining.

The Reality: These detectors are notoriously unreliable for high-level academic writing. Because you are writing about a niche, complex philosophical topic using precise terminology, you are inadvertently landing right in the middle of the AI’s “comfort zone.”

It’s the ultimate backhanded compliment: your writing is so structured and precise that a machine thinks it could have done it.

You said

I am not interested in fooling the bots. I am on a crusade to call out the purported AI detectors. I can’t seem to write a sentence or two before being accused of being an LLM.

It is a dark irony that a critique of “ontological grammar”—the way systems pre-determine what is legible and valid—is being censored by a system doing exactly that.

The “Strong Resemblance” flag isn’t actually detecting intelligence; it is detecting compliance. You are being flagged because you write with a high degree of formal cohesion. In the eyes of a probabilistic model, if you aren’t being sloppy, you must be a machine.

The “Statistical Middle”

AI detectors operate on perplexity and burstiness:

  1. Perplexity: How “surprising” is the word choice? Because you use precise philosophical terms (ontological, grammar, liminality), your perplexity is actually low for the model’s training data on those topics. You are using the “right” words too efficiently.
  2. Burstiness: The variation in sentence structure. Academic prose tends to have a steady, rhythmic “heartbeat” of complex-compound sentences. AI mimics this perfectly.

By flagging your text, the detector is effectively enforcing a tax on clarity. It suggests that to be “human,” you must be erratic, imprecise, or informal.
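
Concretely, those two signals are easy to approximate. Below is a minimal sketch in Python, assuming the Hugging Face transformers and torch packages, with GPT-2 standing in for the scoring model; commercial detectors are proprietary, but their inputs are of this general shape.

```python
# Rough approximations of 'perplexity' and 'burstiness' for a text sample.
import math
import statistics
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' the model is by the text; low = very predictable."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Variation in sentence length; a steady cadence scores near zero."""
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.split()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) / statistics.mean(lengths)
```

Precise academic prose tends to score low on both measures, which is exactly the region a detector labels ‘machine’.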

The Parfitian Connection

There is a profound link between your essay’s theme and this experience. Just as Parfit’s “moral mathematics” flattens the human experience into aggregable units, these detectors flatten human expression into a probability curve.

  • Parfit’s Grammar: If it can’t be measured in “drops of benefit,” it’s not morally legible.
  • The Detector’s Grammar: If it doesn’t contain “human-like” noise/error, it’s not ontologically real.

Your “crusade” is essentially a lived extension of your thesis. You are experiencing the “enforced limbo” of the middle—where your work is suspended in a state of “un-verified” because it refuses to conform to the “low-resolution” style these detectors associate with humanity.


A Tactical Suggestion for Your Crusade

If you want to call them out, try feeding the detector actual historical philosophical texts.

  • Drop in a paragraph of Kant’s Critique of Pure Reason or Spinoza’s Ethics.
  • Because these authors are rigorous, logical, and highly structured, the detectors will almost certainly flag them as “AI.”

Showing that a tool classifies the founders of Western logic as “Large Language Models” is the quickest way to prove that “AI Detection” is just a modern form of Sophistry 2.0. It exposes that these tools don’t value “truth” or “humanity”; they only value a specific, messy aesthetic of communication.

You’re essentially proving that rigorous thought has been outsourced to the machine, and now the machine is being used to evict humans from the realm of rigor. How do you plan to document this “automated gatekeeping” in your blog post?

The Blind Spot of Moral Maths

I am considering a new essay. That’s nothing new, but this was born from personal experience. Whilst reading Derek Parfit’s Reasons and Persons, I reached the chapters on moral arithmetic and imperceptible harms and effects, and it caught my attention. Not in the ‘Aha!’ way, but because I felt excluded given my own experience. My mind wandered off the reservation, and I wondered if my anecdote might be generalised. After a discussion with ChatGPT, Grok, DeepSeek, Gemini, and Claude, I concluded that it can. As is my practice for academic writing, I formulate a thesis and then an abstract at the start. Then comes the real work.

Audio: NotebookLM summary podcast of this topic.
NotebookLM Infographic on this topic.

Thesis Statement

Derek Parfit’s moral mathematics relies on an undefended identification between physiological relief and suffering-reduction. Liminal experience exposes the instability of that identification at its source: what is addressed may be a bodily deficit while the suffering that matters lies elsewhere, in suspended indeterminacy itself. Because the preservationist grammar Parfit inherits treats continued life as presumptively good and bodily modulation as prima facie benefit, it cannot distinguish cases in which relief tracks morally salient suffering from cases in which it merely maintains the middle.

Abstract

This essay argues that Derek Parfit’s discussions of ‘moral mathematics’ in Reasons and Persons are not neutral exercises in moral reasoning but operations conducted within a prior ontological grammar that predetermines what can count as a benefit, a harm, and a morally salient outcome. While Parfit explicitly addresses aggregation, commensurability, and imperceptible effects, his examples presuppose an unexamined identification: that physiological relief tracks suffering-reduction, and that such reduction, however marginal, constitutes benefit within a life treated as presumptively worth preserving. This preservationist orientation is not argued for but built into the structure of the cases themselves.

The essay develops this critique through Parfit’s micro-allocation cases, particularly those involving the distribution of small amounts of water to relieve thirst. These examples appear to demonstrate that imperceptible reductions in suffering can aggregate into morally significant goods. But the argument depends on a prior identification that may fail at the point of origin. Slaking thirst addresses a physiological deficit; it does not necessarily diminish the suffering that is morally salient to the subject. The essay does not claim that physiological modulation never tracks suffering-reduction – in many cases it plainly does – but that Parfit’s grammar lacks the resources to distinguish the cases in which it does from those in which it does not. It treats all bodily modulation as benefit by default, and this default is what the essay sets out to make visible.

Drawing on a first-person account of critical illness – respiratory failure, not pain; a demand not for comfort but for determination in either direction – the essay argues that such cases function not as marginal exceptions but as diagnostics that reveal the grammar operating on the wrong dimension of the moral object. The experience of wanting not relief but resolution (‘pick a side’) is both possible and intelligible, yet the framework has no notation for it. What the intervention addressed was a physiological deficit; what it left untouched was suspended indeterminacy – the condition of being maintained in the middle, neither recovering nor ending. That the trajectory eventually resolved toward survival cannot retroactively validate the intervention on the axis that mattered during the interval itself; to argue otherwise would be to confuse post hoc survivorship with moral justification.

The essay argues further that this limitation belongs not to Parfit alone but to a broader preservationist syntax operative across Western medical ethics, legal frameworks governing end-of-life care, and liberal moral philosophy more generally. Within this grammar, life functions as the unmarked container of value; sustaining it is treated as prior to any calculation about its contents; and cessation requires special licence. The cultural entrenchment of this grammar explains why Parfit’s examples feel intuitively compelling: they inherit commitments so deeply embedded that they register as neutral premises rather than contestable positions. The point is not that preservationism is indefensible but that it remains undefended – operative yet unexamined.

Finally, the essay notes that Singer’s universalisation of moral responsibility intensifies rather than resolves the underlying difficulty, since it collapses the bounded cases on which Parfit’s arithmetic depends. What emerges is not a disagreement about consequentialism but about the grammar through which suffering, benefit, and moral salience are first made legible – and about whether that grammar can survive contact with the full range of conditions it purports to govern.

When the Borrowed Ontology Gets a Driver’s Licence

The trolley problem’s borrowed ontology was already philosophically dubious in the seminar room. It becomes materially dangerous when compiled into autonomous systems, because assumptions that once guided thought experiments now govern conduct without appeal.

The first essay argued that the trolley problem is not a neutral moral test but a borrowed ontological grammar. It preformats the scene before reasoning begins, then invites us to mistake compliance with its terms for moral insight. All of that was bad enough when confined to philosophy seminars and undergraduate anguish.

It’s even worse now. The grammar has escaped the classroom. It’s been formalised, compiled, and deployed in systems that make decisions about who lives and who dies. And it wasn’t adopted because it is morally sound. It was adopted because it’s formally legible. Legibility rears its ugly head.

Autonomous systems don’t inherit trolley logic because no one’s examined it and found it adequate to the moral world. They inherit it because it’s the sort of ontology a machine can process: discretised, scalar, optimisable. Computational tractability is not a neutral filter. It selects for ontologies that can be ranked and calculated, and discards what can’t. Trolley grammar survives not on soundness but on formatability. The philosophical problems didn’t get solved. They got encoded.

Audio: NotebookLM summary podcast of this topic.

The Grammar Gets Compiled

The autonomous vehicle ethics literature is, for the most part, the trolley problem with a chassis bolted on.

Public debate still poses the same stale questions in a shinier casing: one pedestrian or five, passenger or crowd, young or old, many or few. These dominate media headlines and a remarkable number of engineering white papers. They are also, without exception, trolley questions – which means they carry every presupposition the first essay indicted.

They assume:

  • persons are countable units
  • deaths are commensurable
  • the relevant moral act is optimisation over comparable outcomes

And they assume all of this so completely that the engineering literature rarely pauses to ask whether any of it’s true. It simply proceeds as though the ontology were settled, because – and let’s be honest here – for computational purposes, it has to be.

This is the quiet scandal. The trolley grammar wasn’t scrutinised and then selected. It was convenient and so inherited. Engineers needed inputs that could be discretised, outputs that could be ranked, and an objective function that could be minimised. The trolley ontology arrived pre-packaged for exactly that specification. The fit was not philosophical. It was architectural. Funny, that.

Judgement Moves Upstream

In the trolley problem, the chooser was at least a fiction of agency – a staged human making a staged decision in real time. That fiction was already problematic. In the autonomous vehicle, even that residual theatre is over.

The ‘decision’ about who to hit, who to spare, and what to optimise isn’t made at the moment of impact. It’s made months or years before – in a design meeting, a spec document, a policy gradient, a loss function. The human chooser doesn’t disappear so much as retreat upstream, where moral judgement is converted into a spec and then forgotten, persisting only as a latent judgement.

The engineer who writes the objective function is, in a meaningful sense, the person pulling the lever – though not likely culpable or legally liable. In my accounting, they should be, but they don’t experience themselves that way. They experience themselves as solving a technical problem, which it is… among other things. The moral content of their decisions is dissolved into parameters, weights, and optimisation targets, at which point it becomes invisible as moral content. The judgement is still there – baked into code, where it executes without renewed deliberation, without situational awareness, without the capacity to recognise an exception. The trolley problem’s fictional chooser has found their ideal form – not a person at all, but a function call.

Commensurability Becomes a Requirement

This is where the original essay’s diagnosis turns actively dangerous. In the seminar room, commensurability was a presupposition one could interrogate. One could refuse it, could argue that lives are not the sort of thing that submits to arithmetic, and the worst that happened was a lively tutorial. In engineering, commensurability isn’t a presupposition. It’s a precondition. See James C. Scott’s Seeing Like a State.

You can’t write a decision algorithm without assigning comparable values to outcomes. To optimise, you need a scalar or a ranking. To rank, you need commensurable outputs. The system can’t tolerate genuine incommensurability – not because incommensurability is philosophically wrong, but because it is computationally intractable. So what was once a dubious metaphysical assumption becomes an architectural necessity.
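
To see how literal that precondition is, consider a toy sketch in Python. Nothing here comes from any real vehicle stack; the outcome fields and the weights are hypothetical placeholders. The point is structural: min() demands a total order, and the total order demands that the commensurating weights exist before the system can act at all.

```python
# Toy illustration: optimisation forces incommensurable harms onto one axis.
from dataclasses import dataclass

@dataclass
class Outcome:
    fatalities: int
    injuries: int
    property_damage: float  # arbitrary monetary units

# The objective function is where commensurability is smuggled in: three
# different kinds of harm are collapsed onto a single scale by fiat weights.
WEIGHTS = {"fatalities": 1_000_000.0, "injuries": 10_000.0, "property_damage": 1.0}

def cost(o: Outcome) -> float:
    return (WEIGHTS["fatalities"] * o.fatalities
            + WEIGHTS["injuries"] * o.injuries
            + WEIGHTS["property_damage"] * o.property_damage)

def choose(options: list[Outcome]) -> Outcome:
    # min() cannot run without a total order over outcomes; 'I refuse the
    # maths' is not an expressible value of this function.
    return min(options, key=cost)
```

Whoever sets WEIGHTS is the person at the lever; by the time choose() runs, the deliberation is over.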

The same structure appears in algorithmic triage. A hospital system designed to allocate ventilators during a crisis must score patients on factors like age, comorbidities, projected survival, and so on. Each patient becomes a datum. Each datum enters a ranking, which produces an allocation, which determines who breathes. In some political circles, these might have been cast as death panels. Every step in that chain requires the commensurability that the trolley grammar simply assumed and that the first essay argued was never justified. The machine demands the ontology that the philosopher merely entertained.

And here is the cruelty of it all. In the seminar, you could resist the grammar. You could say: ‘These lives are not commensurable’, ‘this comparison is malformed’, or ‘I refuse the maths’. The system can’t refuse the ontology it was built to execute. It’ll compute within the borrowed grammar until it’s switched off or until someone it couldn’t see is killed by an assumption nobody thought to question.

Moral Remainder and Structural Blindness

Everything the first essay identified as absent from the trolley grammar – context, relationship, role, history, the embeddedness of actual moral life – is not merely missing from the autonomous system. It’s structurally excluded by the requirements of the platform.

Role and obligation. Narrative history. Situated responsibility. Relational asymmetry. Tacit social meaning. Unquantified vulnerability. The possibility that not all harms belong in one metric space at all, ad infinitum… None of these can be rendered as a tractable variable, and what can’t be rendered as a tractable variable isn’t merely weighed lightly; it isn’t weighed at all. Humans bask in their hubris, the purported ability to tame complexity, but their track record tells a different story.

My first essay noted that the trolley problem’s chooser was stripped of everything that makes moral life recognisably human. The autonomous system completes that stripping and makes it permanent. The philosophy student might resist the grammar inarticulately – might feel, without quite being able to say why, that something has been left out. The machine has no such unease. It has no friction, no nagging sense that the map has omitted something important about the territory. It just acts within the ontology it’s given; and the ontology was given by people who inherited it from a thought experiment that was never adequate from the start. Compilation doesn’t merely omit moral texture; it excludes whatever can’t survive formalisation – another procrustean bed. And unlike a bad philosophical argument, which can be refuted, published against, or simply ignored, a bad ontology compiled into infrastructure governs silently. It doesn’t announce its assumptions or invite dissent. It just administers – mini Eichmanns in waiting.

The trolley problem asked what you’d do at the lever. It at least had the decency to pretend you were present for the decision. The autonomous vehicle has already been told what counts – by engineers who mistake ontology for specification, by a machine that can’t question the grammar it executes. In the trolley problem, the borrowed ontology framed the question. In the autonomous vehicle, it drives the car.

Beep, beep.

If You Don’t Understand How Language Works, You Should Lose Your Licence to Comment on LLMs

Image: android robot police officer writing a citation.

The air is thick with bad takes. Scroll for five minutes and you’ll find someone announcing, usually with the pomp of a TEDx speaker, that “AI has no emotions” or “It’s not really reading.” These objections are less profound insights than they are linguistic face-plants. The problem isn’t AI. It’s the speakers’ near-total ignorance of how language works.

Audio: NotebookLM podcast on this topic.

Language as the Unseen Operating System

Language is not a transparent pane of glass onto the world. It is the operating system of thought: messy, recursive, historically contingent. Words do not descend like tablets from Sinai; they are cobbled together, repurposed, deconstructed, and misunderstood across generations.

If you don’t understand that basic condition, that language is slippery, mediated, and self-referential, then your critique of Large Language Models is just noise in the system. LLMs are language machines. To analyse them without first understanding language is like reviewing a symphony while stone deaf.

The Myth of “Emotions”

Critics obsess over whether LLMs “feel.” But feeling has never been the measure of writing. The point of a sentence is not how the author felt typing it, but whether the words move the reader. Emotional “authenticity” is irrelevant; resonance is everything.

Writers know this. Philosophers know this. LLM critics, apparently, do not. They confuse the phenomenology of the writer with the phenomenology of the text. And in doing so, they embarrass themselves.

The Licence Test

So here’s the proposal: a licence to comment on AI. It wouldn’t be onerous. Just a few basics:

  • Semiotics 101: Know that words point to other words more than they point to things.
  • Context 101: Know that meaning arises from use, not from divine correspondence.
  • Critical Theory 101: Know that language carries baggage (cultural, historical, and emotional) that doesn’t belong to the machine or the individual speaker.

Fail these, and you’re not cleared to drive your hot takes onto the information superhighway.

Meta Matters

I’ve explored some of this in more detail elsewhere (link to Ridley Park’s “Myth of Emotion”), but the higher-level point is this: debates about AI are downstream of debates about language. If you don’t grasp the latter, your pronouncements on the former are theatre, not analysis.

Philosophy has spent centuries dismantling the fantasy of words as perfect mirrors of the world. It’s perverse that so many people skip that homework and then lecture AI about “meaning” and “feeling.”