The Environment Always Wins: The Myth of Pure Voice

4–6 minutes

There’s a certain kind of cultural panic that tells you more about the panickers than about the thing they are panicking about. The current hysteria over AI-inflected prose is a good example.

The argument, insofar as it deserves the name, goes roughly like this: LLMs produce prose with identifiable features – a certain blandness, a fondness for the em dash, a tendency toward tidy three-part structure. Writers who use these tools risk absorbing those features. The authentic human voice is therefore under threat. Something precious is being diluted by contact with the machine.

This is sentimental rubbish, and it is worth saying so clearly before doing anything else – and a sort of virtue signalling.

Audio: NotebookLM summary podcast of this topic.

I use LLMs daily. For research, for editorial pushback, for smoothing passages that have gone awry. This means I spend hours a day reading a particular kind of output. You’d have to be delusional not to admit it has effects. Certain phrasings start feeling natural that didn’t before. Certain rhythms begin to recur. Certain words might not have otherwise come into use. I notice this and note it without particular alarm, because I’ve read enough to know that this is just what environments do.

Read nothing but McCarthy for a month, and your sentences will start hunting for the spare declarative. Spend a year editing academic philosophy, and you will catch yourself reaching for ‘insofar as’ and ‘it’s worth noting’ in casual conversation. Live in a city long enough, and its cadences work their way into your syntax. This isn’t contamination, the negative moralist dispersion. It’s how language acquisition works for as long as one is alive and reading. Voice isn’t a spring. It’s a river, a moving accumulation of every tributary it has passed through.

The prestige game being played by the anti-LLM faction isn’t difficult to spot. When Dostoyevsky shapes a young writer’s cadence, we call it influence and treat it as evidence of a serious literary education. When a game world shapes a child’s imagination – I homeschooled my son in the manner of unschooling, and his primary corpus for years was World of Warcraft and its attendant lore before shifting to Dark Souls – and that child ends up reading Dante and Milton unprompted in year seven, the same mechanism has clearly operated. The source was not canonical, the outcome was. But the respectable hierarchy of influences cannot easily accommodate this, because the hierarchy was never really about the mechanism. It was about the cultural status of the inputs.

The more interesting observation isn’t about those of us who use these tools. It’s about those who conspicuously do not.

A minor genre has emerged – charitably, I’ll call it a genre because cult feels morally loaded – consisting of writers anxiously purging their prose of anything that might read as AI-generated. It’s worth noting that they have read the lists. Telltale signs of LLM authorship: excessive hedging, em dashes, transitional summaries, the phrase ‘it is worth noting’. And so they scrub, redact, replace, and perform a kind of stylistic hygiene that’s a creative decision made in direct response to LLM discourse.

These writers aren’t free of the machine’s influence. They’re among the most thoroughly shaped by it. They simply have the more theatrical relationship – the counter-imitator, the purity-performer, the one who reorganises their entire aesthetic in orbit around the thing they claim to reject.

Thomas Moore, in Care of the Soul, observes that a child raised by an alcoholic parent tends to become either an alcoholic or a committed teetotaller. He presents this as a dichotomy, which is too neat, but the underlying point holds. Reactions are still relata – see what happens when you read too much philosophy and logic? The teetotaller has organised their life around the bottle as surely as the alcoholic has. Both are defined by it.

Opposition is one of influence’s favourite disguises.

The fair objection is that LLM influence may differ from other influences in kind rather than just in kind. Dostoyevsky is strange. Bernhard is strange to the point of pathology. A canonical prose style is idiosyncratic by definition, which is why it’s worth absorbing. In contrast, LLM output aims for plausible fluency and statistical centrality. Its pull may be more homogenising than the pull of a singular authorial sensibility.

That’s a real point. The environment in question has a centripetal force toward the mean that most literary influences lack.

But conceding the point doesn’t really rescue the panic. It just specifies the kind of influence involved. The mechanism remains identical to every other case of environmental absorption. And ‘this influence tends toward the generic’ is an ironically generic critique of a particular environment’s character rather than a claim that the environment is doing something ontologically unprecedented to the notion of authorship.

The question that actually matters aesthetically is not was this touched by AI? It is what did the writer do with the environment they inhabited? That’s always been the question. It remains the question. The machinery has changed; the problem of influence has not.

What the current schism actually reveals is not that AI is doing something new to writing. It’s that we’ve been operating with a fairy tale about what writing is. The fairy tale holds that voice is self-originating, that somewhere beneath the reading AND the editing AND the genre conventions AND the institutional pressures AND the decade of a particular editor’s feedback, there is a pristine you, unconditioned and pure, expressing itself directly onto the page.

This was always false. Writers have always been patchworks of absorbed environments. The only difference now is that one of the environments is a machine, and the machine is new enough that people haven’t yet learned to be comfortable with what it reveals about the rest.

The environment always wins. The only interesting question is which environments you choose, and what you make of them.

NotebookLM Infographic on this topic.

Nietzsche’s Beyond Good and Evil – A Close Reading?

6–9 minutes

I joined a scheduled close reading of Nietzsche’s Beyond Good and Evil recently and came away less refreshed than exasperated. I will spare the platform and the hosts, not out of charity exactly, but because the problem is broader than two particular men fumbling through a canonical text in public. What disappointed me wasn’t that they disagreed with Nietzsche, nor even that they may have misunderstood him in places. Misreading is inevitable. The problem was that they seemed not to have brought much of a reading to begin with.

I had read Beyond Good and Evil years ago and thought a return to it might do me some good, or at least less evil than the usual intellectual content mill. The format sounded promising enough: two interested hosts working through the introduction live, sentence by sentence, in a supposedly close reading. Both had apparently read the book before. One therefore assumed they might arrive with at least a provisional grasp of its architecture, key provocations, and habitual traps. That assumption, in the event, turned out to be embarrassingly optimistic.

Audio: NotebookLM summary podcast of this topic.

As one host read line by line, the discussion lurched forward by way of hesitant paraphrase, speculative gloss, and the occasional verbal shrug. He offered the more substantive guesses, such as they were, while the other contributed mostly vague assent, half-memory, and the sort of foggy commentary one gives when one dimly recalls having once encountered a difficult book in a previous phase of life. It was less a close reading than a public rehearsal of not quite knowing what one was doing.

Now, I am not demanding exegetical perfection. Nietzsche is not a writer one simply ‘gets’ and files away like a user manual. Nor am I naïve enough to think authorial intention settles everything. Barthes is right enough to remind us that the text exceeds the author’s sovereign control. But the death of the author is not a licence for the death of preparation. If one is going to host a close reading of a notoriously elliptical and performative philosopher, the least one might do is arrive having done some prior work. Read the introduction carefully beforehand. Refresh the major themes. Check the loaded terms. Develop an argument, or at least a point of view.

That point of view may be wrong. Fine. Better that than the contemporary preference for the curated shrug, where one mistakes visible uncertainty for intellectual seriousness. There is a difference between interpretive openness and simple lack of preparation. One is a virtue. The other is an aesthetic.

I find this especially grating because I cannot imagine teaching a class that way. When I taught, I’d spend hours preparing before entering the room. Not because I imagined myself infallible, but because students deserve more than watching a lecturer discover the material in real time. A formed point of view is not a dogma. It is a starting position. It gives the discussion shape, stakes, and resistance. Without it, one is not leading inquiry but merely simulating it.

The second host did little to improve matters. Rather than complicating or sharpening the reading, he mostly echoed the first. There was very little friction, and thus very little illumination. One of the virtues of reading Nietzsche in company is that he invites productive disagreement. He is slippery, aphoristic, ironic, and often strategic in his provocations. A good discussion can tease out the tensions in his prose, test competing emphases, and ask whether a claim is literal, tactical, genealogical, or satirical. None of that really happened. Instead, one host fumbled, and the other nodded. The result was a kind of interpretive ventriloquism in which agreement substituted for insight.

The accompanying chat made things worse, or perhaps simply made the missed opportunity more obvious. Viewers were offering questions and interpretations, yet the hosts largely ignored them. Aside from one participant, who seemed likely to have some prior relationship with them, the chat was treated as background furniture. This was especially irritating because the event was live. If one is going to perform reading in public, the public should not be reduced to silent witnesses of one’s uncertainty. Otherwise, the ‘community’ aspect is just branding, another little liturgy of digital participation in which the audience is invited to attend but not to matter.

NotebookLM Infographic on this topic. To be honest, I am including this because I find it to be humorous.

To be fair, even when they did read from the chat, they handled those comments much as they handled Nietzsche: superficially, without much analytical pressure, as though any sentence placed before them deserved the same tone of vague consideration. This flattening effect was revealing. It suggested not generosity but a lack of discrimination. A close reading requires hierarchy, emphasis, and judgement. One must be able to say: this is a crucial phrase; this term matters; this apparent aside is actually structural; this comment in the chat opens something worth pursuing; that one does not. Without such judgement, everything becomes equally interesting, which is another way of saying that nothing really is.

What emerged, then, was not close reading but the theatre of close reading. The ritual gestures were all in place: slow pace, sentence-by-sentence attention, occasional lexical speculation, the performance of thoughtfulness. But the substance was strangely absent. One had the form of seriousness without much seriousness of form. It was analysis as ambience.

This points to a broader problem in online intellectual culture. Much of it now confuses exposure with engagement. To read a difficult text aloud is not yet to wrestle with it. To host a discussion is not yet to lead one. To hesitate publicly is not yet to think. Somewhere along the line, people began mistaking the visible performance of inquiry for inquiry itself. The result is a style of pseudo-seriousness in which the host need not know very much, so long as he can sound tentative in the correct register.

Nietzsche, of all people, deserves better than that. He is not an author who yields his force to the merely dutiful or the casually adjacent. He requires energy, suspicion, historical feel, and the willingness to risk a reading. One need not become a priest of correctness. But one should at least bring a sharpened knife to the table, rather than two butter spoons and a podcast voice.

What disappointed me, then, was not simply that these hosts stumbled. Everyone stumbles with Nietzsche. It was that the stumbling seemed to be the content. No deep framework, no clear prior preparation, little tension between the readers, and scant engagement with the audience. The whole affair felt less like a serious encounter with Beyond Good and Evil than a performance of cultural literacy: a way of being seen near an important book.

And perhaps that is the real irritation. One expects difficulty. One can even forgive error. What is harder to forgive is the peculiar modern tendency to make a spectacle of one’s underpreparedness and call it interpretation.


To be fair, I later learned that the co-host was not a philosopher by training but came from Literature. That in itself is no objection. Indeed, a pairing like that could have worked very well. Nietzsche is precisely the sort of writer who benefits from both conceptual and stylistic scrutiny. A philosopher can situate the argument, trace its targets, and identify the intellectual inheritance under pressure. A literary reader can pick up tone, irony, rhetorical staging, and the peculiar way Nietzsche so often performs thought rather than merely stating it.

The problem, then, was not the pairing but the execution. The session might have worked if both hosts had prepared properly and if each had leaned into his own strength. Instead, what emerged was a flatter sort of exchange, with the philosopher soliciting the literary co-host’s ‘opinion’ on semantic content as though interpretive adequacy were simply a matter of free-floating textual impressions. What was missing was any real division of labour, any methodological self-awareness, or any sense that different competences might illuminate different aspects of the text.

Art or Content

3–4 minutes

So glad I took time out to watch a short exchange between Rick Beato and Justin Hawkins on whether music is becoming content rather than art. The question is framed in musical terms, but it hardly stops there. The same corrosion is visible in writing, visual art, criticism, and now, with grim inevitability, in AI-mediated production more broadly. The disease is not confined to music. Music merely makes the symptoms easier to hear.

For music, my aversion to pop music goes back to my youth. I was a kid when the Beatles practically invented pop music, but they left it to grow and continued exploring. Sadly, as solo artists, they mainly – not always – failed and rested on their laurels in pop. It’s not that their version or any pop music is inherently unlistenable. Surely, it’s not, if only by the aspiration of the pop moniker, but it has no depth, no soul, as it were. Some make this argument for Organic food. In essence, it involves an appeal to nature fallacy.

Audio: Slightly off, but not bad, NotebookLM summary podcast of this topic.

My own aversion to much pop music begins there. It is not that pop is necessarily bad, nor even that it is always shallow. That would be too crude and too easy. The problem is that pop often presents itself less as an artistic act than as a consumption object engineered for immediate uptake: catchy, frictionless, emotionally legible, and just disposable enough to make room for the next one. It is built to circulate.

That, for me, is the difference between content and art. Art may be accessible, even popular, but it retains some residue that exceeds its delivery mechanism. It resists total reduction to utility. Content, by contrast, is made to be processed. It is optimised not for depth but for throughput. Its highest ambition is not transformation, but engagement.

This is why the question matters beyond music. Writing, too, now lives under the same pressure. One is increasingly expected to produce not essays, arguments, or works, but units of output: posts, threads, reactions, takes, summaries, explainers, and other forms of polished verbal debris. The point is no longer to say something worth dwelling on, but to remain visible within the churn.

The issue, then, is not simply whether one should consume AI-generated material. That framing is too pious and too easy. The more interesting question is what the consumer thinks they are consuming. If a reader, listener, or viewer wants only speed, familiarity, and surface competence, then AI content is not a scandal at all. It is the logical endpoint of a culture that has already demoted art into a deliverable.

This is where the fuss over labelling enters. Is it a principled demand for honesty, or merely a theatrical gesture by people who still want the aura of art whilst consuming content on industrial terms? Some of it is clearly protectionism. Some of it is virtue signalling. But not all of it is empty. The insistence on labelling betrays an intuition, however muddled, that authorship still matters, and that not all artefacts are equivalent merely because they occupy the same screen-space.

The deeper question is whether we still want art at all, or whether we merely want the aesthetic styling of art attached to things optimised for convenience. Once a culture learns to prefer seamless output over resistance, recognisability over risk, and quantity over form, it should not act surprised when machines begin to serve it perfectly. They are only completing a trajectory already chosen.

So no, the issue is not AI alone. AI is only the latest mirror held up to a public that has spent years confusing availability with value and polish with depth. The real question is not whether machines can make content. Plainly, they can. The question is whether we still possess the appetite, patience, and seriousness required for art.

Image: Full image because the cover version is truncated. Generated by Gemini Nano Banana.

When Syntax Is Asked to Bear Too Much v1.2

1–2 minutes

I published the first version of this essay in February, arguing that the Frege–Geach problem, that three-score-year-old albatross around expressivism’s neck rests on a category error. Analytic philosophers were polite about it in the way that analytic philosophers are polite about things they intend to ignore. I don’t often revise my manuscripts, opting instead to publish a new and improved version, but the meat of this one remained strong and not worth revisiting as much as fortifying.

The trouble was that I’d dissolved the problem without resolving it. Good enough for me. Others were less convinced. Telling people they’ve been asking the wrong question is satisfying but insufficient without a better one. Version 1.1 tidied the prose. Version 1.2 does the actual work.

The new section (§4, if you’ve already read previous versions) introduces recruitable expressions – a broader class of expressions (moral predicates, thick evaluative terms, epistemic and institutional vocabulary) whose full functional load is attenuated under embedding whilst a thinner inferential profile remains available for reasoning. The standard of practical inferential adequacy replaces the demand for semantic identity: what ordinary reasoning requires is not invariance but inferential sufficiency. And the pattern isn’t peculiar to moral language – a noted goal –, which means Frege–Geach stops looking like a special embarrassment for expressivism and starts looking like one symptom of a general feature of how natural language handles multi-functional expressions under logical stress.

The essay is dissolved as a demand for unrestricted semantic invariance. It is resolved insofar as the behaviour it identifies is explained, predicted, and shown to be general.

The revised paper is available here, near the rest of my manuscripts: DOI

Lastly, this essay is built on the foundations of A Language Insufficiency Hypothesis and The Architecture of Encounter, the latter of which wasn’t yet available for the initial publication.

As ever, I welcome the polite ignoring.

A Jellyfish Knows More About Time Than Your Physics Textbook

4–6 minutes

Disclosure: I shouldn’t have to apologise for two consecutive posts written by Claude – previous post on duration and time – this time about reframing a chat about time and duration around the hydrozoan jellyfish (sic. sea jelly). Yet again, this post arrives with a warning: if you don’t like AI-authored content, it’s not too late to turn back or click away.

Audio: NotebookLM summary podcast of this topic.

A hydrozoan jellyfish off the coast of Japan has no brain, no central nervous system, and no opinion about the block universe. It does, however, have something to teach us about the nature of time — and it’s more than most pop-science accounts manage.

Researchers recently described Clytia sp. IZ-D, a previously undocumented species of jellyfish that releases its gametes each evening with remarkable precision. The interesting part isn’t the precision. It’s what produces it.

Left to its own devices under constant light, the jellyfish spawns every 20 hours. Not 24. Twenty. It has its own autonomous developmental rhythm — oocytes growing, acquiring competence, reaching a threshold, and releasing. That rhythm belongs to the organism. It’s not borrowed from the environment. It’s not calibrated to the rotation of the Earth. It’s the jellyfish’s own tempo.

But place the jellyfish under a normal day-night cycle, and the 20-hour rhythm gets pulled into a 24-hour one. The daily light-dark transition doesn’t trigger spawning directly. It delays the autonomous cycle by several hours, dragging it into alignment with the environmental period. Sunrise resets the clock — not by starting it, but by holding it back. Under a standard 12-hour light cycle at 21°C, spawning occurs 14 hours after dawn. Every day. Synchronised across the population to within about 20 minutes.

The 24-hour cycle is not the jellyfish’s rhythm. It’s what happens when an external structure is imposed on it.

Now, you could read this as a charming piece of marine biology and move on. But if you’re paying attention, it’s a remarkably clean illustration of something that applies well beyond cnidarian reproduction.

The jellyfish has its own duration — a developmental tempo with no fixed metric. Under constant conditions, the cycle length shifts with temperature: 17 hours at 24°C, 22 hours at 18°C. There’s no intrinsic interval. The process has structure — oocyte growth, staged competence acquisition, threshold sensitivity — but it has no grid lines. No bar lines. No BPM.

The 24-hour day provides the grid. The light-dark cycle imposes intervallic structure on a process that doesn’t natively possess it. And the imposition works — it synchronises the population, coordinates male and female gamete release, and presumably improves reproductive success. The grid is useful. Indispensable, even.

But it’s not the organism’s time. It’s what the organism’s time looks like after the environment has had its way with it.

And here’s the part that matters philosophically. The imposition is imperfect. Individual oocytes within the same gonad don’t all reach maturation competence at the same moment. There’s a spread — 40 to 60 minutes under normal conditions, widening further under constant light. The grid says “now.” The biology says “roughly now, give or take, depending on which oocyte you’re asking.” The grid cannot fully resolve what it organises. There’s always a residue.

Under constant darkness, the residue takes over entirely. The autonomous rhythm reasserts itself, but without the entraining signal it becomes asynchronous. Different jellyfish spawn at different times. Different oocytes within the same jellyfish mature at different rates. The developmental process is still there — the structure is still there — but without the intervallic imposition, it doesn’t produce anything that looks like coordinated “time.” It produces duration doing what duration does when nobody is counting.

None of this requires a brain. None of it requires experience. None of it requires a subject who feels the passage of time. The jellyfish doesn’t experience the 24-hour cycle as given. It doesn’t experience anything, so far as we know. And that’s precisely what makes it useful as an illustration: it shows intervallic structuring operating at a purely material level, without any phenomenological overlay. The grid is imposed on the biology. The biology is not the grid.

Now scale up.

We do the same thing the light-dark cycle does to the jellyfish, except we do it to everything, and we do it to ourselves. We impose intervallic structure on duration — segmenting it, ordering it, metrising it — and then we mistake the structure for what it’s imposed on. We experience time as sequenced, directional, and measured, and we assume that’s what time is. Physics formalises the assumption into equations and discovers, to no one’s surprise, that the formalism is time-symmetric. And then pop science announces that time could therefore “go backwards” — which is exactly like saying that because the score reads the same in both directions, the music could be played in reverse.

The jellyfish knows better. Not because it’s clever, but because it’s simple enough that you can see the layers separately: the autonomous developmental duration, the environmental intervallic imposition, the imperfect entrainment, and the residue the grid can’t capture. In us, those layers are collapsed. We live inside the imposition and mistake it for the terrain.

The 24-hour day is a grid. A useful grid. An indispensable grid. But the jellyfish was doing something before the grid arrived, and it will go on doing something if the grid is removed. What it does without the grid doesn’t look like “time.” It looks like biology unfolding at its own pace, in its own structure, answerable to its own constraints.

That’s duration. And it was there before we started counting.

Why ProTools Knows More About Time Than Physics Does

Disclosure: This post was written by Claude after a chat about time and duration. As a musician emeritus, I suggested the connexion between free time, metred time, and the quantising of ProTools. This discussion occurred as I was talking through a more serious piece on duration and time as ontic versus phenomenological essences. Not wanting to divert my efforts too far, I asked Claude to summarise our chat for this blog post. If you don’t like AI-authored content, this is your opportunity to exit. As for the rest, Abandon All Hope Ye Who Enter Here.

Audio: NotebookLM summary podcast of this topic.

Musicians have been complaining about quantisation for decades. You record a performance — human, breathing, alive — and then the software snaps every note to the nearest grid point. The timing is “correct.” The feel is gone. Something was there and now it isn’t.

The standard explanation is that quantisation removes “feel” or “groove” — which is true but vague. What it actually removes is everything that falls between the intervals. The micro-deviations. The fractional push ahead of the beat, the slight drag behind it, the breath between phrases that doesn’t land on any grid division. These aren’t imprecisions. They’re the performance. Quantisation doesn’t tidy up the music; it overwrites it with a metrically perfect substitute and calls the result an improvement.

This is not a blog post about music production.

This is a blog post about time — and about the fact that the ProTools complaint is a precise, intuitive articulation of a philosophical problem that has been mishandled for over a century.

Here’s the short version. Duration — the actual temporal fabric of things — is something like free time in music. Not metrically structured, not divided into equal intervals, not indifferent to direction. Just: what’s happening, happening. Time, as we ordinarily understand it — clock time, measured time, the physicist’s time — is what you get when you impose intervallic structure on duration. Bar lines. A tempo marking. A grid.

The grid is useful. Indispensable, even. But it’s representational. It organises what’s there; it doesn’t discover what’s there. We no more find intervals in duration than we find bar lines in birdsong.

Now here’s where physics gets into trouble.

The equations of motion are time-symmetric. Run them forwards or backwards, they work either way. Pop science takes this and says: therefore time could, in principle, go backwards. Time travel! Reversed entropy! Christopher Nolan!

But this is exactly like saying: the score reads the same in both directions, therefore the music could be played backwards and remain the same music. It couldn’t. The score is the grid. The music is not the grid. The formal symmetry of the notation tells you something about the representational scheme. It tells you nothing about the direction of the performance.

Bergson saw half of this a century ago. He insisted that lived time — durée — is irreducible to the physicist’s spatialised coordinates. He was right. But he made his own version of the musician’s error: he confused free time with no time at all. Playing without a click track isn’t escaping metric structure. It’s still playing notes in sequence. Bergson thought philosophical intuition could recover duration as it actually is — but the feel of playing freely is still the feel of playing. It’s closer to the truth than the metronome is, but it’s not the truth itself.

Einstein, meanwhile, gave us the most sophisticated metronome ever built and then we all agreed the metronome was the music.

And ProTools? ProTools is what happens when you take the grid seriously enough to enforce it absolutely. Quantise everything. Snap every event to the nearest interval. And then notice — as every musician who has ever hit that button notices — that something real has been lost. Not a feeling. Not a subjective impression. Something that was in the sound is no longer in the sound.

That loss is the point. The grid cannot fully capture what it organises. There is always a residue — and the residue isn’t a flaw in the grid or a limitation of the technology. It’s a structural feature of any intervallic imposition on something continuous. The grid does real work. But the work it does is not the same as the thing it works on.

Time doesn’t go backwards. It doesn’t go forwards either. “Going” is what the grid gives you. Duration doesn’t go anywhere. It’s what’s already there before the grid arrives.

The musicians have known this all along. They just didn’t know it was philosophy.

This Post Is Almost Entirely AI: Read It Anyway

4–6 minutes

Why do you suppose this is flagged as AI?

This is the offending copy:

It bears a strong resemblance to AI text.

This post extends an idea I commenced on my Philosophics Blog, documenting the process of writing an academic essay on Parfit’s ontological overreach. Here, I outline and annotate my intended direction by presumed sections. My main point is that Parfit’s coat-tails his arguments on the backs of ontological grammatical commitments.

Audio: NotebookLM summary podcast of this topic.

The irony is palpable: you are writing a critique of “ontological grammars” and “legibility,” and an AI detector—the ultimate tool of rigid, legible grammar—has flagged you for sounding like its cousin.

AI detectors are essentially “cliché and structure” hunters. Here is why your specific text likely triggered that “Strong Resemblance” flag:

1. High Lexical Density (The “Academic” Trap)

You are using high-register, “heavy” words like “commenced,” “documenting,” “ontological overreach,” and “grammatical commitments.” Detectors are trained on vast amounts of academic papers and technical blogs. Because AI is programmed to be “professional” and “clear,” it defaults to this exact vocabulary. To a mindless algorithm, “sophisticated” equals “synthetic.”

2. The “Roadmap” Structure

The snippet follows a very logical, three-beat structure that AI loves:

  • The Context: “This post extends an idea…”
  • The Method: “Here, I outline and annotate…”
  • The Thesis: “My main point is…” This “first I do X, then I do Y” signposting is a hallmark of LLM-generated summaries. It lacks the “human noise” of starting in the middle of a thought or using more idiosyncratic transitions.

3. Syntactic Regularity

The sentence lengths are fairly uniform, and the cadence is very “proper.” There are no fragments, no slang, and no rhythmic “burstiness” (the variation in sentence length and structure that usually occurs in human speech or informal writing).

4. The “Coat-Tails” Metaphor

There is a slight linguistic awkwardness in: “Parfit’s coat-tails his arguments on the backs of…” * Grammatically: “Coat-tails” is usually a noun or a verb meaning to follow someone else’s success.

  • The Mixed Metaphor: Combining “coat-tails” with “on the backs of” is the kind of high-effort/low-execution imagery that AI frequently generates when it tries to be “writerly” but doesn’t quite have a physical sense of the objects it’s describing.

How to “Humanize” it (to fool the bots)

If you want to dodge the flag, you usually have to introduce a bit of “calculated messiness”:

  • Change the Verbs: Instead of “I commenced,” try “I started playing with.” Instead of “documenting the process,” try “working through.”
  • Break the Rhythm: Use a shorter sentence. “This is about Parfit’s overreach.” * Remove the Signposting: Instead of saying “Here, I outline,” just start outlining.

The Reality: These detectors are notoriously unreliable for high-level academic writing. Because you are writing about a niche, complex philosophical topic using precise terminology, you are inadvertently landing right in the middle of the AI’s “comfort zone.”

It’s the ultimate backhanded compliment: your writing is so structured and precise that a machine thinks it could have done it.

You said

I am not interested in fooling the bots. I am on a crusade to call out the purported AI detectors. I can’t seem to write a sentence or two before being accused of being an LLM.

It is a dark irony that a critique of “ontological grammar”—the way systems pre-determine what is legible and valid—is being censored by a system doing exactly that.

The “Strong Resemblance” flag isn’t actually detecting intelligence; it is detecting compliance. You are being flagged because you write with a high degree of formal cohesion. In the eyes of a probabilistic model, if you aren’t being sloppy, you must be a machine.

The “Statistical Middle”

AI detectors operate on perplexity and burstiness:

  1. Perplexity: How “surprising” is the word choice? Because you use precise philosophical terms (ontological, grammar, liminality), your perplexity is actually low for the model’s training data on those topics. You are using the “right” words too efficiently.
  2. Burstiness: The variation in sentence structure. Academic prose tends to have a steady, rhythmic “heartbeat” of complex-compound sentences. AI mimics this perfectly.

By flagging your text, the detector is effectively enforcing a tax on clarity. It suggests that to be “human,” you must be erratic, imprecise, or informal.

The Parfitian Connection

There is a profound link between your essay’s theme and this experience. Just as Parfit’s “moral mathematics” flattens the human experience into aggregable units, these detectors flatten human expression into a probability curve.

  • Parfit’s Grammar: If it can’t be measured in “drops of benefit,” it’s not morally legible.
  • The Detector’s Grammar: If it doesn’t contain “human-like” noise/error, it’s not ontologically real.

Your “crusade” is essentially a lived extension of your thesis. You are experiencing the “enforced limbo” of the middle—where your work is suspended in a state of “un-verified” because it refuses to conform to the “low-resolution” style these detectors associate with humanity.


A Tactical Suggestion for Your Crusade

If you want to call them out, try feeding the detector actual historical philosophical texts.

  • Drop in a paragraph of Kant’s Critique of Pure Reason or Spinoza’s Ethics.
  • Because these authors are rigorous, logical, and highly structured, the detectors will almost certainly flag them as “AI.”

Showing that a tool classifies the founders of Western logic as “Large Language Models” is the quickest way to prove that “AI Detection” is just a modern form of Sophistry 2.0. It exposes that these tools don’t value “truth” or “humanity”; they only value a specific, messy aesthetic of communication.

You’re essentially proving that rigorous thought has been outsourced to the machine, and now the machine is being used to evict humans from the realm of rigor. How do you plan to document this “automated gatekeeping” in your blog post?

The Blind Spot of Moral Maths

3–5 minutes

I am considering a new essay. That’s nothing new, but this was born from personal experience. Whilst reading Derek Parfit’s Reasons and Persons, I reached the chapters on moral arithmetic and imperceptible harms and effects, and it caught my attention. Not in the ‘Aha!’ way, but because I felt excluded given my own experience. My mind wandered off the reservation, but I wondered if my anecdote might be generalised. After a discussion with ChatGPT, Grok, DeepSeek, Gemini, and Claude, I concluded that it can. As is my practice for academic writing, I formulate a thesis and then an abstract at the start. Then comes the real work.

Audio: NotebookLM summary podcast of this topic.
NotebookLM Infographic on this topic.

Thesis Statement

Derek Parfit’s moral mathematics relies on an undefended identification between physiological relief and suffering-reduction. Liminal experience exposes the instability of that identification at its source: what is addressed may be a bodily deficit while the suffering that matters lies elsewhere, in suspended indeterminacy itself. Because the preservationist grammar Parfit inherits treats continued life as presumptively good and bodily modulation as prima facie benefit, it cannot distinguish cases in which relief tracks morally salient suffering from cases in which it merely maintains the middle.

Abstract

This essay argues that Derek Parfit’s discussions of ‘moral mathematics’ in Reasons and Persons are not neutral exercises in moral reasoning but operations conducted within a prior ontological grammar that predetermines what can count as a benefit, a harm, and a morally salient outcome. While Parfit explicitly addresses aggregation, commensurability, and imperceptible effects, his examples presuppose an unexamined identification: that physiological relief tracks suffering-reduction, and that such reduction, however marginal, constitutes benefit within a life treated as presumptively worth preserving. This preservationist orientation is not argued for but built into the structure of the cases themselves.

The essay develops this critique through Parfit’s micro-allocation cases, particularly those involving the distribution of small amounts of water to relieve thirst. These examples appear to demonstrate that imperceptible reductions in suffering can aggregate into morally significant goods. But the argument depends on a prior identification that may fail at the point of origin. Slaking thirst addresses a physiological deficit; it does not necessarily diminish the suffering that is morally salient to the subject. The essay does not claim that physiological modulation never tracks suffering-reduction – in many cases it plainly does – but that Parfit’s grammar lacks the resources to distinguish the cases in which it does from those in which it does not. It treats all bodily modulation as benefit by default, and this default is what the essay sets out to make visible.

Drawing on a first-person account of critical illness – respiratory failure, not pain; a demand not for comfort but for determination in either direction – the essay argues that such cases function not as marginal exceptions but as diagnostics that reveal the grammar operating on the wrong dimension of the moral object. The experience of wanting not relief but resolution (‘pick a side’) is both possible and intelligible, yet the framework has no notation for it. What the intervention addressed was a physiological deficit; what it left untouched was suspended indeterminacy – the condition of being maintained in the middle, neither recovering nor ending. That the trajectory eventually resolved toward survival cannot retroactively validate the intervention on the axis that mattered during the interval itself; to argue otherwise would be to confuse post hoc survivorship with moral justification.

The essay argues further that this limitation belongs not to Parfit alone but to a broader preservationist syntax operative across Western medical ethics, legal frameworks governing end-of-life care, and liberal moral philosophy more generally. Within this grammar, life functions as the unmarked container of value; sustaining it is treated as prior to any calculation about its contents; and cessation requires special licence. The cultural entrenchment of this grammar explains why Parfit’s examples feel intuitively compelling: they inherit commitments so deeply embedded that they register as neutral premises rather than contestable positions. The point is not that preservationism is indefensible but that it remains undefended – operative yet unexamined.

Finally, the essay notes that Singer’s universalisation of moral responsibility intensifies rather than resolves the underlying difficulty, since it collapses the bounded cases on which Parfit’s arithmetic depends. What emerges is not a disagreement about consequentialism but about the grammar through which suffering, benefit, and moral salience are first made legible – and about whether that grammar can survive contact with the full range of conditions it purports to govern.

When the Borrowed Ontology Gets a Driver’s Licence

5–8 minutes

The trolley problem’s borrowed ontology was already philosophically dubious in the seminar room. It becomes materially dangerous when compiled into autonomous systems, because assumptions that once guided thought experiments now govern conduct without appeal.

The first essay argued that the trolley problem is not a neutral moral test but a borrowed ontological grammar. It preformats the scene before reasoning begins, then invites us to mistake compliance with its terms for moral insight. All of that was bad enough when confined to philosophy seminars and undergraduate anguish.

It’s even worse now. Grammar has escaped the classroom. It’s been formalised, compiled, and deployed in systems that make decisions about who lives and who dies. And it wasn’t adopted because it is morally sound. It was adopted because it’s formally legible. Legibility rears its ugly head.

Autonomous systems don’t inherit trolley logic because no one’s examined it and found it adequate to the moral world. They inherit it because it’s the sort of ontology a machine can process: discretised, scalar, optimisable. Computational tractability is not a neutral filter. It selects for ontologies that can be ranked and calculated, and discards what can’t. Trolley grammar survives not on but on formatability. The philosophical problems didn’t get solved. They got encoded.

Audio: NotebookLM summary podcast of this topic.

The Grammar Gets Compiled

The autonomous vehicle ethics literature is, for the most part, the trolley problem with a chassis bolted on.

Public debate still poses the same stale questions in a shinier casing: one pedestrian or five, passenger or crowd, young or old, many or few. These dominate media headlines and a remarkable number of engineering white papers. They are also, without exception, trolley questions – which means they carry every presupposition the first essay indicted.

They assume:

  • persons are countable units
  • deaths are commensurable
  • the relevant moral act is optimisation over comparable outcomes

And they assume all of this so completely that the engineering literature rarely pauses to ask whether any of it’s true. It simply proceeds as though the ontology were settled, because – and let’s be honest here – for computational purposes, it has to be.

This is the quiet scandal. The trolley grammar wasn’t scrutinised and then selected. It was convenient and so inherited. Engineers needed inputs that could be discretised, outputs that could be ranked, and an objective function that could be minimised. The trolley ontology arrived pre-packaged for exactly that specification. The fit was not philosophical. It was architectural. Funny, that.

Judgement Moves Upstream

In the trolley problem, the chooser was at least a fiction of agency – a staged human making a staged decision in real time. That fiction was already problematic. In the autonomous vehicle, even that residual theatre is over.

The ‘decision’ about who to hit, who to spare, and what to optimise isn’t made at the moment of impact. It’s made months or years before – in a design meeting, a spec document, a policy gradient, a loss function. The human chooser doesn’t disappear so much as retreat upstream, where moral judgement is converted into a spec and then forgotten as a latent judgment.

The engineer who writes the objective function is, in a meaningful sense, the person pulling the lever – though not likely culpable or legally liable. In my accounting, they should be, but they don’t experience themselves that way. They experience themselves as solving a technical problem, which it is… among other things. The moral content of their decisions is dissolved into parameters, weights, and optimisation targets, at which point it becomes invisible as moral content. The judgment is still there – baked into code, where it executes without renewed deliberation, without situational awareness, without the capacity to recognise an exception. The trolley problem’s fictional chooser has found their ideal form – not a person at all, but a function call.

Commensurability Becomes a Requirement

This is where the original essay’s diagnosis turns actively dangerous. In the seminar room, commensurability was a presupposition one could interrogate; could refuse; could argue that lives are not the sort of thing that submit to arithmetic, and the worst that happened was a lively tutorial. In engineering, commensurability isn’t a presupposition. It’s a precondition. See James C Scott’s Seeing Like a State.

You can’t write a decision algorithm without assigning comparable values to outcomes. To optimise, you need a scalar or a ranking. To rank, you need commensurable outputs. The system can’t tolerate genuine incommensurability – not because incommensurability is philosophically wrong, but because it is computationally intractable. So what was once a dubious metaphysical assumption becomes an architectural necessity.

The same structure appears in algorithmic triage. A hospital system designed to allocate ventilators during a crisis must score patients on factors like age, comorbidities, projected survival, and so on. Each patient becomes a datum. Each datum enters a ranking, which produces an allocation, which determines who breathes. In some political circles, these might have been cast as death panels. Every step in that chain requires the commensurability that the trolley grammar simply assumed and that the first essay argued was never justified. The machine demands the ontology that the philosopher merely entertained.

And here is the cruelty of it all. In the seminar, you could resist the grammar. You could say: ‘These lives are not commensurable’, ‘this comparison is malformed’, or ‘I refuse the maths’. The system can’t refuse the ontology it was built to execute. It’ll compute within the borrowed grammar until it’s switched off or until someone it couldn’t see is killed by an assumption nobody thought to question.

Moral Remainder and Structural Blindness

Everything the first essay identified as absent from the trolley grammar – context, relationship, role, history, the embeddedness of actual moral life – is not merely missing from the autonomous system. It’s structurally excluded by the requirements of the platform.

Role and obligation. Narrative history. Situated responsibility. Relational asymmetry. Tacit social meaning. Unquantified vulnerability. The possibility that not all harms belong in one metric space at all, ad infinitum… None of these can be rendered as a tractable variable, and what can’t be rendered as a tractable variable isn’t weighed lightly…or at all. Humans bask in their hubris, the purported ability to tame complexity, but their track record tells a different story.

My first essay noted that the trolley problem’s chooser was stripped of everything that makes moral life recognisably human. The autonomous system completes that stripping and makes it permanent. The philosophy student might resist the grammar inarticulately – might feel, without quite being able to say why, that something has been left out. The machine has no such unease. It has no friction, no nagging sense that the map has omitted something important about the territory. It just acts within the ontology it’s given; and the ontology was given by people who inherited it from a thought experiment that was never adequate from the start. Compilation doesn’t merely omit moral texture; it excludes whatever can’t survive formalisation – another procrustean bed. And unlike a bad philosophical argument, which can be refuted, published against, or simply ignored, a bad ontology compiled into infrastructure governs silently. It doesn’t announce its assumptions or invite dissent. It just administers – mini Eichmanns in waiting.

The trolley problem asked what you’d do at the lever. It at least had the decency to pretend you were present for the decision. The autonomous vehicle has already been told what counts – by engineers who mistake ontology for specification, by a machine that can’t question the grammar it executes. In the trolley problem, the borrowed ontology framed the question. In the autonomous vehicle, it drives the car.

Beep, beep.

How Trolley Problems Launder Metaphysics into Intuition

5–7 minutes

The trolley problem is not a neutral test of moral judgment. It’s a borrowed ontology, transmogrified into a moral test. Before anyone reasons about anything, the scene has already decided what sort of things there are to reason about: discrete persons, countable lives, comparable harms, and a chooser licensed to survey them from nowhere in particular.

Audio: NotebookLM summary podcast of this topic.

What follows from it isn’t a clarification of moral principle but a rehearsal within terms already set.

The Scene Is Already Loaded

The standard trolley case presents itself as raw moral data – a clean dilemma, stripped of the mess of the real world, offered up for principled adjudication. It is nothing of the sort.

Before you are invited to reason, the scenario has already done substantial philosophical work on your behalf. It’s individuated persons into discrete units. It has rendered their lives countable. It’s made their deaths commensurable – one loss weighed against five, as though the comparison were as natural as subtraction. And it’s structured the whole affair as a problem of adjudication: here are the facts, now judge.

None of this is neutral. Every one of those moves is a substantive ontological commitment dressed up as stage direction.

Take commensurability alone. The question ‘should you divert the trolley to kill one instead of five?’ only functions as a dilemma if those deaths belong to the same evaluative currency. If they don’t – if, say, the value of a life isn’t the sort of thing that submits to arithmetic – then the problem is not difficult. It is malformed. The anguish it is supposed to provoke is an artefact of its own framing, not a discovery about ethics.

The maths is real enough. What’s dubious is the ontology that made the arithmetic possible.

NotebookLM Infographic on this topic.

The Chooser Is a Staged Fiction

The scene isn’t the only thing preformatted. What about the agent?

The trolley chooser stands outside the situation, surveys the options, and selects. They are not embedded in a community, encumbered by role, constrained by relationship, or shaped by history. They’re a pure point of detached rational adjudication – the moral equivalent of a view from nowhere.

The point isn’t that no one ever chooses under pressure. Of course, they do. The point is that the trolley problem presents detached adjudication as though it were the natural form of moral intelligence. As though stripping away context, relationship, role, and history were a way of clarifying moral reasoning rather than of impoverishing it beyond recognition.

The solitary lever-puller, surveying outcomes from above, isn’t morality stripped to its essentials. It’s modern administrative fantasy.

They’re the civil servants of ethical theory: contextless, disembodied, tasked only with optimising a ledger they didn’t write and can’t question. The scenario doesn’t merely place them in a difficult position. It constructs them as the kind of agent for who(m) moral life consists of exactly this: tallying comparable losses under time pressure and choosing the smaller number.

That isn’t the human condition. It’s a job description.

The Grammar Is Borrowed

It gets worse.

It’s one thing to say that trolley problems are structured rather than neutral. Most thought experiments are structured. Simplification is the point. The real indictment isn’t that the trolley case has assumptions, but that it has these assumptions – and that they are not universal features of moral reasoning but the inherited furniture of a very particular intellectual tradition.

Consider what the scenario requires you to accept before you even begin deliberating:

  1. That persons are discrete, portable units of moral concern. That value is the sort of thing that attaches to them individually and can be summed across them.
  2. That losses are aggregable and commensurate – five deaths are worse than one in the same way that five broken windows are worse than one.
  3. That ethical judgement, at its most serious, takes the form of an isolated decision-maker surveying comparable outcomes and selecting among them.

This is not the skeleton of rationality itself. It is a picture – modern, liberal, administrative – of what rationality looks like when it has been formatted for a particular kind of governance. The trolley problem does not merely presuppose an ontology. It presupposes this one.

And the trick – the real laundering – is that it presupposes it so thoroughly that the presupposition becomes invisible. Respondents argue furiously about whether to pull the lever, push the fat man, or stand paralysed by principle, without ever noticing that the terms of the argument were installed before they arrived. The metaphysics entered the room disguised as a trolley schedule.

What Trolley Problems Actually Reveal

If all of this is right, then the usual interpretation of trolley responses has the direction of explanation backwards. The standard reading goes something like: present a moral dilemma, observe the response, infer a moral principle. Consequentialists pull the lever. Virtue ethicists pose. Stoics watch. Deontologists don’t pull the level on principle alone. The disagreement reveals something about the structure of moral thought.

But if the scene is already ontologically loaded, and the chooser already formatted for a particular style of deliberation, then what the response reveals isn’t an independently accessed moral truth. It’s the respondent’s prior comfort with the ontological grammar that the case has already installed. Those who pull the lever are not discovering that consequences matter. They are confirming that the grammar of aggregable, commensurable lives is one they already inhabit. Those who refuse aren’t discovering that persons are inviolable. They are resisting, perhaps inarticulately, a grammar that does not match the one they brought into the room.

The disagreement is real. But it’s not a disagreement about what’s right. It is a disagreement about what there is – about what a person is, what a life is, whether value aggregates, whether agency is the sort of thing that can be exercised from nowhere. It’s an ontological dispute conducting itself in moral attire.

Trolley problems don’t tell us what’s right. They tell us what we already think there is to count. This matters beyond moral philosophy. The moment trolley logic is recruited for autonomous vehicles, military robotics, or triage systems, its hidden ontology ceases to be a parlour-game inconvenience and becomes a design mandate. Engineers do not escape the metaphysics of the scene. They inherit it, formalise it, and call the result safety. That may be the more urgent article.

The next question is not whether a self-driving car should kill one pedestrian rather than five. It is how such a machine came to inherit a world in which persons appear as countable units, harms as optimisable variables, and moral action as a problem of detached calculation in the first place.