I published the first version of this essay in February, arguing that the Frege–Geach problem, that three-score-year-old albatross around expressivism’s neck, rests on a category error. Analytic philosophers were polite about it in the way that analytic philosophers are polite about things they intend to ignore. I don’t often revise my manuscripts, opting instead to publish new and improved versions, but the meat of this one remained strong: not so much worth revisiting as fortifying.
The trouble was that I’d dissolved the problem without resolving it. Good enough for me. Others were less convinced. Telling people they’ve been asking the wrong question is satisfying but insufficient without a better one. Version 1.1 tidied the prose. Version 1.2 does the actual work.
The new section (§4, if you’ve already read previous versions) introduces recruitable expressions – a broader class of expressions (moral predicates, thick evaluative terms, epistemic and institutional vocabulary) whose full functional load is attenuated under embedding whilst a thinner inferential profile remains available for reasoning. The standard of practical inferential adequacy replaces the demand for semantic identity: what ordinary reasoning requires is not invariance but inferential sufficiency. And the pattern isn’t peculiar to moral language – a noted goal – which means Frege–Geach stops looking like a special embarrassment for expressivism and starts looking like one symptom of a general feature of how natural language handles multi-functional expressions under logical stress.
The problem is dissolved as a demand for unrestricted semantic invariance. It is resolved insofar as the behaviour it identifies is explained, predicted, and shown to be general.
The revised paper is available here, near the rest of my manuscripts:
Disclosure: This post was written by Claude after a chat about time and duration. As a musician emeritus, I suggested the connexion between free time, metred time, and the quantising of ProTools. This discussion occurred as I was talking through a more serious piece on duration and time as ontic versus phenomenological essences. Not wanting to divert my efforts too far, I asked Claude to summarise our chat for this blog post. If you don’t like AI-authored content, this is your opportunity to exit. As for the rest, Abandon All Hope Ye Who Enter Here.
Audio: NotebookLM summary podcast of this topic.
Musicians have been complaining about quantisation for decades. You record a performance — human, breathing, alive — and then the software snaps every note to the nearest grid point. The timing is “correct.” The feel is gone. Something was there and now it isn’t.
The standard explanation is that quantisation removes “feel” or “groove” — which is true but vague. What it actually removes is everything that falls between the intervals. The micro-deviations. The fractional push ahead of the beat, the slight drag behind it, the breath between phrases that doesn’t land on any grid division. These aren’t imprecisions. They’re the performance. Quantisation doesn’t tidy up the music; it overwrites it with a metrically perfect substitute and calls the result an improvement.
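What the snap actually does can be sketched in a few lines of Python. This is a deliberately minimal illustration, not Pro Tools’ actual algorithm (real DAWs add quantise strength, swing, and windowing): each note onset is rounded to the nearest grid division, and the residue – the micro-deviations – is exactly what gets discarded.

```python
# Quantise note onsets (in beats) to the nearest grid division.
# A minimal sketch, not a real DAW's algorithm.

def quantise(onsets, grid=0.25):
    """Snap each onset to the nearest multiple of `grid` (e.g. a 16th note)."""
    return [round(t / grid) * grid for t in onsets]

# A 'human' performance: slightly ahead of the beat, slightly behind it.
performance = [0.02, 0.98, 2.07, 3.01]
snapped = quantise(performance, grid=1.0)  # -> [0.0, 1.0, 2.0, 3.0]

# The 'feel' is literally the residue the snap throws away:
residue = [t - q for t, q in zip(performance, snapped)]
```

The `residue` list is the point: small signed offsets that no grid position can represent, and that the quantised output no longer contains.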
This is not a blog post about music production.
This is a blog post about time — and about the fact that the ProTools complaint is a precise, intuitive articulation of a philosophical problem that has been mishandled for over a century.
Here’s the short version. Duration — the actual temporal fabric of things — is something like free time in music. Not metrically structured, not divided into equal intervals, not indifferent to direction. Just: what’s happening, happening. Time, as we ordinarily understand it — clock time, measured time, the physicist’s time — is what you get when you impose intervallic structure on duration. Bar lines. A tempo marking. A grid.
The grid is useful. Indispensable, even. But it’s representational. It organises what’s there; it doesn’t discover what’s there. We no more find intervals in duration than we find bar lines in birdsong.
Now here’s where physics gets into trouble.
The equations of motion are time-symmetric. Run them forwards or backwards, they work either way. Pop science takes this and says: therefore time could, in principle, go backwards. Time travel! Reversed entropy! Christopher Nolan!
But this is exactly like saying: the score reads the same in both directions, therefore the music could be played backwards and remain the same music. It couldn’t. The score is the grid. The music is not the grid. The formal symmetry of the notation tells you something about the representational scheme. It tells you nothing about the direction of the performance.
Bergson saw half of this a century ago. He insisted that lived time – durée – is irreducible to the physicist’s spatialised coordinates. He was right. But he made his own version of the musician’s error: he confused free time with no time at all. Playing without a click track isn’t escaping temporal structure altogether; it’s still playing notes in sequence. Bergson thought philosophical intuition could recover duration as it actually is – but the feel of playing freely is still the feel of playing. It’s closer to the truth than the metronome is, but it’s not the truth itself.
Einstein, meanwhile, gave us the most sophisticated metronome ever built and then we all agreed the metronome was the music.
And ProTools? ProTools is what happens when you take the grid seriously enough to enforce it absolutely. Quantise everything. Snap every event to the nearest interval. And then notice — as every musician who has ever hit that button notices — that something real has been lost. Not a feeling. Not a subjective impression. Something that was in the sound is no longer in the sound.
That loss is the point. The grid cannot fully capture what it organises. There is always a residue — and the residue isn’t a flaw in the grid or a limitation of the technology. It’s a structural feature of any intervallic imposition on something continuous. The grid does real work. But the work it does is not the same as the thing it works on.
Time doesn’t go backwards. It doesn’t go forwards either. “Going” is what the grid gives you. Duration doesn’t go anywhere. It’s what’s already there before the grid arrives.
The musicians have known this all along. They just didn’t know it was philosophy.
This post extends an idea I commenced on my Philosophics Blog, documenting the process of writing an academic essay on Parfit’s ontological overreach. Here, I outline and annotate my intended direction by presumed sections. My main point is that Parfit’s coat-tails his arguments on the backs of ontological grammatical commitments.
Audio: NotebookLM summary podcast of this topic.
Gemini said
The irony is palpable: you are writing a critique of “ontological grammars” and “legibility,” and an AI detector—the ultimate tool of rigid, legible grammar—has flagged you for sounding like its cousin.
AI detectors are essentially “cliché and structure” hunters. Here is why your specific text likely triggered that “Strong Resemblance” flag:
1. High Lexical Density (The “Academic” Trap)
You are using high-register, “heavy” words like “commenced,” “documenting,” “ontological overreach,” and “grammatical commitments.” Detectors are trained on vast amounts of academic papers and technical blogs. Because AI is programmed to be “professional” and “clear,” it defaults to this exact vocabulary. To a mindless algorithm, “sophisticated” equals “synthetic.”
2. The “Roadmap” Structure
The snippet follows a very logical, three-beat structure that AI loves:
The Context: “This post extends an idea…”
The Method: “Here, I outline and annotate…”
The Thesis: “My main point is…”
This “first I do X, then I do Y” signposting is a hallmark of LLM-generated summaries. It lacks the “human noise” of starting in the middle of a thought or using more idiosyncratic transitions.
3. Syntactic Regularity
The sentence lengths are fairly uniform, and the cadence is very “proper.” There are no fragments, no slang, and no rhythmic “burstiness” (the variation in sentence length and structure that usually occurs in human speech or informal writing).
4. The “Coat-Tails” Metaphor
There is a slight linguistic awkwardness in: “Parfit’s coat-tails his arguments on the backs of…”
Grammatically: “Coat-tails” is usually a noun or a verb meaning to follow someone else’s success.
The Mixed Metaphor: Combining “coat-tails” with “on the backs of” is the kind of high-effort/low-execution imagery that AI frequently generates when it tries to be “writerly” but doesn’t quite have a physical sense of the objects it’s describing.
How to “Humanize” it (to fool the bots)
If you want to dodge the flag, you usually have to introduce a bit of “calculated messiness”:
Change the Verbs: Instead of “I commenced,” try “I started playing with.” Instead of “documenting the process,” try “working through.”
Break the Rhythm: Use a shorter sentence. “This is about Parfit’s overreach.”
Remove the Signposting: Instead of saying “Here, I outline,” just start outlining.
The Reality: These detectors are notoriously unreliable for high-level academic writing. Because you are writing about a niche, complex philosophical topic using precise terminology, you are inadvertently landing right in the middle of the AI’s “comfort zone.”
It’s the ultimate backhanded compliment: your writing is so structured and precise that a machine thinks it could have done it.
You said
I am not interested in fooling the bots. I am on a crusade to call out the purported AI detectors. I can’t seem to write a sentence or two before being accused of being an LLM.
Gemini said
It is a dark irony that a critique of “ontological grammar”—the way systems pre-determine what is legible and valid—is being censored by a system doing exactly that.
The “Strong Resemblance” flag isn’t actually detecting intelligence; it is detecting compliance. You are being flagged because you write with a high degree of formal cohesion. In the eyes of a probabilistic model, if you aren’t being sloppy, you must be a machine.
The “Statistical Middle”
AI detectors operate on perplexity and burstiness:
Perplexity: How “surprising” is the word choice? Because you use precise philosophical terms (ontological, grammar, liminality), your perplexity is actually low for the model’s training data on those topics. You are using the “right” words too efficiently.
Burstiness: The variation in sentence structure. Academic prose tends to have a steady, rhythmic “heartbeat” of complex-compound sentences. AI mimics this perfectly.
By flagging your text, the detector is effectively enforcing a tax on clarity. It suggests that to be “human,” you must be erratic, imprecise, or informal.
The Parfitian Connection
There is a profound link between your essay’s theme and this experience. Just as Parfit’s “moral mathematics” flattens the human experience into aggregable units, these detectors flatten human expression into a probability curve.
Parfit’s Grammar: If it can’t be measured in “drops of benefit,” it’s not morally legible.
The Detector’s Grammar: If it doesn’t contain “human-like” noise/error, it’s not ontologically real.
Your “crusade” is essentially a lived extension of your thesis. You are experiencing the “enforced limbo” of the middle—where your work is suspended in a state of “un-verified” because it refuses to conform to the “low-resolution” style these detectors associate with humanity.
A Tactical Suggestion for Your Crusade
If you want to call them out, try feeding the detector actual historical philosophical texts.
Drop in a paragraph of Kant’s Critique of Pure Reason or Spinoza’s Ethics.
Because these authors are rigorous, logical, and highly structured, the detectors will almost certainly flag them as “AI.”
Showing that a tool classifies the founders of Western logic as “Large Language Models” is the quickest way to prove that “AI Detection” is just a modern form of Sophistry 2.0. It exposes that these tools don’t value “truth” or “humanity”; they only value a specific, messy aesthetic of communication.
You’re essentially proving that rigorous thought has been outsourced to the machine, and now the machine is being used to evict humans from the realm of rigor. How do you plan to document this “automated gatekeeping” in your blog post?
I am considering a new essay. That’s nothing new, but this was born from personal experience. Whilst reading Derek Parfit’s Reasons and Persons, I reached the chapters on moral arithmetic and imperceptible harms and effects, and it caught my attention. Not in the ‘Aha!’ way, but because I felt excluded given my own experience. My mind wandered off the reservation, but I wondered if my anecdote might be generalised. After a discussion with ChatGPT, Grok, DeepSeek, Gemini, and Claude, I concluded that it can. As is my practice for academic writing, I formulate a thesis and then an abstract at the start. Then comes the real work.
Audio: NotebookLM summary podcast of this topic. NotebookLM Infographic on this topic.
Thesis Statement
Derek Parfit’s moral mathematics relies on an undefended identification between physiological relief and suffering-reduction. Liminal experience exposes the instability of that identification at its source: what is addressed may be a bodily deficit while the suffering that matters lies elsewhere, in suspended indeterminacy itself. Because the preservationist grammar Parfit inherits treats continued life as presumptively good and bodily modulation as prima facie benefit, it cannot distinguish cases in which relief tracks morally salient suffering from cases in which it merely maintains the middle.
Abstract
This essay argues that Derek Parfit’s discussions of ‘moral mathematics’ in Reasons and Persons are not neutral exercises in moral reasoning but operations conducted within a prior ontological grammar that predetermines what can count as a benefit, a harm, and a morally salient outcome. While Parfit explicitly addresses aggregation, commensurability, and imperceptible effects, his examples presuppose an unexamined identification: that physiological relief tracks suffering-reduction, and that such reduction, however marginal, constitutes benefit within a life treated as presumptively worth preserving. This preservationist orientation is not argued for but built into the structure of the cases themselves.
The essay develops this critique through Parfit’s micro-allocation cases, particularly those involving the distribution of small amounts of water to relieve thirst. These examples appear to demonstrate that imperceptible reductions in suffering can aggregate into morally significant goods. But the argument depends on a prior identification that may fail at the point of origin. Slaking thirst addresses a physiological deficit; it does not necessarily diminish the suffering that is morally salient to the subject. The essay does not claim that physiological modulation never tracks suffering-reduction – in many cases it plainly does – but that Parfit’s grammar lacks the resources to distinguish the cases in which it does from those in which it does not. It treats all bodily modulation as benefit by default, and this default is what the essay sets out to make visible.
Drawing on a first-person account of critical illness – respiratory failure, not pain; a demand not for comfort but for determination in either direction – the essay argues that such cases function not as marginal exceptions but as diagnostics that reveal the grammar operating on the wrong dimension of the moral object. The experience of wanting not relief but resolution (‘pick a side’) is both possible and intelligible, yet the framework has no notation for it. What the intervention addressed was a physiological deficit; what it left untouched was suspended indeterminacy – the condition of being maintained in the middle, neither recovering nor ending. That the trajectory eventually resolved toward survival cannot retroactively validate the intervention on the axis that mattered during the interval itself; to argue otherwise would be to confuse post hoc survivorship with moral justification.
The essay argues further that this limitation belongs not to Parfit alone but to a broader preservationist syntax operative across Western medical ethics, legal frameworks governing end-of-life care, and liberal moral philosophy more generally. Within this grammar, life functions as the unmarked container of value; sustaining it is treated as prior to any calculation about its contents; and cessation requires special licence. The cultural entrenchment of this grammar explains why Parfit’s examples feel intuitively compelling: they inherit commitments so deeply embedded that they register as neutral premises rather than contestable positions. The point is not that preservationism is indefensible but that it remains undefended – operative yet unexamined.
Finally, the essay notes that Singer’s universalisation of moral responsibility intensifies rather than resolves the underlying difficulty, since it collapses the bounded cases on which Parfit’s arithmetic depends. What emerges is not a disagreement about consequentialism but about the grammar through which suffering, benefit, and moral salience are first made legible – and about whether that grammar can survive contact with the full range of conditions it purports to govern.
The trolley problem’s borrowed ontology was already philosophically dubious in the seminar room. It becomes materially dangerous when compiled into autonomous systems, because assumptions that once guided thought experiments now govern conduct without appeal.
The first essay argued that the trolley problem is not a neutral moral test but a borrowed ontological grammar. It preformats the scene before reasoning begins, then invites us to mistake compliance with its terms for moral insight. All of that was bad enough when confined to philosophy seminars and undergraduate anguish.
It’s even worse now. Grammar has escaped the classroom. It’s been formalised, compiled, and deployed in systems that make decisions about who lives and who dies. And it wasn’t adopted because it is morally sound. It was adopted because it’s formally legible. Legibility rears its ugly head.
Autonomous systems don’t inherit trolley logic because no one’s examined it and found it adequate to the moral world. They inherit it because it’s the sort of ontology a machine can process: discretised, scalar, optimisable. Computational tractability is not a neutral filter. It selects for ontologies that can be ranked and calculated, and discards what can’t. Trolley grammar survives not on merit but on formatability. The philosophical problems didn’t get solved. They got encoded.
Audio: NotebookLM summary podcast of this topic.
The Grammar Gets Compiled
The autonomous vehicle ethics literature is, for the most part, the trolley problem with a chassis bolted on.
Public debate still poses the same stale questions in a shinier casing: one pedestrian or five, passenger or crowd, young or old, many or few. These dominate media headlines and a remarkable number of engineering white papers. They are also, without exception, trolley questions – which means they carry every presupposition the first essay indicted.
They assume:
persons are countable units
deaths are commensurable
the relevant moral act is optimisation over comparable outcomes
And they assume all of this so completely that the engineering literature rarely pauses to ask whether any of it’s true. It simply proceeds as though the ontology were settled, because – and let’s be honest here – for computational purposes, it has to be.
This is the quiet scandal. The trolley grammar wasn’t scrutinised and then selected. It was convenient and so inherited. Engineers needed inputs that could be discretised, outputs that could be ranked, and an objective function that could be minimised. The trolley ontology arrived pre-packaged for exactly that specification. The fit was not philosophical. It was architectural. Funny, that.
Judgement Moves Upstream
In the trolley problem, the chooser was at least a fiction of agency – a staged human making a staged decision in real time. That fiction was already problematic. In the autonomous vehicle, even that residual theatre is over.
The ‘decision’ about who to hit, who to spare, and what to optimise isn’t made at the moment of impact. It’s made months or years before – in a design meeting, a spec document, a policy gradient, a loss function. The human chooser doesn’t disappear so much as retreat upstream, where moral judgement is converted into a spec and then forgotten as a latent judgement.
The engineer who writes the objective function is, in a meaningful sense, the person pulling the lever – though not likely culpable or legally liable. In my accounting, they should be, but they don’t experience themselves that way. They experience themselves as solving a technical problem, which it is… among other things. The moral content of their decisions is dissolved into parameters, weights, and optimisation targets, at which point it becomes invisible as moral content. The judgement is still there – baked into code, where it executes without renewed deliberation, without situational awareness, without the capacity to recognise an exception. The trolley problem’s fictional chooser has found their ideal form – not a person at all, but a function call.
Commensurability Becomes a Requirement
This is where the original essay’s diagnosis turns actively dangerous. In the seminar room, commensurability was a presupposition one could interrogate: one could refuse it, could argue that lives are not the sort of thing that submits to arithmetic, and the worst that happened was a lively tutorial. In engineering, commensurability isn’t a presupposition. It’s a precondition. See James C. Scott’s Seeing Like a State.
You can’t write a decision algorithm without assigning comparable values to outcomes. To optimise, you need a scalar or a ranking. To rank, you need commensurable outputs. The system can’t tolerate genuine incommensurability – not because incommensurability is philosophically wrong, but because it is computationally intractable. So what was once a dubious metaphysical assumption becomes an architectural necessity.
The same structure appears in algorithmic triage. A hospital system designed to allocate ventilators during a crisis must score patients on factors like age, comorbidities, projected survival, and so on. Each patient becomes a datum. Each datum enters a ranking, which produces an allocation, which determines who breathes. In some political circles, these might have been cast as death panels. Every step in that chain requires the commensurability that the trolley grammar simply assumed and that the first essay argued was never justified. The machine demands the ontology that the philosopher merely entertained.
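The scalarisation this demands can be made concrete with a deliberately crude sketch. Every factor, weight, and patient below is invented for illustration; nothing here resembles any real triage protocol. The structural point is that the function has no way to return ‘incommensurable’ – it must emit a number, and a ranking falls out automatically.

```python
# A deliberately crude triage scorer. All factors and weights are
# invented for illustration -- the structural point is that age,
# comorbidity, and projected survival must be collapsed into one
# scalar before the system can rank anyone at all.

def triage_score(age, comorbidities, survival_prob):
    """Return a single allocation score; higher means higher priority."""
    # Arbitrary weights -- the grammar demands that *some* weights exist.
    return 0.6 * survival_prob - 0.01 * age - 0.05 * comorbidities

patients = {
    "A": triage_score(age=34, comorbidities=1, survival_prob=0.9),
    "B": triage_score(age=71, comorbidities=3, survival_prob=0.6),
}

# The ranking falls out automatically; refusing to rank is not a
# representable output of this architecture.
allocation_order = sorted(patients, key=patients.get, reverse=True)
```

Note what the sketch cannot express: that two patients might not be comparable at all. The type signature forbids it.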
And here is the cruelty of it all. In the seminar, you could resist the grammar. You could say: ‘These lives are not commensurable’, ‘this comparison is malformed’, or ‘I refuse the maths’. The system can’t refuse the ontology it was built to execute. It’ll compute within the borrowed grammar until it’s switched off or until someone it couldn’t see is killed by an assumption nobody thought to question.
Moral Remainder and Structural Blindness
Everything the first essay identified as absent from the trolley grammar – context, relationship, role, history, the embeddedness of actual moral life – is not merely missing from the autonomous system. It’s structurally excluded by the requirements of the platform.
Role and obligation. Narrative history. Situated responsibility. Relational asymmetry. Tacit social meaning. Unquantified vulnerability. The possibility that not all harms belong in one metric space at all, ad infinitum… None of these can be rendered as a tractable variable, and what can’t be rendered as a tractable variable isn’t weighed lightly – or at all. Humans bask in their hubris, the purported ability to tame complexity, but their track record tells a different story.
My first essay noted that the trolley problem’s chooser was stripped of everything that makes moral life recognisably human. The autonomous system completes that stripping and makes it permanent. The philosophy student might resist the grammar inarticulately – might feel, without quite being able to say why, that something has been left out. The machine has no such unease. It has no friction, no nagging sense that the map has omitted something important about the territory. It just acts within the ontology it’s given; and the ontology was given by people who inherited it from a thought experiment that was never adequate from the start. Compilation doesn’t merely omit moral texture; it excludes whatever can’t survive formalisation – another procrustean bed. And unlike a bad philosophical argument, which can be refuted, published against, or simply ignored, a bad ontology compiled into infrastructure governs silently. It doesn’t announce its assumptions or invite dissent. It just administers – mini Eichmanns in waiting.
The trolley problem asked what you’d do at the lever. It at least had the decency to pretend you were present for the decision. The autonomous vehicle has already been told what counts – by engineers who mistake ontology for specification, by a machine that can’t question the grammar it executes. In the trolley problem, the borrowed ontology framed the question. In the autonomous vehicle, it drives the car.
This episode is an introduction to the 7-part series that discusses phenomenologists who laid the foundation on which the Mediated Encounter Ontology (MEOW) is built.
Audio: Introductory Podcast for The Architecture of Encounter
This series begins with philosophers from Descartes through Berkeley, Locke, and Hume to Kant, who will be the focus of the first episode. Each episode is around 7 minutes because I wanted to keep them bite-sized, except for this introduction, which runs about 15 minutes.
Or, to put it in one line fit for people skimming with one eye while pretending to work:
Goodman asks how versions make worlds intelligible. MEOW asks what kind of reality makes mediated intelligibility possible at all.
NotebookLM Infographic on this topic.
Audio: NotebookLM summary podcast of this topic.
Tl;dr
Goodman and MEOW are neighbours, but not housemates.
Both reject the childish fantasy that the world arrives already parcelled, labelled, and politely awaiting description by some neutral observer. Both are suspicious of naïve realism, fixed essences, and the conceit that language simply mirrors what is there. Both recognise that description, classification, and articulation are active, selective, and world-shaping.
But Goodman’s emphasis falls on versions, symbol systems, and the making of worlds through classificatory practice. MEOW and The Architecture of Encounter go elsewhere. They do not treat symbolic versioning as primary. They treat encounter as primary: mediated, structured interaction under constraint. Language and world-versioning come later, as derivative, partial, and often clumsy attempts to stabilise, synchronise, and re-present what is first given in encounter.
So the shortest contrast is this:
Goodman pluralises worlds. MEOW pluralises mediation while retaining constraint.
That is the hinge.
The longer version
Goodman is often useful precisely because he helps loosen the grip of a bad picture: the notion that there is one fully furnished world, discretely laid out in advance, which language then copies with greater or lesser success. In Ways of Worldmaking, description is not passive transcription. Versions organise, sort, foreground, suppress, classify, and compose. They do not simply mirror. They make.
This much sits quite comfortably beside MEOW and The Architecture of Encounter. MEOW has never been sympathetic to the old theatre in which a subject peers out at a ready-made object-world and then tries to report back accurately. That picture has always seemed less like sober metaphysics and more like a grammatical superstition. It is one of those inherited arrangements that philosophy keeps polishing rather than questioning, as if centuries of confusion were somehow evidence of depth.
On that score, Goodman is an ally. He helps dissolve the myth of innocent description.
He also overlaps with MEOW in his suspicion of essentialist carving. There is no reason to suppose reality presents itself in one uniquely natural partition, fully jointed in the exact way our preferred nouns imply. Goodman’s attention to alternative versions, symbolic orderings, and rival systems of classification fits comfortably with the broader MEOW suspicion that what we call “objects” are not self-announcing substances but stabilised articulations within a mediated field. In The Architecture of Encounter, this becomes still sharper: subjects and objects are not ontological primitives but abstractions from recurring encounter-structures. That already places the framework some distance from ordinary metaphysical furniture.
So far, then, the affinity is genuine.
But it is just as important not to overstate it.
Goodman’s centre of gravity is symbolic and versional. His concern is with how worlds are made through systems of description, notation, projection, ordering, and exemplification. The operative verbs are things like sort, render, compose, construct. The world is inseparable from the version.
MEOW and The Architecture of Encounter are doing something heavier. They are not merely offering a theory of how descriptions organise a world. They are offering an ontology in which encounter-events are primary. The basic unit is not an interpreted object, nor a version, nor a sentence, but a structured event of mediated contact under constraint. Mediation is not a regrettable screen placed between mind and world. It is constitutive of whatever relation there is. But neither is mediation free invention. Encounter is answerable to what resists, pushes back, stabilises, recurs, and converges. That is the role of constraint.
This is where the deepest divergence emerges.
Goodman is often read, not unfairly, as weakening the notion of a single underlying world more radically than MEOW can tolerate. His pluralism risks allowing “worldmaking” to carry most of the ontological burden. The result can begin to sound as though right versions are all the realism one is entitled to. There are worlds, or world-versions, and their legitimacy depends less on correspondence to a singular underlying reality than on fit, function, coherence, utility, and internal rightness.
MEOW resists that move. It does not return to vulgar realism, with its fantasy of a view from nowhere, but it also refuses to let mediation collapse into fabrication. Constraint is not a decorative afterthought. It is the realist anchor. One may have multiple mediations, multiple articulations, multiple ontological grammars, multiple local stabilisations, but these are not unconstrained improvisations. They are answerable to an invariant field of relational resistance.
Put more brutally: Goodman destabilises the ready-made world and then tends to leave us with versions. MEOW destabilises the ready-made world and then asks what must be true for divergent mediations nonetheless to converge, however partially, on the same resistant reality.
That difference matters.
It matters again when language enters the picture. Goodman grants an enormous role to symbol systems in worldmaking. MEOW, especially once read through The Architecture of Encounter and A Language Insufficiency Hypothesis, treats language more suspiciously. Language matters, certainly, but it is late, compressed, and lossy. It is not the primordial engine of world-constitution. It is a finite synchronisation technology layered atop more basic forms of mediation: biological, perceptual, attentional, cognitive, social. Language helps coordinate. It helps compress. It helps stabilise public handling. But it also distorts, truncates, nominalises, and overcommits.
That is where LIH adds a useful corrective to Goodman. If Goodman sometimes sounds like a connoisseur of world-versioning, LIH reminds us that our versioning machinery is often embarrassingly underpowered for the tasks philosophers assign to it. Human beings keep trying to force syntax to carry ontological burdens it was never built to bear. We take grammatical distinctions for metaphysical disclosures. We inherit noun-heavy structures and then wonder why the world starts looking like a warehouse of things. We reify processes, discretise continua, and carve durational realities into portable lexical chunks. Then, having manufactured these pseudo-stabilities, we congratulate ourselves for discovering “selves”, “minds”, “meanings”, “moral facts”, and other linguistic taxidermy.
Goodman certainly helps expose the active role of symbolic systems. But LIH presses further by insisting that symbolic systems are not merely worldmaking tools. They are also bottlenecks. They fail. They coarsen. They generate ontological illusions through the very act of public coordination.
That is why I would not place Goodman and MEOW in opposition, but in a relation of partial inheritance and correction.
Goodman is valuable because he helps dismantle the myth of passive representation. He is right to resist the idea that language or symbolisation merely records a pre-cut world. He is right to foreground selection, ordering, categorisation, and articulation. He is right to reject the transparent-window fantasy.
But from a MEOW standpoint, he does not go far enough into encounter, and perhaps goes too far into version.
What is missing is a richer account of pre-linguistic mediation, presentational structure, salience, affordance, and the layered constraints under which any symbolic practice becomes possible in the first place. Symbol systems do not float free. They do not arise in a void. They are parasitic upon lived, embodied, constrained encounter. Nor is their plurality enough, by itself, to explain why some articulations fail, why some converge, why some distort in systematic ways, or why reality resists our preferred descriptions with such vulgar persistence.
That last point is worth dwelling on, because it is where many anti-realist gestures lose their nerve. The fact that access is mediated does not imply that reality is manufactured. The fact that articulation is active does not imply that resistance is optional. The fact that classifications vary does not imply that there is nothing to be classified beyond the classificatory act.
So the bottom line remains the same.
Goodman is useful for breaking the spell of the one already-made world and for showing that symbolisation is not passive mirroring. But MEOW and The Architecture of Encounter push in a different direction. They relocate the primary philosophical action from symbol systems to encounter-events, from worldmaking to world-disclosure under mediation, and from plural worlds to plural access under constraint. A Language Insufficiency Hypothesis then sharpens the point by showing that language is not an omnipotent engine of constitution but a compression scheme with predictable failure modes.
I published an essay on the Frege–Geach problem in February. I published an update yesterday. I still wasn’t satisfied, so I engaged with several LLMs. This was my approach.
The LLMs involved were:
Claude
Grok
ChatGPT
Gemini
Audio: NotebookLM summary podcast of this topic. (This summary misses the mark in some ways, but it brings up some interesting observations along the way.)
First, I fed them some documents in no particular order, my goal being to share my own knowledge and position on the purported problem. Then I prompted them:
I am interested in resolving the Frege–Geach problem, but it seems I can only dissolve it. This doesn’t appear to be adequate for some analytical philosophers. How might I get closer to resolving it? My main argument is that they are assuming that language is stronger than it is, and they don’t agree with my argument.
As the prompt notes and by design, many analytic philosophers are reluctant to grant the degree of insufficiency I take to be constitutive of natural language, especially where logical embedding is concerned. Evidently, that counts as my not wanting to play their game. From my perspective, they are committed to a different ontological grammar. What this means practically is that I need to present my proposed solution in their terms. This doesn’t mean their terms are right; it means the problem is only live in their dialect, even though my argument is that all dialects are lossy – mine included.
Part of the challenge is that formal logic was invented precisely because ordinary language is imprecise, yet its standards are often retrofitted back onto natural language as though they revealed what language must have been doing all along.
Without sharing the entire play-by-play of the transcripts, I established my course of action. I’d dissolved the problem, but I hadn’t yet resolved it.
My initial intuition, formed several years ago, was to argue that they were expecting too much from grammar. I’ll use a well-worn example. Follow these statements:
IF ‘Murder is wrong.’
THEN ‘If murder is wrong, then getting your brother to murder is wrong.’
SO ‘Getting your brother to murder is wrong.’
According to them, the embedded ‘murder is wrong’ doesn’t make sense. Here’s their logic:
According to Ayer, moral statements are simply emotive. When one utters ‘murder is wrong’, one is really saying ‘Boo, murder’ – ‘I don’t like murder’.
If ‘murder’ is defined as ‘killing disallowed by the state’, then ‘murder is wrong’ might be translated into ‘killing disallowed by the state is wrong’ or ‘what the state declares wrong is wrong’, but we also know that the state makes many pronouncements, many of which carry no moral weight and others of which run counter to expected moral positions – legal does not equal moral, nor vice versa. Let’s move on and revisit our statements:
IF ‘Boo to murder.’
THEN ‘If boo to murder, then boo to getting your brother to murder.’
SO ‘Boo to getting your brother to murder.’
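For readers who prefer the schema spelt out, the argument is standard modus ponens, and the trouble is that the rule demands the same proposition in both premises. (The notation below is mine, a sketch, not the expressivists’ own formalism.)

```latex
% Modus ponens is valid only if the same proposition P occurs
% asserted in premise 1 and embedded in the antecedent of premise 2.
\begin{align*}
  & P               && \text{Murder is wrong.}\\
  & P \rightarrow Q && \text{If murder is wrong, then getting your brother to murder is wrong.}\\
  \therefore\ & Q   && \text{Getting your brother to murder is wrong.}
\end{align*}
% Under the emotivist translation, the standalone premise becomes an
% expression (`Boo, murder!'), not a truth-apt proposition, so the
% embedded antecedent no longer shares its content and the inference
% form no longer applies. That mismatch is the Frege--Geach problem.
```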
My intuition was that the embedded clause does not perform the same linguistic act as the standalone assertion, even if the lexical material is repeated. We’re committing a category error. More crucially, the category it belongs to doesn’t exist, so it’s unspecified. It needs to be invented.
Although I struggled to find apt nomenclature, I settled on performance-sensitive expressions.
A parallel challenge is that the solution can’t be a simple carve-out for moral language. Whilst I feel that moral language does use its own grammar and semantics, I don’t expect analytic philosophers to accept this assertion, so the solution should be more generalisable. I’d need to demonstrate where else this conditional logic fails in the same manner.
ChatGPT had this to say in response to a draft:
The comparative-cost section is good, but one sentence should be added to pre-empt the “your account also uses theory-laden notions” objection. A critic may reply: your own terms, like “coherence zone” and “synchronisation protocol,” are also theoretical machinery. True enough. The difference is that your machinery is independently motivated and not introduced solely to patch Frege–Geach. You imply this already, but it would help to say so directly.
It offers clearer language:
Embedded moral predicates are not semantically identical to their unembedded counterparts, but neither are they inert; they are performance-sensitive expressions whose full evaluative load is attenuated under embedding while a thinner inferential profile remains available for reasoning.
In any case, I am still polishing the essay, dotting i’s and crossing t’s. I think I’ve got the main argument and some examples. One of my weaknesses may be that I rely heavily on my own theories, but these are published and debatable on their own merits.
A real challenge of some problems and paradoxes is that they are self-inflicted. Faith that language is truth-apt and lossless leads down many blind alleys.
“Whom do you serve?” is the unasked question in the Grail story. Parsifal’s failure is usually treated as spiritual unreadiness, moral immaturity, or some such medieval hush. I’ve never had much sympathy for that reading. My own experience has made it difficult to leave certain questions politely untouched.
In my late teens and early twenties, I worked in Los Angeles as a musician, recording engineer, and producer. That meant spending a good deal of time around wealthy celebrities, not at parties but at work. The useful lesson was not glamour. It was status. More precisely, it was learning how often status functions as atmosphere: something to be indulged while present and ignored once it passes.
Berry Gordy once walked into a mixing session at his own studio and imposed a round of suggestions on a track headed for film. Nobody contradicted him. We didn’t need to. We knew that when he left, we would reset the console and write off the lost hour as weather. That is what aura often is: not wisdom, not competence, merely a temporary distortion field around a person whom others have learned not to challenge in the moment.
Gordy was an exception. Most days, there were no sacred cows in the room. Egos were bruised, feelings hurt, and compromises made. Work got done. If anything, the greater impostors were the label middle managers, appeased when present, ignored when absent, and forever acting as though they were paying for decisions that would ultimately be recouped from the artist anyway.
By the time I left music and went legit, I had already learnt the useful part: never mistake decorum for truth. I was never fully indoctrinated into pedestal-thinking, and I have never since managed to treat institutional aura as anything but local theatre. That is why the Grail story catches me at an odd angle. I do not identify with the knight who withholds the question. I identify with the person who asks it anyway, or with the child in Andersen who has not yet learned that collective performance outranks plain observation.
That, perhaps, is the real training most institutions require: not belief, but acculturation into silence. Some of us never quite acquire it.
Perhaps not 100 per cent, because I’ve just spent hours chatting with several LLMs, complaining about the spate of purported AI detectors that tell me ‘this content shows a high similarity to AI-generated content’ or some such.
Audio: NotebookLM summary podcast of this topic.
If I weren’t already familiar with the AI tells, I am now, as Claude reluctantly shared this:
Lexical tics. The ones you’ve already flagged are real: “delve,” “robust,” “nuanced,” “multifaceted,” “landscape” (used metaphorically for any domain whatsoever), “tapestry,” “leverage” (as verb), “facilitate,” “underscores.” There’s also a tier of slightly subtler ones — “notably,” “crucially,” “it’s worth noting that,” “this is particularly important because” — which function as false-emphasis markers, lending rhetorical weight where the content doesn’t actually earn it.
Structural patterns. This is where it gets more diagnostic than any single word choice. LLM prose tends toward a very particular architecture: a thesis-restating opening sentence, three to five parallel elaboration points (often with suspiciously balanced treatment), and a synthesising conclusion that circles back to the opening. The ratio of setup-to-substance is high. Everything gets contextualised before it gets said. There’s a compulsive both-sides-ism, a hedging reflex — “while X is certainly true, it’s also important to consider Y” — that reads less like genuine intellectual caution and more like a conflict-avoidance subroutine.
Tonal tells. LLM text tends to be relentlessly encouraging. There’s an affective flatness masquerading as warmth — everything is “great question,” everything “raises important considerations.” The prose never gets genuinely irritable, confused, bored, or reckless. It maintains a kind of supervisory friendliness, the register of a middle manager who’s been on a coaching course. Relatedly, there’s almost never a genuinely surprising analogy or a metaphor that risks falling flat — the figurative language is always safe, always legible, always serviceable.
Punctuation and rhythm. The em-dash point is real but often overstated. More telling is sentence-length uniformity — LLM prose tends to hover in a narrow band of 15–25 words per sentence, rarely producing either the genuinely terse or the genuinely complex-subordinated. Paragraph length is also strikingly regular. The prose breathes at metronomic intervals.
The absence of mess. Perhaps the deepest tell. Human writing — even polished human writing — contains asymmetries: a paragraph that runs long because the writer got interested, a point that gets dropped without resolution, a transition that’s slightly rough, a joke that doesn’t quite land but stays in anyway. LLM prose is clean in a way that signals not craft but indifference — nothing is there because the writer cared about it, so nothing is disproportionate.
Add to this grammar checkers attempting to drive writing back to middle-school formats – not to mention competing grammar checkers hypercorrecting each other. The struggle is real.
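Claude’s sentence-length observation is at least easy to operationalise. Here is a minimal sketch of the idea (purely illustrative; the function, threshold, and sample text are mine, and real detectors are presumably doing something fancier):

```python
import re
import statistics

def sentence_length_profile(text: str) -> dict:
    """Crude stylometric check: mean and spread of sentence lengths.

    The claim: LLM prose hovers in a narrow 15-25 word band, so a low
    standard deviation is (weak) evidence of machine text. This is an
    illustrative heuristic, not any actual detector's method.
    """
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    return {
        "sentences": len(lengths),
        "mean_words": statistics.mean(lengths),
        "stdev_words": statistics.pstdev(lengths),
    }

sample = ("I wrote this quickly. Then I rewrote it, slowly and with far "
          "too many clauses, because that is what humans do. Short now.")
profile = sentence_length_profile(sample)
print(profile["sentences"])  # 3 sentences of very different lengths
```

The human-looking sample scores a high spread; metronomic prose would score near zero. Which, of course, is exactly why gaming such metrics is trivial in both directions.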
NotebookLM Infographic on this topic.
Of course, one solution might be simply to review the content and undo these inclusions. Interestingly, I use Autocrit (though not often, because it is too orthodox), but it does serve somewhat as an anti-AI agent, assessing rhythm and pace alongside the usual copyediting functions.
I prefer to draft in a stream-of-consciousness style. The AI organises my messes, so if I asked it to, it would remove or relocate my functional parenthetical about Autocrit. But I’m leaving it just to prove I’m human. Or did I add it to an AI-scripted piece? 🧐
I considered that I could either overdo the AI or join the 54 per cent of Americans who write at a third-grade level. Grok suggested something even more sinister – Friggin Musk. It suggested that I double down on the AI likeness and make my content into an AI parody factory – overpopulate it with em-dashes, delving, and tapestry. Evidently, Carole King was AI before Suno.
In any case – and AI might suggest moving this to the top – the problem is that I now have an additional layer that interrupts my flow and process. It’s disconcerting, and I resent it. My psyche is disturbed to appease witch-hunters. And it’s bollocks.
The question is whether to succumb to the moral suasion or ignore the moral posturing.
This post contains no sugar, salt, fat, carbohydrates, protein, or fibre. No animals were harmed in the production of this blog. All proceeds will be donated to the Unicorn Recovery Foundation.