A great deal of philosophy begins with the claim that we ought to examine our assumptions. Fewer philosophers seem interested in examining the mechanisms that decide which assumptions are allowed to count as philosophy in the first place.
This is not a polemic about the Analytic–Continental divide. It’s an observation about how that divide quietly maintains itself. The immediate provocation was banal. Almost embarrassingly so.
Are we cold, or do we have cold?
In English, the answer feels obvious. I am cold. The grammar barely registers. In French, Italian, or German, the structure flips. One has cold. Or hunger. Or thirst. Or age. Or a name, understood as something one performs rather than something one is. I spoke about this here and here. Indulge the links to the original position being argued.
On the surface, this looks like a curiosity for linguistics students. A translation quirk. A grammatical footnote. But grammar is rarely innocent.
Audio: NotebookLM summary podcast on this topic.
Grammar as Ontological Scaffolding
The verbs to be and to have are not neutral carriers. They quietly encode assumptions about identity, property, possession, and stability.
When I say I am cold, I cast coldness as a property of the self. It becomes something like height or nationality: a state attributable to the person. When I say I have cold, the experience is externalised. The self remains distinct from the condition it undergoes. Neither option is metaphysically clean.
Both structures smuggle in commitments before any philosophy has been done. One risks inflating a transient sensation into an ontological state. The other risks reifying it into a thing one owns, carries, or accumulates. My own suggestion in a recent exchange was a third option: sensing.
Cold is not something one is or has so much as something one feels. A relational encounter. An event between organism and environment. Not an identity predicate, not a possession.
This suggestion was met with a fair pushback: doesn’t saying that cold ‘belongs to the world’ simply introduce a different metaphysical assumption? Yes. It does. And that response neatly demonstrates the problem.
When Grammar Starts Doing Philosophy
The original claim was idiomatic, not ontological. It was a negative gesture, not a positive thesis. The point was not to relocate cold as a mind-independent substance floating about like a rock. It was to resist treating it as an essence of the person. But once you slow down, you see how quickly everyday grammar demands metaphysical loyalty.
Being invites substance. Having invites inventory. Sensing keeps the relation open, but even that makes people nervous. This nervousness is instructive. It reveals how much metaphysical weight we quietly load onto grammatical habits simply because they feel natural. And that feeling of naturalness matters more than we like to admit.
Two Philosophical Temperaments, One Linguistic Groove
At this point, the temptation is to draw a clean line:
On one side: the Anglo-American Analytic tradition, comfortable treating mental states as properties, objects, or items to be catalogued. Locke’s introspective inventory. Hume’s bundle. Logical positivism’s clean surfaces.
On the other: the Continental tradition, suspicious of objectification, insisting on an irreducible subject for whom experience occurs but who is never identical with its contents. Kant, Husserl, Heidegger, Sartre.
The grammar aligns disturbingly well. Languages that habitually say I am cold make it feel natural to treat experience as something inspectable. Languages that insist on having or undergoing experiences keep the subject distinct by default.
This is not linguistic determinism. English speakers can read phenomenology. German speakers can do analytic philosophy. But language behaves less like a prison and more like a grooved path. Some moves feel obvious. Others feel forced, artificial, or obscure.
Philosophies do not arise from grammar alone. But grammar makes certain philosophies feel intuitively right long before arguments are exchanged.
Where Gatekeeping Enters Quietly
This brings us to the part that rarely gets discussed.
The Analytic–Continental divide persists not only because of philosophical disagreement, but because of institutional reinforcement. Peer review, citation norms, and journal cultures act as boundary-maintenance mechanisms. They are not primarily crucibles for testing ideas. They are customs checkpoints for recognisability.
I have been explicitly cautioned, more than once, to remove certain figures or references depending on the venue. Don’t mention late Wittgenstein here. Don’t cite Foucault there. Unless, of course, you are attacking them. This is not about argumentative weakness. It’s about genre violation.
Hybrid work creates a problem for reviewers because it destabilises the grammar of evaluation. The usual criteria don’t apply cleanly. The paper is difficult to shelve. And unshelvable work is treated as a defect rather than a signal. No bad faith is required. The system is doing what systems do: minimising risk, preserving identity, maintaining exchange rates.
Cold as a Diagnostic Tool
The reason the cold example works is precisely because it is trivial.
No one’s career depends on defending a metaphysics of chilliness. That makes it safe enough to expose how quickly grammar starts making demands once you pay attention.
If something as mundane as cold wobbles under scrutiny, then the scaffolding we rely on for far more abstract notions – self, identity, agency, consciousness – should make us uneasy.
And if this is true for human languages, it becomes far more pressing when we imagine communication across radically different forms of life.
Shared vocabulary does not guarantee shared metaphysics. Familiar verbs can conceal profound divergence. First contact, if it ever occurs, will not fail because we lack words. It will fail because we mistake grammatical comfort for ontological agreement.
A Modest Conclusion
None of this settles which philosophical tradition is ‘right’. That question is far less interesting than it appears. What it does suggest is that philosophy is unusually sensitive to linguistic scaffolding, yet unusually resistant to examining the scaffolding of its own institutions.
We pride ourselves on questioning assumptions while quietly enforcing the conditions under which questions are allowed to count. Cold just happens to be a good place to start noticing.
A Footnote on Linguistic Determinism
It’s worth being explicit about what this is not. This is not an endorsement of strong linguistic determinism, nor a revival of Sapir–Whorf in its more ambitious forms. English speakers are not condemned to analytic philosophy, nor are Romance-language speakers predestined for phenomenology.
Language does not imprison thought. It inclines it.
Grammar operates less like a set of handcuffs and more like a well-worn path. Some moves feel effortless. Others require deliberate resistance. Philosophical traditions co-evolve with these habits, reinforcing what already feels natural while treating alternatives as strained, obscure, or unnecessary.
As the publication date of A Language Insufficiency Hypothesis (LIH) draws nearer, I feel it’s a good time to promote it (obviously) and to introduce some of the problems it uncovers – including common misperceptions I’ve already heard. Through this feedback, I now understand some of the underlying structural limitations that I hadn’t considered, but this only strengthens my position. As I state at the start of the book, the LIH isn’t a cast-in-stone artefact. Other discoveries will inevitably be made. For now, consider it a way to think about the deficiencies of language, around which remediation strategies can be developed.
Audio: NotebookLM summary podcast of this content.
Let’s clear the undergrowth first. The Language Insufficiency Hypothesis is not concerned with everyday ambiguity, garden-variety polysemy, or the sort of misunderstandings that vanish the moment someone bothers to supply five seconds of context. That terrain is already well-mapped, thoroughly fenced, and frankly dull.
Take the classic sort of example wheeled out whenever someone wants to sound clever without doing much work:
‘I made a 30-foot basket’.
Video: a woman making a large basket
If you’re a basketweaver, you picture an absurdly large basket and quietly question the maker’s life choices. If you’re watching basketball, you hear ‘score’. If you’re anywhere near the context in which the sentence was uttered, the meaning is obvious. If it isn’t, the repair cost is trivial. Add context, move on, live your life.
Language did not fail here. It merely waited for its coat. This is not the sort of thing the LIH loses sleep over.
The Groucho Marx Defence, or: Syntax Is Not the Problem
Logicians and armchair philosophers love to reach for jokes like Groucho Marx’s immortal line:
‘I shot an elephant in my pyjamas. Why it was wearing my pyjamas, I’ll never know’.
Video: A man and elephant in pyjamas (no sound)
Yes, very funny. Yes, the sentence allows for a syntactic misreading. No, nobody actually believes the elephant was lounging about in striped silk. The humour works precisely because the “wrong” parse is momentarily entertained and instantly rejected.
Again, language is not insufficient here. It’s mischievous. There’s a difference.
If the LIH were worried about this sort of thing, its ambitions would be indistinguishable from an undergraduate logic textbook with better branding.
Banks, Rivers, and the Myth of Constant Confusion
Likewise, when someone in a city says, ‘I went to the bank’, no sane listener imagines them strolling along a riverbank, unless they are already knee-deep in pastoral fantasy or French tourism brochures. Context does the heavy lifting. It almost always does.
Video: Rare footage of me trying to withdraw funds at my bank (no sound)
This is not a crisis of meaning. This is language functioning exactly as advertised.
Where the Trouble Actually Starts: Contestables
The LIH begins where these tidy examples stop being helpful. It concerns itself with Contestables: terms like truth, freedom, justice, fairness, harm, equality. Words that look stable, behave politely in sentences, and then detonate the moment you ask two people what they actually mean by them. These are not ambiguous in the casual sense. They are structurally contested.
In political, moral, and cultural contexts, different groups use the same word to gesture at fundamentally incompatible conceptual frameworks, all while assuming a shared understanding that does not exist. The conversation proceeds as if there were common ground, when in fact there is only overlap in spelling.
That’s why attempts to ‘define’ these terms so often collapse into accusation:
That’s not what freedom means. That’s not real justice. You’re redefining truth.
No, the definitions were never shared in the first place. The disagreement was smuggled in with the noun.
‘Just Ignore the Word’ Is Not a Rescue
A common response at this point is to suggest that we simply bypass the troublesome term and discuss the concrete features each party associates with it. Fine. Sensible. Often productive. But notice what this manoeuvre concedes. It does not save the term. It abandons it.
If meaningful discussion can only proceed once the word is set aside and replaced with a list of clarifications, constraints, examples, and exclusions, then the word has already failed at its primary job: conveying shared meaning. This is precisely the point the LIH is making.
The insufficiency is not that language is vague, or flexible, or context-sensitive. It’s that beyond a certain level of conceptual complexity, language becomes a confidence trick. It gives us the feeling of agreement without the substance, the appearance of communication without the transaction.
At that point, words don’t merely underperform. They mislead.
I shared this post not too long ago. Today, I shared it in a different context, but I feel it's interesting – because I feel that many things are interesting, especially around language and communication.
Ocrampal shared a link to an article debating whether we are cold or have cold. Different cultures express this differently. It’s short. Read it on his site.
Audio: Exceptional NotebookLM summary podcast of this topic.
I replied to the post:
Nicely observed. I’ve pondered this myself. Small linguistic tweak: between être and avoir, avoir already behaves better metaphysically, but sentir seems the cleanest fit. Cold isn’t something one is or has so much as something one senses — a relational encounter rather than an ontological state or possession.
Between having and being, having is the lesser sin — but sensing/feeling feels truer. Cold belongs to the world; we merely sense it.
He replied in turn:
Agree except for: “Cold belongs to the world”. That is a metaphysical assumption that has consequences …
Finally (perhaps, penultimately), I responded:
Yes, it does. That statement was idiomatic, to express that ‘cold’ is environmental; we can’t be it or possess it. Coincidentally, I recently wrote about ‘cold’ in a different context:
A more verbose version of this response might have been:
This pushback is fair, but I’m not trying to re-ontologise cold. “Belongs to the world” in that context is doing rhetorical, not metaphysical, work; it’s idiomatic.
The point isn't that cold is a mind-independent substance waiting around like a rock. It's that whatever cold is, it doesn't sit comfortably as an identity predicate ('I am…cold' – which French renders as 'J'ai…froid') or a possession ('I have…cold' – noting that French does not say 'Je suis…froid') – neither to be confused with 'I have a cold', a different animal altogether.
‘Sensing’ (‘I feel…cold’ – ‘Je me sens…froid‘ – we have to use the reflexive pronoun, me, here; in English, this syntax has been deprecated) keeps the relation explicit without smuggling in ownership or essence. It leaves cold as an encounter-property, not a thing I contain and not a thing I am.
If anything, that phrasing was meant to resist metaphysical inflation, not commit to it.
And this is exactly the problem I gestured at in the aliens piece. We mistake familiar grammatical scaffolding for shared metaphysics. We assume that if the sentence parses cleanly, the ontology must be sound.
Language doesn’t just describe experience. It quietly files it into categories and then acts surprised when those categories start making demands.
Cold, like aliens, exposes the trick. The moment you slow down, the grammar starts to wobble. And that wobble is doing far more philosophical work than most of our declarative sentences are willing to admit.
Naturally, it will make more sense alongside the book. But it may still provide a bit of entertainment – and mild discomfort – in the meantime.
tl;dr: Language is generally presumed to be stable. Words mean what you think they mean, right? A table is a table. A bird is a bird. Polysemy aside, these are solid, dependable units.
Then we arrive at freedom, justice, truth, and an entire panoply of unstable candidates. And let’s not even pretend qualia are behaving themselves.
So when someone says ‘truth’, ‘free speech’, or ‘IQ’, you may suddenly realise you’ve been arguing with a cardboard cut-out wearing your own assumptions. That isn’t just interpersonal mischief. It’s language doing exactly what it was designed to do: letting you glide over the hard problems while sounding perfectly reasonable.
Audio: Short NotebookLM summary of this page content*
Video: Legacy video explaining some features of the LIH.
If that sounds banal, you’ve already fallen for the trap.
Give it a try – or wait until you’ve digested the book. Not literally, unless you’re short on fibre.
Cheers.
Written by Bry Willis
microglyphics
* As I’ve cited previously, the quality of NotebookLM varies – usually in predictable directions. This one does well enough, but it doesn’t have enough context to get the story right (because it was only drawing from this page rather than from a fuller accounting of the LIH). Its trailing comment reveals that it doesn’t grasp that “new words” don’t solve the problem.
Earlier, it suggests that language is intentionally vague. This is not an assertion I make. You can read some of the earlier incarnations, or you can wait for it to be published.
Why shared language creates the illusion – not the reality – of shared experience
Human beings routinely assume that if another agent speaks our language, we have achieved genuine mutual understanding. Fluency is treated as a proxy for shared concepts, shared perceptual categories, and even shared consciousness. This assumption appears everywhere: in science fiction, in popular philosophy videos, and in everyday cross-cultural interactions. It is a comforting idea, but philosophically indefensible.
Video: Could You Explain Cold to an Alien? – Hank Green
Recent discussions about whether one could ‘explain cold to an alien’ reveal how deeply this assumption is embedded. Participants in such debates often begin from the tacit premise that language maps transparently onto experience, and that if two interlocutors use the same linguistic term, they must be referring to a comparable phenomenon.
A closer analysis shows that this premise fails at every level.
Audio: NotebookLM summary podcast on this topic.
Shared Language Does Not Imply Shared Phenomenology
Even within the human species, thermal experience is markedly variable. Individuals from colder climates often tolerate temperatures that visitors from warmer regions find unbearable. Acclimation, cultural norms, metabolic adaptation, and learned behavioural patterns all shape what ‘cold’ feels like.
If the same linguistic term corresponds to such divergent experiences within a species, the gap across species becomes unbridgeable.
A reptile, for example, regulates temperature not by feeling cold in any mammalian sense, but by adjusting metabolic output. A thermometer measures cold without experiencing anything at all. Both respond to temperature; neither inhabits the human category ‘cold’.
Thus, the human concept is already species-specific, plastic, and contextually learned — not a universal experiential module waiting to be translated.
Measurement, Behaviour, and Experience Are Distinct
Thermometers and reptiles react to temperature shifts, and yet neither possesses cold-qualia. This distinction illuminates the deeper philosophical point:
Measurement registers a variable.
Behaviour implements a functional response.
Experience is a mediated phenomenon arising from a particular biological and cognitive architecture.
Aliens might measure temperature as precisely as any scientific instrument. That alone tells us nothing about whether they experience anything analogous to human ‘cold’, nor whether the concept is even meaningful within their ecology.
The Problem of Conceptual Export: Why Explanation Fails
Attempts to ‘explain cold’ to hypothetical aliens often jump immediately to molecular description – slower vibrational states, reduced kinetic energy, and so forth. This presumes that the aliens share:
our physical ontology,
our conceptual divisions,
our sense-making framework,
and our valuation of molecular explanation as intrinsically clarifying.
But these assumptions are ungrounded.
Aliens may organise their world around categories we cannot imagine. They may not recognise molecules as explanatory entities. They may not treat thermal variation as affectively laden or behaviourally salient. They may not even carve reality at scales where ‘temperature’ appears as a discrete variable.
When the conceptual scaffolding differs, explanation cannot transfer. The task is not translation but category creation, and there is no guarantee that the requisite categories exist on both sides.
The MEOW Framework: MEOWa vs MEOWb
The Mediated Encounter Ontology (MEOW) clarifies this breakdown by distinguishing four layers of mediation:
T0: biological mediation
T1: cognitive mediation
T2: linguistic mediation
T3: social mediation
Humans run MEOWa, a world structured through mammalian physiology, predictive cognition, metaphor-saturated language, and social-affective narratives.
Aliens (in fiction or speculation) operate MEOWb, a formally parallel mediation stack but with entirely different constituents.
Two systems can speak the same language (T2 alignment) whilst:
perceiving different phenomena (T0 divergence),
interpreting them through incompatible conceptual schemas (T1 divergence),
and embedding them in distinct social-meaning structures (T3 divergence).
Linguistic compatibility does not grant ontological compatibility. MEOWa and MEOWb allow conversation but not comprehension.
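If it helps to see that claim mechanically, here is a toy sketch – mine, not canonical MEOW machinery – of two agents aligned at T2 and divergent everywhere else. Every name, field, and value in it is invented for illustration.

```python
# Toy illustration (not MEOW canon): two agents share a vocabulary (T2)
# but bind the same word to unrelated perceptual, cognitive, and social states.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    t0_percept: dict  # biological mediation: what the body registers
    t1_schema: str    # cognitive mediation: how the signal is framed
    t3_frame: str     # social mediation: what the utterance is *for*

    def utter(self, word: str) -> str:
        # T2 alignment: both agents produce the identical sentence.
        return f"I would call that '{word}'."

meow_a = Agent(
    name="MEOWa (human)",
    t0_percept={"thermoreceptors": "firing", "shiver_reflex": True},
    t1_schema="affect-laden bodily state",
    t3_frame="a bid for warmth, sympathy, or a closed window",
)

meow_b = Agent(
    name="MEOWb (alien)",
    t0_percept={"metabolic_output": "reduced"},
    t1_schema="scheduling variable for activity cycles",
    t3_frame="routine telemetry, carrying no affective weight",
)

for agent in (meow_a, meow_b):
    print(agent.name, "says:", agent.utter("cold"))
    print("  meanwhile, internally:", agent.t0_percept, "|", agent.t1_schema)

# Identical surface output; nothing shared beneath it.
# Conversation, but not comprehension.
```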
Fiction as Illustration: Why Aliens Speaking English Misleads Us
In Sustenance, the aliens speak flawless Standard Southern English. Their linguistic proficiency invites human characters (and readers) to assume shared meaning. Yet beneath the surface:
Their sensory world differs;
their affective architecture differs;
their concepts do not map onto human categories;
and many human experiential terms lack any analogue within their mediation.
The result is not communication but a parallel monologue: the appearance of shared understanding masking profound ontological incommensurability.
The Philosophical Consequence: No Universal Consciousness Template
Underlying all these failures is a deeper speciesist assumption: that consciousness is a universal genus, and that discrete minds differ only in degree. The evidence points elsewhere.
If “cold” varies across humans, fails to apply to reptiles, and becomes meaningless for thermometers, then we have no grounds for projecting it into alien phenomenology. Nor should we assume that other species – biological or artificial – possess the same experiential categories, emotional valences, or conceptual ontologies that humans treat as foundational.
Consciousness is not a universal template awaiting instantiation in multiple substrates. It is a local ecological achievement, shaped by the mediations of the organism.
Conclusion
When aliens speak English, we hear familiarity and assume understanding. But a shared phonological surface conceals divergent sensory systems, cognitive architectures, conceptual repertoires, and social worlds.
Linguistic familiarity promises comprehension, but delivers only the appearance of it. The deeper truth is simple: Knowing our words is not the same as knowing our world.
And neither aliens, reptiles, nor thermometers inhabit the experiential space we map with those words.
Afterword
Reflections like these are precisely why my Anti-Enlightenment project exists. Much contemporary philosophical commentary remains quietly speciesist and stubbornly anthropomorphic, mistaking human perceptual idiosyncrasies for universal structures of mind. It’s an oddly provincial stance for a culture that prides itself on rational self-awareness.
To be clear, I have nothing against Alex O’Connor. He’s engaging, articulate, and serves as a gateway for many encountering these topics for the first time. But there is a difference between introducing philosophy and examining one’s own conceptual vantage point. What frustrates me is not the earnestness, but the unexamined presumption that the human experiential frame is the measure of all frames.
Having encountered these thought experiments decades ago, I’m not interested in posturing as a weary elder shaking his stick at the next generation. My disappointment lies elsewhere: in the persistent inability of otherwise intelligent thinkers to notice how narrow their perspective really is. They speak confidently from inside the human mediation stack without recognising it as a location – not a vantage point outside the world, but one local ecology among many possible ones.
Until this recognition becomes basic philosophical hygiene, we’ll continue to confuse linguistic familiarity for shared ontology and to mistake the limits of our own embodiment for the limits of consciousness itself.
A NotebookLM Cautionary Tale for the Philosophically Curious
Apologies in advance for the didactic nature of this post.
Every so often, the universe gives you a gift. Not the good kind, like an unexpected bottle of Shiraz, but the other kind – the ‘teachable moment’ wrapped in a small tragedy. In this case, a perfectly innocent run of MEOW GPT (my Mediated Encounter Ontology engine) was fed into NotebookLM to generate a pseudo-podcast. And NotebookLM, bless its little algorithmic heart, proceeded to demonstrate every classic mistake people make when confronting a relational ontology.
Audio: The misinterpretation of MEOW GPT: On Progress by NotebookLM that spawned this post.
It’s perfect. I couldn’t have scripted a better example of How Not To Read MEOW GPT if I’d hired a team of Enlightenment rationalists on retainer.
So consider this your public service announcement – and a guide for anyone experimenting with MEOW GPT at home, preferably while sitting down and not holding onto any cherished metaphysical delusions.
Video: Surreal Light through a Prism Clip for no particular reason (No sound)
Mistake 1: Treating a Thick Concept as a Single Glorious Thing
NotebookLM began, earnestly, by trying to uncover the ‘inner architecture of honour’, as if it were a cathedral with blueprints lying around.
This is the central error:
Honour is not a thing. There is no inner architecture.
There are only patterns – drifting, contested, historically mangled patterns – that happen to share a word. If you start with ‘What is honour?’, you’ve already fallen down the stairs.
Mistake 2: Rebuilding Essence From the T0–T3 Layers
MEOW GPT gives you biological (T0), cognitive (T1), linguistic (T2), and institutional/technical (T3) mediation because that’s how constraints emerge. NotebookLM, meanwhile, reconstructed these as ‘layers’ of the same virtue – like honour was a three-storey moral townhouse with a loft conversion.
No. The tiers are co-emergent constraints, not components of a moral particle. If your conclusion looks like a metaphysical onion, you’ve misread the recipe.
Mistake 3: Sneaking Virtue Ethics in Through the Fire Exit
NotebookLM kept returning to:
an ‘internal compass’
a ‘core record of the self’
a ‘lifelong ledger’
a ‘deep personal architecture’
At this point we might as well carve Aristotle’s name into the hull.
MEOW’s stance is simple: the self is not a marble statue – it’s an ongoing social, cognitive, and technical scandal. Treating honour as a personality trait is just the old moral ontology with a new hairstyle.
Mistake 4: Treating Polysemy as Noise, Not Evidence
NotebookLM acknowledged the differing uses of ‘honour’, but always with the implication that beneath the variations lies one pure moral essence. This is backwards. The ambiguity is the point. The polysemy isn’t messy data; it’s the signature of conceptual drift.
If you treat ambiguity as a problem to be ironed out, you’ve missed half the LIH and all of the MEOW.
Mistake 5: Turning MEOW Into a TED Talk
The podcast tried to wrap things up by contrasting honour’s “deep internal permanence” with the ephemerality of digital rating systems.
It’s cute, but it’s still modernist comfort-food. MEOW does not mourn for the ‘permanent self’. It doesn’t recognise such a creature. And digital honour doesn’t ‘replace’ the old patterns; it aggressively rewrites the honour-economy into algorithmic form. If your conclusion sounds like ‘ancient virtue meets modern technology’, that’s TED, not MEOW.
So How Should You Interpret MEOW GPT?
A short cheat-sheet for anyone experimenting at home:
There is no essence. Concepts like honour, truth, integrity, and justice are drift-patterns, not objects.
The tiers describe mediation, not ingredients. They’re co-emergent pressures, not building blocks.
Thick terms lie to you. Their apparent unity is linguistic camouflage.
Ambiguity is structural. If the term looks fuzzy, that’s because the world is fuzzy there.
If a concept feels granite-solid, you’re standing on conceptual quicksand. (Sorry.)
A Friendly Warning Label
Warning: If you believe thick moral concepts have single, universal meanings, MEOW GPT may cause temporary metaphysical discomfort. Consult your ontological physician if symptoms persist.
This isn’t a political post. It’s about language, the insufficiency of it, and the games we play when pretending words carry more weight than they do.
Luigi Mangione is the man accused of killing UnitedHealthcare CEO Brian Thompson. After his arrest, prosecutors stacked the usual charges – murder, firearms, assorted legal bric-a-brac – then added the cherry on top: domestic terrorism.
Audio: NotebookLM podcast on this topic.
Recently, a pretrial judge cut the cherry loose.
Murder, yes. Terrorism, no. Not because murder is less grotesque, but because the statutory definition won’t stretch that far without breaking.
NEW YORK, Sept 16 (Reuters) – A New York state judge dismissed on Tuesday two terrorism-related counts against Luigi Mangione over the December 2024 killing of health insurance executive Brian Thompson, though the 27-year-old remains charged with second-degree murder and eight other criminal counts in the case.
“There was no evidence presented of a desire to terrorize the public, inspire widespread fear, engage in a broader campaign of violence, or to conspire with organized terrorist groups,” Judge Gregory Carro found in a 12-page written decision (pdf). “Here, the crime – the heinous, but targeted and discrete killing of one person – is very different from the examples of terrorism set forth in the statute.” (source)
The prosecution insisted the label fit. The judge disagreed. Cue outrage, applause, and confusion. The crime is still horrific, but suddenly the word “terrorist” is off-limits.
The Elasticity of Terror
How can two educated parties look at the same set of facts and come to opposite conclusions? Because “terrorism” isn’t a Platonic form. It’s an elastic linguistic category. The prosecutor drags it out because “terrorist” is a magical word in American law: it inflates an already ugly act into a civilisation-level threat, unlocks harsher penalties, and lets politicians posture about national security.
The judge, however, reminded everyone that a bullet in Manhattan does not equal al-Qaeda.
Language Games, Legal Hierarchies
This is where it gets trickier. The judge isn’t merely “pulling rank”—though rank does matter. American jurisprudence is hierarchical: trial judges hand down rulings, appellate judges review them, and nine robed partisans in Washington can one day rewrite the whole script. On paper, these tiers are meant to iron out ambiguity. In practice, they multiply it.
Even co-equal judges, reading the same facts, can diverge wildly. Split decisions at the Supreme Court prove the point: five minds say “constitutional,” four say “unconstitutional,” and the one-vote margin becomes binding law for 330 million people. That’s not the discovery of truth; it’s the triumph of one language game over another, enforced by hierarchy.
The Insufficiency Laid Bare
So we return to Mangione. He has been charged with murder – the second-degree flavour; that much is uncontested. But is he a "terrorist"? The prosecution said yes, the judge said no, and another judge, higher up or sitting elsewhere, might well say yes again. Each claim is defensible. Each is motivated by language, by politics, and by the institutional pressures of the bench.
And that's the point. Language doesn't tether itself to reality; it choreographs our endless arguments about reality. The law tries to tame it with hierarchies and definitions, but the seams always show. Mangione is a murderer. Whether he is a terrorist depends less on his actions than on which interpretive dance is winning in the courtroom that day.
I’ve written a lot on the insufficiency of language, and it’s not even an original idea. Language, our primary tool for sharing thoughts and ideas, harbours a fundamental flaw: it’s inherently insufficient for conveying precise meaning. While this observation isn’t novel, recent developments in artificial intelligence provide us with new ways to illuminate and examine this limitation. Through a progression from simple geometry to complex abstractions, we can explore how language both serves and fails us in different contexts.
The Simple Made Complex
Consider what appears to be a straightforward instruction: Draw a 1-millimetre square in the centre of an A4 sheet of paper using an HB pencil and a ruler. Despite the mathematical precision of these specifications, two people following these exact instructions would likely produce different results. The variables are numerous: ruler calibration, pencil sharpness, line thickness, paper texture, applied pressure, interpretation of “centre,” and even ambient conditions affecting the paper.
This example reveals a paradox: the more precisely we attempt to specify requirements, the more variables we introduce, creating additional points of potential divergence. Even in mathematics and formal logic—languages specifically designed to eliminate ambiguity—we cannot escape this fundamental problem.
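A back-of-envelope simulation shows the direction of the problem. The error sources and magnitudes below are invented; the only point is that every newly specified variable is another independent place to diverge.

```python
import random

# Hypothetical error sources, one per specified variable (magnitudes invented).
ERROR_SOURCES_MM = {
    "ruler calibration": 0.05,
    "pencil line width": 0.10,
    "placement of 'centre'": 0.20,
    "hand pressure and wobble": 0.15,
}

def total_divergence(n_specified: int) -> float:
    """Total divergence (mm) when the first n variables are in play."""
    sigmas = list(ERROR_SOURCES_MM.values())[:n_specified]
    return sum(abs(random.gauss(0, s)) for s in sigmas)

random.seed(1)
for n in range(1, len(ERROR_SOURCES_MM) + 1):
    trials = [total_divergence(n) for _ in range(10_000)]
    print(f"{n} variable(s): mean divergence ≈ {sum(trials) / len(trials):.3f} mm")

# More specification means more independent error terms,
# hence more expected divergence between any two drawings.
```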
Precision vs Accuracy: A Useful Lens
The scientific distinction between precision and accuracy provides a valuable framework for understanding these limitations. In measurement, precision refers to the consistency of results (how close repeated measurements are to each other), while accuracy describes how close these measurements are to the true value.
Returning to our square example:
Precision: Two people might consistently reproduce their own squares with exact dimensions
Accuracy: Yet neither might capture the “true” square we intended to convey
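A minimal sketch makes the two notions computable. The measurements are made up, but the definitions are the standard ones: precision as spread across attempts, accuracy as offset from the true value.

```python
from statistics import mean, stdev

TRUE_SIDE_MM = 1.0  # the square we intended to convey

# Invented data: each person draws the square five times; we measure one side.
person_a = [1.31, 1.29, 1.30, 1.32, 1.30]  # tightly clustered, but off-target
person_b = [0.85, 1.15, 1.00, 0.90, 1.10]  # scattered, but centred on 1 mm

for label, sides in (("A", person_a), ("B", person_b)):
    spread = stdev(sides)                     # smaller spread = higher precision
    offset = abs(mean(sides) - TRUE_SIDE_MM)  # smaller offset = higher accuracy
    print(f"Person {label}: spread = {spread:.3f} mm, offset = {offset:.3f} mm")

# Person A is precise but inaccurate; Person B is accurate but imprecise.
```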
As we move from geometric shapes to natural objects, this distinction becomes even more revealing. Consider a maple tree in autumn. We might precisely convey certain categorical aspects (“maple,” “autumn colours”), but accurately describing the exact arrangement of branches and leaves becomes increasingly difficult.
The Target of Meaning: Precision vs. Accuracy in Communication
Borrowing again from measurement, imagine a target with a bullseye, where the bullseye represents perfect communication of meaning. Just as archers might hit different parts of a target, our attempts at communication can vary in both precision and accuracy.
Consider four scenarios:
Low Precision, Low Accuracy: When describing our autumn maple tree, we might say "it's a big tree with colourful leaves." This description is neither precise (it could apply to many trees) nor accurate (it misses the specific characteristics that make our maple unique). The communication scatters widely and misses the mark entirely.
High Precision, Low Accuracy: We might describe the tree as "a 47-foot tall maple with exactly 23,487 leaves displaying RGB colour values of #FF4500." This description is precisely specific but entirely misses the meaningful essence of the tree we're trying to describe. Like arrows clustering tightly in the wrong spot, we're consistently missing the point.
Low Precision, High Accuracy: "It's sort of spreading out, you know, with those typical maple leaves turning reddish-orange, kind of graceful looking." While imprecise, this description might actually capture something true about the tree's essence. The arrows scatter, but their centre of mass hits the target.
High Precision, High Accuracy: This ideal state is rarely achievable in complex communication. Even in our simple geometric example of drawing a 1mm square, achieving both precise specifications and accurate execution proves challenging. With natural objects and abstract concepts, this challenge compounds exponentially.
The Communication Paradox
This framework reveals a crucial paradox in language: often, our attempts to increase precision (by adding more specific details) can actually decrease accuracy (by moving us further from the essential meaning we’re trying to convey). Consider legal documents: their high precision often comes at the cost of accurately conveying meaning to most readers.
Implications for AI Communication
This precision-accuracy framework helps explain why AI systems like our Midjourney experiment show asymptotic behaviour. The system might achieve high precision (consistently generating similar images based on descriptions) while struggling with accuracy (matching the original intended image), or vice versa. The gap between human intention and machine interpretation often manifests as a trade-off between these two qualities.
Our challenge, both in human-to-human and human-to-AI communication, isn’t to achieve perfect precision and accuracy—a likely impossible goal—but to find the optimal balance for each context. Sometimes, like in poetry, low precision might better serve accurate meaning. In other contexts, like technical specifications, high precision becomes crucial despite potential sacrifices in broader accuracy.
The Power and Limits of Distinction
This leads us to a crucial insight from Ferdinand de Saussure’s semiotics about the relationship between signifier (the word) and signified (the concept or object). Language proves remarkably effective when its primary task is distinction among a limited set. In a garden containing three trees—a pine, a maple, and a willow—asking someone to “point to the pine” will likely succeed. The shared understanding of these categorical distinctions allows for reliable communication.
However, this effectiveness dramatically diminishes when we move from distinction to description. In a forest of a thousand pines, describing one specific tree becomes nearly impossible. Each additional descriptive detail (“the tall one with a bent branch pointing east”) paradoxically makes precise identification both more specific and less likely to succeed.
An AI Experiment in Description
To explore this phenomenon systematically, I conducted an experiment using Midjourney 6.1, a state-of-the-art image generation AI. The methodology was simple:
Generate an initial image
Describe the generated image in words
Use that description to generate a new image
Repeat the process multiple times
Attempt to refine the description to close the gap
Continue iterations
The results support an asymptotic hypothesis: while subsequent iterations might approach the original image, they never fully converge. This isn’t merely a limitation of the AI system but rather a demonstration of language’s fundamental insufficiency.
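In outline, the loop looks like the sketch below. Midjourney exposes no Python API, so the functions are stand-ins and the similarity curve is simulated from what I observed by hand; the shape, not the numbers, is the point.

```python
import random

def generate_image(prompt: str) -> str:
    return f"<render of: {prompt[:40]}>"      # stand-in for an actual render

def describe_image(image: str) -> str:
    return f"<description of: {image[:40]}>"  # stand-in for /DESCRIBE

def similarity_to_original(round_number: int) -> float:
    # Invented curve: quick early gains, then a plateau short of 1.0.
    return round(0.93 - 0.30 * 0.5 ** round_number + random.uniform(-0.02, 0.02), 2)

random.seed(7)
image = generate_image("A cute woman and her dog stand next to a tree")
for n in range(1, 7):
    image = generate_image(describe_image(image))  # image -> words -> image
    print(f"round {n}: similarity ≈ {similarity_to_original(n)}")

# The score climbs toward, but never reaches, a perfect match:
# approach without convergence, as the asymptotic hypothesis predicts.
```

On to the experiment itself. The seed prompt: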
A cute woman and her dog stand next to a tree
One can already analyse this for improvements, but let’s parse it together.
a cute woman
With this, we know we are referencing a woman, a female of the human species. There are billions of women in the world. What does she look like? What height, colouring, ethnicity, and other phenotypical attributes does she embody?
We also know she’s cute – whatever that means to the sender and receiver of these instructions.
I used an indefinite article, a, so there is one cute woman. Is she alone, or is she one from a group?
It should be obvious that we could provide more adjectives (and perhaps adverbs) to better convey our subject. We'll get there, but let's move on.
and
We’ve got a conjunction here. Let’s see what it connects to.
her dog
She’s with a dog. In fact, it’s her dog. This possession may not be conveyable or differentiable from some arbitrary dog, but what type of dog is it? Is it large or small? What colour coat? Is it groomed? Is it on a leash? Let’s continue.
stand
It seems that the verb stand refers to the woman, but is the dog also standing, or is she holding it? More words could qualify this statement better.
next to a tree
A tree is referenced. Similar questions arise regarding this tree. At a minimum, there is one tree of some variety. She and her dog are next to it. Is she on the right or left of it?
We think we can refine our statements with precision and accuracy, but can we? Might we just settle for “close enough”?
Let’s see how AI interpreted this statement.
Image: Eight Midjourney renders from the prompt: A cute woman and her dog stand next to a tree. I’ll choose one of these as my source image.
Let’s deconstruct the eight renders above. Compositionally, we can see that each image contains a woman, a dog, and a tree. Do any of these match what you had in mind? First, let’s see how Midjourney describes the first image.
In a bout of hypocrisy, Midjourney refused to /DESCRIBE the image it just generated.
Let’s cycle through them in turn.
A woman is standing to the left of an old-growth tree – twice identified as an oak tree. She's wearing faded blue jeans and a loose light-coloured T-shirt. She's got medium-length (maybe) red-brown hair in a small ponytail. A dog – her black and white dog, identified variously as a pitbull, an American Foxhound, and an American Bulldog – is also standing, on his hind legs. I won't even discuss the implied intent projected on the animal – happy, playful, wants attention… In two of the descriptions, she's said to be training it. They appear to be in a somewhat residential area, given the automobiles in the background. We see descriptions of season, time of day, lighting, angle, and quality.
A woman is standing to the right of an old-growth tree. She’s wearing short summer attire. Her dog is perched on the tree.
An older woman and her dog closer up.
A rear view of both a woman and her dog near an oak tree.
As it turned out, I wasn’t thrilled with any of these images, so I rendered a different one. Its description follows.
The consensus is that ‘a beautiful girl in a white dress and black boots stands next to a tree’ with a Jack Russell Terrier dog. I see birch trees and snow. It’s overcast. Let’s spend some time trying to reproduce it. To start, I’m consolidating the above descriptions. I notice some elements are missing, but we’ll add them as we try to triangulate to the original image.
A beautiful girl in a white dress and black boots stands next to a tree in a winter forest with birch trees. A Jack Russell Terrier dog is nearby, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
This is pretty far off the mark. We need to account for the overall setting and composition, relative positioning, clothing, hair, camera, perspective – even lighting and film emulsion.
Let’s see how we can refine it with some adjectives. Before this, I asked Anthropic’s Claude 3.5 to describe the image. Perhaps we’ll get more details.
An atmospheric winter photograph was taken in what appears to be a birch and pine forest. The ground is partially covered with snow, creating a patchy pattern of white snow and brown leaves or soil. In the image, there's a person wearing winter attire consisting of a white knit turtleneck sweater, a white skirt, dark knee-high socks, and grey lace-up boots. They're standing against a large tree trunk with their arms crossed. Next to them is a Jack Russell Terrier mix dog with distinctive brown and white colouring. The dog appears to be well-behaved, sitting attentively beside the tree. The composition is quite striking, with the massive dark tree trunk serving as a central focal point against the misty, winter forest background. The white clothing of the person creates a visual echo with the white bark of birch trees visible in the background, as well as with the patches of snow on the ground. The overall mood of the photo is somewhat moody and contemplative, enhanced by what appears to be overcast lighting and the misty quality of the forest in the background. The colour palette is largely monochromatic, dominated by whites, greys, and browns, which adds to the winter atmosphere of the scene.
We don’t seem to be moving in a good direction. Let’s modify the initial prompt.
A beautiful girl in a white dress and tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
I’ll allow the results to speak for themselves. Let’s see if we can’t get her out of the wedding gown and into a white jumper and skirt. I’ll bold the amends.
A beautiful girl in a white jumper and skirt wearing black leggings and tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
A beautiful young woman with long brown hair pulled to the side of her face in a white jumper and white skirt wearing black leggings under tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. Patchy snow is on the ground. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is overcast. The photo has a cinematic style. --ar 128:71
What gives?
I think my point has been reinforced. I’m getting nowhere fast. Let’s give it one more go and see where we end up. I’ve not got a good feeling about this.
A single large oak tree centred in the foreground of a winter forest with birch trees in the background. Patches of snow are on the ground. To the right of the oak tree stands a beautiful young woman with long brown hair pulled to the side of her face in a white jumper and white skirt wearing black leggings under tall laced black boots. She stands facing the camera. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is overcast. The photo has a cinematic style. --ar 128:71
With this last one, I re-uploaded the original render along with this text prompt. Notice that the girl now looks the same and the scene (mostly) appears to be in the same location, but there are still challenges.
After several more divergent attempts, I decided to focus on one element – the girl.
As I regard the image, I'm thinking of a police sketch artist. They get sort of close, don't they? They're experts. I'm not confident that I even have the vocabulary to convey accurately what I see. How do I describe her jumper? Is that a turtleneck or a high collar? It appears to be knit. Is it wool or some blend? Does that matter for an image? Does this pleated skirt have a particular name or shade of white? It looks as though she's wearing black leggings – perhaps polyester. And those boots – how do I describe them? I'm rerunning just the image above through a describe function to see if I can get any closer.
These descriptions are particularly interesting and telling. First, I’ll point out that AI attempts to identify the subject. I couldn’t find Noa Levin by a Google search, so I’m not sure how prominent she might be if she even exists at all in this capacity. More interesting still, the AI has placed her in a scenario where the pose was taken after a match. Evidently, this image reflects the style of photographer Guy Bourdin. Perhaps the jumper mystery is solved. It identified a turtleneck. I’ll ignore the tree and see if I can capture her with an amalgamation of these descriptions. Let’s see where this goes.
A photo-realistic portrait of Israeli female soccer player Noa Levin wearing a white turtleneck sweater, arms crossed, black boots, and a short skirt, with long brown hair, standing near a tree in a winter park. The image captured a full-length shot taken in a studio setting, using a Canon EOS R5 camera with a Canon L-series 80mm f/2 lens. The image has been professionally color-graded, with soft shadows, low contrast, and a clean, sharp focus. --ar 9:16
Close-ish. Let’s zoom in to get better descriptions of various elements starting with her face and hair.
Now, she's a sad and angry Russian woman with (very) pale skin; large, sad, grey eyes; and long, straight brown hair. Filmed in the style of either David LaChapelle or Alini Aenami (apparently misspelt from Alena Aenami). One thinks it was a SnapChat post. I was focusing on her face and hair, but it notices her wearing a white (oversized yet form-fitting) jumper with crossed arms.
I’ll drop the angry bit – and then the sad.
Stick a fork in it. I'm done. Perhaps it's not that language is insufficient; it's that my language skills are insufficient. If you can get closer to the original image, please forward the image, the prompt, and the seed, so I can post it.
The Complexity Gradient
A clear pattern emerges when we examine how language performs across different levels of complexity:
Categorical Distinction (High Success)
Identifying shapes among limited options
Distinguishing between tree species
Basic colour categorisation
Simple Description (Moderate Success)
Basic geometric specifications
General object characteristics
Broad emotional states
Complex Description (Low Success)
Specific natural objects
Precise emotional experiences
Unique instances within categories
Abstract Concepts (Lowest Success)
Philosophical ideas
Personal experiences
Qualia
As we move up this complexity gradient, the gap between intended meaning and received understanding widens exponentially.
The Tolerance Problem
Understanding these limitations leads us to a practical question: what level of communicative tolerance is acceptable for different contexts? Just as engineering embraces acceptable tolerances rather than seeking perfect measurements, perhaps effective communication requires:
Acknowledging the gap between intended and received meaning
Establishing context-appropriate tolerance levels
Developing better frameworks for managing these tolerances
Recognising when precision matters more than accuracy (or vice versa)
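What might 'context-appropriate tolerance' look like operationally? A sketch only, with entirely invented thresholds, to show the shape of the idea: each context caps how much spread and how much offset it will forgive.

```python
# Invented thresholds: (max_spread, max_offset) a context will tolerate,
# echoing the precision/accuracy pairing from earlier.
TOLERANCES = {
    "poetry":         (0.90, 0.30),  # wide spread is fine; the gist must land
    "casual chat":    (0.60, 0.50),
    "technical spec": (0.05, 0.10),  # tight on both dimensions
    "legal drafting": (0.02, 0.40),  # precise, even if opaque to most readers
}

def acceptable(context: str, spread: float, offset: float) -> bool:
    max_spread, max_offset = TOLERANCES[context]
    return spread <= max_spread and offset <= max_offset

print(acceptable("poetry", spread=0.80, offset=0.20))          # True
print(acceptable("technical spec", spread=0.80, offset=0.20))  # False
```

The same utterance passes in one context and fails in another, which is the point of thinking in tolerances rather than in perfection.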
Implications for Human-AI Communication
These insights have particular relevance as we develop more sophisticated AI systems. The limitations we’ve explored suggest that:
Some communication problems might be fundamental rather than technical
AI systems may face boundaries similar to those of human communication
The gap between intended and received meaning might be unbridgeable
Future development should focus on managing rather than eliminating these limitations
Conclusion
Perhaps this is a simple exercise in mental masturbation. Language's insufficiency isn't a flaw to be fixed – by definition, it can't be – but a fundamental characteristic to be understood and accommodated. The gap between intended and received meaning may be unbridgeable, but acknowledging this limitation is the first step toward more effective communication. As we continue to develop AI systems and push the boundaries of human-machine interaction, this understanding becomes increasingly critical.
Rather than seeking perfect precision in language, we might instead focus on:
Developing new forms of multimodal communication
Creating better frameworks for establishing shared context
Accepting and accounting for interpretative variance
Building systems that can operate effectively within these constraints
Understanding language’s limitations doesn’t diminish its value; rather, it helps us use it more effectively by working within its natural constraints.
George Harrison asked What Is Life? in a song, but he had a spiritual bent. The question is actually even more fundamental. Science has no settled definition of what life is. Some posit that a virus is not life, and a multicellular organism has been discovered here on Earth that requires no oxygen to survive. So when we are looking for signs of life on other planets, what is it that we are looking for, exactly?
I spend a lot of time calling out weasel words, but we can’t even reliably define something we fundamentally are, which is alive. What is life? Forget about truth, justice, love, and freedom. These are abstract concepts, but not life. We live. We see life—experience life. We are a subset of it, but how do we know we’ve accounted for the full domain? Could something non-living be intelligent?—have intelligence?
It's late and I am heading into a new year, AD 2023. And I was just thinking. If I am to believe Descartes, at least I'm alive parce que je pense, donc je suis, but I've got no answers in this realm.