If You Don’t Understand How Language Works, You Should Lose Your Licence to Comment on LLMs

Image: Android robot police officer writing a citation.

The air is thick with bad takes. Scroll for five minutes and you’ll find someone announcing, usually with the pomp of a TEDx speaker, that “AI has no emotions” or “It’s not really reading.” These objections are less profound insights than they are linguistic face-plants. The problem isn’t AI. It’s the speakers’ near-total ignorance of how language works.

Audio: NotebookLM podcast on this topic.

Language as the Unseen Operating System

Language is not a transparent pane of glass onto the world. It is the operating system of thought: messy, recursive, historically contingent. Words do not descend like tablets from Sinai; they are cobbled together, repurposed, deconstructed, and misunderstood across generations.

If you don’t understand that basic condition, that language is slippery, mediated, and self-referential, then your critique of Large Language Models is just noise in the system. LLMs are language machines. To analyse them without first understanding language is like reviewing a symphony while stone deaf.

The Myth of “Emotions”

Critics obsess over whether LLMs “feel.” But feeling has never been the measure of writing. The point of a sentence is not how the author felt typing it, but whether the words move the reader. Emotional “authenticity” is irrelevant; resonance is everything.

Writers know this. Philosophers know this. LLM critics, apparently, do not. They confuse the phenomenology of the writer with the phenomenology of the text. And in doing so, they embarrass themselves.

The Licence Test

So here’s the proposal: a licence to comment on AI. It wouldn’t be onerous. Just a few basics:

  • Semiotics 101: Know that words point to other words more than they point to things.
  • Context 101: Know that meaning arises from use, not from divine correspondence.
  • Critical Theory 101: Know that language carries baggage – cultural, historical, and emotional – that doesn’t belong to the machine or the individual speaker.

Fail these, and you’re not cleared to drive your hot takes onto the information superhighway.

Meta Matters

I’ve explored some of this in more detail elsewhere (link to Ridley Park’s “Myth of Emotion”), but the higher-level point is this: debates about AI are downstream of debates about language. If you don’t grasp the latter, your pronouncements on the former are theatre, not analysis.

Philosophy has spent centuries dismantling the fantasy of words as perfect mirrors of the world. It’s perverse that so many people skip that homework and then lecture AI about “meaning” and “feeling.”

Sundials, Spacetime, and Other Human Fabrications

Time is not fundamental. It is not lurking behind the curtains of reality, counting down the universe like some cosmic metronome. Time is a human construct, a clumsy accounting trick invented so that hunter-gatherers could remember when to plant seeds, priests could know when to fleece the flock, and later, managers could know when to dock your wages.

Video: Sabine Hossenfelder discusses the proposed origins of time

Yes, our ancestors tracked cycles: the swing of day and night, the waxing of the moon, the slouch of the seasons. But this is proto-time at best. Call it rhythm, call it recurrence, call it the universe refusing to sit still. It was not time. It was change, and we anthropomorphised it.

Then along came Newton with his stopwatch. He baptised “absolute time,” a divine river that flowed “equably without relation to anything external.” In other words, he built the cosmic grandfather clock and declared it law. This fantasy held just long enough for Einstein to make mischief, weaving time into space with duct tape and calling it spacetime. Romantic, yes, but hardly fundamental. Time, in Einstein’s cosmos, bends and dilates depending on who’s doing the bending. Not exactly the firm bedrock you’d expect of a “dimension.”

Meanwhile, in the quantum world, time is the awkward dinner guest: not an observable, not dynamic, just an external parameter scribbled into the equations because no one knew where else to put it. Like a bad houseplant, it sits in the corner – unmeasurable, unaccountable, but impossible to get rid of.

And yet, not everyone has given up the ghost. One camp – think Julian Barbour and Carlo Rovelli – insists time is an illusion, something emergent from relations, clocks, and counters. Others, like Lee Smolin, flip the script and claim that time is more fundamental than space itself, the real stage upon which the cosmic drama plays out. Philosophical infighting aside, what this tells you is that physics doesn’t actually know what time is. Physicists are as clueless as Aristotle, who called it “the number of motion”, a definition so circular it should’ve been printed on a sundial.

Enter Constructor Theory (Deutsch & Marletto), which simply does away with time entirely in the fundamental laws. No ticking clocks, no background river. Just possible and impossible transformations, with time emerging only when we strap timers onto systems and start counting. Which, of course, makes perfect sense: time is what we measure with clocks – and clocks are just things that change.

The dirty secret is this: every culture, every cosmology, every physics textbook has tried to smuggle “time” in as if it were self-evident, while quietly redefining it to suit the mood. We can’t agree on whether it’s an illusion, a dimension, or the last shred of fundamentality left to cling to. And if that isn’t the mark of a human construct, I don’t know what is.

Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking about another instance of the Sorites paradox. The notion just bothers me. I’ve long held that it is less a paradox than an intellectually lazy way to manoeuvre around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and tell the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
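
The slide is easy to make mechanical. Here is a toy rendering of the induction in Python – the numbers are illustrative, and no actual statute reasons this way, which is rather the point:

```python
# Toy rendering of the Sorites slide applied to a voting age.
# Premise 1: someone aged 18 years (in days) is fit to vote.
# Premise 2: one day less can't possibly make the difference.
# Run the induction and you end up enfranchising newborns.

fit_age_days = 18 * 365        # the statutory cliff-edge, in days (illustrative)
one_day_cannot_matter = True   # the dubious induction step

age = fit_age_days
while one_day_cannot_matter and age > 0:
    age -= 1                   # if fit at age + 1 days, then surely fit at age
print(f"Conclusion: fit to vote at {age} days old")  # the Peppa Pig electorate
```

Deny the absurd conclusion and you must deny the induction step – which means naming the single day on which fitness to vote switches on. That is the dart-throw described above.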

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart.
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>

The Enlightenment: A Postmortem

Or: How the Brightest Ideas in Europe Got Us into This Bloody Mess

Disclaimer: This output is entirely ChatGPT 4o from a conversation on the failure and anachronism of Enlightenment promises. I’m trying to finish editing my next novel, so I can’t justify taking much more time to share what are ultimately my thoughts as expounded upon by generative AI. I may comment personally in future. Until then, this is what I have to share.

AI haters, leave now or abandon ye all hope.


The Enlightenment promised us emancipation from superstition, authority, and ignorance. What we got instead was bureaucracy, colonialism, and TED Talks. We replaced divine right with data dashboards and called it progress. And like any good inheritance, the will was contested, and most of us ended up with bugger-all.

Below, I take each Enlightenment virtue, pair it with its contemporary vice, and offer a detractor who saw through the Enlightenment’s powder-wigged charade. Because if we’re going down with this ship, we might as well point out the dry rot in the hull.


1. Rationalism

The Ideal: Reason shall lead us out of darkness.
The Reality: Reason led us straight into the gas chambers—with bureaucratic precision.

Detractor: Max Horkheimer & Theodor Adorno

“Enlightenment is totalitarian.”
Dialectic of Enlightenment (1944)

Horkheimer and Adorno saw what reason looks like when it slips off its leash. Instrumental rationality, they warned, doesn’t ask why—it only asks how efficiently. The result? A world where extermination is scheduled, costs are optimised, and ethics are politely filed under “subjective.”


2. Empiricism

The Ideal: Observation and experience will uncover truth.
The Reality: If it can’t be measured, it can’t be real. (Love? Not statistically significant.)

Detractor: Michel Foucault

“Truth isn’t outside power… truth is a thing of this world.”
Power/Knowledge (1977)

Foucault dismantled the whole edifice. Knowledge isn’t neutral; it’s an instrument of power. Empiricism becomes just another way of disciplining the body—measuring skulls, classifying deviants, and diagnosing women with “hysteria” for having opinions.


3. Individualism

The Ideal: The sovereign subject, free and self-determining.
The Reality: The atomised consumer, trapped in a feedback loop of self-optimisation.

Detractor: Jean Baudrillard

“The individual is no longer an autonomous subject but a terminal of multiple networks.”
Simulacra and Simulation (1981)

You wanted autonomy? You got algorithms. Baudrillard reminds us that the modern “individual” is a brand in search of market validation. You are free to be whoever you want, provided it fits within platform guidelines and doesn’t disrupt ad revenue.


4. Secularism

The Ideal: Liberation from superstition.
The Reality: We swapped saints for STEMlords and called it even.

Detractor: Charles Taylor

“We are now living in a spiritual wasteland.”
A Secular Age (2007)

Taylor—perhaps the most polite Canadian apocalypse-whisperer—reminds us that secularism didn’t replace religion with reason; it replaced mystery with malaise. We’re no longer awed, just “motivated.” Everything is explainable, and yet somehow nothing means anything.


5. Progress

The Ideal: History is a forward march toward utopia.
The Reality: History is a meat grinder in a lab coat.

Detractor: Walter Benjamin

“The storm irresistibly propels him into the future to which his back is turned.”
Theses on the Philosophy of History (1940)

Benjamin’s “angel of history” watches helplessly as the wreckage piles up—colonialism, genocide, climate collapse—all in the name of progress. Every step forward has a cost, but we keep marching, noses in the spreadsheet, ignoring the bodies behind us.


6. Universalism

The Ideal: One humanity, under Reason.
The Reality: Enlightenment values, brought to you by cannon fire and Christian missionaries.

Detractor: Gayatri Chakravorty Spivak

“White men are saving brown women from brown men.”
Can the Subaltern Speak? (1988)

Universalism was always a bit… French, wasn’t it? Spivak unmasks it as imperialism in drag—exporting “rights” and “freedom” to people who never asked for them, while ignoring the structural violence built into the Enlightenment’s own Enlightened societies.


7. Tolerance

The Ideal: Let a thousand opinions bloom.
The Reality: Tolerance, but only for those who don’t threaten the status quo.

Detractor: Karl Popper

“Unlimited tolerance must lead to the disappearance of tolerance.”
The Open Society and Its Enemies (1945)

Popper, bless him, thought tolerance needed a firewall. But in practice, “tolerance” has become a smug liberal virtue signalling its own superiority while deplatforming anyone who makes the dinner party uncomfortable. We tolerate all views—except the unseemly ones.


8. Scientific Method

The Ideal: Observe, hypothesise, repeat. Truth shall emerge.
The Reality: Publish or perish. Fund or flounder.

Detractor: Paul Feyerabend

“Science is not one thing, it is many things.”
Against Method (1975)

Feyerabend called the whole thing a farce. There is no single “method,” just a bureaucratic orthodoxy masquerading as objectivity. Today, science bends to industry, cherry-picks for grants, and buries null results in the backyard. Peer review? More like peer pressure.


9. Anti-Authoritarianism

The Ideal: Smash the throne! Burn the mitre!
The Reality: Bow to the data analytics team.

Detractor: Herbert Marcuse

“Free election of masters does not abolish the masters or the slaves.”
One-Dimensional Man (1964)

Marcuse skewered the liberal illusion of choice. We may vote, but we do so within a system that already wrote the script. Authority didn’t vanish; it just became procedural, faceless, algorithmic. Bureaucracy is the new monarchy—only with more forms.


10. Education and Encyclopaedism

The Ideal: All knowledge, accessible to all minds.
The Reality: Behind a paywall. Written in impenetrable prose. Moderated by white men with tenure.

Detractor: Ivan Illich

“School is the advertising agency which makes you believe that you need the society as it is.”
Deschooling Society (1971)

Illich pulls back the curtain: education isn’t emancipatory; it’s indoctrinatory. The modern university produces not thinkers but credentialed employees. Encyclopaedias are replaced by Wikipedia, curated by anonymous pedants and revision wars. Truth is editable.


Postscript: Picking through the Rubble

So—has the Enlightenment failed?

Not exactly. It succeeded too literally. It was taken at its word. Its principles, once radical, were rendered banal. It’s not that reason, progress, or rights are inherently doomed—it’s that they were never as pure as advertised. They were always products of their time: male, white, bourgeois, and utterly convinced of their own benevolence.

If there’s a path forward, it’s not to restore Enlightenment values, but to interrogate them—mercilessly, with irony and eyes open.

After all, the problem was never darkness. It was the people with torches who thought they’d found the only path.

Conscious of Consciousness

Let us begin with the heresy: consciousness is not a thing. It is not a light bulb switched on in the mind. It is not a theatre with a little homunculus watching the play unfold. It is not a ghost in the machine, nor even a particularly welcome tenant. Consciousness is a conjuring trick – one so convincing that even the conjurer forgets it is an act.

Video: Related topic: IAI – Joscha Bach on Consciousness

If that unsettles you, good. Welcome to the simulacrum.

The Wetness of Mind

We often hear that consciousness is “emergent,” but the term is used so promiscuously that it risks becoming decorative. So let us be specific. Consciousness, if it is emergent, is emergent as wetness is from H2O: not in the hydrogen or the oxygen, but in their relationship when bonded just so. Joscha Bach and others argue that consciousness arises not from the bits, but from the dance – the recursive feedback loops and predictive models running atop the neural substrate.

Audio: NotebookLM podcast on this topic.

In this view, the self is not the pilot but the dashboard. It is the user interface the brain conjures to coordinate action, interpret input, and maintain internal coherence. Not because it’s real, but because it’s useful. You are a GUI with delusions of grandeur.

The Cast of Theorists

Let us now parade the usual suspects:

  • Joscha Bach: Consciousness is a virtual self-model, emergent from recursive, computational feedback. Not the product of neurons firing per se, but of their ability to simulate a stable identity across time.
  • Thomas Metzinger: There is no self. Only a Phenomenal Self-Model (PSM) which becomes phenomenally transparent when the system no longer recognises it as a model. Consciousness is the experience of this hallucinated self.
  • Daniel Dennett: Dismantles the notion of a “central experiencer” with his Multiple Drafts Model. Consciousness is a narrative, a distributed process where drafts of experience compete, are edited, and retroactively interpreted.
  • David Chalmers: Waves his flag at the Hard Problem of consciousness. You can explain behaviour, memory, attention—but not experience itself. He flirts with dualism and panpsychism while insisting there’s a gap science cannot yet close.
  • Giulio Tononi: Gives us Integrated Information Theory (IIT) and the elusive metric Φ (phi). Consciousness is the degree to which information is unified within a system. Your brain is conscious because its parts can’t be reduced without losing coherence.
  • Karl Friston: The prophet of Free Energy Minimisation. Consciousness is an emergent property of systems that seek to reduce prediction error. The brain is a Bayesian engine, and the self is its best guess about how to survive.

So What Is Consciousness?

A hallucination. A recursive illusion. A predictive dashboard. A statistical artefact. A phi score. A phenomenally transparent model. Take your pick.

None of these theories fully agree, but most converge on one elegant horror: you are not what you think you are. The sense of being a continuous, stable, indivisible “I” is a construction. A simulation. The dream from which there is no waking because waking is part of the dream.

This is not despair; it is clarity. Just as wetness does not cry when told it is not a substance, the self need not mourn its own illusion. It is a marvellous fiction, worth inhabiting.

Conclusion: Through the Mirror

To be conscious of consciousness is to stand in the hall of mirrors and realise none reflect the original—because there is no original. The mirror is the thing.

But if the theatre is empty, the play goes on. Scripts are written, models simulated, selves performed. And perhaps, in this strange recursion, we find not meaning, but the possibility of coherence.

So raise a glass to the illusion. May your predictive model stay optimised, your narrative stay plausible, and your hallucinated self remain just this side of transparent.


For further hallucinatory episodes, consult your local philosopher, neuroscientist, or AI researcher. Side effects may include derealisation, epistemic vertigo, and mild enlightenment.

Speculative Philosophy on Screen: Identity, Agency, and the Fiction of Reality

Image: Close-up of a human eye with digital glitch effects and overlaid text reading 'What if reality is wrong?' – a visual metaphor for distorted perception and unreliable truth.

Regular readers know I often write about identity, free will, and the narrative constraints of language. But I also explore these ideas through fiction, under the name Ridley Park.

In this short video, I unpack the philosophical motivations behind my stories, including:

  • Why reality is never as it seems
  • Why the self is a narrative convenience
  • What Heidegger’s Geworfenheit and Galen Strawson’s Causa Sui argument reveal about agency
  • And why language fails us – even when we think it serves

This isn’t promotional fluff. It’s epistemological dissent in a new format. Fictional, yes, but only in the sense that most of reality is, too.

▶️ Watch the video: Why I Write the Way I Do

On the Chronic Human Need to Anthropomorphise Everything

Oh, You Sweet Summer Algorithm

Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.

Audio: NotebookLM podcast on this topic.

Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.

The Great Anthropomorphic Flaw (aka Feature)

Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.

Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?
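
For anyone who wants the phrase cashed out, here is roughly what “statistically interpolate based on token weights” means in miniature – a toy next-token sampler with invented tokens and scores, standing in for no real model’s vocabulary:

```python
import math
import random

# Toy sketch of "statistically interpolate based on token weights":
# score candidate next tokens (logits), turn the scores into probabilities
# with a softmax, and sample one. Tokens and scores are invented for
# illustration; no real model or vocabulary is implied.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(candidates):
    """candidates: list of (token, logit) pairs, as scored by 'the model'."""
    tokens, logits = zip(*candidates)
    return random.choices(tokens, weights=softmax(logits), k=1)[0]

prompt = "The cat sat on the"
candidates = [("mat", 2.3), ("sofa", 1.1), ("roof", 0.4), ("moon", -1.0)]
print(prompt, next_token(candidates))
```

No bloodstream, no breakfast, no irony – just weighted dice. Which is exactly why “you” and “think” feel like such comfortable lies.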

Why We Keep Doing It (Despite Knowing Better)

To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.

This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.

And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.

When the Roleplay Gets Risky

Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.

More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.

And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.

Our Language is a Terrible Tool for This Job

English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.

Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.

This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.

The Semantics of Sentimentality (Or: “How Does This Sound to You?”)

Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.

They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.

But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.

Time to Grow Up (Just a Bit)

This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.

Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.

And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.

Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.

The Mirror, Darkly

Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.

The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.

The Rhetoric of Realism: When Language Pretends to Know

Let us begin with the heresy: Truth is a rhetorical artefact. Not a revelation. Not a metaphysical essence glimmering behind the veil. Just language — persuasive, repeatable, institutionally ratified language. In other words: branding.

Audio: NotebookLM podcast on this topic.

This is not merely a postmodern tantrum thrown at the altar of Enlightenment rationalism. It is a sober, if impolite, reminder that nearly everything we call “knowledge” is stitched together with narrative glue and semantic spit. Psychology. Neuroscience. Ethics. Economics. Each presents itself as a science — or worse, a moral imperative — but their foundations are built atop a linguistic faultline. They are, at best, elegant approximations; at worst, dogma in drag.

Let’s take psychology. Here is a field that diagnoses your soul via consensus. A committee of credentialed clerics sits down and declares a cluster of behaviours to be a disorder, assigns it a code, and hands you a script. It is then canonised in the DSM, the Diagnostic Scripture Manual. Doubt its legitimacy and you are either naïve or ill — which is to say, you’ve just confirmed the diagnosis. It’s a theological trap dressed in the language of care.

Or neuroscience — the church of the glowing blob. An fMRI shows a region “lighting up” and we are meant to believe we’ve located the seat of love, the anchor of morality, or the birthplace of free will. Never mind that we’re interpreting blood-oxygen fluctuations in composite images smoothed by statistical witchcraft. It looks scientific, therefore it must be real. The map is not the territory, but in neuroscience, it’s often a mood board.

And then there is language itself, the medium through which all these illusions are transmitted. It is the stage, the scenery, and the unreliable narrator. My Language Insufficiency Hypothesis proposes that language is not simply a flawed tool — it is fundamentally unfit for the task it pretends to perform. It was forged in the furnace of survival, not truth. We are asking a fork to play the violin.

This insufficiency is not an error to be corrected by better definitions or clever metaphors. It is the architecture of the system. To speak is to abstract. To abstract is to exclude. To exclude is to falsify. Every time we speak of a thing, we lose the thing itself. Language functions best not as a window to the real but as a veil — translucent, patterned, and perpetually in the way.

So what, then, are our Truths™? They are narratives that have won. Stories that survived the epistemic hunger games. They are rendered authoritative not by accuracy, but by resonance — psychological, cultural, institutional. A “truth” is what is widely accepted, not because it is right, but because it is rhetorically unassailable — for now.

This is the dirty secret of epistemology: coherence masquerades as correspondence. If enough concepts link arms convincingly, we grant them status. Not because they touch reality, but because they echo each other convincingly in our linguistic theatre.

Libet’s experiment, Foucault’s genealogies, McGilchrist’s hemispheric metaphors — each peels back the curtain in its own way. Libet shows that agency might be a post-hoc illusion. Foucault reveals that disciplines don’t describe the subject; they produce it. McGilchrist laments that the Emissary now rules the Master, and the world is flatter for it.

But all of them — and all of us — are trapped in the same game: the tyranny of the signifier. We speak not to uncover truth, but to make truth-sounding noises. And the tragedy is, we often convince ourselves.

So no, we cannot escape the prison of language. But we can acknowledge its bars. And maybe, just maybe, we can rattle them loudly enough that others hear the clank.

Until then, we continue — philosophers, scientists, diagnosticians, rhetoricians — playing epistemology like a parlour game with rigged dice, congratulating each other on how well the rules make sense.

And why wouldn’t they? We wrote them.

The Scourge They’re Really Fighting Is Ambiguity

A Sequel to “The Disorder of Saying No” and a Companion to “When ‘Advanced’ Means Genocide”

In my previous post, The Disorder of Saying No, I explored the way resistance to authority is pathologised, particularly when that authority is cloaked in benevolence and armed with diagnostic manuals. When one refuses — gently, thoughtfully, or with a sharp polemic — one is no longer principled. One is “difficult.” Or in my case, oppositional.

Audio: NotebookLM podcast on this topic.

So when I had the gall to call out Bill Maher for his recent linguistic stunt — declaring that a woman is simply “a person who menstruates” — I thought I was doing the rational thing: pointing out a classic bit of reductionist nonsense masquerading as clarity. Maher, after all, was not doing biology. He was playing lexicographer-in-chief, defining a term with centuries of philosophical, sociological, and political baggage as though it were a checkbox on a medical form.

I said as much: that he was abusing his platform, presenting himself as the sole arbiter of the English language, and that his little performance was less about clarity and more about controlling the terms of discourse.

My friend, a post-menopausal woman herself, responded not by engaging the argument, but by insinuating — as others have — that I was simply being contrary. Oppositional. Difficult. Again. (She was clearly moved by When “Advanced” Means Genocide, but may have missed the point.)

So let’s unpack this — not to win the debate, but to show what the debate actually is.

This Isn’t About Biology — It’s About Boundary Maintenance

Maher’s statement wasn’t intended to clarify. It was intended to exclude. It wasn’t some linguistic slip; it was a rhetorical scalpel — one used not to analyse, but to amputate.

And the applause from some cisgender women — particularly those who’ve “graduated” from menstruation — reveals the heart of the matter: it’s not about reproductive biology. It’s about controlling who gets to claim the term woman.

Let’s steelman the argument, just for the sport of it:

Menstruation is a symbolic threshold. Even if one no longer menstruates, having done so places you irrevocably within the category of woman. It’s not about exclusion; it’s about grounding identity in material experience.

Fine. But now let’s ask:

  • What about women who’ve never menstruated?
  • What about intersex people?
  • What about trans women?
  • What about cultures with radically different markers of womanhood?

You see, it only works if you pretend the world is simpler than it is.

The Language Insufficiency Hypothesis: Applied

This is precisely where the Language Insufficiency Hypothesis earns its keep.

The word woman is not a locked vault. It is a floating signifier, to borrow from Barthes — a term whose meaning is perpetually re-negotiated in use. There is no singular essence to the word. It is not rooted in biology, nor in social role, nor in performance. It is a hybrid, historically contingent construct — and the moment you try to fix its meaning, it slips sideways like a greased Wittgensteinian beetle.

“Meaning is use,” says Wittgenstein, and this is what frightens people.

If woman is defined by use and not by rule, then anyone might claim it. And suddenly, the club is no longer exclusive.

That’s the threat Maher and his defenders are really reacting to. Not trans women. Not intersex people. Not language activists or queer theorists.

The threat is ambiguity.

What They Want: A World That Can Be Named

The push for rigid definitions — for menstruation as membership — is a plea for a world that can be named and known. A world where words are secure, stable, and final. Where meaning doesn’t leak.

But language doesn’t offer that comfort.

It never did.

And when that linguistic instability gets too close to something personal, like gender identity, or the foundation of one’s own sense of self, the defensive response is to fortify the language, as though building walls around a collapsing church.

Maher’s defenders aren’t making scientific arguments. They’re waging semantic warfare. If they can hold the definition, they can win the cultural narrative. They can hold the gates to Womanhood and keep the undesirables out.

That’s the fantasy.

But language doesn’t play along.

Conclusion: Words Will Not Save You — but They Might Soothe the Dead

In the end, Maher’s definition is not merely incorrect. It is insufficient. It cannot accommodate the complexity of lived experience and cannot sustain the illusion of clarity for long.

And those who cling to it — friend or stranger, progressive or conservative — are not defending biology. They are defending nostalgia. Specifically, a pathological nostalgia for a world that no longer exists, and arguably never did: a world where gender roles were static, language was absolute, and womanhood was neatly circumscribed by bodily functions and suburban etiquette.

Ozzie and Harriet loom large here — not as individuals but as archetypes. Icons of a mid-century dream in which everyone knew their place, and deviation was something to be corrected, not celebrated. My friend, of that generation, clings to this fantasy not out of malice but out of a desperate yearning for order. The idea that woman could mean many things, and mean them differently across contexts, is not liberating to her — it’s destabilising.

But that world is gone. And no amount of menstruation-based gatekeeping will restore it.

The Real Scourge Is Ambiguity

Maher’s tantrum wasn’t about truth. It was about fear — fear of linguistic drift, of gender flux, of a world in which meaning no longer obeys. The desire to fix the definition of “woman” is not a biological impulse. It’s a theological one.

And theology, like nostalgia, often makes terrible policy.

This is why the Language Insufficiency Hypothesis matters. Because it reminds us that language does not stabilise reality — it masks its instability. The attempt to define “woman” once and for all is not just futile — it’s an act of violence against difference, a linguistic colonisation of lived experience.

So Let Them Rest

Ozzie and Harriet are dead. Let them rest.
Let their picket fence moulder. Let their signage decay.

The world has moved on. The language is shifting beneath your feet. And no amount of retroactive gatekeeping can halt that tremor.

The club is burning. And the only thing left to save is honesty.

Semantic Drift: When Language Outruns the Science

Science has a language problem. Not a lack of it – if anything, a surfeit. But words, unlike test tubes, do not stay sterile. They evolve, mutate, and metastasise. They get borrowed, bent, misused, and misremembered. And when the public discourse gets hold of them, particularly on platforms like TikTok, it’s the language that gets top billing. The science? Second lead, if it’s lucky.

Semantic drift is at the centre of this: the gradual shift in meaning of a word or phrase over time. It’s how “literally” came to mean “figuratively,” how “organic” went from “carbon-based” to “morally superior,” and how “theory” in science means robust explanatory framework but in the public square means vague guess with no homework.

In short, semantic drift lets rhetoric masquerade as reason. Once a word acquires enough connotation, you can deploy it like a spell. No need to define your terms when the vibe will do.

Audio: NotebookLM podcast on this topic.

When “Vitamin” No Longer Means Vitamin

Take the word vitamin. It sounds objective. Authoritative. Something codified in the genetic commandments of all living things. (reference)

But it isn’t.

A vitamin is simply a substance that an organism needs but cannot synthesise internally, and must obtain through its diet. That’s it. It’s a functional definition, not a chemical one.

So:

  • Vitamin C is a vitamin for humans, but not for dogs, cats, or goats. They make their own. We lost the gene. Tough luck.
  • Vitamin D, meanwhile, isn’t a vitamin at all. It’s a hormone, synthesised when sunlight hits your skin. Its vitamin status is a historical relic – named before we knew better, and now marketed too profitably to correct.
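
Because the definition is relational rather than chemical, it can be made embarrassingly literal. A minimal sketch, with an invented species table that is illustrative only, not nutritional advice:

```python
# Toy illustration of the functional definition of "vitamin": the same compound
# counts as a vitamin for one species and not for another, depending on whether
# that species needs it and can synthesise it. The table is invented for
# illustration, not a nutritional reference.

NEEDS = {
    "human": {"ascorbic acid"},
    "dog": {"ascorbic acid"},
    "goat": {"ascorbic acid"},
}

CAN_SYNTHESISE = {
    "human": set(),                 # we lost the gene
    "dog": {"ascorbic acid"},
    "goat": {"ascorbic acid"},
}

def is_vitamin(compound: str, species: str) -> bool:
    """A compound is a vitamin for a species only if the species needs it
    but cannot make it in-house."""
    needed = compound in NEEDS.get(species, set())
    home_made = compound in CAN_SYNTHESISE.get(species, set())
    return needed and not home_made

print(is_vitamin("ascorbic acid", "human"))  # True  - vitamin C is a vitamin for us
print(is_vitamin("ascorbic acid", "goat"))   # False - goats brew their own
```

One compound, two verdicts – which is the whole point: “vitamin” names a relationship, not a molecule.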

But in the land of TikTok and supplement shelves, these nuances evaporate. “Vitamin” has drifted from scientific designation to halo term – a linguistic fig leaf draped over everything from snake oil to ultraviolet-induced steroidogenesis.

The Rhetorical Sleight of Hand

This linguistic slippage is precisely what allows the rhetorical shenanigans to thrive.

In one video, a bloke claims a burger left out for 151 days neither moulds nor decays, and therefore, “nature won’t touch it.” From there, he leaps (with Olympic disregard for coherence) into talk of sugar spikes, mood swings, and “metabolic chaos.” You can almost hear the conspiratorial music rising.

The science here is, let’s be generous, circumstantial. But the language? Oh, the language is airtight.

Words like “processed,” “chemical,” and “natural” are deployed like moral verdicts, not descriptive categories. The implication isn’t argued – it’s assumed, because the semantics have been doing quiet groundwork for years. “Natural” = good. “Chemical” = bad. “Vitamin” = necessary. “Addiction” = no agency.

By the time the viewer blinks, they’re nodding along to a story told by words in costume, not facts in context.

The Linguistic Metabolism of Misunderstanding

This is why semantic drift isn’t just an academic curiosity – it’s a vector. A vector by which misinformation spreads, not through outright falsehood, but through weaponised ambiguity.

A term like “sugar crash” sounds scientific. It even maps onto a real physiological process: postprandial hypoglycaemia. But when yoked to vague claims about mood, willpower, and “chemical hijacking,” it becomes a meme with lab coat cosplay. And the science, if mentioned at all, is there merely to decorate the argument, not drive it.

That’s the crux of my forthcoming book, The Language Insufficiency Hypothesis: that our inherited languages, designed for trade, prayer, and gossip, are woefully ill-equipped for modern scientific clarity. They lag behind our knowledge, and worse, they often distort it.

Words arrive first. Definitions come limping after.

In Closing: You Are What You Consume (Linguistically)

The real problem isn’t that TikTokers get the science wrong. The problem is that they get the words right – right enough to slip past your critical filters. Rhetoric wears the lab coat. Logic gets left in the locker room.

If vitamin C is a vitamin only for some species, and vitamin D isn’t a vitamin at all, then what else are we mislabelling in the great nutritional theatre? What other linguistic zombies are still wandering the scientific lexicon?

Language may be the best tool we have, but don’t mistake it for a mirror. It’s a carnival funhouse – distorting, framing, and reflecting what we expect to see. And until we fix that, science will keep playing second fiddle to the words pretending to explain it.