As the publication date of A Language Insufficiency Hypothesis (LIH) draws nearer, I feel it’s a good time to promote it (obviously) and to introduce some of the problems it uncovers – including common misperceptions I’ve already heard. Through that early feedback, I now understand some underlying structural limitations that I hadn’t considered, but this only strengthens my position. As I state at the start of the book, the LIH isn’t a cast-in-stone artefact. Other discoveries will inevitably be made. For now, consider it a way to think about the deficiencies of language, around which remediation strategies can be developed.
Audio: NotebookLM summary podcast of this content.
Let’s clear the undergrowth first. The Language Insufficiency Hypothesis is not concerned with everyday ambiguity, garden-variety polysemy, or the sort of misunderstandings that vanish the moment someone bothers to supply five seconds of context. That terrain is already well-mapped, thoroughly fenced, and frankly dull.
Take the classic sort of example wheeled out whenever someone wants to sound clever without doing much work:
‘I made a 30-foot basket’.
Video: a woman making a large basket
If you’re a basketweaver, you picture an absurdly large basket and quietly question the maker’s life choices. If you’re watching basketball, you hear ‘score’. If you’re anywhere near the context in which the sentence was uttered, the meaning is obvious. If it isn’t, the repair cost is trivial. Add context, move on, live your life.
Language did not fail here. It merely waited for its coat. This is not the sort of thing the LIH loses sleep over.
The Groucho Marx Defence, or: Syntax Is Not the Problem
Logicians and armchair philosophers love to reach for jokes like Groucho Marx’s immortal line:
‘I shot an elephant in my pyjamas. Why it was wearing my pyjamas, I’ll never know’.
Video: A man and elephant in pyjamas (no sound)
Yes, very funny. Yes, the sentence allows for a syntactic misreading. No, nobody actually believes the elephant was lounging about in striped silk. The humour works precisely because the ‘wrong’ parse is momentarily entertained and instantly rejected.
Again, language is not insufficient here. It’s mischievous. There’s a difference.
If the LIH were worried about this sort of thing, its ambitions would be indistinguishable from an undergraduate logic textbook with better branding.
Banks, Rivers, and the Myth of Constant Confusion
Likewise, when someone in a city says, ‘I went to the bank’, no sane listener imagines them strolling along a riverbank, unless they are already knee-deep in pastoral fantasy or French tourism brochures. Context does the heavy lifting. It almost always does.
Video: Rare footage of me trying to withdraw funds at my bank (no sound)
This is not a crisis of meaning. This is language functioning exactly as advertised.
Where the Trouble Actually Starts: Contestables
The LIH begins where these tidy examples stop being helpful. It concerns itself with Contestables: terms like truth, freedom, justice, fairness, harm, equality. Words that look stable, behave politely in sentences, and then detonate the moment you ask two people what they actually mean by them. These are not ambiguous in the casual sense. They are structurally contested.
In political, moral, and cultural contexts, different groups use the same word to gesture at fundamentally incompatible conceptual frameworks, all while assuming a shared understanding that does not exist. The conversation proceeds as if there were common ground, when in fact there is only overlap in spelling.
That’s why attempts to ‘define’ these terms so often collapse into accusation:
That’s not what freedom means. That’s not real justice. You’re redefining truth.
No, the definitions were never shared in the first place. The disagreement was smuggled in with the noun.
‘Just Ignore the Word’ Is Not a Rescue
A common response at this point is to suggest that we simply bypass the troublesome term and discuss the concrete features each party associates with it. Fine. Sensible. Often productive. But notice what this manoeuvre concedes. It does not save the term. It abandons it.
If meaningful discussion can only proceed once the word is set aside and replaced with a list of clarifications, constraints, examples, and exclusions, then the word has already failed at its primary job: conveying shared meaning. This is precisely the point the LIH is making.
The insufficiency is not that language is vague, or flexible, or context-sensitive. It’s that beyond a certain level of conceptual complexity, language becomes a confidence trick. It gives us the feeling of agreement without the substance, the appearance of communication without the transaction.
At that point, words don’t merely underperform. They mislead.
Accusations of writing being AI are becoming more common – an irony so rich it could fund Silicon Valley for another decade. We’ve built machines to detect machines imitating us, and then we congratulate ourselves when they accuse us of being them. It’s biblical in its stupidity.
A year ago, I read an earnest little piece on ‘how to spot AI writing’. The tells? Proper grammar. Logical flow. Parallel structure. Essentially, competence. Imagine that – clarity and coherence as evidence of inhumanity. We’ve spent centuries telling students to write clearly, and now, having finally produced something that does, we call it suspicious.
Audio: NotebookLM podcast on this topic and the next one.
My own prose was recently tried and convicted by Reddit’s self-appointed literati. The charge? Too well-written, apparently. Reddit – where typos go to breed. I pop back there occasionally, against my better judgment, to find the same tribunal of keyboard Calvinists patrolling the comment fields, shouting ‘AI!’ at anything that doesn’t sound like it was composed mid-seizure. The irony, of course, is that most of them wouldn’t recognise good writing unless it came with upvotes attached.
Image: A newspaper entry that may have been generated by an AI with the surname Kahn. 🧐🤣
Now, I’ll admit: my sentences do have a certain mechanical precision. Too many em dashes, too much syntactic symmetry. But that’s not ‘AI’. That’s simply craft. Machines learned from us. They imitate our best habits because we can’t be bothered to keep them ourselves. And yet, here we are, chasing ghosts of our own creation, declaring our children inhuman.
Apparently, there are more diagnostic signs. Incorporating an Alt-26 arrow to represent progress is a telltale infraction → like this. No human, they say, would choose to illustrate A → B that way. Instead, one is faulted for remembering – or at least understanding – that Alt-key combinations exist to give access to a fuller array of characters: …, ™, and so on. I was using these symbols long before AI Wave 4 hit shore.
Interestingly, I prefer spaced en dashes over em dashes in most cases. The em dash is an Americanism I prefer not to adopt, but it does reveal the American bias in the training data. I can consciously adopt a European spin; AI, lacking intent, finds this harder to remember.
I used to use em dashes freely, but now I mostly avoid them—if only to sidestep the mass hysteria. Perhaps I’ll start using AI to randomly misspell words and wreck my own grammar. Or maybe I’ll ask it to output everything in AAVE, or some unholy creole of Contemporary English and Chaucer, and call it a stylistic choice. (For the record, the em dashes in this paragraph were injected by the wee-AI gods and left as a badge of shame.)
Meanwhile, I spend half my time wrestling with smaller, dumber AIs – the grammar-checkers and predictive text gremlins who think they know tone but have never felt one. They twitch at ellipses, squirm at irony, and whimper at rhetorical emphasis. They are the hall monitors of prose, the petty bureaucrats of language.
And the final absurdity? These same half-witted algorithms are the ones deputised to decide whether my writing is too good to be human.