In anticipation of the publication of A Language Insufficiency Hypothesis in January 2026, I’ve created a Language Insufficiency GPT.
Today I’m launching something designed to exploit a familiar failure mode with forensic precision:
👉 https://chatgpt.com/g/g-694018a9bbc88191a8360d65a530e50c-language-insufficiency-gpt
Naturally, it will make more sense alongside the book. But it may still provide a bit of entertainment – and mild discomfort – in the meantime.

tl;dr: Language is generally presumed to be stable. Words mean what you think they mean, right? A table is a table. A bird is a bird. Polysemy aside, these are solid, dependable units.
Then we arrive at ‘freedom’, ‘justice’, ‘truth’, and an entire panoply of unstable candidates. And let’s not even pretend qualia are behaving themselves.
So when someone says ‘truth’, ‘free speech’, or ‘IQ’, you may suddenly realise you’ve been arguing with a cardboard cut-out wearing your own assumptions. That isn’t just interpersonal mischief. It’s language doing exactly what it was designed to do: letting you glide over the hard problems while sounding perfectly reasonable.
If that sounds banal, you’ve already fallen for the trap.
Give it a try – or wait until you’ve digested the book. Not literally, unless you’re short on fibre.
Cheers.
* As I’ve noted previously, the quality of NotebookLM output varies – usually in predictable directions. This one does well enough, but it doesn’t have enough context to get the story right (because it was drawing only from this page rather than from a fuller accounting of the LIH). Its trailing comment reveals that it doesn’t grasp that “new words” don’t solve the problem.
Earlier, it suggests that language is intentionally vague – an assertion I do not make. You can read some of the earlier incarnations, or wait for the book to be published.
