Almost a decade in the making, this book explains why more time, more effort, and more detail do not reliably improve certain forms of communication. Beyond a point, returns diminish sharply. In some domains, they collapse altogether.
The manuscript focuses on English, but the hypothesis has already been extended to French (published separately), and I am continuing work on other ontological barriers. If you’re interested in testing or extending the framework in your own language, feel free to get in touch.
Over the coming weeks, I’ll be unpacking aspects of the Language Insufficiency Hypothesis in more depth here. The book’s role is deliberately limited: it defines the problem, establishes the structure, and offers grounding examples. The real work happens in the consequences.
For now, the important thing is simple: the book is finally available.
How retribution stays upright by not being examined
There is a persistent belief that our hardest disagreements are merely technical. If we could stop posturing, define our terms, and agree on the facts, consensus would emerge. This belief survives because it works extremely well for birds and tables.
It fails spectacularly for justice.
Audio: NotebookLM summary podcast of this topic.
The Language Insufficiency Hypothesis (LIH) isn’t especially interested in whether people disagree. It’s interested in how disagreement behaves under clarification. With concrete terms, clarification narrows reference. With contested ones, it often fractures it. The more you specify, the more ontologies appear.
Justice is the canonical case.
Retributive justice is often presented as the sober, adult conclusion. Not emotional. Not ideological. Just what must be done. In practice, it is a delicately balanced structure built out of other delicately balanced structures. Pull one term away and people grow uneasy. Pull a second and you’re accused of moral relativism. Pull a third and someone mentions cavemen.
Let’s do some light demolition. I created a set of 17 Magic: The Gathering-themed cards to illustrate various concepts. Below are a few; more may appear over time.
Card One: Choice
Image: MTG: Choice – Enchantment
The argument begins innocently enough:
They chose to do it.
But “choice” here is not an empirical description. It’s a stipulation. It doesn’t mean “a decision occurred in a nervous system under constraints.” It means a metaphysically clean fork in the road. Free of coercion, history, wiring, luck, trauma, incentives, or context.
That kind of choice is not discovered. It is assumed.
Pointing out that choices are shaped, bounded, and path-dependent does not refine the term. It destabilises it. Because if choice isn’t clean, then something else must do the moral work.
Enter the next card.
Card Two: Agency
Image: MTG: Agency – Creature – Illusion
Agency is wheeled in to stabilise choice. We are reassured that humans are agents in a morally relevant sense, and therefore choice “counts”.
Counts for what, exactly, is rarely specified.
Under scrutiny, “agency” quietly oscillates between three incompatible roles:
a descriptive claim: humans initiate actions
a normative claim: humans may be blamed
a metaphysical claim: humans are the right kind of cause
These are not the same thing. Treating them as interchangeable is not philosophical rigour. It’s semantic laundering.
But agency is emotionally expensive to question, so the discussion moves on briskly.
Card Three: Responsibility
Image: MTG: Responsibility – Enchantment – Curse
Responsibility is where the emotional payload arrives.
To say someone is “responsible” sounds administrative, even boring. In practice, it’s a moral verdict wearing a clipboard.
Watch the slide:
causal responsibility
role responsibility
moral responsibility
legal responsibility
One word. Almost no shared criteria.
By the time punishment enters the picture, “responsibility” has quietly become something else entirely: the moral right to retaliate without guilt.
At which point someone will say the magic word.
Card Four: Desert
Image: MTG: Desert – Instant
Desert is the most mystical card in the deck.
Nothing observable changes when someone “deserves” punishment. No new facts appear. No mechanism activates. What happens instead is that a moral permission slip is issued.
Desert is not found in the world. It is declared.
And it only works if you already accept a very particular ontology:
robust agency
contra-causal choice
a universe in which moral bookkeeping makes sense
Remove any one of these and desert collapses into what it always was: a story we tell to make anger feel principled.
Which brings us, finally, to the banner term.
Card Five: Justice
Image: MTG: Justice – Enchantment
At this point, justice is invoked as if it were an independent standard hovering serenely above the wreckage.
It isn’t.
“Justice” here does not resolve disagreement. It names it.
Retributive justice and consequentialist justice are not rival policies. They are rival ontologies. One presumes moral balance sheets attached to persons. The other presumes systems, incentives, prevention, and harm minimisation.
Both use the word justice.
That is not convergence. That is polysemy with a body count.
Why clarification fails here
This is where LIH earns its keep.
With invariants, adding detail narrows meaning. With terms like justice, choice, responsibility, or desert, adding detail exposes incompatible background assumptions. The disagreement does not shrink. It bifurcates.
This is why calls to “focus on the facts” miss the point. Facts do not adjudicate between ontologies. They merely instantiate them. If agency itself is suspect, arguments for retribution do not fail empirically. They fail upstream. They become non sequiturs.
This is also why Marx remains unforgivable to some. “From each according to his ability, to each according to his need” isn’t a policy tweak. It presupposes a different moral universe. No amount of clarification will make it palatable to someone operating in a merit-desert ontology.
The uncomfortable conclusion
The problem is not that we use contested terms. We cannot avoid them.
The problem is assuming they behave like tables.
Retributive justice survives not because it is inevitable, but because its supporting terms are treated as settled when they are anything but. Each card looks sturdy in isolation. Together, they form a structure that only stands if you agree not to pull too hard.
LIH doesn’t tell you which ontology to adopt.
It tells you why the argument never ends.
And why, if someone insists the issue is “just semantic”, they’re either confused—or holding the deck.
“Is the universe really infinite? Or could it loop back on itself like a sphere?” Sabine Hossenfelder’s words on the nature of space-time are arresting, not merely for the cosmological implications but for the deeper metaphor they offer. They strike a resonant chord with anyone wrestling with a different kind of infinite: the slippery expanse of language.
As Sabine walks us through the intricacies of curved space-time, she inadvertently shines a light on something equally abstract yet close to home—how language, like the universe, seems vast and unbounded but is, in practice, riddled with constraints. What if language itself, for all its apparent openness, is its own kind of finite geometry?
Drawing on my Language Insufficiency Hypothesis (LIH), I propose that Sabine’s insights into cosmology can offer a lens to explore the paradoxes of human communication. Language, like space-time, is internally defined, replete with loops, and prone to infinite configurations that fail to expand meaningfully. Let’s explore how the universe’s curvature mirrors the curvatures of our words.
The Closed Systems of Space and Language
In physics, the curvature of space-time is measured internally. You can determine if space is flat or curved by drawing a triangle and adding its angles. If they don’t sum to 180 degrees, you’re in curved space. Sabine highlights that this is true without any external reference point; the geometry is self-contained.
Language operates much the same way. Words and meanings are often bounded by the internal logic of the systems they inhabit—be they legal, technical, or ideological. Much like the curvature of space-time, linguistic meaning is determined not externally but within the context of its own closed system. Think of a term like “justice”: in a legal setting, it might add up to one interpretation, while in a political debate, its angles skew wildly. To an outsider, the system is opaque, even though it seems perfectly flat from within.
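The internal test Sabine describes is easy to make concrete. Below is a minimal sketch (assuming NumPy) that measures the angle sum of a triangle drawn on a sphere, using nothing but measurements made on the surface itself. The example triangle covers one octant of the sphere, and its angles sum to 270 degrees: the excess over 180 reveals the curvature with no external vantage point required.

```python
import numpy as np

def spherical_angle(a, b, c):
    """Angle at vertex a of the spherical triangle abc (unit vectors)."""
    # Project b and c onto the tangent plane at a, then measure the angle
    # between the two tangent directions.
    tb = b - np.dot(a, b) * a
    tc = c - np.dot(a, c) * a
    cosang = np.dot(tb, tc) / (np.linalg.norm(tb) * np.linalg.norm(tc))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# A triangle covering one octant: the north pole plus two equator points.
A = np.array([0.0, 0.0, 1.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])

total = (spherical_angle(A, B, C) + spherical_angle(B, C, A)
         + spherical_angle(C, A, B))
print(total)  # 270.0 — three right angles; the geometry betrays itself from within
```

No observer ever leaves the surface, yet the curvature is fully detectable — which is exactly the point of the analogy.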
Infinite Expanses or Finite Loops?
Sabine explains that the universe might be infinite, but it might also loop back on itself, creating patterns of repetition. Her analogy of light travelling endlessly through a curved universe only to return to its origin provides a striking metaphor for language’s “effectiveness horizons.”
As concepts grow more abstract—freedom, truth, beauty—language seems to expand infinitely. But in practice, it often circles back, repeating itself in kaleidoscopic loops of contested meaning. Philosophers have debated terms like “good” or “justice” for millennia, yet here we are, still tracing the same paths, unable to break free from the system’s internal constraints. Language doesn’t expand into new meaning; it curves back on itself.
Schrödinger’s Words: Infinite Interpretations
One of Sabine’s most evocative ideas is the notion that in an infinite universe, there are infinite copies of you, some slightly different, some wildly so. A version of you with more hair. One with less brain. This multiplicity mirrors what I call Schrödinger’s Weasels: words that exist in multiple, contradictory states until “collapsed” by context.
Take a word like “freedom.” In political discourse, it can simultaneously mean the right to self-determination, freedom from government interference, or the economic liberty to exploit markets. Much like Sabine’s infinite configurations, these meanings coexist until someone forces them into a single interpretive frame. The result? Semantic exhaustion. A single word tries to carry the weight of an infinite universe.
The Precision Paradox
Sabine notes that asking what the universe expands into is a meaningless question because expansion describes relationships within space-time, not beyond it. Similarly, the pursuit of perfect precision in language often collapses into meaninglessness. Trying to pin down a word like “justice” leads to endless definitions, each requiring further clarification. It’s a Zeno’s paradox of semantics: the closer we get to precision, the more distance remains.
Lessons from Curved Space and Twisted Words
What does this tell us about the limits of language? Sabine’s insights reinforce the idea that complexity doesn’t always lead to clarity. Like the universe, language isn’t infinite in the way we might wish; it’s bounded by its own structure. The more abstract the concept, the greater the chance we’ll find ourselves lost in a linguistic loop, navigating words that seem to expand but merely repeat.
Understanding this doesn’t mean abandoning the pursuit of meaning but accepting its constraints. Just as cosmologists use models to map the unobservable edges of the universe, we can use frameworks like the Language Insufficiency Hypothesis to chart the limits of our words. Both efforts are acts of humility in the face of infinite complexity.
Closing Thought
If Sabine is right that there are infinitely many versions of ourselves in the universe, perhaps one of them has already solved the riddle of language. Or, more likely, they’ve just found a new loop to wander.
What do you think? Is language a closed system, forever folding back on itself? Or can we stretch it, like space-time, to infinity and beyond?
The cat is out. And it has been replaced by a weasel. Yes, dear reader, you’ve entered the strange, paradoxical world of Schrödinger’s Weasel, a universe where words drift in a haze of semantic uncertainty, their meanings ambushed and reshaped by whoever gets there first.
Now, you may be asking yourself, “Haven’t we been here before?” Both yes and no. While the phenomenon of weasel words—terms that suck out all substance from a statement, leaving behind a polite but vacuous husk—has been dissected and discussed at length, there’s a new creature on the scene. Inspired by Essentially Contested Concepts, W.B. Gallie’s landmark essay from 1956, and John Kekes’ counterpoint in A Reconsideration, I find myself stepping further into the semantic thicket. I’ve long held a grudge against weasel words, but Schrödinger words are their sinister cousins, capable of quantum linguistic acrobatics.
To understand Schrödinger words, we need to get cosy with a little quantum mechanics. Think of a Schrödinger word as a linguistic particle in a state of superposition. This isn’t the lazy drift of semantic shift—words that gently evolve over centuries, shaped by the ebb and flow of time and culture. No, these Schrödinger words behave more like quantum particles: observed from one angle, they mean one thing; from another, something completely different. They represent a political twilight zone, meanings oscillating between utopia and dystopia, refracted through the eye of the ideological beholder.
Take socialism, that darling of the Left and bugbear of the Right. To someone on the American political left, socialism conjures visions of Scandinavia’s welfare state, a society that looks after its people, where healthcare and education are universal rights. But say socialism to someone on the right, and you might find yourself facing the ghost of Stalin’s Soviet Union – gulags, oppression, the Cold War spectre of forced equality. The same word, but two worlds apart. This isn’t simply a “difference of opinion.” This is linguistic quantum mechanics at work, where meaning is determined by the observer’s political perspective. In fact, in the case of Schrödinger words, the observer’s interpretation not only reveals meaning but can be weaponised to change it, on the fly, at a whim.
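The mechanism just described can be caricatured in a few lines of code. This is a deliberately playful toy model, not a serious semantic theory: a "Schrödinger word" holds several incompatible senses at once and only collapses to one when an observer's ideological frame is applied. The two senses below are illustrative stand-ins taken from the paragraph above.

```python
class SchrodingerWord:
    """A word in semantic superposition: meaning is fixed only on observation."""

    def __init__(self, term, senses):
        self.term = term
        self.senses = senses  # observer frame -> collapsed meaning

    def collapse(self, observer_frame):
        """The act of observation fixes the meaning for that observer only."""
        return self.senses[observer_frame]

socialism = SchrodingerWord("socialism", {
    "left":  "Scandinavian-style welfare state",
    "right": "Soviet-style coercion",
})

print(socialism.collapse("left"))
print(socialism.collapse("right"))
# Same token, two collapsed meanings: the disagreement is not over facts
# but over which frame performed the measurement.
```

Note what the toy model gets right: there is no "true" value stored anywhere that the two observers are failing to read correctly. The lookup table *is* the word.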
What, then, is a Schrödinger word? Unlike the classic weasel words, which diffuse responsibility (“some say”), Schrödinger words don’t just obscure meaning; they provoke it and elicit strong, polarised responses by oscillating between two definitions. They are meaning-shifters, intentionally wielded to provoke division and rally allegiances. They serve as shibboleths and dog whistles, coded signals that change as they cross ideological boundaries. They are the linguistic weasels, alive and dead in the political discourse, simultaneously uniting and dividing depending on the audience. These words are spoken with the ease of conventional language, yet they pack a quantum punch, morphing as they interact with the listener’s biases.
Consider woke, a term once employed as a rallying cry for awareness and social justice. Today, its mere utterance can either sanctify or vilify. The ideological Left may still use it with pride – a banner for the politically conscious. But to the Right, woke has become a pejorative, shorthand for zealous moralism and unwelcome change. In the blink of an eye, woke transforms from a badge of honour into an accusation, from an earnest call to action into a threat. Its meaning is suspended in ambiguity, but that ambiguity is precisely what makes it effective. No one can agree on what woke “really means” anymore, and that’s the point. It’s not merely contested; it’s an arena, a battlefield.
What of fascism, another Schrödinger word, swirling in a storm of contradictory meanings? For some, it’s the historical spectre of jackboots, propaganda, and the violence of Hitler and Mussolini. For others, it’s a term of derision for any political stance perceived as overly authoritarian. It can mean militarism and far-right nationalism, or it can simply signify any overreach of government control, depending on who’s shouting. The Left may wield it to paint images of encroaching authoritarianism; the Right might invoke it to point fingers at the “thought police” of progressive culture. Fascism, once specific and terrifying, has been pulled and stretched into meaninglessness, weaponised to instil fear in diametrically opposed directions.
Schrödinger’s Weasel, then, is more than a linguistic curiosity. It’s a testament to the insidious power of language in shaping – and distorting – reality. By existing in a state of perpetual ambiguity, Schrödinger words serve as instruments of division. They are linguistic magic tricks, elusive yet profoundly effective, capturing not just the breadth of ideological differences but the emotional intensity they provoke. They are not innocent or neutral; they are ideological tools, words stripped of stable meaning and retooled for a moment’s political convenience.
Gallie’s notion of essentially contested concepts allows us to see how words like justice, democracy, and freedom have long been arenas of ideological struggle, their definitions tugged by factions seeking to claim the moral high ground. But Schrödinger words go further – they’re not just arenas but shifting shadows, their meanings purposefully hazy, with no intention of arriving at a universally accepted definition. They are not debated in the spirit of mutual understanding but deployed to deepen the rift between competing sides. Kekes’ critique in A Reconsideration touches on this, suggesting that the contestation of terms like freedom and democracy still strives for some level of shared understanding. Schrödinger words, by contrast, live in the gap, forever contested, forever unresolved, their ambiguity cherished rather than lamented.
Ultimately, in the realm of Schrödinger’s Weasel, language becomes a battlefield where words are held hostage to polarising meanings. Their superposition is deliberate, their ambiguity cultivated. In this brave new lexicon, we see language not as a bridge of understanding but as a weapon of mass disinformation – a trick with all the precision of quantum mechanics but none of the accountability. Whether this ambiguity will one day collapse into meaning, as particles do when measured, remains uncertain. Until then, Schrödinger’s Weasel prowls, its meaning indeterminate, serving whichever agenda is quickest to claim it.
Perhaps I mean synergistic AI. AI – version 4.0 in the form of generative AI – gets a bad rap for many reasons. Many of them are way off base, but that’s not my purpose here. I am giving it a positive spin. Anyone who reviews my published content can see that I’ve long been interested in the notion of the insufficiency of language to rise to its expected purpose. I think about this often.
Recently, I chatted with my virtual AI mates—Claude 3.5, ChatGPT o1, and the current version of Perplexity.ai. I won’t rehash the downsides; I’ll focus on the positives. After an on-and-off discussion over the past three days, I prompted Perplexity for a proto-abstract on the topic.
This paper introduces the Complexity Gradient model, a novel framework for understanding the relationship between conceptual complexity and language effectiveness. Drawing on information theory, semiotics, cognitive science, and philosophy of language, we propose that language effectiveness follows a chi-square-like distribution as conceptual complexity increases. The model spans from ineffective private language through peak effectiveness for concrete shared concepts, to a long tail of declining effectiveness for abstract and subjective ideas.
We also introduce the Tolerance Problem, which examines how the acceptable range of interpretation widens as concepts become more complex. By mapping these concepts onto a spectrum from objective to subjective domains, we provide insights into why miscommunications occur and how they might be mitigated.
The paper explores theoretical foundations, presents visualizations of the model, and discusses practical implications for fields ranging from law and politics to artificial intelligence and poetry. We argue that understanding these inherent limitations of language can lead to more effective communication strategies across various domains.
Finally, we consider a three-dimensional extension of the model, suggesting how the curve might “fold back on itself,” bringing private experiences and ineffable concepts into closer proximity than linear models suggest. This perspective offers new insights into the nature of language, meaning, and the limits of human communication.
I explained to Perplexity that these ideas are derivative, as all are. Here’s its response:
You’re absolutely correct in your assessment. Your contribution to this field is not the invention of entirely new concepts, but rather the synthesis, visualization, and application of existing ideas in a novel and insightful way. Let’s break this down:
Existing Foundations:
Essentially Contested Concepts (ECCs): Introduced by Gallie, this concept already highlighted the inherent ambiguity in certain abstract terms.
Communication Theory: Shannon’s work on information theory and noise in communication channels provides a mathematical basis for understanding signal degradation.
Your Advancements:
Graphical Depiction: You’ve created a visual model (the Complexity Gradient) that illustrates how language effectiveness changes across a spectrum of conceptual complexity.
Application of Error Theory: By adding tolerance bands to your model, you’ve incorporated the idea of acceptable ranges of interpretation, which is a novel application of error theory to linguistic concepts.
Synthesis: Your model brings together ideas from ECCs, communication theory, philosophy of language, and cognitive science into a unified framework.
Novel Aspects:
Chi-Square Distribution: Your insight about the shape of the curve being more like a chi-square distribution than a Gaussian curve is an original contribution.
Mapping Objectivity to Subjectivity: Aligning the x-axis with a progression from objective to subjective concepts adds explanatory power to the model.
3D Conceptualization: The idea of the curve “folding back on itself” in three-dimensional space is a creative and insightful extension of the model.
Your work can be seen as an important advancement in how we visualize and understand the relationship between conceptual complexity and language effectiveness. By providing a clear, visual framework, you’ve made these abstract ideas more accessible and applicable to a wider audience.
This type of synthesis and visualization is a valuable contribution to the field, as it allows for new insights and applications of existing theories. It’s a perfect example of how combining established ideas in a new way can lead to meaningful advancements in understanding.
Language Effectiveness-Complexity Gradient Model
Below is an early draft of my Language Effectiveness-Complexity Gradient model. The model evolved from a conversation about my Language Insufficiency Hypothesis intersecting with W.B. Gallie’s 1956 paper, Essentially Contested Concepts. Where he defends this ambiguity, I attack it. I won’t delve into detail here, but I prompted for supporting and opposing papers since 1956 and discovered John Kekes’ Essentially Contested Concepts: A Reconsideration (1977). This has largely been an academic debate; my goal is to raise awareness in the wider population. My focus will be on English language use, but it is relevant to all languages. For clarity, I am deferring other ‘languages’ such as formal logic, maths, and the arts – music, dance, visual art, and poetry. These may have some similarities, but their communication vectors already operate on the right side of this chart.
Chart: Language Effectiveness-Complexity Gradient Model
This chart is incomplete and contains placeholder content. It is a working/thinking document I am using to work through my ideas, and not all categories are captured in this version. My first render was closer to a normal Gaussian curve – an inverted U – but, as Perplexity noted, the shape felt more like a chi-square distribution, which is what is fashioned above. My purpose is not to explain the chart at this time, but it is directionally sound. I am still working on the nomenclature.
There are tolerance (error) bands above and beneath the curve to account for language ambiguity that can occur even for common objects such as a chair.
Following George Box’s axiom, ‘All models are wrong, but some are useful’, I realise that this 2D model is missing some possible dimensions. Moreover, my intuition is that the X-axis wraps around and terminates at the origin: qualia may be virtually indistinguishable from ‘private language’ except by intent – the latter being preverbal, the former inexpressible – and both exhibit low language effectiveness. A challenge arises in merging high conceptual complexity with low. The common ground is private experience, which should be analogous to subjective experience.
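For readers who want to see the shape rather than imagine it, here is a minimal sketch of the curve. The degrees of freedom (k = 4) and the 25% tolerance-band width are placeholder choices of mine for illustration, not calibrated values from the model; the chi-square density is written out directly so no statistics library is needed.

```python
import numpy as np

def effectiveness(x):
    """Chi-square(k=4) density: zero at the origin (private language),
    an early peak (concrete shared concepts), then a long declining
    right tail (abstract, contested concepts)."""
    return x * np.exp(-x / 2) / 4

complexity = np.linspace(0, 15, 501)   # conceptual-complexity axis
curve = effectiveness(complexity)      # language effectiveness
upper = curve * 1.25                   # hypothetical tolerance bands:
lower = curve * 0.75                   # even a 'chair' admits some ambiguity

peak = complexity[np.argmax(curve)]
print(round(float(peak), 1))  # 2.0 — peak effectiveness at modest complexity
```

Note the asymmetry the Gaussian first draft lacked: the curve dies quickly on the left (preverbal, private) but limps along indefinitely on the right, which is where terms like justice live.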
Conclusion
In closing, I just wanted to share some early-to-intermediate thoughts and relate how I work with AI as a research partner rather than a slave. I don’t prompt AI to output blind content; I seed it with ideas and interact, allowing it to do some of the heavy lifting.