What the LIH Is Not About (And Why This Still Needs Saying)

As the publication date of A Language Insufficiency Hypothesis (LIH) draws nearer, I feel it’s a good time to promote it (obviously) and to introduce some of the problems it uncovers – including common misperceptions I’ve already heard. Through this feedback, I now understand some of the underlying structural limitations that I hadn’t considered, but this only strengthens my position. As I state at the start of the book, the LIH isn’t a cast-in-stone artefact. Other discoveries will inevitably be made. For now, consider it a way to think about the deficiencies of language, around which remediation strategies can be developed.

Audio: NotebookLM summary podcast of this content.

Let’s clear the undergrowth first. The Language Insufficiency Hypothesis is not concerned with everyday ambiguity, garden-variety polysemy, or the sort of misunderstandings that vanish the moment someone bothers to supply five seconds of context. That terrain is already well-mapped, thoroughly fenced, and frankly dull.

Take the classic sort of example wheeled out whenever someone wants to sound clever without doing much work:

Video: a woman making a large basket

If you’re a basketweaver, you picture an absurdly large basket and quietly question the maker’s life choices. If you’re watching basketball, you hear ‘score’. If you’re anywhere near the context in which the sentence was uttered, the meaning is obvious. If it isn’t, the repair cost is trivial. Add context, move on, live your life.

Language did not fail here. It merely waited for its coat. This is not the sort of thing the LIH loses sleep over.

The Groucho Marx Defence, or: Syntax Is Not the Problem

Logicians and armchair philosophers love to reach for jokes like Groucho Marx’s immortal line:

Video: A man and elephant in pyjamas (no sound)

Yes, very funny. Yes, the sentence allows for a syntactic misreading. No, nobody actually believes the elephant was lounging about in striped silk. The humour works precisely because the “wrong” parse is momentarily entertained and instantly rejected.

Again, language is not insufficient here. It’s mischievous. There’s a difference.

If the LIH were worried about this sort of thing, its ambitions would be indistinguishable from an undergraduate logic textbook with better branding.

Banks, Rivers, and the Myth of Constant Confusion

Likewise, when someone in a city says, ‘I went to the bank’, no sane listener imagines them strolling along a riverbank, unless they are already knee-deep in pastoral fantasy or French tourism brochures. Context does the heavy lifting. It almost always does.

Video: Rare footage of me trying to withdraw funds at my bank (no sound)

This is not a crisis of meaning. This is language functioning exactly as advertised.

Where the Trouble Actually Starts: Contestables

The LIH begins where these tidy examples stop being helpful. It concerns itself with Contestables: terms like truth, freedom, justice, fairness, harm, equality. Words that look stable, behave politely in sentences, and then detonate the moment you ask two people what they actually mean by them. These are not ambiguous in the casual sense. They are structurally contested.

In political, moral, and cultural contexts, different groups use the same word to gesture at fundamentally incompatible conceptual frameworks, all while assuming a shared understanding that does not exist. The conversation proceeds as if there were common ground, when in fact there is only overlap in spelling.

That’s why attempts to ‘define’ these terms so often collapse into accusation:

That’s not what freedom means.
That’s not real justice.
You’re redefining truth.

No, the definitions were never shared in the first place. The disagreement was smuggled in with the noun.

‘Just Ignore the Word’ Is Not a Rescue

A common response at this point is to suggest that we simply bypass the troublesome term and discuss the concrete features each party associates with it. Fine. Sensible. Often productive. But notice what this manoeuvre concedes. It does not save the term. It abandons it.

If meaningful discussion can only proceed once the word is set aside and replaced with a list of clarifications, constraints, examples, and exclusions, then the word has already failed at its primary job: conveying shared meaning. This is precisely the point the LIH is making.

The insufficiency is not that language is vague, or flexible, or context-sensitive. It’s that beyond a certain level of conceptual complexity, language becomes a confidence trick. It gives us the feeling of agreement without the substance, the appearance of communication without the transaction.

At that point, words don’t merely underperform. They mislead.

Cold, Aliens, and the Grammar That Thinks It Knows Too Much

I shared this post not too long ago. Today, I shared it in a different context, but I still find it interesting – because I find many things interesting, especially around language and communication.

It commenced here on Mastodon.

Ocrampal shared a link to an article debating whether we are cold or have cold. Different cultures express this differently. It’s short. Read it on his site.

Audio: Exceptional NotebookLM summary podcast of this topic.

I replied to the post:

Nicely observed. I’ve pondered this myself. Small linguistic tweak: between être and avoir, avoir already behaves better metaphysically, but sentir seems the cleanest fit. Cold isn’t something one is or has so much as something one senses — a relational encounter rather than an ontological state or possession.

Between having and being, having is the lesser sin — but sensing/feeling feels truer. Cold belongs to the world; we merely sense it.

He replied in turn:

Agree except for: “Cold belongs to the world”. That is a metaphysical assumption that has consequences …

Finally (perhaps, penultimately), I responded:

Yes, it does. That statement was idiomatic, to express that ‘cold’ is environmental; we can’t be it or possess it. Coincidentally, I recently wrote about ‘cold’ in a different context:

where I link back to the post at the top of this article.

A more verbose version of this response might have been:

And this is exactly the problem I gestured at in the aliens piece. We mistake familiar grammatical scaffolding for shared metaphysics. We assume that if the sentence parses cleanly, the ontology must be sound.

Language doesn’t just describe experience. It quietly files it into categories and then acts surprised when those categories start making demands.

Cold, like aliens, exposes the trick. The moment you slow down, the grammar starts to wobble. And that wobble is doing far more philosophical work than most of our declarative sentences are willing to admit.

Meet the Language Insufficiency GPT

In anticipation of the publication of A Language Insufficiency Hypothesis in January 2026, I’ve created a Language Insufficiency GPT.

Today I’m launching something designed to exploit a familiar failure mode with forensic precision:
👉 https://chatgpt.com/g/g-694018a9bbc88191a8360d65a530e50c-language-insufficiency-gpt

Naturally, it will make more sense alongside the book. But it may still provide a bit of entertainment – and mild discomfort – in the meantime.

tl;dr: Language is generally presumed to be stable. Words mean what you think they mean, right? A table is a table. A bird is a bird. Polysemy aside, these are solid, dependable units.

Then we arrive at freedom, justice, truth, and an entire panoply of unstable candidates. And let’s not even pretend qualia are behaving themselves.

So when someone says ‘truth’, ‘free speech’, or ‘IQ’, you may suddenly realise you’ve been arguing with a cardboard cut-out wearing your own assumptions. That isn’t just interpersonal mischief. It’s language doing exactly what it was designed to do: letting you glide over the hard problems while sounding perfectly reasonable.

Audio: Short NotebookLM summary of this page content*
Video: Legacy video explaining some features of the LIH.

If that sounds banal, you’ve already fallen for the trap.

Give it a try – or wait until you’ve digested the book. Not literally, unless you’re short on fibre.

Cheers.

* As I’ve cited previously, the quality of NotebookLM varies – usually in predictable directions. This one does well enough, but it doesn’t have enough context to get the story right (because it was only drawing from this page rather than from a fuller accounting of the LIH). Its trailing comment reveals that it doesn’t grasp that “new words” don’t solve the problem.

Earlier, it suggests that language is intentionally vague. This is not an assertion I make. You can read some of the earlier incarnations, or you can wait for it to be published.

The Purpose of Purpose

I’m a nihilist. Possibly always have been. But let’s get one thing straight: nihilism is not despair. That’s a slander cooked up by the Meaning Merchants – the sentimentalists and functionalists who can’t get through breakfast without hallucinating some grand purpose to butter their toast. They fear the void, so they fill it. With God. With country. With yoga.

Audio: NotebookLM podcast on this topic.

Humans are obsessed with function. Seeing it. Creating it. Projecting it onto everything, like graffiti on the cosmos. Everything must mean something. Even nonsense gets rebranded as metaphor. Why do men have nipples? Why does a fork exist if you’re just going to eat soup? Doesn’t matter – it must do something. When we can’t find this function, we invent it.

But function isn’t discovered – it’s manufactured. A collaboration between our pattern-seeking brains and our desperate need for relevance, where function becomes fiction, where language and anthropomorphism go to copulate. A neat little fiction. An ontological fantasy. We ask, “What is the function of the human in this grand ballet of entropy and expansion?” Answer: there isn’t one. None. Nada. Cosmic indifference doesn’t write job descriptions.

And yet we prance around in lab coats and uniforms – doctors, arsonists, firemen, philosophers – playing roles in a drama no one is watching. We build professions and identities the way children host tea parties for dolls. Elaborate rituals of pretend, choreographed displays of purpose. Satisfying? Sometimes. Meaningful? Don’t kid yourself.

We’ve constructed these meaning-machines – society, culture, progress – not because they’re real, but because they help us forget that they’re not. It’s theatre. Absurdist, and often bad. But it gives us something to do between birth and decomposition.

Sisyphus had his rock. We have careers.

But let’s not confuse labour for meaning, or imagination for truth. The boulder never reaches the top, and that’s not failure. That’s the show.

So roll the stone. Build the company. Write the blog. Pour tea for Barbie. Just don’t lie to yourself about what it all means.

Because it doesn’t mean anything.

The Insufficiency of Language Meets Generative AI

I’ve written a lot on the insufficiency of language, and it’s not even an original idea. Language, our primary tool for sharing thoughts and ideas, harbours a fundamental flaw: it’s inherently insufficient for conveying precise meaning. While this observation isn’t novel, recent developments in artificial intelligence provide us with new ways to illuminate and examine this limitation. Through a progression from simple geometry to complex abstractions, we can explore how language both serves and fails us in different contexts.

The Simple Made Complex

Consider what appears to be a straightforward instruction: Draw a 1-millimetre square in the centre of an A4 sheet of paper using an HB pencil and a ruler. Despite the mathematical precision of these specifications, two people following these exact instructions would likely produce different results. The variables are numerous: ruler calibration, pencil sharpness, line thickness, paper texture, applied pressure, interpretation of “centre,” and even ambient conditions affecting the paper.

This example reveals a paradox: the more precisely we attempt to specify requirements, the more variables we introduce, creating additional points of potential divergence. Even in mathematics and formal logic—languages specifically designed to eliminate ambiguity—we cannot escape this fundamental problem.

Precision vs Accuracy: A Useful Lens

The scientific distinction between precision and accuracy provides a valuable framework for understanding these limitations. In measurement, precision refers to the consistency of results (how close repeated measurements are to each other), while accuracy describes how close these measurements are to the true value.
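
As a toy numeric illustration – a minimal sketch with invented measurements, not an example from the book – the two notions can be pulled apart for the 1-millimetre square: precision is the spread of repeated attempts around their own average, while accuracy is how far that average sits from the intended value.

```python
# Minimal sketch: separating precision from accuracy for repeated measurements
# of the intended 1 mm square side. The measurement values are invented.
from statistics import mean, stdev

true_side_mm = 1.00                             # the side length the instruction intended
attempts_mm = [1.12, 1.11, 1.13, 1.12]          # hypothetical repeated attempts by one person

spread = stdev(attempts_mm)                     # precision: how tightly the attempts cluster
offset = abs(mean(attempts_mm) - true_side_mm)  # accuracy: how far the cluster sits from the target

print(f"precision (spread): {spread:.3f} mm")   # small spread -> high precision
print(f"accuracy (offset):  {offset:.3f} mm")   # large offset -> low accuracy
```

Here the attempts are highly consistent yet all roughly a tenth of a millimetre off: precise, but not accurate.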

Returning to our square example:

  • Precision: Two people might consistently reproduce their own squares with exact dimensions
  • Accuracy: Yet neither might capture the “true” square we intended to convey

As we move from geometric shapes to natural objects, this distinction becomes even more revealing. Consider a maple tree in autumn. We might precisely convey certain categorical aspects (“maple,” “autumn colours”), but accurately describing the exact arrangement of branches and leaves becomes increasingly difficult.

The Target of Meaning: Precision vs. Accuracy in Communication

To understand language’s limitations, we can borrow an illuminating concept from the world of measurement: the distinction between precision and accuracy. Imagine a target with a bullseye, where the bullseye represents perfect communication of meaning. Just as archers might hit different parts of a target, our attempts at communication can vary in both precision and accuracy.

Consider four scenarios:

  1. Low Precision, Low Accuracy
    When describing our autumn maple tree, we might say “it’s a big tree with colourful leaves.” This description is neither precise (it could apply to many trees) nor accurate (it misses the specific characteristics that make our maple unique). The communication scatters widely and misses the mark entirely.
  2. High Precision, Low Accuracy
    We might describe the tree as "a 47-foot tall maple with exactly 23,487 leaves displaying RGB colour values of #FF4500." This description is precisely specific but entirely misses the meaningful essence of the tree we’re trying to describe. Like arrows clustering tightly in the wrong spot, we’re consistently missing the point.
  3. Low Precision, High Accuracy
    “It’s sort of spreading out, you know, with those typical maple leaves turning reddish-orange, kind of graceful looking.” While imprecise, this description might actually capture something true about the tree’s essence. The arrows scatter, but their centre mass hits the target.
  4. High Precision, High Accuracy
    This ideal state is rarely achievable in complex communication. Even in our simple geometric example of drawing a 1mm square, achieving both precise specifications and accurate execution proves challenging. With natural objects and abstract concepts, this challenge compounds exponentially.

The Communication Paradox

This framework reveals a crucial paradox in language: often, our attempts to increase precision (by adding more specific details) can actually decrease accuracy (by moving us further from the essential meaning we’re trying to convey). Consider legal documents: their high precision often comes at the cost of accurately conveying meaning to most readers.

Implications for AI Communication

This precision-accuracy framework helps explain why AI systems like our Midjourney experiment show asymptotic behaviour. The system might achieve high precision (consistently generating similar images based on descriptions) while struggling with accuracy (matching the original intended image), or vice versa. The gap between human intention and machine interpretation often manifests as a trade-off between these two qualities.

Our challenge, both in human-to-human and human-to-AI communication, isn’t to achieve perfect precision and accuracy—a likely impossible goal—but to find the optimal balance for each context. Sometimes, like in poetry, low precision might better serve accurate meaning. In other contexts, like technical specifications, high precision becomes crucial despite potential sacrifices in broader accuracy.

The Power and Limits of Distinction

This leads us to a crucial insight from Ferdinand de Saussure’s semiotics about the relationship between signifier (the word) and signified (the concept or object). Language proves remarkably effective when its primary task is distinction among a limited set. In a garden containing three trees—a pine, a maple, and a willow—asking someone to “point to the pine” will likely succeed. The shared understanding of these categorical distinctions allows for reliable communication.

However, this effectiveness dramatically diminishes when we move from distinction to description. In a forest of a thousand pines, describing one specific tree becomes nearly impossible. Each additional descriptive detail (“the tall one with a bent branch pointing east”) paradoxically makes precise identification both more specific and less likely to succeed.

An AI Experiment in Description

To explore this phenomenon systematically, I conducted an experiment using Midjourney 6.1, a state-of-the-art image generation AI. The methodology was simple:

  1. Generate an initial image
  2. Describe the generated image in words
  3. Use that description to generate a new image
  4. Repeat the process multiple times
  5. Attempt to refine the description to close the gap
  6. Continue iterations

The results support an asymptotic hypothesis: while subsequent iterations might approach the original image, they never fully converge. This isn’t merely a limitation of the AI system but rather a demonstration of language’s fundamental insufficiency.
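
For readers who prefer the loop spelled out, here is a minimal sketch of its shape. The generate_image, describe_image, and similarity helpers are hypothetical stand-ins – for the render step, the /DESCRIBE step, and some measure of closeness to the source image respectively – so treat this as an outline of the procedure, not the tooling actually used.

```python
# Minimal sketch of the describe-and-regenerate loop.
# generate_image, describe_image, and similarity are hypothetical stand-ins,
# not real Midjourney calls; each would need a real implementation.

def generate_image(prompt: str):
    """Hypothetical: render an image from a text prompt."""
    raise NotImplementedError

def describe_image(image) -> str:
    """Hypothetical: put an image back into words (lossy by nature)."""
    raise NotImplementedError

def similarity(original, candidate) -> float:
    """Hypothetical: 0.0 (unrelated) to 1.0 (identical)."""
    raise NotImplementedError

def iterate(seed_prompt: str, rounds: int = 6) -> list[float]:
    original = generate_image(seed_prompt)            # 1. generate an initial image
    current = original
    scores = []
    for _ in range(rounds):
        description = describe_image(current)         # 2. describe the current image in words
        current = generate_image(description)         # 3. regenerate from that description
        scores.append(similarity(original, current))  # 4. record how close we are to the source
    return scores                                     # the claim: these plateau short of 1.0
```

Steps 5 and 6 from the list roughly correspond to hand-editing the description between rounds rather than accepting the machine’s wording wholesale.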

Consider the opening prompt: ‘A cute woman and her dog stand next to a tree.’ One can already analyse it for improvements, but let’s parse it together.

With this, we know we are referencing a woman, a female of the human species. There are billions of women in the world. What does she look like? What colour, height, ethnicity, and phenotypical attributes does she embody?

We also know she’s cute – whatever that means to the sender and receiver of these instructions.

I used an indefinite article, a, so there is one cute woman. Is she alone, or is she one from a group?

It should be obvious that we could provide more adjectives (and perhaps adverbs) to better convey our subject. We’ll get there, but let’s move on.

We’ve got a conjunction here. Let’s see what it connects to.

She’s with a dog. In fact, it’s her dog. This possession may not be conveyable or differentiable from some arbitrary dog, but what type of dog is it? Is it large or small? What colour coat? Is it groomed? Is it on a leash? Let’s continue.

It seems that the verb stand refers to the woman, but is the dog also standing, or is she holding it? More words could qualify this statement better.

A tree is referenced. Similar questions arise regarding this tree. At a minimum, there is one tree of some variety. She and her dog are next to it. Is she on the right or left of it?

We think we can refine our statements with precision and accuracy, but can we? Might we just settle for “close enough”?

Let’s see how AI interpreted this statement.

Image: Eight Midjourney renders from the prompt: A cute woman and her dog stand next to a tree. I’ll choose one of these as my source image.

Let’s deconstruct the eight renders above. Compositionally, we can see that each image contains a woman, a dog, and a tree. Do any of these match what you had in mind? First, let’s see how Midjourney describes the first image.

In a bout of hypocrisy, Midjourney refused to /DESCRIBE the image it just generated.

Last Midjourney description for now.

Let’s cycle through them in turn.

  1. A woman is standing to the left of an old-growth tree – twice identified as an oak tree. She’s wearing faded blue jeans and a loose light-coloured T-shirt. She’s got medium-length (maybe) red-brown hair in a small ponytail. A dog – her black and white dog identified as a pitbull, an American Foxhound, and an American Bulldog – is also standing on his hind legs. I won’t even discuss the implied intent projected on the animal – happy, playful, wants attention… In two of the descriptions, she’s said to be training it. They appear to be in a somewhat residential area given the automobiles in the background. We see descriptions of season, time of day, lighting, angle, quality…
  2. A woman is standing to the right of an old-growth tree. She’s wearing short summer attire. Her dog is perched on the tree.
  3. An older woman and her dog closer up.
  4. A rear view of both a woman and her dog near an oak tree.

As it turned out, I wasn’t thrilled with any of these images, so I rendered a different one. Its description follows.

The consensus is that ‘a beautiful girl in a white dress and black boots stands next to a tree’ with a Jack Russell Terrier dog. I see birch trees and snow. It’s overcast. Let’s spend some time trying to reproduce it. To start, I’m consolidating the above descriptions. I notice some elements are missing, but we’ll add them as we try to triangulate to the original image.

This is pretty far off the mark. We need to account for the overall setting and composition, relative positioning, clothing, hair, camera, perspective – even lighting and film emulsion.

Let’s see how we can refine it with some adjectives. Before this, I asked Anthropic’s Claude 3.5 to describe the image. Perhaps we’ll get more details.

We don’t seem to be moving in a good direction. Let’s modify the initial prompt.

I’ll allow the results to speak for themselves. Let’s see if we can’t get her out of the wedding gown and into a white jumper and skirt. I’ll bold the amendments.

What gives?

I think my point has been reinforced. I’m getting nowhere fast. Let’s give it one more go and see where we end up. I’ve not got a good feeling about this.

With this last one, I re-uploaded the original render along with this text prompt. Notice that the girl now looks the same and the scene (mostly) appears to be in the same location, but there are still challenges.

After several more divergent attempts, I decided to focus on one element – the girl.

As I regard the image, I’m thinking of a police sketch artist. They get sort of close, don’t they? They’re experts. I’m not confident that I even have the vocabulary to convey accurately what I see. How do I describe her jumper? Is that a turtleneck or a high collar? It appears to be knit. Is it wool or some blend? Does that matter for an image? Does this pleated skirt have a particular name or shade of white? It looks as though she’s wearing black leggings – perhaps polyester. And those boots – how do I describe them? I’m rerunning just the image above through a describe function to see if I can get any closer.

These descriptions are particularly interesting and telling. First, I’ll point out that AI attempts to identify the subject. I couldn’t find Noa Levin by a Google search, so I’m not sure how prominent she might be if she even exists at all in this capacity. More interesting still, the AI has placed her in a scenario where the pose was taken after a match. Evidently, this image reflects the style of photographer Guy Bourdin. Perhaps the jumper mystery is solved. It identified a turtleneck. I’ll ignore the tree and see if I can capture her with an amalgamation of these descriptions. Let’s see where this goes.

Close-ish. Let’s zoom in to get better descriptions of various elements starting with her face and hair.

Now, she’s a sad and angry Russian woman with (very) pale skin; large, sad, grey eyes; long, straight brown hair. Filmed in the style of either David LaChapelle or Alini Aenami (apparently misspelt from Alena Aenami). One thinks it was a SnapChat post. I was focusing on her face and hair, but it notices her wearing a white (oversized yet form-fitting) jumper and her crossed arms.

I’ll drop the angry bit – and then the sad.

Stick a fork in it. I’m done. Perhaps it’s not that language is insufficient; it’s that my language skills are insufficient. If you can get closer to the original image, please forward the image, the prompt, and the seed, so I can post it.

The Complexity Gradient

A clear pattern emerges when we examine how language performs across different levels of complexity:

  1. Categorical Distinction (High Success)
    • Identifying shapes among limited options
    • Distinguishing between tree species
    • Basic colour categorisation
  2. Simple Description (Moderate Success)
    • Basic geometric specifications
    • General object characteristics
    • Broad emotional states
  3. Complex Description (Low Success)
    • Specific natural objects
    • Precise emotional experiences
    • Unique instances within categories
  4. Abstract Concepts (Lowest Success)
    • Philosophical ideas
    • Personal experiences
    • Qualia

As we move up this complexity gradient, the gap between intended meaning and received understanding widens exponentially.

The Tolerance Problem

Understanding these limitations leads us to a practical question: what level of communicative tolerance is acceptable for different contexts? Just as engineering embraces acceptable tolerances rather than seeking perfect measurements, perhaps effective communication requires:

  • Acknowledging the gap between intended and received meaning
  • Establishing context-appropriate tolerance levels
  • Developing better frameworks for managing these tolerances
  • Recognising when precision matters more than accuracy (or vice versa)
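
Carrying the engineering analogy one step further, here is a minimal sketch of what context-appropriate tolerance might look like. It assumes a purely hypothetical numeric score for the gap between intended and received meaning – no such agreed measure exists – and the thresholds are arbitrary.

```python
# Minimal sketch: context-dependent tolerance for a (hypothetical) meaning gap,
# scored here from 0.0 (perfectly shared meaning) to 1.0 (no overlap at all).

TOLERANCES = {
    "poetry": 0.6,                   # wide latitude: resonance matters more than exactness
    "casual conversation": 0.4,
    "technical specification": 0.1,  # tight tolerance, even at the cost of readability
    "legal contract": 0.05,
}

def within_tolerance(gap: float, context: str, default: float = 0.2) -> bool:
    """Is a given meaning gap acceptable for this context?"""
    return gap <= TOLERANCES.get(context, default)

print(within_tolerance(0.3, "poetry"))                   # True: close enough for the purpose
print(within_tolerance(0.3, "technical specification"))  # False: needs tightening
```

The numbers mean nothing in themselves; the point is only that the acceptance threshold shifts with context, exactly as engineering tolerances do.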

Implications for Human-AI Communication

These insights have particular relevance as we develop more sophisticated AI systems. The limitations we’ve explored suggest that:

  • Some communication problems might be fundamental rather than technical
  • AI systems may face similar boundaries as human communication
  • The gap between intended and received meaning might be unbridgeable
  • Future development should focus on managing rather than eliminating these limitations

Conclusion

Perhaps this is a simple exercise in mental masturbation. Language’s insufficiency isn’t a flaw to be fixed but a fundamental characteristic to be understood and accommodated. By definition, it can’t be fixed. The gap between intended and received meaning may be unbridgeable, but acknowledging this limitation is the first step toward more effective communication. As we continue to develop AI systems and push the boundaries of human-machine interaction, this understanding becomes increasingly critical.

Rather than seeking perfect precision in language, we might instead focus on:

  • Developing new forms of multimodal communication
  • Creating better frameworks for establishing shared context
  • Accepting and accounting for interpretative variance
  • Building systems that can operate effectively within these constraints

Understanding language’s limitations doesn’t diminish its value; rather, it helps us use it more effectively by working within its natural constraints.

The Great Language Game: Between Structure and Chaos

Wittgenstein: Words Don’t Actually Mean Things, Sorry

If you thought we were done with language being slippery and unreliable, buckle up. Enter Ludwig Wittgenstein, the philosopher who essentially came along and said, “Oh, you thought words were bad? Let me show you just how deep this rabbit hole goes.”

Wittgenstein wasn’t content to let us cling to the idea that words could actually, you know, mean things. His big revelation? Words don’t even have fixed meanings at all. They only mean something because we use them in certain ways—and the meaning can change depending on the context. Welcome to Wittgenstein’s idea of language games, where words are like players on a field, running around, changing positions, and playing by different rules depending on which game you’re in.

Think of it this way: You’re talking about “justice” in a courtroom. Here, it’s got a very specific meaning—laws, evidence, fairness, right? But then you go to a protest, and suddenly “justice” is a rallying cry for social change. Same word, totally different game. And just like in sports, if you don’t know the rules of the game you’re in, you’re probably going to embarrass yourself. Or worse, end up arguing with someone who’s playing a completely different game with the same word.

Wittgenstein’s genius (and possibly, his cruelty) was in pointing out that language doesn’t have a stable relationship with the world around us. Words aren’t these neat little labels that correspond to actual things out there in the world. No, words are just part of a human activity. We throw them around and hope they land somewhere close to what we mean. And that’s on a good day.

But if words don’t mean anything on their own, then how can we ever trust them? According to Wittgenstein, we can’t. We’re constantly interpreting and reinterpreting the world through language, but it’s all just one big game of telephone. And don’t expect there to be one final, correct interpretation. There isn’t one. It’s all just a series of shifting meanings, with no way of getting to the “truth” behind them.

Here’s the kicker: Wittgenstein’s insight means that when you say something like “freedom” or “justice,” you’re not actually referring to some objective, concrete thing. You’re just participating in a language game where those words have specific meanings in that moment, but they can and will change depending on the context. So, one person’s “freedom” is another person’s “anarchy,” and one person’s “justice” is another’s “oppression.”

In other words, we’re all just out here, throwing words at each other like they’re going to hit some bullseye of meaning, when in reality, they’re bouncing off the walls and landing in places we never intended. It’s chaos, really, and Wittgenstein just stands there, arms crossed, probably smirking a little, as we desperately try to make sense of it all.

So, if you were hoping to pin down “truth” or “justice” with language, sorry. Wittgenstein says no. You’re just playing the game – and the rules? They’re made up, and they change constantly. Good luck.

Chomsky: Universal Grammar – A Shiny Idea, but Still…

After Wittgenstein thoroughly dismantled any hope we had of words actually meaning something, along comes Noam Chomsky to try and bring a little order to the chaos. Chomsky’s big idea? Universal grammar—the idea that, deep down, every human shares a common structure for language. It’s like a blueprint coded into our brains, and no matter what language you speak, we’re all building our sentences using the same basic tools.

Sounds neat, right? The world finally has some linguistic order! We’ve all got the same grammar in our heads, so maybe this whole miscommunication thing isn’t so bad after all. Except, here’s the problem: even if we’re all working from the same universal grammar, we’re still working with different words and different cultural baggage attached to those words. So, congratulations, Chomsky—you’ve built us a solid foundation, but the house we’re living in is still falling apart.

Let’s break it down. Chomsky argues that the ability to acquire language is hard-wired into the human brain. Babies don’t need to be taught grammar; they just pick it up naturally, like some kind of linguistic magic trick. No matter where you’re born—New York, Tokyo, or the middle of nowhere in the Amazon rainforest—you’re going to develop language using the same set of grammatical principles. It’s like we’re all born with the same linguistic software installed.

But here’s where the cracks start to show. Sure, we might all have this underlying grammar, but that’s not what’s causing the problems. The trouble is, language is more than just grammar—it’s words and meanings, and those are far more slippery. Just because we can all form sentences doesn’t mean we’re forming the same ideas behind those sentences. You can have the best grammar in the world and still be arguing about what “justice” means for hours on end.

For instance, take a phrase like “freedom is important.” Simple enough, right? Chomsky’s universal grammar means that everyone, regardless of where they’re from, can understand this sentence structure. But what does “freedom” mean? That’s where the universal grammar falls apart. One person thinks it’s the right to speak freely; another thinks it’s the freedom to make their own choices. Another might think it’s the absence of external control. The grammar is doing its job, sure, but the meaning? It’s off in a hundred directions at once.

Chomsky’s contribution is crucial—it tells us that our brains are wired to pick up language, and we all follow the same rules when we build sentences. But, unfortunately, those sentences are still subject to all the same chaos that Wittgenstein warned us about. Because even though we’ve got the structure nailed down, we’re still trying to throw abstract, subjective ideas into that structure, and it just doesn’t hold together.

So, while Chomsky’s universal grammar helps explain how we all manage to learn language in the first place, it doesn’t save us from the fundamental problems that come when we try to talk about anything beyond the basics. In other words, grammar can get us from “flamey thing hot” to “freedom is important,” but it can’t tell us what we really mean by either one. We’re still stuck with all the ambiguities that come with words—and no amount of universal grammar is going to fix that.


In Defence of Nihilism: Embracing the Absence of Inherent Meaning

Nihilism, often misunderstood and misrepresented, shares a common plight with philosophies such as atheism, anarchism, and Marxism. Like its counterparts, nihilism is frequently subjected to the creation of strawman arguments in public discourse, resulting in its vilification and scapegoating. In this article, I aim to demystify nihilism by providing a clear definition, description, and defence of this philosophical perspective.

Firstly, let’s address the misconception that nihilism entails a chaotic disregard for morality and societal norms: “If life has no meaning or purpose, then anyone can do anything.” This sentiment is often echoed in discussions about nihilism, as well as anarchism and atheism. However, it presupposes a fundamental misunderstanding of human nature. Despite the absence of inherent meaning in the universe, humans are not devoid of emotions or social affinities.

It is crucial to recognise that while the universe does not impart meaning or purpose, humans have constructed various systems of meaning throughout history. Whether through moral codes, religious doctrines, or cultural norms, individuals and societies have ascribed significance to different aspects of life. These constructs provide a framework within which individuals navigate their existence, albeit one that is socially constructed rather than inherent to the universe.

Critics of nihilism often argue that the acknowledgement of life’s inherent meaninglessness leads to despair and existential angst, rendering life devoid of purpose. However, this perspective fails to account for the resilience and adaptability of human beings. While some individuals may struggle initially with the realisation that there is no inherent meaning, many nihilists find liberation in embracing the absence of preordained purpose. Rather than succumbing to despair, they recognise the freedom to create their own meaning and forge their own path in life.

It is essential to understand that nihilism does not negate the validity of individual or societal pursuits. While nihilists reject the notion of inherent meaning, they acknowledge the significance of subjective meaning and the importance of human connection, fulfilment, and well-being. Whether it is pursuing personal goals, fostering relationships, or contributing to the betterment of society, nihilists recognise the value of such endeavours within the context of human experience.

In conclusion, nihilism offers a perspective that challenges conventional notions of meaning and purpose. By acknowledging the absence of inherent meaning in the universe, nihilists embrace the freedom to create their own meaning and chart their own course in life. Far from being a philosophy of despair, nihilism invites individuals to confront the uncertainty of existence with courage and resilience, recognising the inherent value of human experience in a world devoid of inherent meaning.

John Vervaeke and Lex Fridman on the Meaning Crisis

jimoeba mentioned in a comment that he enjoyed an interview with Vervaeke and Fridman, so I thought I’d give it a listen. It turns out there are several, including a 3-plus-hour version. Arbitrarily, I chose this one. Even if it’s not the particular interview on the meaning crisis, it gives me a sense of the two and their dynamics. I’m glad I listened to it. I like Vervaeke. I can’t say I’m much of a Fridman fan on first listen.

John Vervaeke and Lex Fridman interview: Human civilisation is facing a meaning crisis

This interview content provides an orientation to where Vervaeke is coming from. It helps to clarify his position. His claim seems to be that many people today identify as having no religion but being spiritual. By extension, he posits that this cohort is searching for meaning. I can’t disagree. What it tells me is that I am not in his target demographic. I have no religion, as I am an atheist. I have no spiritual void to fill. This is Vervaeke’s goal – to find something to perform the function of religion without, perhaps, the baggage and dogma.

I sympathise with his goal. He brings up Nietzsche’s “God is dead” quote, famous or infamous depending on your worldview. Essentially, he wants to answer Nietzsche’s query of what to do now that it’s been revealed that humans created God, not the other way around. His aim is to replace the font of wisdom for this generation.

For me, wisdom is a heuristic, part of the Gestalt McGilchrist mentions. McGilchrist’s work is even referenced here. Of course, I interpret McGilchrist’s references in this space to be metaphorical. It seems that he views it as ‘real’. I’m not sure where Vervaeke places it. Somehow, I feel that if there is a spectrum, Vervaeke leans closer to McGilchrist than me, and that’s OK. They just happen to be wrong.

I still don’t get the need for meaning. I don’t feel despondent that there is no inherent meaning in anything; we are free to invent or adopt one or many. I remember a Christian mate of mine who explained that people have a God-sized hole that can only be filled by God. Essentially, Vervaeke is making a similar claim, but his void is filled by wisdom. I suppose that the fact I don’t feel I have a void doesn’t mean there isn’t one.

Search for Meaning

Ever since encountering Viktor Frankl’s Man’s Search for Meaning in my youth, I’ve pondered why people search for and indeed invent meaning. By meaning, I am speaking in terms of finding a higher purpose or some spiritual affinity rather than the meaning of why bad things happen, though there are undoubtedly some intersections of the concepts.

This lecture, Ep. 39 – Awakening from the Meaning Crisis – The Religion of No Religion, from a series by John Vervaeke was recommended to an associate in my social network by another trusted associate, and it’s got me going. I’ve long been a nihilist and existentialist. I am even partial to aspects of the philosophic framework of Zen Buddhism. But I’ve never felt there was some higher meaning or raison d’être that isn’t self-imposed. How else would it be imposed? A person may be indoctrinated, but in the end, ignorance is no excuse from self-imposition.

I guess I can’t quite understand what drives this search for meaning. As Vervaeke notes, even some noted atheists like Richard Dawkins have suggested that we should find a secular proxy for religion – or the namesake religion of no religion. I understand the social function of religion as well as some psychological functions, but the disconnect for me is that I have no such drive. I am admittedly an introvert, so whilst I admit the need for social cohesion and coöperation, I don’t understand the fabrication of religions, or the personal side of it: not only the belief in meaning, but a yearning for it and even a belief that one has found it.

I am empathetic to people like Iain McGilchrist who asserts that there is exogenous meaning out there to be had. He’d probably also assert that asking for proof is a left-hemisphere cerebral request but that absent the imposition by the left hemisphere, one would just feel it and know it. I just can’t abide.

Vervaeke does touch on the postmodern critique of religion as a power play, which is how I feel about it, but this is about the social aspect and doesn’t touch on the personal search for meaning. What I can’t say is whether a person would have this drive to search for meaning if they were either absent socialisation or exposed only to people who are not going to suggest meaning. I understand that previous generations have shown this propensity, but are there cultures that don’t? And is there a common thread to those who do?

In the past, many cultures have asserted gods and higher powers (whether or not as a power play), but could this simply prompt the people to search for alternative meanings, perhaps having noticed the dissonance between certain dogma and their lived lives?

Interestingly, some people don’t search for meaning so much as they feel they have discovered meaning, but when their discovered meaning doesn’t jibe with another’s discovered meaning, the result is a search to justify or reconcile this conflict.

It’s late. Time to retire for the evening. I have none of these answers, but I will jot the questions down on my to-do list just above the search for meaning.