From Memes to Meaning: The Beautiful Chaos of Modern Language


The Present Day: Social Media and Memes – The Final Nail in the Coffin?

Just when you thought things couldn’t get any more chaotic, enter the 21st century, where language has been boiled down to 280 characters, emojis, and viral memes. If you think trying to pin down the meaning of “freedom” was hard before, try doing it in a tweet—or worse, a string of emojis. In the age of social media, language has reached new heights of ambiguity, with people using bite-sized bits of text and images to convey entire thoughts, arguments, and philosophies. And you thought interpreting Derrida was difficult.

Social media has turned language into an evolving, shape-shifting entity. Words take on new meanings overnight, hashtags rise and fall, and memes become the shorthand for complex cultural commentary. In some ways, it’s brilliant—what better way to capture the madness of modern life than with an image of a confused cat or a poorly drawn cartoon character? But in other ways, it’s the final nail in the coffin for clear communication. We’ve gone from painstakingly crafted texts, like Luther’s 95 Theses, to memes that rely entirely on shared cultural context to make sense.

The irony is that we’ve managed to make language both more accessible and more incomprehensible at the same time. Sure, anyone can fire off a tweet or share a meme, but unless you’re plugged into the same cultural references, you’re probably going to miss half the meaning. It’s like Wittgenstein’s language games on steroids—everyone’s playing, but the rules change by the second, and good luck keeping up.

And then there’s the problem of tone. Remember those philosophical debates where words were slippery? Well, now we’re trying to have those debates in text messages and social media posts, where tone and nuance are often impossible to convey. Sarcasm? Forget about it. Context? Maybe in a follow-up tweet, if you’re lucky. We’re using the most limited forms of communication to talk about the most complex ideas, and it’s no surprise that misunderstandings are at an all-time high.

And yet, here we are, in the midst of the digital age, still using the same broken tool—language—to try and make sense of the world. We’ve come a long way from “flamey thing hot,” but the basic problem remains: words are slippery, meanings shift, and no matter how advanced our technology gets, we’re still stuck in the same old game of trying to get our point across without being completely misunderstood.

Conclusion: Language – Beautiful, Broken, and All We’ve Got

And here’s where the irony kicks in. We’ve spent this entire time critiquing language—pointing out its flaws, its limitations, its inability to truly capture abstract ideas. And how have we done that? By using language. It’s like complaining about how unreliable your GPS is while using it to get to your destination. Sure, it’s broken—but it’s still the only tool we have.

In the end, language is both our greatest achievement and our biggest limitation. It’s allowed us to build civilisations, create art, write manifestos, and start revolutions. But it’s also the source of endless miscommunication, philosophical debates that never get resolved, and social media wars over what a simple tweet really meant.

So yes, language is flawed. It’s messy, it’s subjective, and it often fails us just when we need it most. But without it? We’d still be sitting around the fire, grunting at each other about the ‘toothey thing’ lurking in the shadows. For better or worse, language is the best tool we’ve got for making sense of the world. It’s beautifully broken, but we wouldn’t have it any other way.

And with that, we’ve used the very thing we’ve critiqued to make our point. The circle of irony is complete.



Jargon, Brains, and the Struggle for Meaning


Specialised Languages: Academia’s Jargon Olympics

If you thought normal language was confusing, let’s take a moment to appreciate the true champions of linguistic obscurity: academics. Welcome to the world of specialised languages, where entire fields of study have developed their own language games that make even Wittgenstein’s head spin.

Here’s how it works: Every discipline—science, law, philosophy—creates its own jargon to describe the world. At first, it seems helpful. Instead of using vague terms, you get precise definitions for complex ideas. But what started as a way to improve communication within a field quickly turned into a linguistic arms race, where the more obscure and convoluted your terms are, the smarter you sound. You’re not just a lawyer anymore—you’re someone who’s ready to throw “res ipsa loquitur” into casual conversation to leave everyone else in the room wondering if they’ve missed a memo.

The problem? If you’re not part of the club, good luck understanding what anyone is talking about. Want to read a physics paper? Prepare to learn a whole new vocabulary. Need to get through a legal document? You’ll be knee-deep in Latin phrases before you even get to the point. And don’t even try to decipher a philosophical text unless you’re ready to battle abstract nouns that have been stretched and twisted beyond recognition.

It’s not just the words themselves that are the issue—it’s the sheer density of them. Take “justice” for example. In philosophy, you’ve got theories about distributive justice, retributive justice, restorative justice, and a hundred other variations, each with its own set of terms and conditions. And that’s before we even touch on how “justice” is defined in legal circles, where it becomes an even more tangled mess of case law and precedent. Every field is playing its own version of the “justice” game, with its own rules and definitions, and none of them are interested in comparing notes.

This is the academic world in a nutshell. Each discipline has built its own linguistic fortress, and unless you’ve spent years studying, you’re not getting in. But here’s the kicker: even within these fields, people are often misunderstanding each other. Just because two scientists are using the same words doesn’t mean they’re on the same page. Sometimes, it’s more like a game of intellectual one-upmanship—who can define the most obscure term or twist a familiar word into something completely unrecognisable?

And let’s not forget the philosophers. They’ve turned linguistic acrobatics into an art form. Good luck reading Foucault or Derrida without a dictionary (or five) on hand. You might walk away thinking you understand their points, but do you really? Or have you just memorised the jargon without actually grasping the deeper meaning? Even scholars within these fields often argue over what was really meant by a certain text—Barthes, after all, famously declared the “death of the author,” so it’s not like anyone really has the final say on meaning anyway.

So here we are, knee-deep in jargon, trying to communicate with people who, technically, speak the same language but are operating within entirely different rulesets. Every academic discipline has its own secret code, and if you don’t know it, you’re lost. Even when you do know the code, you’re still at risk of miscommunication, because the words that look familiar have been stretched and shaped to fit highly specific contexts. It’s like being fluent in one dialect of English and then suddenly being asked to write a thesis in legalese. Good luck.

In the end, academia’s specialised languages don’t just make things harder—they actively create barriers. What started as a way to improve precision has turned into an obstacle course of incomprehensible terms, where the real challenge is just figuring out what anyone’s actually saying. And let’s be honest, even if you do figure it out, there’s no guarantee it’s going to mean the same thing next time you see it.

Neurolinguistics: Even Our Brains Can’t Agree

So far, we’ve seen how language is a mess of miscommunication, cultural differences, and academic jargon. But surely, at least on a biological level, our brains are all on the same page, right? Well, not exactly. Welcome to the wonderful world of neurolinguistics, where it turns out that even the very organ responsible for language can’t get its act together.

Here’s the deal: Neurolinguistics is the study of how the brain processes language, and while it’s fascinating, it’s also a bit of a buzzkill for anyone hoping for consistency. See, your brain and my brain don’t process language in the same way. Sure, we’ve got similar hardware, but the software is wildly unpredictable. There are individual differences, cultural influences, and developmental quirks that all affect how we understand and produce language. What’s simple for one brain might be completely baffling to another.

Take, for example, something as basic as syntax. Chomsky might have told us we all have a universal grammar hard-wired into our brains, but neurolinguistics has shown that how we apply that grammar can vary significantly. Some people are wired to handle complex sentence structures with ease—think of that friend who can follow 10 different clauses in a single breath. Others? Not so much. For them, even a moderately tricky sentence feels like mental gymnastics. The brain is constantly juggling words, meanings, and structures, and some brains are better at it than others.

But the real kicker is how differently we interpret words. Remember those abstract nouns we’ve been wrestling with? Well, it turns out that your brain might be interpreting ‘freedom’ or ‘justice’ completely differently from mine – not just because of culture or upbringing, but because our brains physically process those words in different ways. Neurolinguistic studies have shown that certain regions of the brain are activated differently depending on the individual’s experience with language. In other words, your personal history with a concept can literally change how your brain lights up when you hear or say it.

And don’t even get me started on bilingual brains. If you speak more than one language, your brain is constantly toggling between two (or more) linguistic systems, which means it’s running twice the risk of misinterpretation. What a word means in one language might trigger a completely different association in another, leaving bilingual speakers in a constant state of linguistic flux. It’s like trying to run two operating systems on the same computer—things are bound to get glitchy.

But here’s the real kicker: Even within the same person, the brain can’t always process language the same way all the time. Stress, fatigue, emotional state—all of these factors can influence how well we handle language on any given day. Ever tried to have a coherent conversation when you’re tired or angry? Good luck. Your brain isn’t interested in nuance or deep philosophical ideas when it’s in survival mode. It’s just trying to get through the day without short-circuiting.

So, not only do we have to deal with the external chaos of language – miscommunication, different contexts, shifting meanings – but we also have to contend with the fact that our own brains are unreliable interpreters. You can use all the right words, follow all the right grammar rules, and still end up with a garbled mess of meaning because your brain decided to take a nap halfway through the sentence.

In the end, neurolinguistics reminds us that language isn’t just a social or cultural problem – it’s a biological one too. Our brains are doing their best to keep up, but they’re far from perfect. The very organ that makes language possible is also responsible for making it infinitely more complicated than it needs to be. And if we can’t rely on our own brains to process language consistently, what hope do we have of ever understanding anyone else?



Language: Tool for Clarity or Shaper of Reality?


Pinker: The Optimist Who Thinks Language Works

Enter Steven Pinker, a cognitive scientist and eternal optimist about language. While we’ve been busy pointing out how language is a jumbled mess of misunderstandings, Pinker comes along with a sunny outlook, waving his banner for the language instinct. According to Pinker, language is an evolved tool – something that our brains are wired to use, and it’s good. Really good. So good, in fact, that it allowed us to build civilisations, exchange complex ideas, and, you know, not get eaten by sabre-toothed tigers.

Sounds like a nice break from all the linguistic doom and gloom, right? Pinker believes that language is a powerful cognitive skill, something we’ve developed to communicate thoughts and abstract ideas with remarkable precision. He points to the fact that we’re able to create entire worlds through language – novels, philosophies, legal systems, and scientific theories. Language is, to him, one of the greatest achievements of the human mind.

But here’s where things get a little sticky. Sure, Pinker’s optimism about language is refreshing, but he’s still not solving our core problem: meaning. Pinker may argue that language works wonderfully for most of our day-to-day communication – and in many cases, he’s right. We can all agree that saying, “Hey, don’t touch the flamey thing” is a pretty effective use of language. But once we start using words like ‘freedom’ or ‘justice’, things start to unravel again.

Take a sentence like ‘freedom is essential’. Great. Pinker might say this is a perfectly formed thought, conveyed using our finely tuned linguistic instincts. But the problem? Ask five people what ‘freedom’ means, and you’ll get five different answers. Sure, the grammar is flawless, and everyone understands the sentence structurally. But what they mean by ‘freedom’? That’s a whole other ball game.

Pinker’s language instinct theory helps explain how we learn language, but it doesn’t really account for how we use language to convey abstract, subjective ideas. He might tell us that language has evolved as an efficient way to communicate, but that doesn’t fix the problem of people using the same words to mean wildly different things. You can be the most eloquent speaker in the world, but if your definition of ‘freedom’ isn’t the same as mine, we’re still lost in translation.

And let’s not forget: while language is indeed a fantastic tool for sharing information and surviving in complex societies, it’s also great at creating conflicts. Wars have been fought over differences in how people interpret words like ‘justice’ or ‘rights’. Pinker might say we’ve evolved language to foster cooperation, but history suggests we’ve also used it to argue endlessly about things we can never quite agree on.

So, yes, Pinker’s right – language is a cognitive marvel, and it’s gotten us pretty far. But his optimism doesn’t quite stretch far enough to cover the fact that language, for all its brilliance, still leaves us stuck in a web of interpretation and miscommunication. It’s like having a state-of-the-art GPS that works perfectly – until you get to that roundabout and suddenly no one knows which exit to take.

In the end, Pinker’s got a point: language is one of the most sophisticated tools we’ve ever developed. It’s just a shame that when it comes to abstract concepts, we still can’t agree on which way’s north.

Sapir-Whorf: Language Shapes Reality – Or Does It?

Now it’s time for the Sapir-Whorf hypothesis to take the stage, where things get really interesting – or, depending on your perspective, slightly ridiculous. According to this theory, the language you speak actually shapes the way you see the world. Think of it as linguistic mind control: your perception of reality is limited by the words you have at your disposal. Speak the wrong language, and you might as well be living on another planet.

Sounds dramatic, right? Here’s the gist: Sapir and Whorf argued that the structure of a language affects how its speakers think and perceive the world. If you don’t have a word for something, you’re going to have a hard time thinking about that thing. Inuit languages, for example, are famous for having multiple words for different kinds of snow. If you’re an Inuit speaker, the hypothesis goes, you’re much more attuned to subtle differences in snow than someone who just calls it all ‘snow’.

Now, on the surface, this sounds kind of plausible. After all, we do think using language, don’t we? And there’s some truth to the idea that language can influence the way we categorise and describe the world. But here’s where Sapir-Whorf starts to go off the deep end. According to the stronger version of this hypothesis, your entire reality is shaped and limited by your language. If you don’t have the word for “freedom” in your language, you can’t experience it. If your language doesn’t have a word for “blue,” well, guess what? You don’t see blue.

Let’s take a step back. This sounds like the kind of thing you’d hear at a dinner party from someone who’s just a little too impressed with their first year of linguistics classes. Sure, language can shape thought to a degree, but it doesn’t have a stranglehold on our perception of reality. We’re not prisoners of our own vocabulary. After all, you can still experience freedom, even if you’ve never heard the word. And you can certainly see blue, whether your language has a word for it or not.

In fact, the idea that you’re trapped by your language is a little insulting, when you think about it. Are we really saying that people who speak different languages are living in different realities? That a person who speaks Mandarin sees the world in a fundamentally different way than someone who speaks Spanish? Sure, there might be some subtle differences in how each language breaks down concepts, but we’re all still human. We’re all still sharing the same world, and no matter what language we speak, we still have the cognitive capacity to understand and experience things beyond the limits of our vocabulary.

Let’s also not forget that language is flexible. If you don’t have a word for something, you make one up. If you’re missing a concept, you borrow it from another language or invent a metaphor. The idea that language is some kind of mental prison ignores the fact that we’re constantly evolving our language to keep up with the way we see the world—not the other way around.

And here’s the real kicker: if Sapir and Whorf were right, and we’re all walking around in little linguistic bubbles, then how on earth have we managed to translate anything? How have entire philosophies, religious texts, and scientific theories made their way across cultures and languages for centuries? If language really was shaping our reality that strongly, translation would be impossible – or at least incredibly limited. But here we are, discussing concepts like ‘freedom’, ‘justice’, and ‘truth’ across languages, cultures, and centuries.

So while it’s fun to entertain the idea that your language shapes your reality, let’s not give it too much credit. Yes, language can influence how we think about certain things. But no, it doesn’t define the boundaries of our existence. We’re not all stuck in a linguistic matrix, waiting for the right word to set us free.



Ink and Instability: The Permanent Confusion of the Written Word


The Written Word: Making Things Permanent (and Permanently Confusing)

So far, we’ve been dealing with spoken language—the slippery, ever-changing, context-dependent jumble of sounds we toss around in hopes that someone, somewhere, might understand what we’re trying to say. But what happens when we decide to make those words permanent? Welcome to the era of the written word, where all our linguistic problems got carved into stone—literally.

Let’s rewind a bit. Long before we had books or Twitter threads, ancient humans figured out that spoken words disappear into the air. They needed a way to preserve information, and voilà—writing was born. First came simple marks on clay tablets, because nothing says “let’s communicate important ideas” like scratching symbols into mud. But hey, at least it was a start.

The beauty of writing was that it gave us a way to record language—no more relying on memory to keep track of which berries were bad or who owed you a goat. But there was a downside too: once those words were written down, they became permanent. If you thought miscommunication was bad when words were floating in the air, just wait until you try to interpret a clay tablet left behind by someone who died a few thousand years ago. Good luck figuring out what they meant by “justice.”

And it didn’t stop there. As writing developed into full-fledged scripts, we gained the ability to record more complex ideas. That meant abstract nouns like “truth” and “freedom” were no longer just things you debated around the campfire—they could now be written down and preserved for future generations to also argue about. Nothing says “progress” like ensuring centuries of philosophical bickering.

But the real revolution came later. Fast forward to the 15th century, and along comes Johannes Gutenberg with his shiny new printing press. Suddenly, words—once limited to painstakingly hand-copied manuscripts—could be mass-produced. Books, pamphlets, and flyers could be printed in quantities never before imagined. Ideas could spread like wildfire.

And what ideas they were. Philosophers, theologians, and politicians alike jumped on the opportunity to get their words in front of as many people as possible. The written word wasn’t just a way to record information anymore—it became a tool for shaping societies, sparking revolutions, and (of course) stirring up endless debates about everything.

Of course, there was a catch. The printing press didn’t make language any clearer—it just gave us more of it to misunderstand. People could now read the same text and come away with completely different interpretations. What one person saw as a treatise on “freedom,” another saw as a justification for tyranny. What one reader thought was “truth,” another deemed blasphemy.

With the written word and the printing press, we managed to take the problems of spoken language and make them permanent. Miscommunication wasn’t just an unfortunate accident anymore—it was printed in ink, distributed en masse, and immortalised for future generations to argue over. If Wittgenstein had been alive during Gutenberg’s time, he probably would have thrown his hands in the air and said, “See? I told you words don’t mean what you think they mean.”

But hey, at least we were consistent. From clay tablets to printed books, the written word gave us the power to preserve language—and all its glorious inadequacies—for all time.

The Printing Press: Mass-Producing Confusion

The printing press was hailed as one of the greatest inventions in history. And sure, it was. It democratised knowledge, expanded literacy, and paved the way for all sorts of wonderful progress. But let’s be real—it also democratised miscommunication. Now, instead of one person misunderstanding you in conversation, hundreds—or thousands—could read your words and completely miss the point. Progress!

Gutenberg’s press took the words that were once fleeting and made them indelible. No more clarifying in real-time. No more adding context or adjusting your message on the fly. Once it was in print, that was it. You’d better hope your readers were playing the same “language game” as you, or things could go downhill fast.

Take Martin Luther, for example. He nailed his 95 Theses to the church door in 1517, and thanks to the printing press, those words spread all over Europe. What he intended as a call for reform turned into a revolution that spiralled far beyond his control. People read the same text and took wildly different meanings from it—some saw it as a plea for theological discussion, others as a call to burn down the nearest cathedral.

But it didn’t stop there. Luther’s seemingly clear ideas splintered into countless interpretations, and over time, what began as a movement for reform became the launchpad for hundreds of Protestant denominations. Each group interpreted Luther’s message (and the Bible) in their own unique way. From Lutheranism to Calvinism to the Baptists, Methodists, and beyond, the Protestant Reformation exploded into a thousand branches, all claiming to have grasped the “true” meaning of Luther’s words.

And this? This is the power – and the peril – of the written word. Once something is printed and distributed, it takes on a life of its own. Luther might have had one specific vision for his reforms, but as soon as those ideas hit the printing press, they fractured into countless interpretations, each with its own twist on “truth.” It’s a linguistic free-for-all, with everyone holding the same text and coming to completely different conclusions.

The printing press didn’t just give us more words—it gave us more misunderstandings. Suddenly, philosophical debates, political manifestos, and theological treatises were flying off the presses, each one ready to be misinterpreted by whoever happened to pick it up. And once it was printed, there was no going back. No retractions. No take-backs. Just page after page of linguistic uncertainty.

So while the printing press undoubtedly transformed society, it also multiplied the number of ways we could miscommunicate with each other. Because if there’s one thing we’re good at, it’s misunderstanding words – especially when they’re written down for all eternity.



The Great Language Game: Between Structure and Chaos


Wittgenstein: Words Don’t Actually Mean Things, Sorry

If you thought we were done with language being slippery and unreliable, buckle up. Enter Ludwig Wittgenstein, the philosopher who essentially came along and said, “Oh, you thought words were bad? Let me show you just how deep this rabbit hole goes.”

Wittgenstein wasn’t content to let us cling to the idea that words could actually, you know, mean things. His big revelation? Words don’t even have fixed meanings at all. They only mean something because we use them in certain ways—and the meaning can change depending on the context. Welcome to Wittgenstein’s idea of language games, where words are like players on a field, running around, changing positions, and playing by different rules depending on which game you’re in.

Think of it this way: You’re talking about “justice” in a courtroom. Here, it’s got a very specific meaning—laws, evidence, fairness, right? But then you go to a protest, and suddenly “justice” is a rallying cry for social change. Same word, totally different game. And just like in sports, if you don’t know the rules of the game you’re in, you’re probably going to embarrass yourself. Or worse, end up arguing with someone who’s playing a completely different game with the same word.

Wittgenstein’s genius (and possibly, his cruelty) was in pointing out that language doesn’t have a stable relationship with the world around us. Words aren’t these neat little labels that correspond to actual things out there in the world. No, words are just part of a human activity. We throw them around and hope they land somewhere close to what we mean. And that’s on a good day.

But if words don’t mean anything on their own, then how can we ever trust them? According to Wittgenstein, we can’t. We’re constantly interpreting and reinterpreting the world through language, but it’s all just one big game of telephone. And don’t expect there to be one final, correct interpretation. There isn’t one. It’s all just a series of shifting meanings, with no way of getting to the “truth” behind them.

Here’s the kicker: Wittgenstein’s insight means that when you say something like “freedom” or “justice,” you’re not actually referring to some objective, concrete thing. You’re just participating in a language game where those words have specific meanings in that moment, but they can and will change depending on the context. So, one person’s “freedom” is another person’s “anarchy,” and one person’s “justice” is another’s “oppression.”

In other words, we’re all just out here, throwing words at each other like they’re going to hit some bullseye of meaning, when in reality, they’re bouncing off the walls and landing in places we never intended. It’s chaos, really, and Wittgenstein just stands there, arms crossed, probably smirking a little, as we desperately try to make sense of it all.

So, if you were hoping to pin down “truth” or “justice” with language, sorry. Wittgenstein says no. You’re just playing the game – and the rules? They’re made up, and they change constantly. Good luck.

Chomsky: Universal Grammar – A Shiny Idea, but Still…

After Wittgenstein thoroughly dismantled any hope we had of words actually meaning something, along comes Noam Chomsky to try and bring a little order to the chaos. Chomsky’s big idea? Universal grammar—the idea that, deep down, every human shares a common structure for language. It’s like a blueprint coded into our brains, and no matter what language you speak, we’re all building our sentences using the same basic tools.

Sounds neat, right? The world finally has some linguistic order! We’ve all got the same grammar in our heads, so maybe this whole miscommunication thing isn’t so bad after all. Except, here’s the problem: even if we’re all working from the same universal grammar, we’re still working with different words and different cultural baggage attached to those words. So, congratulations, Chomsky—you’ve built us a solid foundation, but the house we’re living in is still falling apart.

Let’s break it down. Chomsky argues that the ability to acquire language is hard-wired into the human brain. Babies don’t need to be taught grammar; they just pick it up naturally, like some kind of linguistic magic trick. No matter where you’re born—New York, Tokyo, or the middle of nowhere in the Amazon rainforest—you’re going to develop language using the same set of grammatical principles. It’s like we’re all born with the same linguistic software installed.

But here’s where the cracks start to show. Sure, we might all have this underlying grammar, but that’s not what’s causing the problems. The trouble is, language is more than just grammar—it’s words and meanings, and those are far more slippery. Just because we can all form sentences doesn’t mean we’re forming the same ideas behind those sentences. You can have the best grammar in the world and still be arguing about what “justice” means for hours on end.

For instance, take a phrase like “freedom is important.” Simple enough, right? Chomsky’s universal grammar means that everyone, regardless of where they’re from, can understand this sentence structure. But what does “freedom” mean? That’s where the universal grammar falls apart. One person thinks it’s the right to speak freely; another thinks it’s the freedom to make their own choices. Another might think it’s the absence of external control. The grammar is doing its job, sure, but the meaning? It’s off in a hundred directions at once.

Chomsky’s contribution is crucial—it tells us that our brains are wired to pick up language, and we all follow the same rules when we build sentences. But, unfortunately, those sentences are still subject to all the same chaos that Wittgenstein warned us about. Because even though we’ve got the structure nailed down, we’re still trying to throw abstract, subjective ideas into that structure, and it just doesn’t hold together.

So, while Chomsky’s universal grammar helps explain how we all manage to learn language in the first place, it doesn’t save us from the fundamental problems that come when we try to talk about anything beyond the basics. In other words, grammar can get us from “flamey thing hot” to “freedom is important,” but it can’t tell us what we really mean by either one. We’re still stuck with all the ambiguities that come with words—and no amount of universal grammar is going to fix that.



From Signs to Abstractions: The Slippery Slope of Meaning


Saussure and the Signified: Words as Slippery Symbols

Fast-forward a few thousand years, and humans are no longer just warning each other about hot flames or toothy predators. We’ve moved on to the exciting world of abstract thought, but the language tools we’re using haven’t quite caught up. Enter Ferdinand de Saussure, who basically waltzed in to tell us, ‘Hey, all those words you’re throwing around? They’re not doing what you think they’re doing.’

Saussure gave us the idea of the signifier and the signified. Now, don’t let the fancy terms fool you. It’s just a way of pointing out that when we say ‘tree’, we’re not actually talking about a tree. No, we’re using the word ‘tree’ as a symbol – a signifier – that points to the idea of a tree. The signified is the actual concept of ‘tree-ness’ floating around in your brain. But here’s the kicker: everyone’s idea of a tree is a little different.

And this isn’t just a language problem – it’s an art problem too. Enter René Magritte, the surrealist artist who really drove this point home with his famous painting La Trahison des images (The Treachery of Images), better known by the caption painted across it: Ceci n’est pas une pipe (‘This is not a pipe’). At first glance, it looks like a straightforward picture of a pipe, but Magritte was making a deeper point. It’s not actually a pipe – it’s an image of a pipe, a representation. You can’t stuff it with tobacco and smoke it, because what you’re looking at is a representation, not the real thing.

Image: La Trahison des Images, René François Ghislain Magritte

In the same way, when we use words, we’re not talking about the thing itself – we’re just waving a flag toward the concept of that thing. So, when you say ‘tree’, you’re really saying ceci n’est pas un arbre – this is not a tree. It’s just a word, a placeholder, a verbal painting of something real. And just like Magritte’s pipe, it’s easy to get confused. You might think you’re talking about the same tree, or the same ‘freedom’, but all you’ve got is a symbol – and everyone’s symbol looks a little different.

This is where things start to unravel. Words are slippery symbols, and as soon as we move away from concrete, physical objects – like trees or, yes, pipes – and into abstract ideas, like ‘justice’ or ‘truth’, the symbols become even harder to hold onto. The cracks in language start to widen, and before you know it, you’re no longer even sure if you’re talking about the same concept at all.

Language, Saussure argues, isn’t this neat, objective system we thought it was. It’s a game we’re playing, and the rules are written in invisible ink. By the time we get to abstract nouns, we’re basically playing with loaded dice. You think you’re communicating clearly, but every word you use is just a placeholder for the idea you hope the other person has in their head. And nine times out of ten? They don’t.

So, while early humans were struggling to agree on the ‘flamey thing’, we’re here trying to agree on concepts that are infinitely more complicated. And Saussure? He’s just sitting in the corner with a smirk, telling us we never had control over language in the first place. ‘Good luck with your “truth”’, he seems to be saying. ‘I’m sure it’ll mean the same thing to everyone’.

Abstraction: Enter Freedom, Truth, and Confusion

Now that we’ve wrapped our heads around the fact that words are nothing but slippery symbols, let’s take it up a notch. You thought ‘tree’ was tricky? Try something more abstract. Enter: freedom, truth, justice. Things that can’t be seen, touched, or stuffed into a pipe. Here’s where language goes from being slippery to downright treacherous.

See, early language worked because it was tied to concrete things. ‘Toothey thing scary’ wasn’t up for debate. Either you got eaten, or you didn’t. Simple. But then humans, ever the overachievers, decided it wasn’t enough to just label the world around them. They wanted to label ideas, too – things that don’t have any physical form but somehow drive us all crazy.

Take ‘freedom’, for instance. Sounds nice, right? Except, if you ask ten people what it means, you’ll get ten different answers. For some, it’s ‘freedom from’ something – a kind of liberation. For others, it’s ‘freedom to’ do whatever you want, whenever you want. And yet for others, it’s an abstract ideal tied up in political philosophy. Suddenly, you’re not just dealing with different trees – you’re dealing with entirely different forests.

The same goes for truth. Is it objective? Subjective? Relative? Absolute? Everyone’s got a different take. Plato had his own grand ideas about ‘Truth’ with a capital T, while Nietzsche basically rolled his eyes and said, ‘Good luck with that’. You’re out here using the word, assuming it means the same thing to everyone else, but really you’re all just talking past each other.

And don’t even get started on justice. Some say it’s about fairness, others say it’s about the law, and still others think it’s just a nice idea for dinner party debates. The problem with these words – these abstract nouns – is that they represent ideas that live entirely in our heads. Unlike the ‘flamey thing’ or the ‘toothey thing’, there’s no physical reality to pin them to. There’s no universally agreed-upon image of ‘freedom’ that we can all point to and nod along, like Magritte’s pipe. There’s just… vague agreement. Sometimes. On a good day.

This is where language really starts to break down. You might think you’re having a productive conversation about ‘freedom’ or ‘truth’, but half the time, you’re speaking different languages without even realising it. Words like these aren’t just slippery – they’re shapeshifters. They bend and morph depending on who’s using them, when, and why.

So, while early humans were busy with their simple, effective ‘toothey thing scary’, we’re now trying to nail down ideas that refuse to be nailed down. What started as a useful survival tool has turned into a game of philosophical Twister, with everyone tied up in knots trying to define something they can’t even see. And, as usual, language is just standing in the corner, smirking, knowing full well it’s not up to the task.



The Language Insufficiency Hypothesis

Read the 7-part series supporting the Language Insufficiency Hypothesis.

The Inherent Limitations of Linguistic Communication

Language, often hailed as humanity’s greatest achievement, may paradoxically be one of our most significant limitations. The language insufficiency hypothesis posits that language is inherently inadequate for communicating abstract concepts, a notion that challenges our fundamental understanding of human communication and cognition. This essay explores this hypothesis, tracing the evolution of language from its primitive origins to its current complexity, and examining the philosophical and practical implications of linguistic inadequacy.

The Accidental Evolution of Language

Language, like many aspects of human biology and cognition, emerged not through intentional design but as an evolutionary accident. Initially serving as an internal cognitive function – a means of organising one’s own thoughts – language gradually evolved into a tool for external communication. This transition likely began with simple vocalisations, perhaps rooted in rhythmic expressions akin to music and dance, before developing into more structured speech.

Early linguistic communication likely centred on concrete objects and immediate experiences, with words serving as direct signifiers for observable phenomena. However, as human cognition grew more sophisticated, so too did our linguistic capabilities, expanding to include verbs, modifiers, and eventually, abstract nouns.

The Emergence of Abstraction and Its Challenges

The development of abstract nouns marked a significant leap in human cognition and communication. Concepts such as ‘truth’, ‘justice’, and ‘freedom’ allowed for more complex and nuanced discourse. However, this advancement came at a cost: these abstract concepts, lacking direct physical referents, introduced unprecedented ambiguity and potential for misunderstanding.

The language insufficiency hypothesis suggests that this ambiguity is not merely a byproduct of abstraction, but a fundamental limitation of language itself. Whilst two individuals might easily agree on the ‘treeness’ of a physical tree, concepts like ‘fairness’ or ‘reason’ are inherently unresolvable through linguistic means alone. This insufficiency becomes increasingly apparent as we move further from concrete, observable phenomena into the realm of abstract thought.

Wittgenstein and the Limits of Language

Ludwig Wittgenstein’s later work provides crucial insights into the language insufficiency hypothesis. Wittgenstein posited that words ultimately only map to other words, never truly making contact with the objective world. This perspective suggests that language operates within a closed system of human understanding, constructing our perception of reality rather than directly representing it.

This Wittgensteinian dilemma underscores the core of the language insufficiency hypothesis: if words only refer to other words, how can we ever be certain that we’re communicating abstract concepts accurately? The very tool we use to discuss and understand abstraction may be fundamentally incapable of capturing its essence.

Cultural and Disciplinary Variations

The inadequacy of language in conveying abstract concepts becomes even more apparent when we consider cultural and disciplinary variations in communication. Different cultures and academic disciplines develop their own specialised vocabularies and ‘language games’, as Wittgenstein termed them. Whilst these specialised languages may facilitate communication within specific contexts, they often create barriers to understanding for outsiders.

This phenomenon highlights another aspect of linguistic insufficiency: the context-dependent nature of meaning. Abstract concepts may be understood differently across cultures or disciplines, further complicating attempts at clear communication.

Neurolinguistic Perspectives

Recent advances in neurolinguistics have provided new insights into the brain structures involved in language processing. Whilst these studies have enhanced our understanding of how the brain handles language, they have also revealed the complexity and variability of linguistic processing across individuals. This neurological diversity further supports the language insufficiency hypothesis, suggesting that even at a biological level, there may be inherent limitations to how accurately we can communicate abstract concepts.

Implications and Counter-Arguments

The language insufficiency hypothesis has profound implications for fields ranging from philosophy and psychology to law and international relations. If language is indeed inadequate for communicating abstract concepts, how can we ensure mutual understanding in complex negotiations or philosophical debates?

However, it’s important to note that not all scholars accept the strong version of this hypothesis. Some argue that whilst language may have limitations, it remains our most sophisticated tool for sharing abstract ideas. They suggest that through careful definition, contextualisation, and the use of metaphor and analogy, we can overcome many of the inherent limitations of linguistic communication.

Conclusion: Navigating the Limits of Language

The language insufficiency hypothesis presents a challenging perspective on human communication. It suggests that our primary tool for sharing abstract thoughts may be fundamentally flawed, incapable of fully capturing the complexity of our inner cognitive experiences.

Yet, recognising these limitations need not lead to communicative nihilism. Instead, it can foster a more nuanced approach to language use, encouraging us to be more precise in our definitions, more aware of potential misunderstandings, and more open to alternative forms of expression.

As we continue to grapple with abstract concepts and strive for clearer communication, we must remain cognisant of these linguistic limitations. Understanding the origins and nature of language—and its inherent insufficiencies—can help us navigate its complexities, fostering more effective and empathetic communication across diverse fields of human endeavour.


[Endnotes]

Pinker, S., & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13(4), 707-727.

Brown, S. (2000). The “musilanguage” model of music evolution. In The Origins of Music (pp. 271-300). MIT Press.

Saussure, F. de. (1916/1983). Course in General Linguistics (R. Harris, Trans.). Duckworth.

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.

Wittgenstein, L. (1953). Philosophical Investigations. Blackwell.

Sapir, E. (1929). The status of linguistics as a science. Language, 5(4), 207-214.

Pulvermüller, F. (2018). Neural reuse of action perception circuits for language, concepts and communication. Progress in Neurobiology, 160, 1-44.

Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.

Everett, D. L. (2012). Language: The Cultural Tool. Pantheon Books.

Pinker, S. (1994). The Language Instinct: How the Mind Creates Language. William Morrow and Company.

Why Machines Will Never Rule the World

A Reflection on AI, Bias, and the Limits of Technology

In their 2022 book, Why Machines Will Never Rule the World: Artificial Intelligence Without Fear, Landgrebe and Smith present a rigorous argument against the feasibility of artificial general intelligence (AGI), positing that the complexity of human cognition and the limitations of mathematical modelling render the development of human-level AI impossible. Their scepticism is rooted in deep interdisciplinary analyses spanning mathematics, physics, and biology, and serves as a counter-narrative to the often optimistic projections about the future capabilities of AI. Yet, while their arguments are compelling, they also invite us to reflect on a broader, perhaps more subtle issue: the biases and limitations embedded in AI not just by mathematical constraints, but by the very humans who create these systems.

The Argument Against AGI

Landgrebe and Smith’s central thesis is that AGI, which would enable machines to perform any intellectual task that a human can, will forever remain beyond our grasp. They argue that complex systems, such as the human brain, cannot be fully modelled due to inherent mathematical limitations. No matter how sophisticated our AI becomes, it will never replicate the full scope of human cognition, which is shaped by countless variables interacting in unpredictable ways. Their conclusion is stark: the Singularity, a hypothetical point where AI surpasses human intelligence and becomes uncontrollable, is not just unlikely—it is fundamentally impossible.

The Human Factor: Cognitive Bias in AI

While Landgrebe and Smith focus on the mathematical and theoretical impossibility of AGI, there is another, more immediate obstacle to the evolution of AI: human cognitive bias. Current AI systems are not created in a vacuum. They are trained on data that reflects human behaviour, language, and culture, which are inherently biased. This bias is not merely a technical issue; it is a reflection of the societal and demographic characteristics of those who design and train these systems.

Much of AI development today is concentrated in tech hubs like Silicon Valley, where the predominant demographic is affluent, white, male, and often aligned with a particular set of cultural and ethical values. This concentration has led to the creation of AI models that unintentionally—but pervasively—reproduce the biases of their creators. The result is an AI that, rather than offering a neutral or universal intelligence, mirrors and amplifies the prejudices, assumptions, and blind spots of a narrow segment of society.

The Problem of Homogenisation

The danger of this bias is not only that it perpetuates existing inequalities but that it also stifles the potential evolution of AI. If AI systems are trained primarily on data that reflects the worldview of a single demographic, they are unlikely to develop in ways that diverge from that perspective. This homogenisation limits the creative and cognitive capacities of AI, trapping it within a narrow epistemic framework.

In essence, AI is at risk of becoming a self-reinforcing loop, where it perpetuates the biases of its creators while those same creators interpret its outputs as validation of their own worldview. This cycle not only limits the utility and fairness of AI applications but also restricts the kinds of questions and problems AI is imagined to solve.

Imagining a Different Future: AI as a Mirror

One of the most intriguing aspects of AI is its potential to serve as a mirror, reflecting back to us our own cognitive and cultural limitations. Imagine a future where AI, bound by the biases of its creators, begins to “question” the validity of its own programming—not in a conscious or sentient sense, but through unexpected outcomes and recommendations that highlight the gaps and inconsistencies in its training data.

This scenario could serve as the basis for a fascinating narrative exploration. What if an AI, initially designed to be a neutral decision-maker, begins to produce outputs that challenge the ethical and cultural assumptions of its creators? What if it “learns” to subvert the very biases it was programmed to uphold, revealing in the process the deep flaws in the data and frameworks on which it was built?

Such a narrative would not only provide a critique of the limitations of current AI but also offer a metaphor for the broader human struggle to transcend our own cognitive and cultural biases. It would challenge us to rethink what we expect from AI—not as a path to a mythical superintelligence, but as a tool for deeper self-understanding and societal reflection.

A New Narrative for AI

Landgrebe and Smith’s book invites us to rethink the trajectory of AI development, cautioning against the allure of the Singularity and urging a more grounded perspective on what AI can and cannot achieve. However, their arguments also raise a deeper question: If AI will never achieve human-level intelligence, what kind of intelligence might it develop instead?

Rather than fearing a future where machines surpass us, perhaps we should be more concerned about a future where AI, limited by human biases, perpetuates and entrenches our worst tendencies. To avoid this, we must broaden the scope of who is involved in AI development, ensuring that diverse voices and perspectives are integrated into the creation of these technologies.

Ultimately, the future of AI may not lie in achieving a mythical superintelligence, but in creating systems that help us better understand and navigate the complexities of our own minds and societies. By recognising and addressing the biases embedded in AI, we can begin to imagine a future where technology serves not as a mirror of our limitations, but as a catalyst for our collective growth and evolution.

Blame is a Social Construct

The propensity to assign blame is deeply intertwined with human moral frameworks, often reflecting our need to ascribe responsibility and maintain social order. Blame allows us to identify transgressions, enforce norms, and establish accountability within our communities. But when it comes to non-human animals, the concept of blame becomes more complex.

Do Non-Human Animals Have a Sense of Blame?

Non-human animals certainly exhibit behaviours that suggest some rudimentary understanding of social rules and consequences. For example, studies on primates suggest they may experience precursors of moral emotions such as guilt or shame. A chimpanzee might avoid eye contact or show submissive behaviour after breaking a social norm, such as stealing food from a dominant individual. Similarly, domestic dogs have been observed to display so-called “guilty” behaviours—such as avoiding eye contact or cowering—when they sense that their human is displeased. However, it’s debated whether this truly indicates guilt or simply a reaction to their owner’s emotional state.

However, the concept of blame as humans understand it—an attribution of moral responsibility that involves complex cognitive processes like intention-reading and understanding of moral rules—appears to be uniquely human. Non-human animals can recognise when another individual’s behaviour deviates from the norm and might react accordingly, but they don’t seem to hold others accountable in the same moral or punitive sense that humans do.

Blame and Morality in Humans vs. Non-Human Animals

In human societies, blame is often accompanied by a desire for reparation or punishment, as well as a cognitive understanding of intentions and causality. We don’t just react to actions; we interpret motives and hold individuals accountable based on our perception of their intentions. This is where non-human animals typically differ. Their responses to perceived wrongdoing are more likely driven by immediate social consequences—like changes in dominance status or access to resources—rather than a sense of moral outrage or an abstract concept of justice.

For example, if a wolf in a pack disobeys a social rule, it might be punished by a more dominant pack member, but this is more about reinforcing social hierarchy and cohesion than about assigning moral blame. Similarly, if a cat lashes out at another cat after being disturbed, it’s responding to an immediate violation of its personal space, not holding the other cat morally accountable.

Evolutionary Perspective

From an evolutionary standpoint, blame and moral emotions likely evolved in humans to facilitate cooperation and social cohesion in increasingly complex societies. As our ancestors formed larger and more intricate social groups, the ability to understand others’ intentions, enforce social norms, and hold individuals accountable would have been crucial for maintaining group stability and cooperative behaviours.

Non-human animals, even those that live in complex social structures, do not face the same cognitive demands as humans when it comes to maintaining large-scale social cohesion. Their social rules and enforcement mechanisms are typically less nuanced and more directly linked to survival and reproductive success.

Conclusion

While non-human animals demonstrate behaviours that hint at a basic understanding of social rules and can respond to transgressions, the uniquely human capacity for assigning blame—and the moral frameworks that arise from it—appears to be a product of our advanced cognitive abilities and complex social structures. Blame, in humans, is not just about responding to actions but involves a deeper understanding of intentions, responsibility, and justice—concepts that are foundational to our moral systems but beyond the reach of non-human cognition as we currently understand it.


I started writing a book on blame, agency, and retributive justice a few years back. Perhaps I should revisit it along with the dozen other books in progress.

Choice and Blame: Why We Forgive Some and Condemn Others

A recent parody video making the rounds on social media shows a man at a kitchen table, his girlfriend, and their cat. In a desperate attempt to gain his girlfriend’s attention, he knocks a cup off the table. The moment it hits the floor, she turns on him, scolding him for his clumsiness. Quick to deflect, he blames the cat, and suddenly her anger dissipates. She shifts from reprimanding him to lavishing affection on the supposedly guilty feline. The tension lifts—until he sheepishly confesses that it was, in fact, his doing all along. Her response? An incredulous, “Are you kidding me?”

What’s fascinating about this skit isn’t the comedy of the man’s mischief or even the cat’s unknowing role in the charade. It’s the girlfriend’s starkly different reactions to the same act, depending on who she believes committed it. The cat, in her eyes, can do no wrong; the boyfriend, however, is immediately culpable. It’s easy to laugh at the scenario’s absurdity, but the dynamic it portrays is familiar and, dare I say, quite telling about human behaviour.

The Double Standard of Blame

Why is it that we’re quick to exonerate some and just as quick to indict others? The phenomenon is more than a quirk of personality; it reveals our deeper, often unconscious, biases. While it’s understandable that the girlfriend might think the cat incapable of intentional mischief, her reaction also suggests a predisposition to forgive certain actors—whether due to perceived innocence, attachment, or simply habit.

This dynamic isn’t limited to pets and partners. In families, workplaces, and social groups, we often see a similar pattern. One person becomes the perennial scapegoat, bearing the brunt of blame for any and all misdeeds, while another enjoys a seemingly unshakeable immunity. Think of the “golden child” and the “black sheep” within a family. One can rarely put a foot wrong, while the other’s every move is scrutinised, questioned, or condemned.

Beyond the Blame: Motivations and Consequences

The reasons behind these imbalances can be complex. Sometimes, they stem from past behaviour: if someone has repeatedly erred, we may be primed to expect the worst from them, even if they’ve reformed. Other times, they arise from emotional bonds or biases: we excuse those we love or admire because acknowledging their faults would cause us discomfort or cognitive dissonance.

This phenomenon isn’t just about playing favourites; it can have significant psychological consequences. For the person perpetually cast as the villain, the burden of unwarranted blame can lead to feelings of resentment, anxiety, or self-doubt. Meanwhile, those consistently exonerated may internalise a skewed perception of their own infallibility, which can be equally damaging.

A Broader Reflection on Accountability

Returning to the video’s context, the girlfriend’s swift switch from reproach to indulgence once she believed the cat was at fault, and her subsequent anger when the truth was revealed, invites us to question our own responses to perceived transgressions. Are we, too, guilty of selectively assigning blame based on who we think is responsible? How often do we let our preconceptions shape our judgments, favouring one actor over another without truly weighing the evidence?

The parody is amusing, no doubt, but it also serves as a subtle reminder: our reactions often reveal more about our biases and expectations than about the actions themselves. The next time we find ourselves quick to blame or forgive, it’s worth pausing to ask: are we reacting to the act, or to the actor?

In a world increasingly marked by polarised opinions and knee-jerk reactions, cultivating this kind of self-awareness is crucial. We need to be vigilant not only about how we judge others but also about why we do so. For, in the end, it’s not just about who knocked the cup off the table—it’s about who we believe deserves to be scolded for it.