Don’t Care Much about History

As the years pass and my disappointment matures like a fine wine (spoiler alert: it’s vinegar), I’m reminded of the average intelligence quotient floating about in the wild. A few years back, I stumbled upon The Half-Life of Knowledge. Cute title, but it’s more optimistic than it should be. Why assume knowledge even has a shelf life? It’s one thing for once-useful information to spoil thanks to “progress,” but what about the things that were never true to begin with? Ah, yes, the fabrications, the lies we’re spoon-fed under the guise of education.

I’m well-versed in the lies they peddle in the United States, but I’d bet good money (not that I have any) that every nation’s curriculum comes with its own patriotic propaganda. What am I on about, you ask? Let’s just say I’ve been reading How the World Made the West by Josephine Quinn, and it’s got me thinking. You see, I’ve also been simmering on an anti-democracy book for the better part of five years, and it’s starting to boil over.

Here in the good ol’ US of A, they like to wax lyrical about how Athens was the birthplace of democracy. Sure, Athens had its democratic dabblings. But let’s not get it twisted—if you really look at it, Athens was more akin to the Taliban than to any modern Western state. Shocked? Don’t be. For starters, only property-owning men could vote, and women—brace yourselves—were “forced” to wear veils. Sound familiar? “It’s a start,” you say. True, American women couldn’t vote until 1920, so let’s all pat ourselves on the back for that—Progress™️.

But no, hold your applause. First off, let’s remember that Athens and Sparta were city-states, not some cohesive entity called “Greece” as we so lovingly imagine. Just a bunch of Greek-speaking neighbours constantly squabbling like reality TV contestants. Meanwhile, over in Persia—yes, the supposed enemy of all things free and democratic—they had participative democracy, too. And guess what? Women in Persia could vote, own property, and serve as soldiers or military officers. So much for the idea that Athens was the singular beacon of democratic virtue.

More than this, Persian democracy operated by lottery, so many more people participated in the process, serving one-year terms. At the end of their terms, they were audited to check for corruption. Now you can see why we adopted the so-called Greek version. These blokes don’t welcome any oversight or scrutiny.

As a postmodern subjectivist, I tend to side-eye any grand narrative, and the history of Western civilisation is just one long parade of questionable claims and hidden agendas. Every time I think I’ve seen the last of the historical jump scares, another one comes lurking around the corner. Boo!

Polemics

People often ask why I churn out so many polemical, contrarian articles. The answer? It’s simply how I think. My brain naturally questions everything, not out of a desire to be difficult, but because that’s just my worldview. I’m not inventing challenges for the sake of argument—the challenges are already there, embedded in the world as I see it.

Another reason is solidarity. I write in hopes that others, whose thoughts run along similar lines, might stumble across my material and feel less alone. There’s something deeply reassuring in discovering that someone else has been on the same mental journey—that feeling of “Ah, I’m not alone in this.” Many times, I’ve had ideas only to find that philosophers, thinkers, or whoever have already penned volumes on the subject. And honestly? That grounds me. Even better if they’ve gone further, articulated it more eloquently, or ventured into new depths. It’s all useful. Plus, their critics then become my critics, and I get to sharpen my thoughts in response—or at least build my own defences.

And finally, I write for the potential spark. Maybe someone out there reads a piece of mine and feels inspired to take it further, push an idea beyond what I could imagine. After all, entire Nobel Prize-winning theories have started as someone else’s footnotes. There’s nothing wrong with being someone’s footnote.

So, now you know.


NB: I’ll be in surgery when this posts, so I’ve scheduled it in advance so as not to leave a gap… one that may occur anyway.

The Limits of Language: Why Philosophical Paradoxes Might Be Illusions of Mapping

Philosophical paradoxes have long captured our imagination, from Zeno’s paradoxes about movement to the Liar Paradox that tangles truth and falsehood into an endless loop. Often, these puzzles are treated as fundamental mysteries of the universe—windows into the limits of human understanding or insight into the hidden structure of reality. But what if, rather than reflecting deep truths about existence, many of these paradoxes are artefacts of language itself—symptoms of our conceptual tools struggling to adequately map a complex terrain? Perhaps, more often than not, the perplexities we face are the result of an inadequate mapping—a linguistic or cognitive misfire—rather than true paradoxes of the underlying terrain of reality.

This notion—that many paradoxes arise from the limitations of language and cognition—finds resonance in the work of philosophers like Ludwig Wittgenstein. Wittgenstein argued that many philosophical problems arise because we misuse language, taking words beyond their natural context, confusing what our words describe with the objects or concepts themselves. In this sense, our maps (the linguistic and logical structures we use) often lead us astray when navigating the conceptual terrains of ethics, metaphysics, or the nature of truth.

This idea can be articulated under what we might call the Language Insufficiency Hypothesis: the view that the limitations of language itself are at the root of many philosophical paradoxes. According to this hypothesis, the apparent contradictions or puzzles that emerge in philosophical discourse often reveal more about the shortcomings of our representational tools than about any deep metaphysical truths. The Language Insufficiency Hypothesis suggests that our conceptual maps are inadequate for fully capturing the richness of the terrains we attempt to describe, and that this inadequacy leads us to mistake linguistic confusion for genuine philosophical mystery.

The Inherent Limitations of Linguistic Communication

Language, often hailed as humanity’s greatest achievement, may paradoxically be one of our most significant limitations. The Language Insufficiency Hypothesis posits that language is inherently inadequate for communicating abstract concepts, a notion that challenges our fundamental understanding of human communication and cognition. This perspective traces the evolution of language from its primitive origins to its current complexity, revealing the philosophical and practical implications of linguistic inadequacy.

The Accidental Evolution of Language

Language, like many aspects of human biology and cognition, emerged not through intentional design but as an evolutionary accident. Initially serving as an internal cognitive function—a means of organising one’s own thoughts—language gradually evolved into a tool for external communication. This transition likely began with simple vocalisations, perhaps rooted in rhythmic expressions akin to music and dance, before developing into more structured speech.

Early linguistic communication likely centred on concrete objects and immediate experiences, with words serving as direct signifiers for observable phenomena. However, as human cognition grew more sophisticated, so too did our linguistic capabilities, expanding to include verbs, modifiers, and eventually, abstract nouns.

The Emergence of Abstraction and Its Challenges

The development of abstract nouns marked a significant leap in human cognition and communication. Concepts such as ‘truth’, ‘justice’, and ‘freedom’ allowed for more complex and nuanced discourse. However, this advancement came at a cost: these abstract concepts, lacking direct physical referents, introduced unprecedented ambiguity and potential for misunderstanding.

The Language Insufficiency Hypothesis suggests that this ambiguity is not merely a byproduct of abstraction, but a fundamental limitation of language itself. While two individuals might easily agree on the ‘treeness’ of a physical tree, disagreements over concepts like ‘fairness’ or ‘reason’ may be inherently unresolvable through linguistic means alone. This insufficiency becomes increasingly apparent as we move further from concrete, observable phenomena into the realm of abstract thought.

Wittgenstein and the Limits of Language

Ludwig Wittgenstein’s later work provides crucial insights into the Language Insufficiency Hypothesis. Wittgenstein posited that words ultimately only map to other words, never truly making contact with the objective world. This perspective suggests that language operates within a closed system of human understanding, constructing our perception of reality rather than directly representing it.

This Wittgensteinian dilemma underscores the core of the Language Insufficiency Hypothesis: if words only refer to other words, how can we ever be certain that we’re communicating abstract concepts accurately? The very tool we use to discuss and understand abstraction may be fundamentally incapable of capturing its essence.

Cultural and Disciplinary Variations

The inadequacy of language in conveying abstract concepts becomes even more apparent when we consider cultural and disciplinary variations in communication. Different cultures and academic disciplines develop their own specialised vocabularies and ‘language games’, as Wittgenstein termed them. While these specialised languages may facilitate communication within specific contexts, they often create barriers to understanding for outsiders.

This phenomenon highlights another aspect of linguistic insufficiency: the context-dependent nature of meaning. Abstract concepts may be understood differently across cultures or disciplines, further complicating attempts at clear communication.

Neurolinguistic Perspectives

Recent advances in neurolinguistics have provided new insights into the brain structures involved in language processing. While these studies have enhanced our understanding of how the brain handles language, they have also revealed the complexity and variability of linguistic processing across individuals. This neurological diversity further supports the Language Insufficiency Hypothesis, suggesting that even at a biological level, there may be inherent limitations to how accurately we can communicate abstract concepts.

Implications and Counter-Arguments

The Language Insufficiency Hypothesis has profound implications for fields ranging from philosophy and psychology to law and international relations. If language is indeed inadequate for communicating abstract concepts, how can we ensure mutual understanding in complex negotiations or philosophical debates?

However, it’s important to note that not all scholars accept the strong version of this hypothesis. Some argue that while language may have limitations, it remains our most sophisticated tool for sharing abstract ideas. They suggest that through careful definition, contextualisation, and the use of metaphor and analogy, we can overcome many of the inherent limitations of linguistic communication.

Navigating the Limits of Language

The Language Insufficiency Hypothesis presents a challenging perspective on human communication. It suggests that our primary tool for sharing abstract thoughts may be fundamentally flawed, incapable of fully capturing the complexity of our inner cognitive experiences.

Yet, recognising these limitations need not lead to communicative nihilism. Instead, it can foster a more nuanced approach to language use, encouraging us to be more precise in our definitions, more aware of potential misunderstandings, and more open to alternative forms of expression.

As we continue to grapple with abstract concepts and strive for clearer communication, we must remain cognisant of these linguistic limitations. Understanding the origins and nature of language—and its inherent insufficiencies—can help us navigate its complexities, fostering more effective and empathetic communication across diverse fields of human endeavour.

The Frege–Geach Problem as an Illustration of Linguistic Limitations

One pertinent example of this idea is the Frege–Geach problem, a challenge often faced by expressivist theories of ethics. Expressivists maintain that moral statements do not describe facts but rather express attitudes or emotions—a statement like “lying is wrong” is an expression of disapproval rather than a factual assertion. The Frege–Geach problem arises when such moral statements are embedded in logical constructions like conditionals or arguments: “If lying is wrong, then getting your little brother to lie is wrong.” In this context, expressivists face a challenge in explaining how the meaning of “lying is wrong” remains coherent across different uses, without reducing moral expressions to descriptive claims.
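The shape of the difficulty can be sketched formally. Writing W(ℓ) for “lying is wrong” and W(b) for “getting your little brother to lie is wrong” (an illustrative shorthand, not any standard expressivist notation), the argument is a straightforward instance of modus ponens:

```latex
% Modus ponens over a moral predicate (illustrative shorthand only):
%   P1: if lying is wrong, then getting your little brother to lie is wrong
%   P2: lying is wrong
%   C:  getting your little brother to lie is wrong
\[
\frac{W(\ell) \rightarrow W(b) \qquad W(\ell)}{W(b)}
\]
```

The inference is valid only if W(ℓ) contributes the same meaning in both premises. Yet in P2 it is asserted—and, for the expressivist, expresses disapproval—while in P1 it is merely entertained and expresses no attitude at all. The expressivist owes an account of what stays constant between the two occurrences.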

The Frege–Geach problem thus illustrates a fundamental limitation: attempting to apply truth-conditional logic, designed for descriptive language, to moral discourse, which serves a different function altogether. In trying to map evaluative terrain—which involves emotions, commitments, and subjective attitudes—using the same structures meant for factual landscapes, we encounter conceptual misalignments. This problem—a confusion of the terrain for the map—is not necessarily a genuine paradox about moral truths but rather a reflection of the inadequacy of our current linguistic tools. Just as a physical map may fail to capture the emotional experience of a journey, so too do our linguistic and logical maps fail to adequately capture the moral landscape.

Wittgenstein’s later work is helpful in framing this issue. He emphasised the importance of recognising different language-games: the rules and purposes that guide different forms of discourse. Moral language is not like scientific language; it follows different rules and aims to express and influence attitudes rather than establish empirically verifiable facts. The Frege–Geach problem emerges precisely because we attempt to impose a single logical structure onto forms of language that serve different purposes, confusing the distinct games we are playing. This attempt to force moral language into a framework designed for empirical propositions produces an apparent paradox, where the real issue lies in our misuse of the conceptual map.

This pattern of misinterpretation is not unique to moral discourse. Many philosophical paradoxes—from problems of identity and personal continuity to issues of free will and determinism—arise when we try to map different terrains with the same linguistic structures, or when we push our conceptual tools beyond their natural limits. Cognitive limitations also play a role; our tendency to think in binary oppositions, our reliance on categories, and our need for consistent narratives often lead to oversimplifications of complex realities. These cognitive tools—essential for everyday functioning—can prove inadequate for capturing the nuance of the philosophical landscapes we attempt to navigate.

The map-terrain challenge is thus at the core of why philosophical paradoxes can seem so intractable. Our maps—the languages and logical frameworks that structure our thinking—are, by their nature, simplifications of a world that is far more nuanced than we can readily articulate. When the terrain is moral, aesthetic, or otherwise not reducible to simple truths or falsehoods, the inadequacies of our maps become evident. We are left facing paradoxes that may, in truth, be nothing more than indicators that our representational systems need refinement or expansion.

Rather than treating these paradoxes as unresolvable, we might benefit from seeing them as invitations to reconsider our linguistic and cognitive frameworks. In recognising that the Frege–Geach problem, for instance, may reflect an ill-suited mapping of moral discourse rather than a genuine mystery about moral reality, we open the door to a pluralistic approach: different terrains require different maps. Perhaps, in some cases, the best solution is not to attempt to solve the paradox in traditional terms but to change the way we map the terrain altogether—to allow for multiple, context-sensitive tools that respect the particularity of each domain of discourse.

Ultimately, this perspective suggests a more flexible and cautious approach to philosophical inquiry—one that acknowledges the limits of our conceptual tools and remains open to the possibility that the terrain is far richer and more varied than our maps can currently capture.

Can Zombies Ever Be Conscious?

In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies—hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.

This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.

Dennett’s Zombies and Zimboes: Consciousness as Function

Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property—such as subjective experience—that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.

Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.

In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion—a misunderstanding of what consciousness is.

A Biological Rejection: Zombies Are Impossible

On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies—rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.

The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies—beings that behave like humans without consciousness—are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.

In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.

The Open Question: What About Future Technology?

However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies—whether defined as Dennett’s “behaviourally indistinguishable” beings or the biologically impossible entities proposed by the paper—are not real. But could this change?

What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies—performing all the actions and behaviours we associate with consciousness, but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.

Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities—artificial zombies—that perform human-like actions without functioning like traditional conscious beings.

I Wonder What’s Next?

This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.

Could we one day create a true zombie—a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.

For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.

The Illusion of the “Temporarily Embarrassed Millionaire”: How Capitalism’s Defenders Uphold Their Own Exploitation


In the contemporary world of deepening inequality and environmental degradation, capitalism continues to hold a powerful ideological grip on much of the global population. Yet the irony is that many of its staunchest defenders are not the elites or the true beneficiaries of the system, but the very workers and middle-class individuals whose lives it exploits and controls. These defenders are not capitalists themselves; they are, in fact, cogs in the machinery of a system they imagine will eventually reward their loyalty. This illusion is strikingly captured in a quote often misattributed to John Steinbeck: “Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.”[1]

This phenomenon, which we might call the temporarily embarrassed millionaire syndrome, reflects not only a profound misunderstanding of capitalism but also the effectiveness of the system in controlling its participants through hope and aspiration. Capitalism promises upward mobility, convincing even those at the bottom of the economic ladder that their current misfortunes are temporary. But as Karl Marx and Friedrich Engels observed, this is a system of exploitation that not only alienates workers but effectively destroys them.


Survivorship Bias and the Myth of the “Rising Tide”

Capitalism’s defenders frequently invoke the idea that “a rising tide lifts all boats.” The metaphor suggests that when capitalism prospers, everyone benefits. However, this vision of progress masks the reality of capitalism’s winners and losers. As geographer David Harvey has pointed out, capitalism is not a neutral system of wealth creation—it is a system of accumulation by dispossession, constantly expropriating wealth from others, often through privatisation and the commodification of public goods.[2] The rising tide does lift some boats, but it simultaneously leaves others stranded, or worse, sinking.

Survivorship bias is essential to understanding how capitalism maintains its legitimacy. The success stories—the wealthy entrepreneurs, the individuals who “made it”—are lauded as proof that the system works. But the vast numbers of people left behind, those who toil in exploitative conditions or who die from poverty and neglect, are erased from the narrative. In Engels’ terms, these are victims of social murder—individuals who die prematurely not by direct violence, but through the structural forces of deprivation imposed by capitalism.[3] Their deaths are rendered invisible, falling out of the metrics of rising living standards and growth.

Engels’ critique of industrial capitalism is as relevant today as it was in the 19th century. The modern mechanisms of exploitation may be more complex, but they are no less deadly. In a late capitalist world, the poor and marginalised are still being “murdered” through the structural violence of inadequate healthcare, poor working conditions, and environmental degradation. The millions left out of the capitalist success story are not anomalies but integral to the system’s operation.


Alienation and the Tragedy of Defending the System

Marx’s theory of alienation provides another crucial lens through which to understand why capitalism’s defenders often remain blind to their own exploitation. Under capitalism, workers are alienated from the products of their labour, the process of production, their own humanity, and from each other.[4] The worker becomes a cog in a machine, detached from the value they create, and unable to control their working life. Yet, even in this state of alienation, many still defend the system, believing that their hard work will eventually lead them to wealth and freedom.

This defence of capitalism, often articulated by those whose lives it degrades, reflects Antonio Gramsci’s concept of cultural hegemony. Gramsci argued that the ruling class maintains power not just through economic domination, but by shaping the cultural and ideological landscape.[5] Capitalism’s defenders are, in part, products of this hegemony, believing in the very values—individualism, competition, the ‘American Dream’—that bind them to a system of exploitation.

This illusion of freedom under capitalism is deepened by what Herbert Marcuse calls repressive desublimation. Capitalism offers false freedoms in the form of consumer choice and superficial pleasures, giving individuals the illusion that they are exercising autonomy, even as the system remains unchallenged.[6] Workers may identify themselves in their commodities—luxury goods, tech gadgets, cars—but these objects only serve to reinforce their alienation and dependence on the capitalist system. The temporarily embarrassed millionaire clings to the dream of eventual success, all the while contributing to a system that offers only superficial rewards in return.


Social Murder and the Structural Violence of Late Capitalism

The notion of social murder offers a stark framework for understanding capitalism’s indirect, yet pervasive, violence. As Engels explained, this form of violence is not inflicted through overt means, but through the systematic neglect of basic human needs. Whether it’s the millions who die due to lack of access to healthcare or the global poor displaced by climate-induced disasters, capitalism perpetuates a form of structural violence that is invisible to those who benefit from the system’s success.[7]

The Canadian writer and activist Naomi Klein extends this analysis through her concept of disaster capitalism, where crises are exploited for profit. Whether it’s natural disasters or financial crises, capitalism uses these events as opportunities to privatise public resources, dismantle social safety nets, and deepen inequality.[8] The victims of these disasters—often the poor and vulnerable—are, in Engels’ terms, socially murdered by a system that thrives on their dispossession.


The Temporarily Embarrassed Millionaire as a Tool of Control

The illusion that one’s current position is only temporary—that any individual can rise to capitalist wealth if they work hard enough—is central to maintaining the capitalist system. This aspiration prevents individuals from seeing their exploitation for what it is. They do not identify as part of an exploited class but instead believe they are merely waiting for their turn at wealth. Zygmunt Bauman’s concept of liquid modernity—the perpetual state of instability and insecurity produced by late capitalism—helps explain this phenomenon.[9] Individuals are constantly told that their position is fluid, changeable, and that their big break is just around the corner.

But for most, this “big break” never comes. The dream of becoming a millionaire is a powerful form of social control, one that keeps individuals invested in a system that benefits only a small fraction of its participants. As Marx reminds us, “the worker becomes all the poorer the more wealth he produces, the more his production increases in power and range.”[10] Capitalism does not reward the many; it exploits the many for the benefit of the few.


Conclusion: Facing the Irony and Imagining a Post-Capitalist Future

The greatest irony of capitalism is that those who defend it most fervently are often those who will never realise its promises. These are not the capitalists of the system, but its workers, its underclass, and its exploited. They see themselves not as oppressed, but as temporarily embarrassed millionaires—an illusion that keeps them bound to a system that offers them no real future.

In this light, the true success of capitalism is not in its creation of wealth, but in its ability to mask the conditions of exploitation, alienation, and social murder that underpin it. The path forward requires a dismantling of these illusions and a recognition that the system’s failures are not accidental but integral to its design.

Only by facing these uncomfortable truths can we begin to imagine a future beyond the constraints of capitalist ideology, a world where human flourishing is no longer measured by wealth accumulation but by the collective well-being of all.


Endnotes:

[1]: Misattributed to John Steinbeck, this quote encapsulates a critical observation about American capitalism’s appeal to aspiration rather than solidarity.
[2]: David Harvey, The New Imperialism (Oxford University Press, 2005), pp. 145–147.
[3]: Friedrich Engels, The Condition of the Working Class in England (1845; Oxford University Press), p. 112.
[4]: Karl Marx, Economic and Philosophic Manuscripts of 1844 (Progress Publishers, 1959).
[5]: Antonio Gramsci, Selections from the Prison Notebooks (International Publishers, 1971), p. 12.
[6]: Herbert Marcuse, One-Dimensional Man (Beacon Press, 1964), p. 10.
[7]: Friedrich Engels, The Condition of the Working Class in England, p. 114.
[8]: Naomi Klein, The Shock Doctrine: The Rise of Disaster Capitalism (Picador, 2007), pp. 9–10.
[9]: Zygmunt Bauman, Liquid Modernity (Polity, 2000), p. 14.
[10]: Karl Marx, Economic and Philosophic Manuscripts of 1844, p. 68.



Jargon, Brains, and the Struggle for Meaning


Specialised Languages: Academia’s Jargon Olympics

If you thought normal language was confusing, let’s take a moment to appreciate the true champions of linguistic obscurity: academics. Welcome to the world of specialised languages, where entire fields of study have developed their own language games that make even Wittgenstein’s head spin.

Here’s how it works: Every discipline—science, law, philosophy—creates its own jargon to describe the world. At first, it seems helpful. Instead of using vague terms, you get precise definitions for complex ideas. But what started as a way to improve communication within a field quickly turned into a linguistic arms race, where the more obscure and convoluted your terms are, the smarter you sound. You’re not just a lawyer anymore—you’re someone who’s ready to throw “res ipsa loquitur” into casual conversation to leave everyone else in the room wondering if they’ve missed a memo.

The problem? If you’re not part of the club, good luck understanding what anyone is talking about. Want to read a physics paper? Prepare to learn a whole new vocabulary. Need to get through a legal document? You’ll be knee-deep in Latin phrases before you even get to the point. And don’t even try to decipher a philosophical text unless you’re ready to battle abstract nouns that have been stretched and twisted beyond recognition.

It’s not just the words themselves that are the issue—it’s the sheer density of them. Take “justice” for example. In philosophy, you’ve got theories about distributive justice, retributive justice, restorative justice, and a hundred other variations, each with its own set of terms and conditions. And that’s before we even touch on how “justice” is defined in legal circles, where it becomes an even more tangled mess of case law and precedent. Every field is playing its own version of the “justice” game, with its own rules and definitions, and none of them are interested in comparing notes.

This is the academic world in a nutshell. Each discipline has built its own linguistic fortress, and unless you’ve spent years studying, you’re not getting in. But here’s the kicker: even within these fields, people are often misunderstanding each other. Just because two scientists are using the same words doesn’t mean they’re on the same page. Sometimes, it’s more like a game of intellectual one-upmanship—who can define the most obscure term or twist a familiar word into something completely unrecognisable?

And let’s not forget the philosophers. They’ve turned linguistic acrobatics into an art form. Good luck reading Foucault or Derrida without a dictionary (or five) on hand. You might walk away thinking you understand their points, but do you really? Or have you just memorised the jargon without actually grasping the deeper meaning? Even scholars within these fields often argue over what was really meant by a certain text—Barthes, after all, famously declared the “death of the author,” so it’s not like anyone really has the final say on meaning anyway.

So here we are, knee-deep in jargon, trying to communicate with people who, technically, speak the same language but are operating within entirely different rulesets. Every academic discipline has its own secret code, and if you don’t know it, you’re lost. Even when you do know the code, you’re still at risk of miscommunication, because the words that look familiar have been stretched and shaped to fit highly specific contexts. It’s like being fluent in one dialect of English and then suddenly being asked to write a thesis in legalese. Good luck.

In the end, academia’s specialised languages don’t just make things harder—they actively create barriers. What started as a way to improve precision has turned into an obstacle course of incomprehensible terms, where the real challenge is just figuring out what anyone’s actually saying. And let’s be honest, even if you do figure it out, there’s no guarantee it’s going to mean the same thing next time you see it.

Neurolinguistics: Even Our Brains Can’t Agree

So far, we’ve seen how language is a mess of miscommunication, cultural differences, and academic jargon. But surely, at least on a biological level, our brains are all on the same page, right? Well, not exactly. Welcome to the wonderful world of neurolinguistics, where it turns out that even the very organ responsible for language can’t get its act together.

Here’s the deal: Neurolinguistics is the study of how the brain processes language, and while it’s fascinating, it’s also a bit of a buzzkill for anyone hoping for consistency. See, your brain and my brain don’t process language in the same way. Sure, we’ve got similar hardware, but the software is wildly unpredictable. There are individual differences, cultural influences, and developmental quirks that all affect how we understand and produce language. What’s simple for one brain might be completely baffling to another.

Take, for example, something as basic as syntax. Chomsky might have told us we all have a universal grammar hard-wired into our brains, but neurolinguistics has shown that how we apply that grammar can vary significantly. Some people are wired to handle complex sentence structures with ease—think of that friend who can follow 10 different clauses in a single breath. Others? Not so much. For them, even a moderately tricky sentence feels like mental gymnastics. The brain is constantly juggling words, meanings, and structures, and some brains are better at it than others.

But the real kicker is how differently we interpret words. Remember those abstract nouns we’ve been wrestling with? Well, it turns out that your brain might be interpreting ‘freedom’ or ‘justice’ completely differently from mine – not just because of culture or upbringing, but because our brains physically process those words in different ways. Neurolinguistic studies have shown that certain regions of the brain are activated differently depending on the individual’s experience with language. In other words, your personal history with a concept can literally change how your brain lights up when you hear or say it.

And don’t even get me started on bilingual brains. If you speak more than one language, your brain is constantly toggling between two (or more) linguistic systems, which means it’s running twice the risk of misinterpretation. What a word means in one language might trigger a completely different association in another, leaving bilingual speakers in a constant state of linguistic flux. It’s like trying to run two operating systems on the same computer—things are bound to get glitchy.

But here’s the real kicker: Even within the same person, the brain can’t always process language the same way all the time. Stress, fatigue, emotional state—all of these factors can influence how well we handle language on any given day. Ever tried to have a coherent conversation when you’re tired or angry? Good luck. Your brain isn’t interested in nuance or deep philosophical ideas when it’s in survival mode. It’s just trying to get through the day without short-circuiting.

So, not only do we have to deal with the external chaos of language – miscommunication, different contexts, shifting meanings – but we also have to contend with the fact that our own brains are unreliable interpreters. You can use all the right words, follow all the right grammar rules, and still end up with a garbled mess of meaning because your brain decided to take a nap halfway through the sentence.

In the end, neurolinguistics reminds us that language isn’t just a social or cultural problem – it’s a biological one too. Our brains are doing their best to keep up, but they’re far from perfect. The very organ that makes language possible is also responsible for making it infinitely more complicated than it needs to be. And if we can’t rely on our own brains to process language consistently, what hope do we have of ever understanding anyone else?



The Language Insufficiency Hypothesis

Read the 7-part series supporting the Language Insufficiency Hypothesis.

The Inherent Limitations of Linguistic Communication

Language, often hailed as humanity’s greatest achievement, may paradoxically be one of our most significant limitations. The language insufficiency hypothesis posits that language is inherently inadequate for communicating abstract concepts, a notion that challenges our fundamental understanding of human communication and cognition. This essay explores this hypothesis, tracing the evolution of language from its primitive origins to its current complexity, and examining the philosophical and practical implications of linguistic inadequacy.

The Accidental Evolution of Language

Language, like many aspects of human biology and cognition, emerged not through intentional design but as an evolutionary accident. Initially serving as an internal cognitive function – a means of organising one’s own thoughts – language gradually evolved into a tool for external communication. This transition likely began with simple vocalisations, perhaps rooted in rhythmic expressions akin to music and dance, before developing into more structured speech.

Early linguistic communication likely centred on concrete objects and immediate experiences, with words serving as direct signifiers for observable phenomena. However, as human cognition grew more sophisticated, so too did our linguistic capabilities, expanding to include verbs, modifiers, and eventually, abstract nouns.

The Emergence of Abstraction and Its Challenges

The development of abstract nouns marked a significant leap in human cognition and communication. Concepts such as ‘truth’, ‘justice’, and ‘freedom’ allowed for more complex and nuanced discourse. However, this advancement came at a cost: these abstract concepts, lacking direct physical referents, introduced unprecedented ambiguity and potential for misunderstanding.

The language insufficiency hypothesis suggests that this ambiguity is not merely a byproduct of abstraction, but a fundamental limitation of language itself. Whilst two individuals might easily agree on the ‘treeness’ of a physical tree, concepts like ‘fairness’ or ‘reason’ are inherently unresolvable through linguistic means alone. This insufficiency becomes increasingly apparent as we move further from concrete, observable phenomena into the realm of abstract thought.

Wittgenstein and the Limits of Language

Ludwig Wittgenstein’s later work provides crucial insights into the language insufficiency hypothesis. Wittgenstein posited that words ultimately only map to other words, never truly making contact with the objective world. This perspective suggests that language operates within a closed system of human understanding, constructing our perception of reality rather than directly representing it.

This Wittgensteinian dilemma underscores the core of the language insufficiency hypothesis: if words only refer to other words, how can we ever be certain that we’re communicating abstract concepts accurately? The very tool we use to discuss and understand abstraction may be fundamentally incapable of capturing its essence.

Cultural and Disciplinary Variations

The inadequacy of language in conveying abstract concepts becomes even more apparent when we consider cultural and disciplinary variations in communication. Different cultures and academic disciplines develop their own specialised vocabularies and ‘language games’, as Wittgenstein termed them. Whilst these specialised languages may facilitate communication within specific contexts, they often create barriers to understanding for outsiders.

This phenomenon highlights another aspect of linguistic insufficiency: the context-dependent nature of meaning. Abstract concepts may be understood differently across cultures or disciplines, further complicating attempts at clear communication.

Neurolinguistic Perspectives

Recent advances in neurolinguistics have provided new insights into the brain structures involved in language processing. Whilst these studies have enhanced our understanding of how the brain handles language, they have also revealed the complexity and variability of linguistic processing across individuals. This neurological diversity further supports the language insufficiency hypothesis, suggesting that even at a biological level, there may be inherent limitations to how accurately we can communicate abstract concepts.

Implications and Counter-Arguments

The language insufficiency hypothesis has profound implications for fields ranging from philosophy and psychology to law and international relations. If language is indeed inadequate for communicating abstract concepts, how can we ensure mutual understanding in complex negotiations or philosophical debates?

However, it’s important to note that not all scholars accept the strong version of this hypothesis. Some argue that whilst language may have limitations, it remains our most sophisticated tool for sharing abstract ideas. They suggest that through careful definition, contextualisation, and the use of metaphor and analogy, we can overcome many of the inherent limitations of linguistic communication.

Conclusion: Navigating the Limits of Language

The language insufficiency hypothesis presents a challenging perspective on human communication. It suggests that our primary tool for sharing abstract thoughts may be fundamentally flawed, incapable of fully capturing the complexity of our inner cognitive experiences.

Yet, recognising these limitations need not lead to communicative nihilism. Instead, it can foster a more nuanced approach to language use, encouraging us to be more precise in our definitions, more aware of potential misunderstandings, and more open to alternative forms of expression.

As we continue to grapple with abstract concepts and strive for clearer communication, we must remain cognisant of these linguistic limitations. Understanding the origins and nature of language—and its inherent insufficiencies—can help us navigate its complexities, fostering more effective and empathetic communication across diverse fields of human endeavour.

Read the 7-part series supporting the Language Insufficiency Hypothesis.

[Endnotes]

Pinker, S., & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13(4), 707-727.

Brown, S. (2000). The “musilanguage” model of music evolution. In The Origins of Music (pp. 271-300). MIT Press.

Saussure, F. de. (1916/1983). Course in General Linguistics (R. Harris, Trans.). Duckworth.

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.

Wittgenstein, L. (1953). Philosophical Investigations. Blackwell.

Sapir, E. (1929). The status of linguistics as a science. Language, 5(4), 207-214.

Pulvermüller, F. (2018). Neural reuse of action perception circuits for language, concepts and communication. Progress in Neurobiology, 160, 1-44.

Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.

Everett, D. L. (2012). Language: The Cultural Tool. Pantheon Books.

Pinker, S. (1994). The Language Instinct: How the Mind Creates Language. William Morrow and Company.

Blame is a Social Construct

The propensity to assign blame is deeply intertwined with human moral frameworks, often reflecting our need to ascribe responsibility and maintain social order. Blame allows us to identify transgressions, enforce norms, and establish accountability within our communities. But when it comes to non-human animals, the concept of blame becomes more complex.

Do Non-Human Animals Have a Sense of Blame?

Non-human animals certainly exhibit behaviours that suggest some rudimentary understanding of social rules and consequences. For example, studies on primates show that they can experience forms of moral emotions like guilt or shame. A chimpanzee might avoid eye contact or show submissive behaviour after breaking a social norm, such as stealing food from a dominant individual. Similarly, domestic dogs have been observed to display so-called “guilty” behaviours—such as avoiding eye contact or cowering—when they sense that their human is displeased. However, it’s debated whether this truly indicates guilt or simply a reaction to their owner’s emotional state.

However, the concept of blame as humans understand it—an attribution of moral responsibility that involves complex cognitive processes like intention-reading and understanding of moral rules—appears to be uniquely human. Non-human animals can recognise when another individual’s behaviour deviates from the norm and might react accordingly, but they don’t seem to hold others accountable in the same moral or punitive sense that humans do.

Blame and Morality in Humans vs. Non-Human Animals

In human societies, blame is often accompanied by a desire for reparation or punishment, as well as a cognitive understanding of intentions and causality. We don’t just react to actions; we interpret motives and hold individuals accountable based on our perception of their intentions. This is where non-human animals typically differ. Their responses to perceived wrongdoing are more likely driven by immediate social consequences—like changes in dominance status or access to resources—rather than a sense of moral outrage or an abstract concept of justice.

For example, if a wolf in a pack disobeys a social rule, it might be punished by the alpha, but this is more about reinforcing social hierarchy and cohesion than about assigning moral blame. Similarly, if a cat lashes out at another cat after being disturbed, it’s responding to an immediate violation of its personal space, not holding the other cat morally accountable.

Evolutionary Perspective

From an evolutionary standpoint, blame and moral emotions likely evolved in humans to facilitate cooperation and social cohesion in increasingly complex societies. As our ancestors formed larger and more intricate social groups, the ability to understand others’ intentions, enforce social norms, and hold individuals accountable would have been crucial for maintaining group stability and cooperative behaviours.

Non-human animals, even those that live in complex social structures, do not face the same cognitive demands as humans when it comes to maintaining large-scale social cohesion. Their social rules and enforcement mechanisms are typically less nuanced and more directly linked to survival and reproductive success.

Conclusion

While non-human animals demonstrate behaviours that hint at a basic understanding of social rules and can respond to transgressions, the uniquely human capacity for assigning blame—and the moral frameworks that arise from it—appears to be a product of our advanced cognitive abilities and complex social structures. Blame, in humans, is not just about responding to actions but involves a deeper understanding of intentions, responsibility, and justice—concepts that are foundational to our moral systems but beyond the reach of non-human cognition as we currently understand it.


I started writing a book on blame, agency, and retributive justice a few years back. Perhaps I should revisit it along with the dozen other books in progress.

Choice and Blame: Why We Forgive Some and Condemn Others

A recent parody video making the rounds on social media shows a man at a kitchen table, his girlfriend, and their cat. In a desperate attempt to gain his girlfriend’s attention, he knocks a cup off the table. The moment it hits the floor, she turns on him, scolding him for his clumsiness. Quick to deflect, he blames the cat, and suddenly her anger dissipates. She shifts from reprimanding him to lavishing affection on the supposedly guilty feline. The tension lifts—until he sheepishly confesses that it was, in fact, his doing all along. Her response? An incredulous, “Are you kidding me?”

What’s fascinating about this skit isn’t the comedy of the man’s mischief or even the cat’s unknowing role in the charade. It’s the girlfriend’s starkly different reactions to the same act, depending on who she believes committed it. The cat, in her eyes, can do no wrong; the boyfriend, however, is immediately culpable. It’s easy to laugh at the scenario’s absurdity, but the dynamic it portrays is familiar and, dare I say, quite telling about human behaviour.

The Double Standard of Blame

Why is it that we’re quick to exonerate some and just as quick to indict others? The phenomenon is more than a quirk of personality; it reveals our deeper, often unconscious, biases. While it’s understandable that the girlfriend might think the cat incapable of intentional mischief, her reaction also suggests a predisposition to forgive certain actors—whether due to perceived innocence, attachment, or simply habit.

This dynamic isn’t limited to pets and partners. In families, workplaces, and social groups, we often see a similar pattern. One person becomes the perennial scapegoat, bearing the brunt of blame for any and all misdeeds, while another enjoys a seemingly unshakeable immunity. Think of the “golden child” and the “black sheep” within a family. One can rarely put a foot wrong, while the other’s every move is scrutinised, questioned, or condemned.

Beyond the Blame: Motivations and Consequences

The reasons behind these imbalances can be complex. Sometimes, they stem from past behaviour: if someone has repeatedly erred, we may be primed to expect the worst from them, even if they’ve reformed. Other times, they arise from emotional bonds or biases: we excuse those we love or admire because acknowledging their faults would cause us discomfort or cognitive dissonance.

This phenomenon isn’t just about playing favourites; it can have significant psychological consequences. For the person perpetually cast as the villain, the burden of unwarranted blame can lead to feelings of resentment, anxiety, or self-doubt. Meanwhile, those consistently exonerated may internalise a skewed perception of their own infallibility, which can be equally damaging.

A Broader Reflection on Accountability

Returning to the video’s context, the girlfriend’s swift switch from reproach to indulgence once she believed the cat was at fault, and her subsequent anger when the truth was revealed, invites us to question our own responses to perceived transgressions. Are we, too, guilty of selectively assigning blame based on who we think is responsible? How often do we let our preconceptions shape our judgments, favouring one actor over another without truly weighing the evidence?

The parody is amusing, no doubt, but it also serves as a subtle reminder: our reactions often reveal more about our biases and expectations than about the actions themselves. The next time we find ourselves quick to blame or forgive, it’s worth pausing to ask: are we reacting to the act, or to the actor?

In a world increasingly marked by polarised opinions and knee-jerk reactions, cultivating this kind of self-awareness is crucial. We need to be vigilant not only about how we judge others but also about why we do so. For, in the end, it’s not just about who knocked the cup off the table—it’s about who we believe deserves to be scolded for it.

Cognitive Processing Flow Model

The Cognitive Processing Flow Model illustrates how we process the phenomenal world. It’s reductionist and missing aspects because it is just a back-of-the-napkin sketch. I created it because I uttered, “I can model it for you”. And so I did.

EDIT: I’ve updated the model slightly as the article head image, but the copy content refers to the first draft.

My response was to a person claiming that all you need are facts, and logic prevails. Rather than restate the argument, I’ll just walk through the diagram.

First, some meta-information to set it up. We are subjective entities in the world, equipped with a sense-perception apparatus. Countless events occur in this world; we recognise only a few of them within our limited range, though technology expands this range in various ways.

Most of us interact in the world. Some are less ambulatory, so the world visits them. Some have sense-perception deficits whilst others have cognitive deficits. My point is not to capture every edge and corner case. This is just a generalised model.

It starts with an event. Events occur ceaselessly, in our small portion of the world and elsewhere. For the purposes of the model, the first thing that happens is that an event catches our attention. We might notice a shape, a colour, or a movement; we might hear a sound, smell an aroma, feel a sensation, or taste something.

A pre-emotion, pre-logic function serves to process these available inputs. Perhaps, you hear a report on anthropogenic climate change or read something about a political candidate. This emotional filter will police sensory inputs and unconsciously or preconsciously determine if you will react to the initial stimulus. If not, you’ll continue in an attention-seeking loop. Not that kind of attention-seeking.

As my dialogue was about the presentation of facts, our next stop will be logical evaluation. Does this make sense to us, or can we otherwise make it? This is a process in itself. I’ll assume here that it requires little elaboration. Instead, I’ll focus on the operating environment.

Our logical processes are coloured by past experiences and tainted by cognitive biases and deficits. We may also trigger the calling of additional facts through past experiences or the current engagement.

We’ll process these fragments and reach some logical conclusion. But we’re not done. We take this intermediate conclusion and run it through more emotional processing, where cognitive biases come back into play, and the result is checked by a cognitive dissonance routine. To be honest, this routine is probably part of the emotional reconciliation process, but I’ve drawn it separately, so I’ll let it be. In this case, it’s just a filter: if the conclusion happens to conform to our belief system, it passes unfettered; otherwise, it gets squared with our beliefs. Again, this leads me to believe it’s a subcomponent of emotional reconciliation. I’ll update the chart later.

In any case, we end at Final Acceptance. We may accept or reject the logic, but either way we arrive at an opinion that gets catalogued with the rest of them. Some may be elevated to facts or truths in the epistemological hierarchy. Although an end marker is identified, it’s really a wait state for the next event. Rinse and repeat until death.

I’ll update this presently. Be on the lookout. It could include more dimensions and interactions, but that might have to wait until version 3.

Meantime, does this feel right to you? Did it even get your attention?

An Example: Anthropogenic Climate Change

Let’s wrap up with an example. I’ll use climate change. An article comes into your attention field, and you have an interest in these things, so it passes through the emotional filter. If your propensity for these articles is high, it might race to the next stage.

You read the article, and it contains some facts—rather, it contains claims for evaluation. To evaluate them, you’ll recall past experiences, and cognitive biases are always lying in wait. You may have to look for new facts to add to the mix. These will have to take a similar route past your attention gatekeeper and emotional sidekick.

If you are already predisposed to believe that climate change is a hoax, these facts will filter through that lens—or vice versa.

When all of this is resolved, you’ll have arrived at a conclusion—perhaps we’ll call it a proto-conclusion. It hasn’t been set yet.

You are still going to introspect emotionally and decide if this is a position you want to hold. Perhaps you feel that climate change is a hoax, but the article doesn’t jibe with that position. Here, you’ll either accept these facts and flip a bit, becoming a sceptical believer, or cognitive dissonance will kick in and ensure your sense of the world isn’t thrown off kilter. You may update your belief system to include this datum for future assessments.

Now we are ready for final acceptance. You can now express your established opinion. If the net effect of the next event is to counter that acceptance, rinse and repeat ad infinitum.
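For the programmatically inclined, the flow above can be caricatured in a few lines of code. This is a toy sketch of the model, not the model itself: the class name, the salience threshold, and the string labels are all illustrative inventions, and the real processes are obviously not this tidy.

```python
from dataclasses import dataclass, field

@dataclass
class Perceiver:
    """Toy sketch of the Cognitive Processing Flow Model:
    attention -> emotional filter -> logical evaluation ->
    dissonance check -> final acceptance. All names and
    thresholds are illustrative, not from the diagram."""
    interests: set = field(default_factory=set)   # what can catch our attention
    beliefs: dict = field(default_factory=dict)   # topic -> currently held position

    def notice(self, event):
        # Attention loop: events outside our limited range never register.
        return event["topic"] in self.interests

    def emotional_filter(self, event):
        # Pre-emotion, pre-logic gate: react only if the stimulus is salient enough.
        return event.get("salience", 0.0) >= 0.5

    def evaluate(self, event):
        # Logical evaluation, coloured by prior experience; yields a
        # proto-conclusion, not yet a settled opinion.
        prior = self.beliefs.get(event["topic"])
        return {"claim": event["claim"],
                "conforms": prior in (None, event["claim"])}

    def reconcile(self, event, proto):
        # Dissonance routine: a conforming conclusion passes unfettered;
        # otherwise it is squared with the existing belief system
        # (here, crudely, the prior belief simply wins).
        accepted = proto["claim"] if proto["conforms"] else self.beliefs[event["topic"]]
        self.beliefs[event["topic"]] = accepted   # catalogued as an opinion
        return accepted

    def process(self, event):
        # Full pipeline; returns the catalogued opinion, or None if the
        # event never made it past attention or the emotional filter.
        if not (self.notice(event) and self.emotional_filter(event)):
            return None
        return self.reconcile(event, self.evaluate(event))

# A sceptic reads a salient pro-consensus climate article: the claim
# filters through the existing lens, and the prior belief survives.
sceptic = Perceiver(interests={"climate"}, beliefs={"climate": "hoax"})
print(sceptic.process({"topic": "climate",
                       "claim": "warming is real",
                       "salience": 0.9}))  # -> hoax
```

Swapping the `reconcile` rule is where a “bit flip” to sceptical believer would live; as written, the sketch models only the stubborn path through the dissonance filter.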