Excess Deaths Attributable to Capitalism

A System Built on Exploitation and Neglect

Capitalism, often celebrated for its ability to generate wealth and innovation, also brings with it a darker legacy: the untold millions of lives prematurely lost due to its systemic failures. More than 10 million excess deaths per year can be attributed to capitalism, and that toll continues to rise. These deaths are not simply unfortunate byproducts but are structurally baked into the system itself. Whether through poverty, healthcare inequality, environmental destruction, or war, capitalism’s logic of profit maximisation places human life at the mercy of market forces, with devastating consequences.


Friedrich Engels famously referred to these preventable deaths as social murder, a term that highlights how capitalism creates conditions in which certain populations are systematically neglected, deprived, and ultimately destroyed. Today, Engels’ critique is more relevant than ever as we examine the staggering human toll that capitalism has left in its wake, often invisible in the glow of GDP figures and economic growth.


Poverty and Hunger: The Silent Killers

One of the most pervasive ways capitalism generates excess deaths is through poverty and hunger. Despite the extraordinary wealth produced by capitalist economies, millions still die from hunger-related causes every year. According to the World Health Organization (WHO), around 9 million people die annually from hunger and malnutrition, mostly in regions where capitalist-driven global inequality has made basic necessities unaffordable or inaccessible.[1]

Capitalism’s defenders often point to rising standards of living as evidence of the system’s success, but this narrative suffers from survivorship bias. The success stories of those who have benefited from capitalist growth obscure the countless lives that have been lost to the system’s structural inequalities. As Engels noted, these deaths are not natural or inevitable—they are preventable. They occur because the capitalist system concentrates wealth in the hands of a few while leaving vast populations to suffer without access to food, healthcare, or basic resources.

This disparity in wealth and access to resources creates a global system of social murder, where the deaths of the poor are written off as collateral damage in the pursuit of profit. These deaths are not merely unfortunate consequences; they are inherent to the capitalist system’s prioritisation of wealth accumulation over human life.


Healthcare Inequality and Preventable Deaths

The lack of access to adequate healthcare is another major driver of deaths attributable to capitalism. In the United States, the richest nation in the world, an estimated 500,000 deaths between 1990 and 2010 were linked to healthcare inequality, according to a Lancet study.[2] Globally, millions die each year from preventable causes—such as pneumonia, diarrhoea, and malaria—because market-driven healthcare systems fail to provide for those without the means to pay.

In a for-profit healthcare system, those without money are often denied life-saving treatment. Healthcare becomes a commodity, rather than a human right. This commodification of care creates deadly disparities, where a wealthy few receive world-class medical attention while millions die from treatable conditions. Engels’ notion of social murder is evident here as well: the system does not kill through direct violence but by neglecting the vulnerable.

This situation is exacerbated by the ongoing commodification of healthcare through privatisation and austerity measures, which strip public systems of resources and force them to operate on capitalist principles. The result is a world where profit motives dictate who lives and who dies.


Environmental Destruction and Climate Change: Capitalism’s Long-Term Death Toll

Capitalism’s unrelenting focus on short-term profit also drives environmental destruction, contributing to a growing death toll linked to climate change. The WHO estimates that by 2030, climate change will cause approximately 250,000 additional deaths each year, driven by heat stress, malnutrition, and the spread of diseases like malaria and diarrhoea.[3] These figures are conservative, as the cascading effects of climate-induced migration and conflict are difficult to quantify.

David Harvey’s concept of accumulation by dispossession is central to understanding how capitalism contributes to environmental devastation. Capitalist economies extract and commodify natural resources, often at the expense of local populations who bear the brunt of environmental degradation. Deforestation, mining, and fossil fuel extraction displace communities and destroy ecosystems, creating conditions that lead to death, displacement, and disease.

This environmental violence is compounded by disaster capitalism, a term coined by Naomi Klein to describe how capitalist interests exploit crises like natural disasters or financial collapses for profit.[4] The destruction of vulnerable communities by climate change is not simply a tragedy—it is a consequence of capitalist expansion into every corner of the planet, sacrificing human and ecological health for economic gain.


War and Imperialism: Capitalism’s Violent Expansion

The human toll of capitalism extends beyond poverty and environmental degradation to include the millions of lives lost to wars driven by capitalist interests. The illegal invasion of Iraq in 2003, for example, led to hundreds of thousands of deaths in a war closely tied to the geopolitical aim of securing control over oil reserves. Such wars are not isolated failures of policy but integral to the functioning of a global capitalist system that seeks to dominate resources and expand markets through military force.

David Harvey’s theory of new imperialism explains how capitalist economies rely on the expansion of markets and the extraction of resources from other nations, often through military means.[5] The military-industrial complex, as described by President Dwight D. Eisenhower, thrives under capitalism, profiting from perpetual war and the destruction of human life.

The death toll of wars driven by capitalist expansion is staggering. From the millions killed in conflicts over resources to the long-term destabilisation of regions like the Middle East, these deaths are directly tied to capitalism’s global ambitions. The victims of these wars—like those who suffer from poverty and environmental destruction—are casualties of a system that prioritises wealth and power over human life.


Conclusion: Reckoning with Capitalism’s Death Toll

The deaths attributable to capitalism are not abstract or incidental; they are the direct consequences of a system that places profit above all else. From hunger and poverty to healthcare inequality, environmental destruction, and war, the capitalist system has claimed millions of lives—lives that could have been saved under a more just and equitable economic model.

The true success of capitalism, then, is not in its ability to generate wealth for the few, but in its capacity to obscure the structural violence that sustains it. By framing poverty, healthcare inequality, and environmental destruction as unfortunate consequences of “market forces,” capitalism avoids accountability for the millions it leaves behind.

It is time to reckon with this hidden death toll. Only by facing the human cost of capitalism can we begin to imagine a future where economic systems prioritise human life over profit. The victims of capitalism are not just numbers—they are the casualties of a system that, as Engels pointed out, murders through neglect, exploitation, and greed.


Endnotes:

[1]: World Health Organization, “Hunger and Malnutrition: Key Facts,” 2022.
[2]: The Lancet Public Health, study on healthcare inequality in the United States, 2010.
[3]: World Health Organization, “Climate Change and Health,” 2022.
[4]: Naomi Klein, The Shock Doctrine: The Rise of Disaster Capitalism (Picador, 2007), pp. 9-10.
[5]: David Harvey, The New Imperialism (Oxford University Press, 2005), pp. 145-147.


The Limits of Language: Why Philosophical Paradoxes Might Be Illusions of Mapping

Philosophical paradoxes have long captured our imagination, from Zeno’s paradoxes about movement to the Liar Paradox that tangles truth and falsehood into an endless loop. Often, these puzzles are treated as fundamental mysteries of the universe—windows into the limits of human understanding or insight into the hidden structure of reality. But what if, rather than reflecting deep truths about existence, many of these paradoxes are artefacts of language itself—symptoms of our conceptual tools struggling to map a complex terrain adequately? Perhaps, more often than not, the perplexities we face stem from an inadequate mapping—a linguistic or cognitive misfire—rather than from genuine paradoxes in the underlying terrain of reality.

This notion—that many paradoxes arise from the limitations of language and cognition—finds resonance in the work of philosophers like Ludwig Wittgenstein. Wittgenstein argued that many philosophical problems arise because we misuse language, taking words beyond their natural context, confusing what our words describe with the objects or concepts themselves. In this sense, our maps (the linguistic and logical structures we use) often lead us astray when navigating the conceptual terrains of ethics, metaphysics, or the nature of truth.

This idea can be articulated under what we might call the Language Insufficiency Hypothesis: the view that the limitations of language itself are at the root of many philosophical paradoxes. According to this hypothesis, the apparent contradictions or puzzles that emerge in philosophical discourse often reveal more about the shortcomings of our representational tools than about any deep metaphysical truths. The Language Insufficiency Hypothesis suggests that our conceptual maps are inadequate for fully capturing the richness of the terrains we attempt to describe, and that this inadequacy leads us to mistake linguistic confusion for genuine philosophical mystery.

The Inherent Limitations of Linguistic Communication

Language, often hailed as humanity’s greatest achievement, may paradoxically be one of our most significant limitations. The Language Insufficiency Hypothesis posits that language is inherently inadequate for communicating abstract concepts, a notion that challenges our fundamental understanding of human communication and cognition. This perspective traces the evolution of language from its primitive origins to its current complexity, revealing the philosophical and practical implications of linguistic inadequacy.

The Accidental Evolution of Language

Language, like many aspects of human biology and cognition, emerged not through intentional design but as an evolutionary accident. Initially serving as an internal cognitive function—a means of organising one’s own thoughts—language gradually evolved into a tool for external communication. This transition likely began with simple vocalisations, perhaps rooted in rhythmic expressions akin to music and dance, before developing into more structured speech.

Early linguistic communication likely centred on concrete objects and immediate experiences, with words serving as direct signifiers for observable phenomena. However, as human cognition grew more sophisticated, so too did our linguistic capabilities, expanding to include verbs, modifiers, and eventually, abstract nouns.

The Emergence of Abstraction and Its Challenges

The development of abstract nouns marked a significant leap in human cognition and communication. Concepts such as ‘truth’, ‘justice’, and ‘freedom’ allowed for more complex and nuanced discourse. However, this advancement came at a cost: these abstract concepts, lacking direct physical referents, introduced unprecedented ambiguity and potential for misunderstanding.

The Language Insufficiency Hypothesis suggests that this ambiguity is not merely a byproduct of abstraction, but a fundamental limitation of language itself. While two individuals might easily agree on the ‘treeness’ of a physical tree, disputes over concepts like ‘fairness’ or ‘reason’ cannot be fully resolved through linguistic means alone. This insufficiency becomes increasingly apparent as we move further from concrete, observable phenomena into the realm of abstract thought.

Wittgenstein and the Limits of Language

Ludwig Wittgenstein’s later work provides crucial insights into the Language Insufficiency Hypothesis. Wittgenstein posited that words ultimately only map to other words, never truly making contact with the objective world. This perspective suggests that language operates within a closed system of human understanding, constructing our perception of reality rather than directly representing it.

This Wittgensteinian dilemma underscores the core of the Language Insufficiency Hypothesis: if words only refer to other words, how can we ever be certain that we’re communicating abstract concepts accurately? The very tool we use to discuss and understand abstraction may be fundamentally incapable of capturing its essence.

Cultural and Disciplinary Variations

The inadequacy of language in conveying abstract concepts becomes even more apparent when we consider cultural and disciplinary variations in communication. Different cultures and academic disciplines develop their own specialised vocabularies and ‘language games’, as Wittgenstein termed them. While these specialised languages may facilitate communication within specific contexts, they often create barriers to understanding for outsiders.

This phenomenon highlights another aspect of linguistic insufficiency: the context-dependent nature of meaning. Abstract concepts may be understood differently across cultures or disciplines, further complicating attempts at clear communication.

Neurolinguistic Perspectives

Recent advances in neurolinguistics have provided new insights into the brain structures involved in language processing. While these studies have enhanced our understanding of how the brain handles language, they have also revealed the complexity and variability of linguistic processing across individuals. This neurological diversity further supports the Language Insufficiency Hypothesis, suggesting that even at a biological level, there may be inherent limitations to how accurately we can communicate abstract concepts.

Implications and Counter-Arguments

The Language Insufficiency Hypothesis has profound implications for fields ranging from philosophy and psychology to law and international relations. If language is indeed inadequate for communicating abstract concepts, how can we ensure mutual understanding in complex negotiations or philosophical debates?

However, it’s important to note that not all scholars accept the strong version of this hypothesis. Some argue that while language may have limitations, it remains our most sophisticated tool for sharing abstract ideas. They suggest that through careful definition, contextualisation, and the use of metaphor and analogy, we can overcome many of the inherent limitations of linguistic communication.

Navigating the Limits of Language

The Language Insufficiency Hypothesis presents a challenging perspective on human communication. It suggests that our primary tool for sharing abstract thoughts may be fundamentally flawed, incapable of fully capturing the complexity of our inner cognitive experiences.

Yet, recognising these limitations need not lead to communicative nihilism. Instead, it can foster a more nuanced approach to language use, encouraging us to be more precise in our definitions, more aware of potential misunderstandings, and more open to alternative forms of expression.

As we continue to grapple with abstract concepts and strive for clearer communication, we must remain cognisant of these linguistic limitations. Understanding the origins and nature of language—and its inherent insufficiencies—can help us navigate its complexities, fostering more effective and empathetic communication across diverse fields of human endeavour.

The Frege–Geach Problem as an Illustration of Linguistic Limitations

One pertinent example of this idea is the Frege–Geach problem, a challenge often faced by expressivist theories of ethics. Expressivists maintain that moral statements do not describe facts but rather express attitudes or emotions—a statement like “lying is wrong” is an expression of disapproval rather than a factual assertion. The Frege–Geach problem arises when such moral statements are embedded in logical constructions like conditionals or arguments: “If lying is wrong, then getting your little brother to lie is wrong.” In this context, expressivists face a challenge in explaining how the meaning of “lying is wrong” remains coherent across different uses, without reducing moral expressions to descriptive claims.

The Frege–Geach problem thus illustrates a fundamental limitation: attempting to apply truth-conditional logic, designed for descriptive language, to moral discourse, which serves a different function altogether. In trying to map evaluative terrain—which involves emotions, commitments, and subjective attitudes—using the same structures meant for factual landscapes, we encounter conceptual misalignments. This problem—a confusion of the map for the terrain—is not necessarily a genuine paradox about moral truths but rather a reflection of the inadequacy of our current linguistic tools. Just as a physical map may fail to capture the emotional experience of a journey, so too do our linguistic and logical maps fail to adequately capture the moral landscape.

Wittgenstein’s later work is helpful in framing this issue. He emphasised the importance of recognising different language-games: the rules and purposes that guide different forms of discourse. Moral language is not like scientific language; it follows different rules and aims to express and influence attitudes rather than establish empirically verifiable facts. The Frege–Geach problem emerges precisely because we attempt to impose a single logical structure onto forms of language that serve different purposes, confusing the distinct games we are playing. This attempt to force moral language into a framework designed for empirical propositions produces an apparent paradox, where the real issue lies in our misuse of the conceptual map.

This pattern of misinterpretation is not unique to moral discourse. Many philosophical paradoxes—from problems of identity and personal continuity to issues of free will and determinism—arise when we try to map different terrains with the same linguistic structures, or when we push our conceptual tools beyond their natural limits. Cognitive limitations also play a role; our tendency to think in binary oppositions, our reliance on categories, and our need for consistent narratives often lead to oversimplifications of complex realities. These cognitive tools—essential for everyday functioning—can prove inadequate for capturing the nuance of the philosophical landscapes we attempt to navigate.

The map-terrain challenge is thus at the core of why philosophical paradoxes can seem so intractable. Our maps—the languages and logical frameworks that structure our thinking—are, by their nature, simplifications of a world that is far more nuanced than we can readily articulate. When the terrain is moral, aesthetic, or otherwise not reducible to simple truths or falsehoods, the inadequacies of our maps become evident. We are left facing paradoxes that may, in truth, be nothing more than indicators that our representational systems need refinement or expansion.

Rather than treating these paradoxes as unresolvable, we might benefit from seeing them as invitations to reconsider our linguistic and cognitive frameworks. In recognising that the Frege–Geach problem, for instance, may reflect an ill-suited mapping of moral discourse rather than a genuine mystery about moral reality, we open the door to a pluralistic approach: different terrains require different maps. Perhaps, in some cases, the best solution is not to attempt to solve the paradox in traditional terms but to change the way we map the terrain altogether—to allow for multiple, context-sensitive tools that respect the particularity of each domain of discourse.

Ultimately, this perspective suggests a more flexible and cautious approach to philosophical inquiry—one that acknowledges the limits of our conceptual tools and remains open to the possibility that the terrain is far richer and more varied than our maps can currently capture.

Can Zombies Ever Be Conscious?

In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies—hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.

This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.

Dennett’s Zombies and Zimboes: Consciousness as Function

Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property—such as subjective experience—that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.

Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.

In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion—a misunderstanding of what consciousness is.

A Biological Rejection: Zombies Are Impossible

On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies—rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.

The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies—beings that behave like humans without consciousness—are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.

In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.

The Open Question: What About Future Technology?

However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies—whether defined as Dennett’s “behaviourally indistinguishable” beings or the biologically impossible entities proposed by the paper—are not real. But could this change?

What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies—performing all the actions and behaviours we associate with consciousness, but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.

Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities—artificial zombies—that are not organised like conscious biological beings yet still perform human-like actions.

I Wonder What’s Next?

This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.

Could we one day create a true zombie—a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.

For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.

The Illusion of the “Temporarily Embarrassed Millionaire”: How Capitalism’s Defenders Uphold Their Own Exploitation


In the contemporary world of deepening inequality and environmental degradation, capitalism continues to hold a powerful ideological grip on much of the global population. Yet the irony is that many of its staunchest defenders are not the elites or the true beneficiaries of the system, but the very workers and middle-class individuals whose lives it exploits and controls. These defenders are not capitalists themselves; they are, in fact, cogs in the machinery of a system they imagine will eventually reward their loyalty. This illusion is strikingly captured in a quote often misattributed to John Steinbeck: “Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.”[1]

This phenomenon, which we might call the temporarily embarrassed millionaire syndrome, reflects not only a profound misunderstanding of capitalism but also the effectiveness of the system in controlling its participants through hope and aspiration. Capitalism promises upward mobility, convincing even those at the bottom of the economic ladder that their current misfortunes are temporary. But as Karl Marx and Friedrich Engels observed, this is a system of exploitation that not only alienates workers but effectively destroys them.


Survivorship Bias and the Myth of the “Rising Tide”

Capitalism’s defenders frequently invoke the idea that “a rising tide lifts all boats.” The metaphor suggests that when capitalism prospers, everyone benefits. However, this vision of progress masks the reality of capitalism’s winners and losers. As economist David Harvey has pointed out, capitalism is not a neutral system of wealth creation—it is a system of accumulation by dispossession, constantly expropriating wealth from others, often through privatisation and the commodification of public goods.[2] The rising tide does lift some boats, but it simultaneously leaves others stranded, or worse, sinking.

Survivorship bias is essential to understanding how capitalism maintains its legitimacy. The success stories—the wealthy entrepreneurs, the individuals who “made it”—are lauded as proof that the system works. But the vast numbers of people left behind, those who toil in exploitative conditions or who die from poverty and neglect, are erased from the narrative. In Engels’ terms, these are victims of social murder—individuals who die prematurely not by direct violence, but through the structural forces of deprivation imposed by capitalism.[3] Their deaths are rendered invisible, falling out of the metrics of rising living standards and growth.

Engels’ critique of industrial capitalism is as relevant today as it was in the 19th century. The modern mechanisms of exploitation may be more complex, but they are no less deadly. In a late capitalist world, the poor and marginalised are still being “murdered” through the structural violence of inadequate healthcare, poor working conditions, and environmental degradation. The millions left out of the capitalist success story are not anomalies but integral to the system’s operation.


Alienation and the Tragedy of Defending the System

Marx’s theory of alienation provides another crucial lens through which to understand why capitalism’s defenders often remain blind to their own exploitation. Under capitalism, workers are alienated from the products of their labour, the process of production, their own humanity, and from each other.[4] The worker becomes a cog in a machine, detached from the value they create, and unable to control their working life. Yet, even in this state of alienation, many still defend the system, believing that their hard work will eventually lead them to wealth and freedom.

This defence of capitalism, often articulated by those whose lives it degrades, reflects Antonio Gramsci’s concept of cultural hegemony. Gramsci argued that the ruling class maintains power not just through economic domination, but by shaping the cultural and ideological landscape.[5] Capitalism’s defenders are, in part, products of this hegemony, believing in the very values—individualism, competition, the ‘American Dream’—that bind them to a system of exploitation.

This illusion of freedom under capitalism is deepened by what Herbert Marcuse calls repressive desublimation. Capitalism offers false freedoms in the form of consumer choice and superficial pleasures, giving individuals the illusion that they are exercising autonomy, even as the system remains unchallenged.[6] Workers may recognise themselves in their commodities—luxury goods, tech gadgets, cars—but these objects only serve to reinforce their alienation and dependence on the capitalist system. The temporarily embarrassed millionaire clings to the dream of eventual success, all the while contributing to a system that offers only superficial rewards in return.


Social Murder and the Structural Violence of Late Capitalism

The notion of social murder offers a stark framework for understanding capitalism’s indirect, yet pervasive, violence. As Engels explained, this form of violence is not inflicted through overt means, but through the systematic neglect of basic human needs. Whether it’s the millions who die due to lack of access to healthcare or the global poor displaced by climate-induced disasters, capitalism perpetuates a form of structural violence that is invisible to those who benefit from the system’s success.[7]

The Canadian author and activist Naomi Klein extends this analysis through her concept of disaster capitalism, where crises are exploited for profit. Whether it’s natural disasters or financial crises, capitalism uses these events as opportunities to privatise public resources, dismantle social safety nets, and deepen inequality.[8] The victims of these disasters—often the poor and vulnerable—are, in Engels’ terms, socially murdered by a system that thrives on their dispossession.


The Temporarily Embarrassed Millionaire as a Tool of Control

The illusion that one’s current position is only temporary—that any individual can rise to capitalist wealth if they work hard enough—is central to maintaining the capitalist system. This aspiration prevents individuals from seeing their exploitation for what it is. They do not identify as part of an exploited class but instead believe they are merely waiting for their turn at wealth. Zygmunt Bauman’s concept of liquid modernity—the perpetual state of instability and insecurity produced by late capitalism—helps explain this phenomenon.[9] Individuals are constantly told that their position is fluid, changeable, and that their big break is just around the corner.

But for most, this “big break” never comes. The dream of becoming a millionaire is a powerful form of social control, one that keeps individuals invested in a system that benefits only a small fraction of its participants. As Marx reminds us, “the worker becomes all the poorer the more wealth he produces, the more his production increases in power and range.”[10] Capitalism does not reward the many; it exploits the many for the benefit of the few.


Conclusion: Facing the Irony and Imagining a Post-Capitalist Future

The greatest irony of capitalism is that those who defend it most fervently are often those who will never realise its promises. These are not the capitalists of the system, but its workers, its underclass, and its exploited. They see themselves not as oppressed, but as temporarily embarrassed millionaires—an illusion that keeps them bound to a system that offers them no real future.

In this light, the true success of capitalism is not in its creation of wealth, but in its ability to mask the conditions of exploitation, alienation, and social murder that underpin it. The path forward requires a dismantling of these illusions and a recognition that the system’s failures are not accidental but integral to its design.

Only by facing these uncomfortable truths can we begin to imagine a future beyond the constraints of capitalist ideology, a world where human flourishing is no longer measured by wealth accumulation but by the collective well-being of all.


Endnotes:

[1]: Misattributed to John Steinbeck, this quote encapsulates a critical observation about American capitalism’s appeal to aspiration rather than solidarity.
[2]: David Harvey, The New Imperialism (Oxford University Press, 2005), pp. 145-147.
[3]: Friedrich Engels, The Condition of the Working Class in England (Oxford University Press, 1845), p. 112.
[4]: Karl Marx, Economic and Philosophic Manuscripts of 1844 (Progress Publishers, 1959).
[5]: Antonio Gramsci, Selections from the Prison Notebooks (International Publishers, 1971), p. 12.
[6]: Herbert Marcuse, One-Dimensional Man (Beacon Press, 1964), p. 10.
[7]: Friedrich Engels, The Condition of the Working Class in England, p. 114.
[8]: Naomi Klein, The Shock Doctrine: The Rise of Disaster Capitalism (Picador, 2007), pp. 9-10.
[9]: Zygmunt Bauman, Liquid Modernity (Polity, 2000), p. 14.
[10]: Karl Marx, Economic and Philosophic Manuscripts of 1844, p. 68.



Why did God create atheists?

A rabbi was asked by one of his students, “Why did God create atheists?” After a long pause, the rabbi finally responded with a soft but sincere voice. “God created atheists,” he said, “to teach us the most important lesson of them all – the lesson of true compassion. You see, when an atheist performs an act of charity, visits someone who is sick, helps someone in need, and cares for the world, he is not doing so because of some religious teaching. He does not believe that God commanded him to perform this act. In fact, he does not believe in God at all, so his actions are based on his sense of morality. Look at the kindness he bestows on others simply because he feels it to be right. When someone reaches out to you for help, you should never say ‘I’ll pray that God will help you.’ Instead, for that moment, you should become an atheist – imagine there is no God who could help, and say ‘I will help you’.”

— Martin Buber, “Tales of the Hasidim”

This has come across my Facebook feed several times. It resonates with me, so I’m sharing it. I don’t need to add commentary because it speaks volumes for itself. It’s amazing when people actually understand the assignment.

From Memes to Meaning: The Beautiful Chaos of Modern Language


The Present Day: Social Media and Memes – The Final Nail in the Coffin?

Just when you thought things couldn’t get any more chaotic, enter the 21st century, where language has been boiled down to 280 characters, emojis, and viral memes. If you think trying to pin down the meaning of “freedom” was hard before, try doing it in a tweet—or worse, a string of emojis. In the age of social media, language has reached new heights of ambiguity, with people using bite-sized bits of text and images to convey entire thoughts, arguments, and philosophies. And you thought interpreting Derrida was difficult.

Social media has turned language into an evolving, shape-shifting entity. Words take on new meanings overnight, hashtags rise and fall, and memes become the shorthand for complex cultural commentary. In some ways, it’s brilliant—what better way to capture the madness of modern life than with an image of a confused cat or a poorly drawn cartoon character? But in other ways, it’s the final nail in the coffin for clear communication. We’ve gone from painstakingly crafted texts, like Luther’s 95 Theses, to memes that rely entirely on shared cultural context to make sense.

The irony is that we’ve managed to make language both more accessible and more incomprehensible at the same time. Sure, anyone can fire off a tweet or share a meme, but unless you’re plugged into the same cultural references, you’re probably going to miss half the meaning. It’s like Wittgenstein’s language games on steroids—everyone’s playing, but the rules change by the second, and good luck keeping up.

And then there’s the problem of tone. Remember those philosophical debates where words were slippery? Well, now we’re trying to have those debates in text messages and social media posts, where tone and nuance are often impossible to convey. Sarcasm? Forget about it. Context? Maybe in a follow-up tweet, if you’re lucky. We’re using the most limited forms of communication to talk about the most complex ideas, and it’s no surprise that misunderstandings are at an all-time high.

And yet, here we are, in the midst of the digital age, still using the same broken tool—language—to try and make sense of the world. We’ve come a long way from “flamey thing hot,” but the basic problem remains: words are slippery, meanings shift, and no matter how advanced our technology gets, we’re still stuck in the same old game of trying to get our point across without being completely misunderstood.

Conclusion: Language – Beautiful, Broken, and All We’ve Got

And here’s where the irony kicks in. We’ve spent this entire time critiquing language—pointing out its flaws, its limitations, its inability to truly capture abstract ideas. And how have we done that? By using language. It’s like complaining about how unreliable your GPS is while using it to get to your destination. Sure, it’s broken—but it’s still the only tool we have.

In the end, language is both our greatest achievement and our biggest limitation. It’s allowed us to build civilisations, create art, write manifestos, and start revolutions. But it’s also the source of endless miscommunication, philosophical debates that never get resolved, and social media wars over what a simple tweet really meant.

So yes, language is flawed. It’s messy, it’s subjective, and it often fails us just when we need it most. But without it? We’d still be sitting around the fire, grunting at each other about the ‘toothey thing’ lurking in the shadows. For better or worse, language is the best tool we’ve got for making sense of the world. It’s beautifully broken, but we wouldn’t have it any other way.

And with that, we’ve used the very thing we’ve critiqued to make our point. The circle of irony is complete.



Jargon, Brains, and the Struggle for Meaning


Specialised Languages: Academia’s Jargon Olympics

If you thought normal language was confusing, let’s take a moment to appreciate the true champions of linguistic obscurity: academics. Welcome to the world of specialised languages, where entire fields of study have developed their own language games that make even Wittgenstein’s head spin.

Here’s how it works: Every discipline—science, law, philosophy—creates its own jargon to describe the world. At first, it seems helpful. Instead of using vague terms, you get precise definitions for complex ideas. But what started as a way to improve communication within a field quickly turned into a linguistic arms race, where the more obscure and convoluted your terms are, the smarter you sound. You’re not just a lawyer anymore—you’re someone who’s ready to throw “res ipsa loquitur” into casual conversation to leave everyone else in the room wondering if they’ve missed a memo.

The problem? If you’re not part of the club, good luck understanding what anyone is talking about. Want to read a physics paper? Prepare to learn a whole new vocabulary. Need to get through a legal document? You’ll be knee-deep in Latin phrases before you even get to the point. And don’t even try to decipher a philosophical text unless you’re ready to battle abstract nouns that have been stretched and twisted beyond recognition.

It’s not just the words themselves that are the issue—it’s the sheer density of them. Take “justice” for example. In philosophy, you’ve got theories about distributive justice, retributive justice, restorative justice, and a hundred other variations, each with its own set of terms and conditions. And that’s before we even touch on how “justice” is defined in legal circles, where it becomes an even more tangled mess of case law and precedent. Every field is playing its own version of the “justice” game, with its own rules and definitions, and none of them are interested in comparing notes.

This is the academic world in a nutshell. Each discipline has built its own linguistic fortress, and unless you’ve spent years studying, you’re not getting in. But here’s the kicker: even within these fields, people are often misunderstanding each other. Just because two scientists are using the same words doesn’t mean they’re on the same page. Sometimes, it’s more like a game of intellectual one-upmanship—who can define the most obscure term or twist a familiar word into something completely unrecognisable?

And let’s not forget the philosophers. They’ve turned linguistic acrobatics into an art form. Good luck reading Foucault or Derrida without a dictionary (or five) on hand. You might walk away thinking you understand their points, but do you really? Or have you just memorised the jargon without actually grasping the deeper meaning? Even scholars within these fields often argue over what was really meant by a certain text—Barthes, after all, famously declared the “death of the author,” so it’s not like anyone really has the final say on meaning anyway.

So here we are, knee-deep in jargon, trying to communicate with people who, technically, speak the same language but are operating within entirely different rulesets. Every academic discipline has its own secret code, and if you don’t know it, you’re lost. Even when you do know the code, you’re still at risk of miscommunication, because the words that look familiar have been stretched and shaped to fit highly specific contexts. It’s like being fluent in one dialect of English and then suddenly being asked to write a thesis in legalese. Good luck.

In the end, academia’s specialised languages don’t just make things harder—they actively create barriers. What started as a way to improve precision has turned into an obstacle course of incomprehensible terms, where the real challenge is just figuring out what anyone’s actually saying. And let’s be honest, even if you do figure it out, there’s no guarantee it’s going to mean the same thing next time you see it.

Neurolinguistics: Even Our Brains Can’t Agree

So far, we’ve seen how language is a mess of miscommunication, cultural differences, and academic jargon. But surely, at least on a biological level, our brains are all on the same page, right? Well, not exactly. Welcome to the wonderful world of neurolinguistics, where it turns out that even the very organ responsible for language can’t get its act together.

Here’s the deal: Neurolinguistics is the study of how the brain processes language, and while it’s fascinating, it’s also a bit of a buzzkill for anyone hoping for consistency. See, your brain and my brain don’t process language in the same way. Sure, we’ve got similar hardware, but the software is wildly unpredictable. There are individual differences, cultural influences, and developmental quirks that all affect how we understand and produce language. What’s simple for one brain might be completely baffling to another.

Take, for example, something as basic as syntax. Chomsky might have told us we all have a universal grammar hard-wired into our brains, but neurolinguistics has shown that how we apply that grammar can vary significantly. Some people are wired to handle complex sentence structures with ease—think of that friend who can follow 10 different clauses in a single breath. Others? Not so much. For them, even a moderately tricky sentence feels like mental gymnastics. The brain is constantly juggling words, meanings, and structures, and some brains are better at it than others.

But the real kicker is how differently we interpret words. Remember those abstract nouns we’ve been wrestling with? Well, it turns out that your brain might be interpreting ‘freedom’ or ‘justice’ completely differently from mine – not just because of culture or upbringing, but because our brains physically process those words in different ways. Neurolinguistic studies have shown that certain regions of the brain are activated differently depending on the individual’s experience with language. In other words, your personal history with a concept can literally change how your brain lights up when you hear or say it.

And don’t even get me started on bilingual brains. If you speak more than one language, your brain is constantly toggling between two (or more) linguistic systems, which means it’s running twice the risk of misinterpretation. What a word means in one language might trigger a completely different association in another, leaving bilingual speakers in a constant state of linguistic flux. It’s like trying to run two operating systems on the same computer—things are bound to get glitchy.

But here’s the real kicker: Even within the same person, the brain can’t always process language the same way all the time. Stress, fatigue, emotional state—all of these factors can influence how well we handle language on any given day. Ever tried to have a coherent conversation when you’re tired or angry? Good luck. Your brain isn’t interested in nuance or deep philosophical ideas when it’s in survival mode. It’s just trying to get through the day without short-circuiting.

So, not only do we have to deal with the external chaos of language – miscommunication, different contexts, shifting meanings – but we also have to contend with the fact that our own brains are unreliable interpreters. You can use all the right words, follow all the right grammar rules, and still end up with a garbled mess of meaning because your brain decided to take a nap halfway through the sentence.

In the end, neurolinguistics reminds us that language isn’t just a social or cultural problem – it’s a biological one too. Our brains are doing their best to keep up, but they’re far from perfect. The very organ that makes language possible is also responsible for making it infinitely more complicated than it needs to be. And if we can’t rely on our own brains to process language consistently, what hope do we have of ever understanding anyone else?



Language: Tool for Clarity or Shaper of Reality?


Pinker: The Optimist Who Thinks Language Works

Enter Steven Pinker, a cognitive scientist and eternal optimist about language. While we’ve been busy pointing out how language is a jumbled mess of misunderstandings, Pinker comes along with a sunny outlook, waving his banner for the language instinct. According to Pinker, language is an evolved tool – something that our brains are wired to use, and it’s good. Really good. So good, in fact, that it allowed us to build civilisations, exchange complex ideas, and, you know, not get eaten by sabre-toothed tigers.

Sounds like a nice break from all the linguistic doom and gloom, right? Pinker believes that language is a powerful cognitive skill, something we’ve developed to communicate thoughts and abstract ideas with remarkable precision. He points to the fact that we’re able to create entire worlds through language – novels, philosophies, legal systems, and scientific theories. Language is, to him, one of the greatest achievements of the human mind.

But here’s where things get a little sticky. Sure, Pinker’s optimism about language is refreshing, but he’s still not solving our core problem: meaning. Pinker may argue that language works wonderfully for most of our day-to-day communication – and in many cases, he’s right. We can all agree that saying, “Hey, don’t touch the flamey thing” is a pretty effective use of language. But once we start using words like ‘freedom’ or ‘justice’, things start to unravel again.

Take a sentence like ‘freedom is essential’. Great. Pinker might say this is a perfectly formed thought, conveyed using our finely tuned linguistic instincts. But the problem? Ask five people what ‘freedom’ means, and you’ll get five different answers. Sure, the grammar is flawless, and everyone understands the sentence structurally. But what they mean by ‘freedom’? That’s a whole other ball game.

Pinker’s language instinct theory helps explain how we learn language, but it doesn’t really account for how we use language to convey abstract, subjective ideas. He might tell us that language has evolved as an efficient way to communicate, but that doesn’t fix the problem of people using the same words to mean wildly different things. You can be the most eloquent speaker in the world, but if your definition of ‘freedom’ isn’t the same as mine, we’re still lost in translation.

And let’s not forget: while language is indeed a fantastic tool for sharing information and surviving in complex societies, it’s also great at creating conflicts. Wars have been fought over differences in how people interpret words like ‘justice’ or ‘rights’. Pinker might say we’ve evolved language to foster cooperation, but history suggests we’ve also used it to argue endlessly about things we can never quite agree on.

So, yes, Pinker’s right – language is a cognitive marvel, and it’s gotten us pretty far. But his optimism doesn’t quite stretch far enough to cover the fact that language, for all its brilliance, still leaves us stuck in a web of interpretation and miscommunication. It’s like having a state-of-the-art GPS that works perfectly – until you get to that roundabout and suddenly no one knows which exit to take.

In the end, Pinker’s got a point: language is one of the most sophisticated tools we’ve ever developed. It’s just a shame that when it comes to abstract concepts, we still can’t agree on which way’s north.

Sapir-Whorf: Language Shapes Reality – Or Does It?

Now it’s time for the Sapir-Whorf hypothesis to take the stage, where things get really interesting – or, depending on your perspective, slightly ridiculous. According to this theory, the language you speak actually shapes the way you see the world. Think of it as linguistic mind control: your perception of reality is limited by the words you have at your disposal. Speak the wrong language, and you might as well be living on another planet.

Sounds dramatic, right? Here’s the gist: Sapir and Whorf argued that the structure of a language affects how its speakers think and perceive the world. If you don’t have a word for something, you’re going to have a hard time thinking about that thing. Inuit languages, for example, are famous for having multiple words for different kinds of snow. If you’re an Inuit speaker, the hypothesis goes, you’re much more attuned to subtle differences in snow than someone who just calls it all ‘snow’.

Now, on the surface, this sounds kind of plausible. After all, we do think using language, don’t we? And there’s some truth to the idea that language can influence the way we categorise and describe the world. But here’s where Sapir-Whorf starts to go off the deep end. According to the stronger version of this hypothesis, your entire reality is shaped and limited by your language. If you don’t have the word for “freedom” in your language, you can’t experience it. If your language doesn’t have a word for “blue,” well, guess what? You don’t see blue.

Let’s take a step back. This sounds like the kind of thing you’d hear at a dinner party from someone who’s just a little too impressed with their first year of linguistics classes. Sure, language can shape thought to a degree, but it doesn’t have a stranglehold on our perception of reality. We’re not prisoners of our own vocabulary. After all, you can still experience freedom, even if you’ve never heard the word. And you can certainly see blue, whether your language has a word for it or not.

In fact, the idea that you’re trapped by your language is a little insulting, when you think about it. Are we really saying that people who speak different languages are living in different realities? That a person who speaks Mandarin sees the world in a fundamentally different way than someone who speaks Spanish? Sure, there might be some subtle differences in how each language breaks down concepts, but we’re all still human. We’re all still sharing the same world, and no matter what language we speak, we still have the cognitive capacity to understand and experience things beyond the limits of our vocabulary.

Let’s also not forget that language is flexible. If you don’t have a word for something, you make one up. If you’re missing a concept, you borrow it from another language or invent a metaphor. The idea that language is some kind of mental prison ignores the fact that we’re constantly evolving our language to keep up with the way we see the world—not the other way around.

And here’s the real kicker: if Sapir and Whorf were right, and we’re all walking around in little linguistic bubbles, then how on earth have we managed to translate anything? How have entire philosophies, religious texts, and scientific theories made their way across cultures and languages for centuries? If language really was shaping our reality that strongly, translation would be impossible – or at least incredibly limited. But here we are, discussing concepts like ‘freedom’, ‘justice’, and ‘truth’ across languages, cultures, and centuries.

So while it’s fun to entertain the idea that your language shapes your reality, let’s not give it too much credit. Yes, language can influence how we think about certain things. But no, it doesn’t define the boundaries of our existence. We’re not all stuck in a linguistic matrix, waiting for the right word to set us free.



Ink and Instability: The Permanent Confusion of the Written Word


The Written Word: Making Things Permanent (and Permanently Confusing)

So far, we’ve been dealing with spoken language—the slippery, ever-changing, context-dependent jumble of sounds we toss around in hopes that someone, somewhere, might understand what we’re trying to say. But what happens when we decide to make those words permanent? Welcome to the era of the written word, where all our linguistic problems got carved into stone—literally.

Let’s rewind a bit. Long before we had books or Twitter threads, ancient humans figured out that spoken words disappear into the air. They needed a way to preserve information, and voilà—writing was born. First came simple marks on clay tablets, because nothing says “let’s communicate important ideas” like scratching symbols into mud. But hey, at least it was a start.

The beauty of writing was that it gave us a way to record language—no more relying on memory to keep track of which berries were bad or who owed you a goat. But there was a downside too: once those words were written down, they became permanent. If you thought miscommunication was bad when words were floating in the air, just wait until you try to interpret a clay tablet left behind by someone who died 5,000 years ago. Good luck figuring out what they meant by “justice.”

And it didn’t stop there. As writing developed into full-fledged scripts, we gained the ability to record more complex ideas. That meant abstract nouns like “truth” and “freedom” were no longer just things you debated around the campfire—they could now be written down and preserved for future generations to also argue about. Nothing says “progress” like ensuring centuries of philosophical bickering.

But the real revolution came later. Fast forward to the 15th century, and along comes Johannes Gutenberg with his shiny new printing press. Suddenly, words—once limited to painstakingly hand-copied manuscripts—could be mass-produced. Books, pamphlets, and flyers could be printed in quantities never before imagined. Ideas could spread like wildfire.

And what ideas they were. Philosophers, theologians, and politicians alike jumped on the opportunity to get their words in front of as many people as possible. The written word wasn’t just a way to record information anymore—it became a tool for shaping societies, sparking revolutions, and (of course) stirring up endless debates about everything.

Of course, there was a catch. The printing press didn’t make language any clearer—it just gave us more of it to misunderstand. People could now read the same text and come away with completely different interpretations. What one person saw as a treatise on “freedom,” another saw as a justification for tyranny. What one reader thought was “truth,” another deemed blasphemy.

With the written word and the printing press, we managed to take the problems of spoken language and make them permanent. Miscommunication wasn’t just an unfortunate accident anymore—it was printed in ink, distributed en masse, and immortalised for future generations to argue over. If Wittgenstein had been alive during Gutenberg’s time, he probably would have thrown his hands in the air and said, “See? I told you words don’t mean what you think they mean.”

But hey, at least we were consistent. From clay tablets to printed books, the written word gave us the power to preserve language—and all its glorious inadequacies—for all time.

The Printing Press: Mass-Producing Confusion

The printing press was hailed as one of the greatest inventions in history. And sure, it was. It democratised knowledge, spread literacy, and paved the way for all sorts of wonderful progress. But let’s be real—it also democratised miscommunication. Now, instead of one person misunderstanding you in conversation, hundreds—or thousands—could read your words and completely miss the point. Progress!

Gutenberg’s press took the words that were once fleeting and made them indelible. No more clarifying in real-time. No more adding context or adjusting your message on the fly. Once it was in print, that was it. You’d better hope your readers were playing the same “language game” as you, or things could go downhill fast.

Take Martin Luther, for example. He nailed his 95 Theses to the church door in 1517, and thanks to the printing press, those words spread all over Europe. What he intended as a call for reform turned into a revolution that spiralled far beyond his control. People read the same text and took wildly different meanings from it—some saw it as a plea for theological discussion, others as a call to burn down the nearest cathedral.

But it didn’t stop there. Luther’s seemingly clear ideas splintered into countless interpretations, and over time, what began as a movement for reform became the launchpad for hundreds of Protestant denominations. Each group interpreted Luther’s message (and the Bible) in their own unique way. From Lutheranism to Calvinism to the Baptists, Methodists, and beyond, the Protestant Reformation exploded into a thousand branches, all claiming to have grasped the “true” meaning of Luther’s words.

And this? This is the power – and the peril – of the written word. Once something is printed and distributed, it takes on a life of its own. Luther might have had one specific vision for his reforms, but as soon as those ideas hit the printing press, they fractured into countless interpretations, each with its own twist on “truth.” It’s a linguistic free-for-all, with everyone holding the same text and coming to completely different conclusions.

The printing press didn’t just give us more words—it gave us more misunderstandings. Suddenly, philosophical debates, political manifestos, and theological treatises were flying off the presses, each one ready to be misinterpreted by whoever happened to pick it up. And once it was printed, there was no going back. No retractions. No take-backs. Just page after page of linguistic uncertainty.

So while the printing press undoubtedly transformed society, it also multiplied the number of ways we could miscommunicate with each other. Because if there’s one thing we’re good at, it’s misunderstanding words – especially when they’re written down for all eternity.



From Signs to Abstractions: The Slippery Slope of Meaning


Saussure and the Signified: Words as Slippery Symbols

Fast-forward a few thousand years, and humans are no longer just warning each other about hot flames or toothy predators. We’ve moved on to the exciting world of abstract thought, but the language tools we’re using haven’t quite caught up. Enter Ferdinand de Saussure, who basically waltzed in to tell us, ‘Hey, all those words you’re throwing around? They’re not doing what you think they’re doing.’

Saussure gave us the idea of the signifier and the signified. Now, don’t let the fancy terms fool you. It’s just a way of pointing out that when we say ‘tree’, we’re not actually talking about a tree. No, we’re using the word ‘tree’ as a symbol – a signifier – that points to the idea of a tree. The signified is the actual concept of ‘tree-ness’ floating around in your brain. But here’s the kicker: everyone’s idea of a tree is a little different.

And this isn’t just a language problem – it’s an art problem too. Enter René Magritte, the surrealist artist who really drove this point home with his famous painting La Trahison des images (The Treachery of Images), best known for the caption painted across it: Ceci n’est pas une pipe (‘This is not a pipe’). At first glance, it looks like a straightforward picture of a pipe, but Magritte was making a deeper point. It’s not actually a pipe – it’s an image of a pipe. You can’t stuff it with tobacco and smoke it, because what you’re looking at is a representation, not the real thing.

Image: La Trahison des Images, René François Ghislain Magritte

In the same way, when we use words, we’re not talking about the thing itself – we’re just waving a flag toward the concept of that thing. So, when you say ‘tree’, you’re really saying ceci n’est pas un arbre – this is not a tree. It’s just a word, a placeholder, a verbal painting of something real. And just like Magritte’s pipe, it’s easy to get confused. You might think you’re talking about the same tree, or the same ‘freedom’, but all you’ve got is a symbol – and everyone’s symbol looks a little different.

This is where things start to unravel. Words are slippery symbols, and as soon as we move away from concrete, physical objects – like trees or, yes, pipes – and into abstract ideas, like ‘justice’ or ‘truth’, the symbols become even harder to hold onto. The cracks in language start to widen, and before you know it, you’re no longer even sure if you’re talking about the same concept at all.

Language, Saussure argues, isn’t this neat, objective system we thought it was. It’s a game we’re playing, and the rules are written in invisible ink. By the time we get to abstract nouns, we’re basically playing with loaded dice. You think you’re communicating clearly, but every word you use is just a placeholder for the idea you hope the other person has in their head. And nine times out of ten? They don’t.

So, while early humans were struggling to agree on the ‘flamey thing’, we’re here trying to agree on concepts that are infinitely more complicated. And Saussure? He’s just sitting in the corner with a smirk, telling us we never had control over language in the first place. ‘Good luck with your “truth”,’ he seems to be saying. ‘I’m sure it’ll mean the same thing to everyone.’

Abstraction: Enter Freedom, Truth, and Confusion

Now that we’ve wrapped our heads around the fact that words are nothing but slippery symbols, let’s take it up a notch. You thought ‘tree’ was tricky? Try something more abstract. Enter: freedom, truth, justice. Things that can’t be seen, touched, or stuffed into a pipe. Here’s where language goes from being slippery to downright treacherous.

See, early language worked because it was tied to concrete things. ‘Toothey thing scary’ wasn’t up for debate. Either you got eaten, or you didn’t. Simple. But then humans, ever the overachievers, decided it wasn’t enough to just label the world around them. They wanted to label ideas, too – things that don’t have any physical form but somehow drive us all crazy.

Take ‘freedom’, for instance. Sounds nice, right? Except, if you ask ten people what it means, you’ll get ten different answers. For some, it’s ‘freedom from’ something – a kind of liberation. For others, it’s ‘freedom to’ do whatever you want, whenever you want. And yet for others, it’s an abstract ideal tied up in political philosophy. Suddenly, you’re not just dealing with different trees – you’re dealing with entirely different forests.

The same goes for truth. Is it objective? Subjective? Relative? Absolute? Everyone’s got a different take. Plato had his own grand ideas about ‘Truth’ with a capital T, while Nietzsche basically rolled his eyes and said, ‘Good luck with that’. You’re out here using the word, assuming it means the same thing to everyone else, but really you’re all just talking past each other.

And don’t even get me started on justice. Some say it’s about fairness, others say it’s about the law, and still others think it’s just a nice idea for dinner party debates. The problem with these words – these abstract nouns – is that they represent ideas that live entirely in our heads. Unlike the ‘flamey thing’ or the ‘toothey thing’, there’s no physical reality to pin them to. There’s no universally agreed-upon image of ‘freedom’ that we can all point to and nod along to, like Magritte’s pipe. There’s just… vague agreement. Sometimes. On a good day.

This is where language really starts to break down. You might think you’re having a productive conversation about ‘freedom’ or ‘truth’, but half the time, you’re speaking different languages without even realising it. Words like these aren’t just slippery – they’re shapeshifters. They bend and morph depending on who’s using them, when, and why.

So, while early humans were busy with their simple, effective ‘toothey thing scary’, we’re now trying to nail down ideas that refuse to be nailed down. What started as a useful survival tool has turned into a game of philosophical Twister, with everyone tied up in knots trying to define something they can’t even see. And, as usual, language is just standing in the corner, smirking, knowing full well it’s not up to the task.

