The Insufficiency of Language in an Agile World

I wrote and published this article on LinkedIn. I even recycled the cover image. Although it is about the particular topic of Agile, it relates to the Language Insufficiency Hypothesis, so I felt it would be apt here as well. It demonstrates how to think about language insufficiency through that framework.

Agile in Name Only

For over two decades, I’ve been immersed in Agile and its myriad interpretations. One refrain has persisted throughout: Agile™ is “just about agility,” a term that anyone can define as they see fit. The ambiguity raises the question: what does it really mean?

On its face, this sounds inclusive, but it never passed my intuitive sniff test. I carried on, but as I reflected on my broader work concerning the insufficiency of language, this persistent fuzziness started to make sense. Agile’s conceptual murkiness can be understood through the lens of language and identity—particularly through in-group and out-group dynamics.

Otherness and the Myth of Universality

To those who truly understand agility, no elaborate definition is required. It’s instinctive, embedded in their DNA. They don’t need to label it; they simply are agile. Yet, for the out-group—the ones who aspire to the status without the substance—Agile™ becomes a muddy abstraction. Unable to grasp the core, they question its very existence, claiming, “Who really knows what Agile means?”

The answer is simple: Everyone but those asking this question.

The Agility Crisis

This disconnect creates a power shift. The in-group, small and focused, operates with quiet competence. Meanwhile, the out-group, larger and louder, hijacks the conversation. What follows is an inevitable dilution: “Agile is dead,” “Agile doesn’t work,” they declare. But these proclamations often reflect their own failures to execute or evolve, not flaws inherent to agility itself.

This pattern follows a familiar playbook: create a strawman—define Agile™ as something it’s not—then decry its inability to deliver. The result? Performative agility, a theatre of motion without progress, where the players confuse activity for achievement and rely on brittle, inextensible infrastructures.

Agile Beyond the Label

Ironically, the true practitioners of agility remain unbothered by these debates. They adapt, innovate, and thrive—with or without the label. Agile™ has become a victim of its own success, co-opted by those who misunderstand it, leading to a paradox: the louder the chorus claiming “Agile doesn’t work,” the more it underscores the gap between those who do agility and those who merely wear its name.

The lesson here is not just about Agile™ but about language itself. Words, when untethered from their essence, fail. They cease to communicate, becoming tools of obfuscation rather than clarity. In this, Agile™ mirrors a broader phenomenon: the insufficiency of language in the face of complexity and its misuse by those unwilling or unable to engage with its deeper truths.

Guns, Germs, and Steel

I am reading Jared Diamond’s Guns, Germs, and Steel: The Fates of Human Societies, the first and likely most famous of an informal trilogy. I thought I had already read it, but I think I only saw the PBS show. Having recently finished Josephine Quinn’s How the World Made the West, I wanted to revisit this perspective. The two books differ in style and perspective, yet they seem complementary.

Where Diamond focuses on environmental factors (a focus that has drawn frequent criticism), Quinn focuses on human agency.

Diamond takes a bird’s-eye view, looking for universal patterns and systemic explanations, whilst Quinn adopts a granular, specific approach, highlighting the fluidity and contingency of history.

Diamond deconstructs European dominance by attributing it to environmental luck, but his narrative risks sidelining the agency of colonised peoples. Quinn critiques the very idea of Western dominance, arguing that the concept of the West itself is a myth born of appropriation and exchange.

Rather than being wholly opposed, Diamond and Quinn’s approaches might be seen as complementary. Diamond provides the structural scaffolding – the environmental and geographic conditions that shape societies – whilst Quinn fills in the cultural and human dynamics that Diamond often glosses over. Together, they represent two sides of the historiographical coin: one focusing on systemic patterns, the other on the messiness of cultural particularities.

Quinn’s approach is more aligned with The Dawn of Everything: A New History of Humanity, co-authored by anthropologist David Graeber and archaeologist David Wengrow, if that serves as a useful reference point.

Blinded by Bias: The Irony of Greed and Self-Perception

Greed is a vice we readily recognise in others but often overlook in ourselves. This selective perception was strikingly evident during a recent conversation I had with a man who was quick to condemn another’s greed while remaining oblivious to his own similar tendencies. I told him about the escalating greed of certain companies that profit handsomely from selling printer ink and toner; more on that below. This encounter underscores the powerful influence of fundamental attribution bias on our judgments and self-awareness.

Exploring Greed

Greed can be defined as an intense and selfish desire for something, especially wealth, power, or food. Psychologically, it is considered a natural human impulse that, when unchecked, can lead to unethical behaviour and strained relationships. Societally, greed is often condemned, yet it persists across cultures and histories.

We tend to label others as greedy when their actions negatively impact us or violate social norms. However, when we aggressively pursue our interests, we might frame it as ambition or resourcefulness. This dichotomy reveals a discrepancy in how we perceive greed in ourselves versus others.

Understanding Fundamental Attribution Bias

Fundamental attribution bias, or fundamental attribution error, is the tendency to attribute others’ actions to their character while attributing our own actions to external circumstances. This cognitive bias allows us to excuse our behaviour while holding others fully accountable for theirs.

For example, if someone cuts us off in traffic, we might think they’re reckless or inconsiderate. But if we cut someone off, we might justify it by claiming we were late or didn’t see them. This bias preserves our self-image but distorts our understanding of others.

The Conversation

Our conversation centred on an HP printer that has shown a ‘low ink – please replace’ message since the cartridge was first installed. I recounted the history of the ink and toner industry. HP once had a monopoly on ink for its own products, a situation that earned it substantial profit margins. Then upstarts entered the marketplace, starting an escalating arms race. HP spent R&D dollars defending its margins, with nil benefit to the consumers of its products; in fact, the effort kept prices artificially high. Competitors who wanted a slice of those fat margins found ways around each intervention. Eventually, HP installed chips on its cartridges. Unfortunately, these have a bug – or is it a feature? If you install a cartridge and then remove it, the chip assumes you’re up to something shady and spawns a false ‘low ink’ alert. Some people take the alert at face value and buy a replacement, so HP benefits twice.
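As a purely hypothetical sketch – I have no knowledge of the actual firmware, and every name and rule below is invented for illustration – the false-alert behaviour described above might look something like this:

```python
# Hypothetical sketch of the cartridge-alert logic described above.
# This is NOT HP's firmware; the names and rules are invented.

class Cartridge:
    def __init__(self):
        self.install_count = 0  # imagined as persisted on the cartridge's chip

    def install(self):
        self.install_count += 1


def low_ink_alert(cartridge, ink_level, threshold=0.10):
    """Return True if the printer should show 'low ink - please replace'."""
    if ink_level <= threshold:
        return True  # the ink really is low
    # Tamper heuristic: a cartridge that has been removed and reinstalled
    # is treated as suspect, triggering the alert regardless of ink level.
    return cartridge.install_count > 1


cart = Cartridge()
cart.install()                              # first installation
print(low_ink_alert(cart, ink_level=0.95))  # False: full cartridge, no alert
cart.install()                              # removed and reinstalled
print(low_ink_alert(cart, ink_level=0.95))  # True: false alert despite full ink
```

Either way, the consumer bears the cost of a heuristic designed to protect margins rather than to report ink levels accurately.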

If this bloke had worked for HP and had been responsible for revenue acquisition and protection, he would have swooned over the opportunity. Have no doubt. At arm’s length, though, he recognised it as a sleazy, unethical business practice.

This conversation revealed how easily we can fall into the trap of judging others without reflecting on our own behaviour. His indignation seemed justified to him, yet he remained unaware of how his actions mirrored those he criticised.

Biblical Reference and Moral Implications

This situation brings to mind the biblical passage from Matthew 7:3-5:

“Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the plank in your own eye? … You hypocrite, first take the plank out of your own eye, and then you will see clearly to remove the speck from your brother’s eye.”

The verse poignantly captures the human tendency to overlook our flaws while magnifying those of others. It calls for introspection and humility, urging us to address our shortcomings before passing judgment.

The Asymmetry of Self-Perception

Several psychological factors contribute to this asymmetry:

  • Self-Serving Bias: We attribute our successes to internal factors and our failures to external ones.
  • Cognitive Dissonance: Conflicting beliefs about ourselves and our actions create discomfort, leading us to rationalise or ignore discrepancies.
  • Social Comparison: We often compare ourselves favourably against others to boost self-esteem.

This skewed self-perception can hinder personal growth and damage relationships, as it prevents honest self-assessment and accountability.

Overcoming the Bias

Awareness is the first step toward mitigating fundamental attribution bias. Here are some strategies:

  1. Mindful Reflection: Regularly assess your actions and motivations. Ask yourself if you’re holding others to a standard you’re not meeting. Riffing on ancient moral dictates, ask whether this is how you would want to be treated, or adopt Kant’s categorical imperative and ask whether you could will your behaviour as a universal law.
  2. Seek Feedback: Encourage honest input from trusted friends or colleagues about your behaviour.
  3. Empathy Development: Practice seeing situations from others’ perspectives to understand their actions more fully.
  4. Challenge Assumptions: Before making judgments, consider external factors that might influence someone’s behaviour.

By actively recognising and adjusting for our biases, we can develop more balanced perceptions of ourselves and others.

Conclusion

The irony of condemning in others what we excuse in ourselves is a common human pitfall rooted in fundamental attribution bias. The adage ‘Know thyself’ comes to mind here. We can overcome these biases by striving for self-awareness and empathy, leading to more authentic relationships and personal integrity.

The Relativity of Morality: A Penguin’s Tale

I recently watched The Penguin on HBO Max, a series set in DC’s Batman universe. Ordinarily, I avoid television – especially the superhero genre – but this one intrigued me. Less spandex, more mob drama. An origin story with a dash of noir. I’ll spare you spoilers, but suffice it to say that it was an enjoyable detour, even for someone like me who prefers philosophy over fistfights.

This post isn’t a review, though. It’s a springboard into a larger idea: morality’s subjectivity – or, more precisely, its relativity.


Morality in a Vacuum

Morality, as I see it, is a social construct. You might carry a private moral compass, but without society, it’s about as useful as a clock on a desert island. A personal code of ethics might guide you in solitary moments, but breaking your own rules – eating that forbidden biscuit after vowing to abstain, for instance – doesn’t carry the weight of a true moral transgression. It’s more akin to reneging on a New Year’s resolution. Who’s harmed? Who’s holding you accountable? The answer is: no one but yourself, and even then, only if you care.

The Social Contract

Introduce a second person, and suddenly, morality gains traction. Agreements form – explicit or tacit – about how to behave. Multiply that to the level of a community or society, and morality becomes a kind of currency, exchanged and enforced by the group. Sometimes, these codes are elevated to laws. And, ironically, the act of adhering to a law – even one devoid of moral content – can itself become the moral thing to do. Not because the act is inherently right, but because it reinforces the structure society depends upon.

But morality is neither universal nor monolithic. It is as fractured and kaleidoscopic as the societies and subcultures that create it. Which brings us back to The Penguin.

Crime’s Moral Code

The Penguin thrives in a criminal underworld where the moral compass points in a different direction. In the dominant society’s eyes, crime is immoral. Robbery, murder, racketeering – all “bad,” all forbidden. But within the subculture of organised crime, a parallel morality exists. Honour among thieves, loyalty to the family, the unspoken rules of the game – these are their ethics, and they matter deeply to those who live by them.

When one criminal praises another – “You done good” – after a successful heist or a precise hit, it’s a moral judgement within their own framework. Outside that framework, society condemns the same actions as abhorrent. Yet even dominant societies carve out their own moral exceptions. Killing, for instance, is broadly considered immoral. Murder is outlawed. But capital punishment? That’s legal, and often deemed not only acceptable but righteous. Kant argued it was a moral imperative. Nietzsche, ever the cynic, saw this duality for what it was: a power dynamic cloaked in self-righteousness.

In The Penguin, we see this dichotomy laid bare. The underworld isn’t without morals; it simply operates on a different axis. And while the larger society might disdain it, the hypocrisy of their own shifting moral codes remains unexamined.

Final Thoughts on the Series

I’ll save other philosophical musings about The Penguin for another time – spoilers would be unavoidable, after all. But here’s a quick review: the series leans into drama, eschewing flashy gimmicks for a grittier, more grounded tone. The writing is generally strong, though there are moments of inconsistency – plot holes and contrivances that mar an otherwise immersive experience. Whether these flaws stem from the writers, director, or editor is anyone’s guess, but the effect is the same: they momentarily yank the viewer out of the world they’ve built.

Still, it’s a worthwhile watch, especially if you’re a fan of mob-style crime dramas. The final episode was, in my estimation, the best of the lot – a satisfying culmination that leaves the door ajar for philosophical ruminations like these.

Have you seen it? What are your thoughts – philosophical or otherwise? Drop a comment below. Let’s discuss.

When Hollywood Tried to Cheer Up Less Than Zero and Missed the Point Entirely

Let’s talk about Less Than Zero. No, not the film. I’m talking about the book—Bret Easton Ellis’s nihilistic masterpiece that drags you through a moral cesspit of 1980s Los Angeles. You might remember it as the story that makes American Psycho look like a quirky self-help guide. It’s dark, it’s bleak, and it doesn’t pretend to offer you a shred of hope.

And then there’s the movie adaptation.

Oh, the movie. It’s as though someone read Ellis’s unflinching tale of moral rot and thought, You know what this needs? Friendship. And a redemption arc. And maybe some heartfelt music in the background. Hollywood, in all its infinite wisdom, decided that audiences couldn’t handle the book’s existential despair. So, they took a story about the void—about the emptiness of privilege, the suffocation of apathy, and the complete erosion of human connection—and gave it a fuzzy moral centre.

Here’s the gist: The book is nihilism incarnate. It follows Clay, a disaffected college student who comes home to LA for Christmas and is immediately swallowed whole by a world of cocaine, vapid socialites, and casual cruelty. No one learns anything. No one grows. In fact, the whole point is that these characters are so morally bankrupt, so irreparably hollow, that they’re beyond redemption. If you’re looking for a happy ending, don’t bother—Ellis leaves you stranded in the abyss, staring into the void, wondering if there’s any point to anything. Spoiler: there’s not.

Then along comes the 1987 film, directed by Marek Kanievska. It keeps the names of the characters—Clay, Blair, Julian—but not much else. Instead of being an icy observer of LA’s decadence, Clay is transformed into a love-struck saviour. Blair, a passive figure in the novel, becomes a supportive girlfriend. And Julian—oh, poor Julian—is turned into a sacrificial lamb for the sake of a heartfelt narrative about friendship and second chances.

The film turns Less Than Zero into an anti-drug PSA. It’s basically Nancy Reagan Presents: a story of addiction, redemption, and the power of love, wrapped in a slick 80s aesthetic. Robert Downey Jr., to his credit, gives a brilliant performance as Julian, the doomed addict. But the character is barely recognisable compared to his literary counterpart. In the book, Julian’s descent into drug-fuelled depravity isn’t a cautionary tale—it’s just another symptom of a world where nothing and no one has any value. In the film, Julian is tragic, yes, but in a way that invites sympathy and, crucially, an attempt at salvation.

Let’s not forget the ending. The novel ends on a note so cold it could freeze your soul: Clay leaves Los Angeles, unchanged, unbothered, and unmoved. The film, however, concludes with Clay and Blair driving off into the sunset, having vowed to turn their lives around. It’s saccharine. It’s pandering. It’s the cinematic equivalent of slapping a motivational poster over a painting by Francis Bacon.

Why did Hollywood do this? Simple: nihilism doesn’t sell. You can’t slap it on a movie poster and expect audiences to line up at the box office. People want catharsis, not existential despair. And so, the filmmakers gutted Less Than Zero of its soul (or lack thereof), replacing its stark nihilism with a hopeful narrative about the power of human connection.

Here’s the kicker, though: by doing this, the film completely misses the point of Ellis’s novel. Less Than Zero is a critique of LA’s shallow, soulless culture—a world where connection is impossible because no one feels anything. Turning it into a feel-good story about saving a friend from addiction is not just a betrayal; it’s downright laughable. It’s like adapting 1984 into a rom-com where Winston and Julia overthrow Big Brother and live happily ever after.

To be fair, the film isn’t bad—if you forget the source material exists. It’s well-acted, stylishly shot, and undeniably entertaining. But as an adaptation, it’s a travesty. It’s Ellis’s Less Than Zero with all the edges sanded down, the grit scrubbed clean, and a shiny coat of sentimentality slapped on top.

So, if you’ve read the book and thought, Wow, that was bleak—I wonder if the movie is any lighter?, the answer is yes, but not in a good way. It’s lighter because it’s hollowed out, stripped of its existential weight, and repackaged as something safe and digestible.

And if you haven’t read the book? Do yourself a favour: skip the movie, pour yourself a stiff drink, and dive into Ellis’s bleak masterpiece. Just don’t expect any warm, fuzzy feelings—it’s called Less Than Zero for a reason.

Paul Feyerabend’s Against Method: Chapter 1

What if science’s greatest achievements came not from following rules, but from breaking them? What if progress depends more on chaos than on order? In Against Method, philosopher Paul Feyerabend presents a provocative thesis: there is no universal scientific method, and the progress we celebrate often emerges from breaking established rules rather than following them.

I read Against Method years ago but decided to re-read it. It’s especially interesting to me because although I advocate systems thinking, I don’t believe everything should be or can be systematised. More generally, this bleeds into my feelings about government, politics, and institutions.

Whilst Feyerabend’s focus is on science, one can pull back the lens and see that it covers all such systems and systematic beliefs. I may write a separate article on this, but for now, I’ll focus on Against Method.

The Anarchist’s View of Science

Feyerabend’s critique strikes at the heart of how we think about knowledge and progress. He argues that science has advanced not through rigid adherence to methodology, but through a combination of creativity, rhetoric, and sometimes even deception. His concept of “epistemological anarchism” suggests that no single approach to knowledge should dominate – instead, multiple methods and perspectives should compete and coexist.

Consider Galileo’s defence of heliocentrism. Rather than relying solely on empirical evidence, Galileo employed persuasive rhetoric, selective data, and careful manipulation of public opinion. For Feyerabend, this isn’t an aberration but a typical example of how scientific progress actually occurs. The story we tell ourselves about the scientific method – as a systematic, purely rational pursuit of truth – is more myth than reality.

From Religious Dogma to Scientific Orthodoxy

The Age of Enlightenment marked humanity’s shift from religious authority to scientific rationality. Yet Feyerabend argues that we simply replaced one form of dogma with another. Scientism – the belief that science alone provides meaningful knowledge – has become our new orthodoxy. What began as a liberation from religious constraints has evolved into its own form of intellectual tyranny.

This transition could have taken a different path. Rather than elevating scientific rationality as the sole arbiter of truth, we might have embraced a more pluralistic approach where multiple ways of understanding the world – scientific, artistic, spiritual – could coexist and cross-pollinate. Instead, we’ve created a hierarchy where other forms of knowledge are dismissed as inferior or irrational.

The Chaos of Progress

In Chapter 1 of Against Method, Feyerabend lays the groundwork for his radical critique. He demonstrates how strict adherence to methodological rules would have prevented many of science’s greatest discoveries. Progress, he argues, often emerges from what appears to be irrational – from breaking rules, following hunches, and embracing contradiction. Indeed, rationalism is overrated.

This isn’t to say that science lacks value or that methodology is meaningless. Rather, Feyerabend suggests that real progress requires flexibility, creativity, and a willingness to break from convention. Many breakthrough discoveries have been accidental or emerged from practices that would be considered unscientific by contemporary standards.

Beyond the Monolith

Our tendency to view pre- and post-Enlightenment thought as a simple dichotomy – superstition versus reason – obscures a richer reality. Neither period was monolithic, and our current reverence for scientific method might be constraining rather than enabling progress. Feyerabend’s work suggests an alternative: a world where knowledge emerges from the interplay of multiple approaches, where science exists alongside other ways of understanding rather than above them.

As we begin this exploration of Against Method, we’re invited to question our assumptions about knowledge and truth. Perhaps progress depends not on rigid adherence to method, but on the freedom to break from it when necessary. In questioning science’s monopoly on truth, we might discover a richer, more nuanced understanding of the world – one that embraces the chaos and contradiction inherent in human inquiry.

This is the first in a series of articles exploring Feyerabend’s Against Method. Join me as we challenge our assumptions about science, knowledge, and the nature of progress itself.

Polemics

People often ask why I churn out so many polemical, contrarian articles. The answer? It’s simply how I think. My brain naturally questions everything, not out of a desire to be difficult, but because that’s just my worldview. I’m not inventing challenges for the sake of argument—the challenges are already there, embedded in the world as I see it.

Another reason is solidarity. I write in hopes that others, whose thoughts run along similar lines, might stumble across my material and feel less alone. There’s something deeply reassuring in discovering that someone else has been on the same mental journey—that feeling of “Ah, I’m not alone in this.” Many times, I’ve had ideas only to find that philosophers, thinkers, or whoever have already penned volumes on the subject. And honestly? That grounds me. Even better if they’ve gone further, articulated it more eloquently, or ventured into new depths. It’s all useful. Plus, their critics then become my critics, and I get to sharpen my thoughts in response—or at least build my own defences.

And finally, I write for the potential spark. Maybe someone out there reads a piece of mine and feels inspired to take it further, push an idea beyond what I could imagine. After all, entire Nobel Prize-winning theories have started as someone else’s footnotes. There’s nothing wrong with being someone’s footnote.

So, now you know.


NB: I’ll be in surgery when this posts, so I’ve scheduled this in advance so as not to have a gap…that may occur anyway.

The Limits of Language: Why Philosophical Paradoxes Might Be Illusions of Mapping

Philosophical paradoxes have long captured our imagination, from Zeno’s paradoxes about movement to the Liar Paradox that tangles truth and falsehood into an endless loop. Often, these puzzles are treated as fundamental mysteries of the universe—windows into the limits of human understanding or insight into the hidden structure of reality. But what if, rather than reflecting deep truths about existence, many of these paradoxes are artefacts of language itself—symptoms of our conceptual tools struggling to adequately map a complex terrain? Perhaps, more often than not, the perplexities we face are the result of an inadequate mapping—a linguistic or cognitive misfire—rather than true paradoxes of the underlying terrain of reality.

This notion—that many paradoxes arise from the limitations of language and cognition—finds resonance in the work of philosophers like Ludwig Wittgenstein. Wittgenstein argued that many philosophical problems arise because we misuse language, taking words beyond their natural context, confusing what our words describe with the objects or concepts themselves. In this sense, our maps (the linguistic and logical structures we use) often lead us astray when navigating the conceptual terrains of ethics, metaphysics, or the nature of truth.

This idea can be articulated under what we might call the Language Insufficiency Hypothesis: the view that the limitations of language itself are at the root of many philosophical paradoxes. According to this hypothesis, the apparent contradictions or puzzles that emerge in philosophical discourse often reveal more about the shortcomings of our representational tools than about any deep metaphysical truths. The Language Insufficiency Hypothesis suggests that our conceptual maps are inadequate for fully capturing the richness of the terrains we attempt to describe, and that this inadequacy leads us to mistake linguistic confusion for genuine philosophical mystery.

The Inherent Limitations of Linguistic Communication

Language, often hailed as humanity’s greatest achievement, may paradoxically be one of our most significant limitations. The Language Insufficiency Hypothesis posits that language is inherently inadequate for communicating abstract concepts, a notion that challenges our fundamental understanding of human communication and cognition. This perspective traces the evolution of language from its primitive origins to its current complexity, revealing the philosophical and practical implications of linguistic inadequacy.

The Accidental Evolution of Language

Language, like many aspects of human biology and cognition, emerged not through intentional design but as an evolutionary accident. Initially serving as an internal cognitive function—a means of organising one’s own thoughts—language gradually evolved into a tool for external communication. This transition likely began with simple vocalisations, perhaps rooted in rhythmic expressions akin to music and dance, before developing into more structured speech.

Early linguistic communication likely centred on concrete objects and immediate experiences, with words serving as direct signifiers for observable phenomena. However, as human cognition grew more sophisticated, so too did our linguistic capabilities, expanding to include verbs, modifiers, and eventually, abstract nouns.

The Emergence of Abstraction and Its Challenges

The development of abstract nouns marked a significant leap in human cognition and communication. Concepts such as ‘truth’, ‘justice’, and ‘freedom’ allowed for more complex and nuanced discourse. However, this advancement came at a cost: these abstract concepts, lacking direct physical referents, introduced unprecedented ambiguity and potential for misunderstanding.

The Language Insufficiency Hypothesis suggests that this ambiguity is not merely a byproduct of abstraction, but a fundamental limitation of language itself. While two individuals might easily agree on the ‘treeness’ of a physical tree, concepts like ‘fairness’ or ‘reason’ are inherently unresolvable through linguistic means alone. This insufficiency becomes increasingly apparent as we move further from concrete, observable phenomena into the realm of abstract thought.

Wittgenstein and the Limits of Language

Ludwig Wittgenstein’s later work provides crucial insights into the Language Insufficiency Hypothesis. Wittgenstein posited that words ultimately only map to other words, never truly making contact with the objective world. This perspective suggests that language operates within a closed system of human understanding, constructing our perception of reality rather than directly representing it.

This Wittgensteinian dilemma underscores the core of the Language Insufficiency Hypothesis: if words only refer to other words, how can we ever be certain that we’re communicating abstract concepts accurately? The very tool we use to discuss and understand abstraction may be fundamentally incapable of capturing its essence.

Cultural and Disciplinary Variations

The inadequacy of language in conveying abstract concepts becomes even more apparent when we consider cultural and disciplinary variations in communication. Different cultures and academic disciplines develop their own specialised vocabularies and ‘language games’, as Wittgenstein termed them. While these specialised languages may facilitate communication within specific contexts, they often create barriers to understanding for outsiders.

This phenomenon highlights another aspect of linguistic insufficiency: the context-dependent nature of meaning. Abstract concepts may be understood differently across cultures or disciplines, further complicating attempts at clear communication.

Neurolinguistic Perspectives

Recent advances in neurolinguistics have provided new insights into the brain structures involved in language processing. While these studies have enhanced our understanding of how the brain handles language, they have also revealed the complexity and variability of linguistic processing across individuals. This neurological diversity further supports the Language Insufficiency Hypothesis, suggesting that even at a biological level, there may be inherent limitations to how accurately we can communicate abstract concepts.

Implications and Counter-Arguments

The Language Insufficiency Hypothesis has profound implications for fields ranging from philosophy and psychology to law and international relations. If language is indeed inadequate for communicating abstract concepts, how can we ensure mutual understanding in complex negotiations or philosophical debates?

However, it’s important to note that not all scholars accept the strong version of this hypothesis. Some argue that while language may have limitations, it remains our most sophisticated tool for sharing abstract ideas. They suggest that through careful definition, contextualisation, and the use of metaphor and analogy, we can overcome many of the inherent limitations of linguistic communication.

Navigating the Limits of Language

The Language Insufficiency Hypothesis presents a challenging perspective on human communication. It suggests that our primary tool for sharing abstract thoughts may be fundamentally flawed, incapable of fully capturing the complexity of our inner cognitive experiences.

Yet, recognising these limitations need not lead to communicative nihilism. Instead, it can foster a more nuanced approach to language use, encouraging us to be more precise in our definitions, more aware of potential misunderstandings, and more open to alternative forms of expression.

As we continue to grapple with abstract concepts and strive for clearer communication, we must remain cognisant of these linguistic limitations. Understanding the origins and nature of language—and its inherent insufficiencies—can help us navigate its complexities, fostering more effective and empathetic communication across diverse fields of human endeavour.

The Frege–Geach Problem as an Illustration of Linguistic Limitations

One pertinent example of this idea is the Frege–Geach problem, a challenge often faced by expressivist theories of ethics. Expressivists maintain that moral statements do not describe facts but rather express attitudes or emotions—a statement like “lying is wrong” is an expression of disapproval rather than a factual assertion. The Frege–Geach problem arises when such moral statements are embedded in logical constructions like conditionals or arguments: “If lying is wrong, then getting your little brother to lie is wrong.” In this context, expressivists face a challenge in explaining how the meaning of “lying is wrong” remains coherent across different uses, without reducing moral expressions to descriptive claims.
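To make the structure explicit, here is the embedding argument in standard modus ponens form (a conventional formalisation; the predicate W, read ‘is wrong’, is shorthand introduced for this sketch):

```latex
\begin{align*}
\text{(P1)} &\quad W(\text{lying})\\
\text{(P2)} &\quad W(\text{lying}) \rightarrow W(\text{getting your little brother to lie})\\
\text{(C)}  &\quad W(\text{getting your little brother to lie})
\end{align*}
```

The inference is valid only if W(lying) means the same thing in (P1) and in the antecedent of (P2). For the expressivist, (P1) expresses disapproval, yet the antecedent of (P2) asserts no attitude at all – so the argument threatens to equivocate, and that is the problem in a nutshell.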

The Frege–Geach problem thus illustrates a fundamental limitation: attempting to apply truth-conditional logic, designed for descriptive language, to moral discourse, which serves a different function altogether. In trying to map evaluative terrain—which involves emotions, commitments, and subjective attitudes—using the same structures meant for factual landscapes, we encounter conceptual misalignments. This problem—a confusion of the map for the terrain—is not necessarily a genuine paradox about moral truths but rather a reflection of the inadequacy of our current linguistic tools. Just as a physical map may fail to capture the emotional experience of a journey, so too do our linguistic and logical maps fail to adequately capture the moral landscape.

Wittgenstein’s later work is helpful in framing this issue. He emphasised the importance of recognising different language-games: the rules and purposes that guide different forms of discourse. Moral language is not like scientific language; it follows different rules and aims to express and influence attitudes rather than establish empirically verifiable facts. The Frege–Geach problem emerges precisely because we attempt to impose a single logical structure onto forms of language that serve different purposes, confusing the distinct games we are playing. This attempt to force moral language into a framework designed for empirical propositions produces an apparent paradox, where the real issue lies in our misuse of the conceptual map.

This pattern of misinterpretation is not unique to moral discourse. Many philosophical paradoxes—from problems of identity and personal continuity to issues of free will and determinism—arise when we try to map different terrains with the same linguistic structures, or when we push our conceptual tools beyond their natural limits. Cognitive limitations also play a role; our tendency to think in binary oppositions, our reliance on categories, and our need for consistent narratives often lead to oversimplifications of complex realities. These cognitive tools—essential for everyday functioning—can prove inadequate for capturing the nuance of the philosophical landscapes we attempt to navigate.

The map-terrain challenge is thus at the core of why philosophical paradoxes can seem so intractable. Our maps—the languages and logical frameworks that structure our thinking—are, by their nature, simplifications of a world that is far more nuanced than we can readily articulate. When the terrain is moral, aesthetic, or otherwise not reducible to simple truths or falsehoods, the inadequacies of our maps become evident. We are left facing paradoxes that may, in truth, be nothing more than indicators that our representational systems need refinement or expansion.

Rather than treating these paradoxes as unresolvable, we might benefit from seeing them as invitations to reconsider our linguistic and cognitive frameworks. In recognising that the Frege–Geach problem, for instance, may reflect an ill-suited mapping of moral discourse rather than a genuine mystery about moral reality, we open the door to a pluralistic approach: different terrains require different maps. Perhaps, in some cases, the best solution is not to attempt to solve the paradox in traditional terms but to change the way we map the terrain altogether—to allow for multiple, context-sensitive tools that respect the particularity of each domain of discourse.

Ultimately, this perspective suggests a more flexible and cautious approach to philosophical inquiry—one that acknowledges the limits of our conceptual tools and remains open to the possibility that the terrain is far richer and more varied than our maps can currently capture.

Can Zombies Ever Be Conscious?

In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies—hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.

This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.

Dennett’s Zombies and Zimboes: Consciousness as Function

Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property—such as subjective experience—that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.

Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.

In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion—a misunderstanding of what consciousness is.

A Biological Rejection: Zombies Are Impossible

On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies—rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.

The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies—beings that behave like humans without consciousness—are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.

In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.
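As a toy illustration only – the function below is invented for this post and does not reproduce the paper’s formal framework – a ‘policy’ in this sense is simply a rule set mapping environmental stimuli to survival-relevant behaviours:

```python
# Toy sketch of a behavioural 'policy': rules mapping environmental
# stimuli to adaptive actions. Invented for illustration; it does not
# reproduce the cited paper's framework.

def policy(stimulus: str) -> str:
    """Map a stimulus to a survival-relevant behaviour."""
    rules = {
        "predator": "flee",
        "food": "approach",
        "extreme_heat": "retreat",
    }
    return rules.get(stimulus, "explore")  # default: exploratory behaviour


for stimulus in ["predator", "food", "unfamiliar_odour"]:
    print(stimulus, "->", policy(stimulus))
```

The biological argument’s point is precisely that a bare lookup table like this has no first-order self interpreting the stimuli; on the authors’ account, real organisms cannot manage their survival without one, whilst Dennett would presumably reply that if the behaviour is indistinguishable, nothing further is missing.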

The Open Question: What About Future Technology?

However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies—whether defined as Dennett’s “behaviourally indistinguishable” beings or the biologically impossible entities proposed by the paper—are not real. But could this change?

What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies—performing all the actions and behaviours we associate with consciousness, but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.

Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities—artificial zombies—that perform human-like actions without the inner experience of traditional conscious beings.

I Wonder What’s Next?

This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.

Could we one day create a true zombie—a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.

For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.

The Illusion of the “Temporarily Embarrassed Millionaire”: How Capitalism’s Defenders Uphold Their Own Exploitation


In the contemporary world of deepening inequality and environmental degradation, capitalism continues to hold a powerful ideological grip on much of the global population. Yet the irony is that many of its staunchest defenders are not the elites or the true beneficiaries of the system, but the very workers and middle-class individuals whose lives it exploits and controls. These defenders are not capitalists themselves; they are, in fact, cogs in the machinery of a system they imagine will eventually reward their loyalty. This illusion is strikingly captured in a quote often misattributed to John Steinbeck: “Socialism never took root in America because the poor see themselves not as an exploited proletariat but as temporarily embarrassed millionaires.”[1]

This phenomenon, which we might call the temporarily embarrassed millionaire syndrome, reflects not only a profound misunderstanding of capitalism but also the effectiveness of the system in controlling its participants through hope and aspiration. Capitalism promises upward mobility, convincing even those at the bottom of the economic ladder that their current misfortunes are temporary. But as Karl Marx and Friedrich Engels observed, this is a system of exploitation that not only alienates workers but effectively destroys them.


Survivorship Bias and the Myth of the “Rising Tide”

Capitalism’s defenders frequently invoke the idea that “a rising tide lifts all boats.” The metaphor suggests that when capitalism prospers, everyone benefits. However, this vision of progress masks the reality of capitalism’s winners and losers. As the geographer David Harvey has pointed out, capitalism is not a neutral system of wealth creation—it is a system of accumulation by dispossession, constantly expropriating wealth from others, often through privatisation and the commodification of public goods.[2] The rising tide does lift some boats, but it simultaneously leaves others stranded, or worse, sinking.

Survivorship bias is essential to understanding how capitalism maintains its legitimacy. The success stories—the wealthy entrepreneurs, the individuals who “made it”—are lauded as proof that the system works. But the vast numbers of people left behind, those who toil in exploitative conditions or who die from poverty and neglect, are erased from the narrative. In Engels’ terms, these are victims of social murder—individuals who die prematurely not by direct violence, but through the structural forces of deprivation imposed by capitalism.[3] Their deaths are rendered invisible, falling out of the metrics of rising living standards and growth.

Engels’ critique of industrial capitalism is as relevant today as it was in the 19th century. The modern mechanisms of exploitation may be more complex, but they are no less deadly. In a late capitalist world, the poor and marginalised are still being “murdered” through the structural violence of inadequate healthcare, poor working conditions, and environmental degradation. The millions left out of the capitalist success story are not anomalies but integral to the system’s operation.


Alienation and the Tragedy of Defending the System

Marx’s theory of alienation provides another crucial lens through which to understand why capitalism’s defenders often remain blind to their own exploitation. Under capitalism, workers are alienated from the products of their labour, the process of production, their own humanity, and from each other.[4] The worker becomes a cog in a machine, detached from the value they create, and unable to control their working life. Yet, even in this state of alienation, many still defend the system, believing that their hard work will eventually lead them to wealth and freedom.

This defence of capitalism, often articulated by those whose lives it degrades, reflects Antonio Gramsci’s concept of cultural hegemony. Gramsci argued that the ruling class maintains power not just through economic domination, but by shaping the cultural and ideological landscape.[5] Capitalism’s defenders are, in part, products of this hegemony, believing in the very values—individualism, competition, the ‘American Dream’—that bind them to a system of exploitation.

This illusion of freedom under capitalism is deepened by what Herbert Marcuse calls repressive desublimation. Capitalism offers false freedoms in the form of consumer choice and superficial pleasures, giving individuals the illusion that they are exercising autonomy, even as the system remains unchallenged.[6] Workers may recognise themselves in their commodities—luxury goods, tech gadgets, cars—but these objects only serve to reinforce their alienation and dependence on the capitalist system. The temporarily embarrassed millionaire clings to the dream of eventual success, all the while contributing to a system that offers only superficial rewards in return.


Social Murder and the Structural Violence of Late Capitalism

The notion of social murder offers a stark framework for understanding capitalism’s indirect, yet pervasive, violence. As Engels explained, this form of violence is not inflicted through overt means, but through the systematic neglect of basic human needs. Whether it’s the millions who die due to lack of access to healthcare or the global poor displaced by climate-induced disasters, capitalism perpetuates a form of structural violence that is invisible to those who benefit from the system’s success.[7]

The Canadian writer and activist Naomi Klein extends this analysis through her concept of disaster capitalism, where crises are exploited for profit. Whether it’s natural disasters or financial crises, capitalism uses these events as opportunities to privatise public resources, dismantle social safety nets, and deepen inequality.[8] The victims of these disasters—often the poor and vulnerable—are, in Engels’ terms, socially murdered by a system that thrives on their dispossession.


The Temporarily Embarrassed Millionaire as a Tool of Control

The illusion that one’s current position is only temporary—that any individual can rise to capitalist wealth if they work hard enough—is central to maintaining the capitalist system. This aspiration prevents individuals from seeing their exploitation for what it is. They do not identify as part of an exploited class but instead believe they are merely waiting for their turn at wealth. Zygmunt Bauman’s concept of liquid modernity—the perpetual state of instability and insecurity produced by late capitalism—helps explain this phenomenon.[9] Individuals are constantly told that their position is fluid, changeable, and that their big break is just around the corner.

But for most, this “big break” never comes. The dream of becoming a millionaire is a powerful form of social control, one that keeps individuals invested in a system that benefits only a small fraction of its participants. As Marx reminds us, “the worker becomes all the poorer the more wealth he produces, the more his production increases in power and range.”[10] Capitalism does not reward the many; it exploits the many for the benefit of the few.


Conclusion: Facing the Irony and Imagining a Post-Capitalist Future

The greatest irony of capitalism is that those who defend it most fervently are often those who will never realise its promises. These are not the capitalists of the system, but its workers, its underclass, and its exploited. They see themselves not as oppressed, but as temporarily embarrassed millionaires—an illusion that keeps them bound to a system that offers them no real future.

In this light, the true success of capitalism is not in its creation of wealth, but in its ability to mask the conditions of exploitation, alienation, and social murder that underpin it. The path forward requires a dismantling of these illusions and a recognition that the system’s failures are not accidental but integral to its design.

Only by facing these uncomfortable truths can we begin to imagine a future beyond the constraints of capitalist ideology, a world where human flourishing is no longer measured by wealth accumulation but by the collective well-being of all.


Endnotes:

[1]: Misattributed to John Steinbeck, this quote encapsulates a critical observation about American capitalism’s appeal to aspiration rather than solidarity.
[2]: David Harvey, The New Imperialism (Oxford University Press, 2005), pp. 145-147.
[3]: Friedrich Engels, The Condition of the Working Class in England (Oxford University Press, 1845), p. 112.
[4]: Karl Marx, Economic and Philosophic Manuscripts of 1844 (Progress Publishers, 1959).
[5]: Antonio Gramsci, Selections from the Prison Notebooks (International Publishers, 1971), p. 12.
[6]: Herbert Marcuse, One-Dimensional Man (Beacon Press, 1964), p. 10.
[7]: Friedrich Engels, The Condition of the Working Class in England, p. 114.
[8]: Naomi Klein, The Shock Doctrine: The Rise of Disaster Capitalism (Picador, 2007), pp. 9-10.
[9]: Zygmunt Bauman, Liquid Modernity (Polity, 2000), p. 14.
[10]: Karl Marx, Economic and Philosophic Manuscripts of 1844, p. 68.
