Where you from, Homie?

This skit is a comical take on in-group versus out-group language insufficiency. It’s a couple of years old, so you may have seen it before.

This video illustrates how easy it is for miscommunication to occur in mixed-group settings.
Trigger Warning: The humour is a bit weak and the focus is on stereotypes. If this isn’t quite up your street, just move on. Nothing to see here.

The Insufficiency of Language in an Agile World

I wrote and published this article on LinkedIn. I even recycled the cover image. Although it is about the particular topic of Agile, it relates to the Language Insufficiency Hypothesis, so I felt it would be apt here as well. It demonstrates how to think about language insufficiency through the framework.

Agile in Name Only

For over two decades, I’ve been immersed in Agile and its myriad interpretations. One refrain has persisted throughout: Agile™ is “just about agility,” a term that anyone can define as they see fit. This ambiguity raises the question: What does it really mean?

On its face, this sounds inclusive, but it never passed my intuitive sniff test. I carried on, but as I reflected on my broader work concerning the insufficiency of language, this persistent fuzziness started to make sense. Agile’s conceptual murkiness can be understood through the lens of language and identity—particularly through in-group and out-group dynamics.

Otherness and the Myth of Universality

To those who truly understand agility, no elaborate definition is required. It’s instinctive, embedded in their DNA. They don’t need to label it; they simply are agile. Yet, for the out-group—the ones who aspire to the status without the substance—Agile™ becomes a muddy abstraction. Unable to grasp the core, they question its very existence, claiming, “Who really knows what Agile means?”

The answer is simple: Everyone but those asking this question.

The Agility Crisis

This disconnect creates a power shift. The in-group, small and focused, operates with quiet competence. Meanwhile, the out-group, larger and louder, hijacks the conversation. What follows is an inevitable dilution: “Agile is dead,” “Agile doesn’t work,” they declare. But these proclamations often reflect their own failures to execute or evolve, not flaws inherent to agility itself.

This pattern follows a familiar playbook: create a strawman—define Agile™ as something it’s not—then decry its inability to deliver. The result? Performative agility, a theatre of motion without progress, where the players confuse activity for achievement and rely on brittle, inextensible infrastructures.

Agile Beyond the Label

Ironically, the true practitioners of agility remain unbothered by these debates. They adapt, innovate, and thrive—with or without the label. Agile™ has become a victim of its own success, co-opted by those who misunderstand it, leading to a paradox: the louder the chorus claiming “Agile doesn’t work,” the more it underscores the gap between those who do agility and those who merely wear its name.

The lesson here is not just about Agile™ but about language itself. Words, when untethered from their essence, fail. They cease to communicate, becoming tools of obfuscation rather than clarity. In this, Agile™ mirrors a broader phenomenon: the insufficiency of language in the face of complexity and its misuse by those unwilling or unable to engage with its deeper truths.

Guns, Germs, and Steel

I am reading Jared Diamond’s Guns, Germs, and Steel: The Fates of Human Societies, the first and likely most famous of an informal trilogy. I thought I had already read it, but I think I only saw the PBS show. Having recently finished Josephine Quinn’s How the World Made the West, I wanted to revisit this perspective. The two books are presented in different styles and represent different perspectives, but they seem to be complementary.

Where Diamond focuses on environmental factors (an oft-voiced critique), Quinn focuses on human agency.

Diamond takes a bird’s-eye view, looking for universal patterns and systemic explanations, whilst Quinn adopts a granular, specific approach, highlighting the fluidity and contingency of history.

Diamond deconstructs European dominance by attributing it to environmental luck, but his narrative risks sidelining the agency of colonised peoples. Quinn critiques the very idea of Western dominance, arguing that the concept of the West itself is a myth born of appropriation and exchange.

Rather than being wholly opposed, Diamond and Quinn’s approaches might be seen as complementary. Diamond provides the structural scaffolding – the environmental and geographic conditions that shape societies – whilst Quinn fills in the cultural and human dynamics that Diamond often glosses over. Together, they represent two sides of the historiographical coin: one focusing on systemic patterns, the other on the messiness of cultural particularities.

Quinn’s approach is more aligned with The Dawn of Everything: A New History of Humanity, co-authored by David Graeber and archaeologist David Wengrow, if you can use that as a reference point.

The Narcissist’s Playbook

I’ve lived in Los Angeles a couple of times for a sum total of perhaps 15 years. The first time, I loved it. The next time, I was running on fumes. The first time, I was in my twenties – the second time in my forties. What a difference perspective and ageing make. In my twenties, I was a pretty-boy punk-ass who owned the club scene on the Strip. In my forties, I was a wage slave.

Audio: NotebookLM podcast on this topic.

This morning, I heard a country song on Insta with a line claiming ‘there are nines and dimes in all 50’, and it reminded me of a phrase we used when I lived in Los Angeles – ‘LA 7’. This is constructed on the egotistical, sexist notion that if you were a 10, you’d have already moved to LA. If you still lived in, say, Iowa and were considered a 10, the LA exchange rate would make you a 7.

Then, I thought about the LA-NYC rivalry and wrote this article with some help from ChatGPT.

How L.A. and NYC Became the Centres of the Universe (According to Them)

It is a truth universally acknowledged that Los Angeles and New York City—those bickering siblings of American exceptionalism—believe themselves to be the sun around which the rest of us drearily orbit. Each is utterly convinced of its centrality to the human experience, and neither can fathom that people outside their borders might actually exist without yearning to be them. This is the essence of the ‘Centre of the Universe Complex,’ a condition in which self-importance metastasises into a full-blown cultural identity.

Let us begin with Los Angeles, the influencer of cities. L.A. doesn’t merely think it’s the centre of the universe; it believes it’s the universe, replete with its own atmosphere of smog-filtered sunlight and an economy powered entirely by dreams, green juice, and Botox. For L.A., beauty isn’t just a priority—it’s a moral imperative. Hence the concept of the ‘L.A. 7,’ a stunningly arrogant bit of mathematics whereby physical attractiveness is recalculated based on proximity to the Pacific Coast Highway.

Here’s how it works: a ’10’ in some picturesque-but-hopelessly-provincial state, say Nebraska, is automatically downgraded to a ‘7’ upon arrival in Los Angeles. Why? Because, according to L.A.’s warped arithmetic, if she were a real 10, she’d already be there, lounging by an infinity pool in Malibu and ignoring your DMs. This isn’t just vanity—it’s top-tier delusion. L.A. sees itself as a black hole of good looks, sucking the beautiful people from every corner of the earth while leaving the ‘merely pretty’ to languish in flyover country. The Midwest, then, isn’t so much a place as it is an agricultural waiting room for future Angelenos.
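For the pedants keeping score at home, the joke’s exchange rate reduces to a one-liner. This is a purely illustrative sketch — the 0.7 multiplier and the function itself are the joke’s own arithmetic, not anything rigorous:

```python
def la_rating(hometown_rating, exchange_rate=0.7):
    """Tongue-in-cheek 'LA 7' conversion: a hometown 10 lands in
    Los Angeles as a 7. The 0.7 rate is invented by the joke."""
    return round(hometown_rating * exchange_rate, 1)

print(la_rating(10))  # 7.0 -- the canonical 'LA 7'
print(la_rating(8))   # 5.6 -- and it only gets crueller from there
```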

But don’t be fooled—New York City is no better. Where L.A. is obsessed with beauty, NYC worships hustle. The city doesn’t just believe it’s important; it believes it’s the only place on earth where anything important happens. While L.A. is out perfecting its tan, NYC is busy perfecting its reputation as the cultural and intellectual capital of the world—or, at least, its part of the world, which conveniently ends somewhere in Connecticut.

This mindset is best summed up by that sanctimonious mantra, ‘If you can make it here, you can make it anywhere.’ Translation: if you survive the daily humiliation of paying $4,000 a month for a shoebox apartment while dodging both rats and an existential crisis, you’ve unlocked the secret to life itself. New York isn’t about looking good; it’s about enduring bad conditions and then boasting about it as if suffering were an Olympic sport. In this worldview, the rest of the world is simply an unworthy understudy in NYC’s perpetual Broadway production.

And here’s the thing: neither city can resist taking cheap shots at the other. L.A. dismisses NYC as a grim, grey treadmill where fun goes to die, while NYC scoffs at L.A. as a vapid bubble of avocado toast and Instagram filters. It’s brains versus beauty, grit versus glamour, black turtlenecks versus Lululemon. And yet, in their relentless need to outshine one another, they reveal a shared truth: both are equally narcissistic.

This mutual self-obsession is as exhausting as it is entertaining. While L.A. and NYC bicker over who wears the crown, the rest of the world is quietly rolling its eyes and enjoying a life unencumbered by astronomical rent or the constant pressure to appear important. The people of Iowa, for example, couldn’t care less if they’re an ‘LA 7’ or if they’ve “made it” in New York. They’re too busy living comfortably, surrounded by affordable housing and neighbours who might actually help them move a sofa.

But let’s give credit where it’s due. For all their flaws, these two cities do keep the rest of us entertained. Their constant self-aggrandisement fuels the cultural zeitgeist: without L.A., we’d have no Kardashians; without NYC, no Broadway. Their rivalry is the stuff of legend, a never-ending soap opera in which both cities play the lead role.

So, let them have their delusions of grandeur. After all, the world needs a little drama—and nobody does it better than the cities that think they’re the centre of it.

Complexity and Chaos

I participated in a thread recently. It prompted me to create a diagramme* to explain. This one:

I also wrote an article, but I’d like to share more here. As I was drafting this, I went through several iterations. This is just where I landed at the end. I’ll walk through it.

Keep It Simple, Stupid

Let’s begin at the origin. At the start, we’ve got simplicity in all directions. These are ingredients and building blocks. Nothing fancy. Perhaps these are atoms or Legos. They are easy to describe and easy to reproduce.

It’s Complicated

One can remain simple or venture up the Y-axis or out on the X-axis. Let’s go right. Eventually, we leave the land of Simplicity and cross into the land of Complication. This might be thought of as a linear journey, a land of addition and multiplication. Instead of a Lego piece, a single cell, or an atom, we’ve assembled structures from building blocks. Perhaps a Lego car or house. Perhaps a Death Star or Hogwarts. Perhaps we’ve gone from a single-celled organism to multicellular organisms or from an atom to a molecule or even a cluster of molecules. We’ve gone from a couple of hydrogens and an oxygen to a water molecule. Repeat this enough and we’ve got a glass of water – or an ocean.

In a nutshell, simple and complicated objects are predictable, designable, and controllable.

It’s Complex

Let’s travel up the Y-axis instead. Leaving the land of Simplicity, we end up in Complexity. Here, all bets are off. We are in a land of probability and uncertainty. Combining simple components yields unpredictable results, emergent properties, and self-organisation. Life is not linear on this path, so small differences in inputs can create wildly different outputs in both magnitude and direction.

Moreover, once a thing becomes too complicated, it can no longer become complex. Emergence is not for the complicated.

Emergence is largely reserved for the complex, but where simple transitions to complicated, there is a tiny window that may allow both complicated and complex.
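This sensitivity – tiny input differences producing wildly different outputs – has a standard toy illustration in the logistic map. The sketch below is my own example (the map and the parameter r = 4, its classic chaotic regime, are my choices, not part of the diagram): two trajectories started a billionth apart soon bear no resemblance to one another.

```python
# Two logistic-map trajectories, x -> r*x*(1-x), with r = 4 (the classic
# chaotic regime). A perturbation of one billionth is enough to decouple them.

def trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning every intermediate value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.3)
b = trajectory(0.3 + 1e-9)

# Early on the paths agree to many decimal places; by the end they
# have completely decorrelated.
early_gap = abs(a[1] - b[1])
late_gap = max(abs(x - y) for x, y in zip(a[-10:], b[-10:]))
print(f"early gap: {early_gap:.2e}, late gap: {late_gap:.3f}")
```

In the complicated world, a billionth of difference stays a billionth; in the complex world, it compounds until prediction is off the table.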

The Rise of AI: Why the Rote Professions Are on the Chopping Block

Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.

Rote Professions in AI’s Crosshairs

AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.

  1. Medicine: The Diagnosis Factory
    Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news.
    Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
  2. Law: The Precedent Machine
    Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
  3. Judges: Justice, Blind and Algorithmic
    The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice.
    Wouldn’t justice be more blind without a human in the equation?
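The “diagnosis factory” framing above – matching reported symptoms against known profiles – can be caricatured in a few lines. The condition table and the similarity scoring here are invented purely for illustration; they are not medical knowledge or any real system’s method:

```python
# Toy caricature of "diagnosis as pattern matching": score candidate
# conditions by symptom overlap. The table below is invented for
# illustration only -- it is not medical advice or a real knowledge base.

CONDITIONS = {
    "flu": {"fever", "cough", "aches"},
    "cold": {"cough", "sneezing", "sore throat"},
    "allergy": {"sneezing", "itchy eyes"},
}

def rank_diagnoses(symptoms):
    """Rank conditions by Jaccard similarity to the reported symptom set."""
    scores = {
        name: len(symptoms & profile) / len(symptoms | profile)
        for name, profile in CONDITIONS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses({"fever", "cough", "aches"})[0][0])  # flu
```

The point of the caricature: to the extent a task reduces to table lookup and similarity scoring, it is exactly the kind of rote work the essay argues is automatable.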

The Techs Will Rise

Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:

  • Collect and input data into AI systems.
  • Act as liaisons between AI outputs and human clients or patients.
  • Provide emotional support—something AI still struggles to deliver effectively.

The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?

But Can AI Handle This? A Call for Logic Layers

AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.

  • In law, this could ensure AI judgments align with precedent and statute.
  • In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.

A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
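As a rough sketch of what such a logic layer might look like – the rule set, field names, and knowledge base below are illustrative assumptions, not any real product’s API – the idea is simply to verify that a conclusion is entailed by the sources it cites before it reaches a user:

```python
# Hypothetical sketch of a "logic layer" that vets an AI system's output.
# All structures here are invented for illustration.

def logic_layer(conclusion, knowledge_base):
    """Return (approved, reasons): does the conclusion follow from its inputs?"""
    reasons = []
    # Check 1: every cited source must actually exist in the knowledge base
    # (this is the guard against hallucinated citations).
    for cite in conclusion["citations"]:
        if cite not in knowledge_base["sources"]:
            reasons.append(f"unknown citation: {cite}")
    # Check 2: the recommendation must be one the cited sources support.
    supported = set()
    for cite in conclusion["citations"]:
        supported.update(knowledge_base["sources"].get(cite, []))
    if conclusion["recommendation"] not in supported:
        reasons.append("recommendation not entailed by cited sources")
    return (not reasons, reasons)

kb = {"sources": {"protocol-17": ["drug A 10mg", "drug A 20mg"],
                  "protocol-22": ["drug B 5mg"]}}

ok, why = logic_layer({"recommendation": "drug A 20mg",
                       "citations": ["protocol-17"]}, kb)
print(ok)          # the citation exists and supports the dose

bad, why_bad = logic_layer({"recommendation": "drug C 50mg",
                            "citations": ["protocol-99"]}, kb)
print(bad, why_bad)  # rejected, with both failure reasons listed
```

A real deployment would obviously need far richer entailment checking, but the architectural point stands: the verifier sits outside the model, so its guarantees don’t depend on the model being right.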

Resistance and the Real Battle Ahead

Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?

The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. The transformation will be swift and irreversible once safeguards are implemented and AI earns confidence.

Critical Thinking: The Human Stronghold

Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.

It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.

Blinded by Bias: The Irony of Greed and Self-Perception

Greed is a vice we readily recognise in others but often overlook in ourselves. This selective perception was strikingly evident during a recent conversation I had with a man who was quick to condemn another’s greed while remaining oblivious to his own similar tendencies. I told him about the escalating greed of certain companies that profit handsomely from selling their branded printer ink and toner. I’ll spare you the full history. This encounter underscores the powerful influence of fundamental attribution bias on our judgments and self-awareness.

Exploring Greed

Greed can be defined as an intense and selfish desire for something, especially wealth, power, or food. Psychologically, it is considered a natural human impulse that, when unchecked, can lead to unethical behaviour and strained relationships. Societally, greed is often condemned, yet it persists across cultures and histories.

We tend to label others as greedy when their actions negatively impact us or violate social norms. However, when we aggressively pursue our interests, we might frame it as ambition or resourcefulness. This dichotomy reveals a discrepancy in how we perceive greed in ourselves versus others.

Understanding Fundamental Attribution Bias

Fundamental attribution bias, or fundamental attribution error, is the tendency to attribute others’ actions to their character while attributing our own actions to external circumstances. This cognitive bias allows us to excuse our behaviour while holding others fully accountable for theirs.

For example, if someone cuts us off in traffic, we might think they’re reckless or inconsiderate. But if we cut someone off, we might justify it by claiming we were late or didn’t see them. This bias preserves our self-image but distorts our understanding of others.

The Conversation

Our conversation centred on an HP printer that has shown a ‘low ink – please replace’ message since the cartridge was first installed. I recounted the history of the ink and toner industry. HP had a monopoly on ink for their products, a situation that earned them substantial margins. Upstarts entered the marketplace, starting an escalating arms race. HP spent R&D dollars defending their profit margins with nil benefit to the consumers of their products. In fact, it kept costs artificially high. Competitors who wanted a slice of those fat margins found ways around these interventions. Eventually, HP installed chips on their toner cartridges. Unfortunately, the chips have a bug – or is it a feature? If you install a cartridge and then remove it, the printer assumes you’re up to something shady, so it spawns this false alert. Some people take the alert at face value and buy a replacement, so HP benefits twice.

If this bloke had worked for HP and had been responsible for revenue acquisition and protection, he would have swooned over the opportunity. Have no doubt. At arm’s length, he recognised this as sleazy, unethical business practices.

This conversation revealed how easily we can fall into the trap of judging others without reflecting on our own behaviour. His indignation seemed justified to him, yet he remained unaware of how his actions mirrored those he criticised.

Biblical Reference and Moral Implications

This situation brings to mind the biblical passage from Matthew 7:3-5:

“Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the plank in your own eye? … You hypocrite, first take the plank out of your own eye, and then you will see clearly to remove the speck from your brother’s eye.”

The verse poignantly captures the human tendency to overlook our flaws while magnifying those of others. It calls for introspection and humility, urging us to address our shortcomings before passing judgment.

The Asymmetry of Self-Perception

Several psychological factors contribute to this asymmetry:

  • Self-Serving Bias: We attribute our successes to internal factors and our failures to external ones.
  • Cognitive Dissonance: Conflicting beliefs about ourselves and our actions create discomfort, leading us to rationalise or ignore discrepancies.
  • Social Comparison: We often compare ourselves favourably against others to boost self-esteem.

This skewed self-perception can hinder personal growth and damage relationships, as it prevents honest self-assessment and accountability.

Overcoming the Bias

Awareness is the first step toward mitigating fundamental attribution bias. Here are some strategies:

  1. Mindful Reflection: Regularly assess your actions and motivations. Ask yourself if you’re holding others to a standard you’re not meeting. Riffing on ancient moral dictates, ask yourself whether this is how you would want to be treated, or adopt Kant’s categorical imperative as a framework.
  2. Seek Feedback: Encourage honest input from trusted friends or colleagues about your behaviour.
  3. Empathy Development: Practice seeing situations from others’ perspectives to understand their actions more fully.
  4. Challenge Assumptions: Before making judgments, consider external factors that might influence someone’s behaviour.

By actively recognising and adjusting for our biases, we can develop more balanced perceptions of ourselves and others.

Conclusion

The irony of condemning in others what we excuse in ourselves is a common human pitfall rooted in fundamental attribution bias. The adage ‘Know thyself’ applies here. We can overcome these biases by striving for self-awareness and empathy, leading to more authentic relationships and personal integrity.

Dune: Prophecy – Eugenics, Lies, and Weak CGI

So, you watched Dune: Prophecy episode 1 on HBO Max. Congratulations on your bravery. Let’s face it—Dune adaptations are a minefield. Remember David Lynch’s Dune? Of course you do, because it’s impossible to unsee Sting in that ridiculous winged codpiece. And whilst Denis Villeneuve’s recent entries managed to elevate the franchise from high-school drama club aesthetics to actual cinema, they also came dangerously close to being too good—almost like Dune took itself seriously.

And now, here we are, back on shaky ground with Dune: Prophecy. Sure, the first episode was watchable, despite some environmental CGI that looks like it came out of a Sims expansion pack. But this isn’t a film review channel, so let’s dive into the show’s actual content—or, as I like to call it, The Philosophy 101 Drinking Game.


Eugenics: Creepy, Even by Dune Standards

Ah, eugenics. Nothing screams cosy sci-fi night in like a narrative steeped in genetic elitism. The Bene Gesserit’s obsessive fixation on a “pure bloodline” takes centre stage, making you wonder if they’re auditioning for a dystopian version of Who Do You Think You Are?. Creepy is putting it mildly. It’s all very master race, but with better posture and less obvious moustaches.


Righteousness vs. Power: The Valya Harkonnen Show

Valya Harkonnen is an enigma—or perhaps just your classic power-hungry sociopath cloaked in the silky veil of duty. Is she righteous? Maybe. Is she using morality as a smokescreen for her own ambition? Absolutely. Watching her wrestle with her supposed “deontological duty” to the sisterhood is like watching a cat pretend it cares about knocking over your wine glass. Sure, it’s interesting, but it’s also patently obvious there’s an ulterior motive.

Her quest for power is unmistakable. But here’s the kicker: the sisterhood needs someone like her. Systems, after all, fight to survive, and Valya is just the ruthless gladiator they require. Whether her motives are noble or nefarious is irrelevant because survival trumps all in the Dune universe. Her arc underscores the show’s recurring obsession with false dichotomies—righteousness versus calculated ambition. It’s not “one or the other,” folks. It’s always both.


Progress as a Façade

Progress, Dune-style, is a beautifully brutal illusion. One group’s advancement always comes at another’s expense, a message that’s summed up perfectly by the episode’s pull quote: “Adversity Always Lies in the Path of Advancement.” In other words, progress is just oppression with better PR. It’s a meta-narrative as old as civilisation, and Dune leans into it with an almost smug glee.


Lies, Manipulation, and the Human Condition

If humanity’s greatest weapon is the lie, then the Bene Gesserit are armed to the teeth. For a group that claims to seek truth, they certainly have no qualms about spinning elaborate deceptions. Case in point: the mind games encapsulated by “You and I remember things differently.” It’s a phrase so loaded with gaslighting potential it should come with a trigger warning.

This manipulation isn’t just a tool; it’s the cornerstone of their ethos. Truth-seeking? Sure. But only if the “truth” serves their interests. It’s classic utilitarianism: the ends justify the means, even if those means involve rewriting history—or someone else’s memory.


Fatalism, Virtue Ethics, and the Inescapable Past

The Dune universe loves a good dose of fatalism, and Prophecy is no exception. The idea that “our past always finds us” is hammered home repeatedly as characters grapple with choices, bloodlines, and cultural memory. It’s as though everyone is permanently stuck in a Freudian therapy session, doomed to relive ancestral traumas ad infinitum. In this world, identity is less a personal construct and more a hand-me-down curse.


Self-Discipline and Sacrifice: The Dune Holy Grail

Finally, we come to self-discipline and sacrifice, the twin pillars of Dune’s moral framework. Whether voluntarily undertaken or brutally enforced, these themes dominate the narrative. It’s a trope as old as time, but it works because it’s relatable—who among us hasn’t sacrificed something important for an uncertain future? Of course, in Dune, that sacrifice usually involves something more dramatic than skipping dessert. Think more along the lines of betraying allies, murdering rivals, or, you know, manipulating an entire galaxy.


The Verdict

Dune: Prophecy has potential. It’s rich in philosophical musings, political intrigue, and that uniquely Dune blend of high drama and existential dread. Sure, the CGI needs work, and some of the dialogue could use an upgrade (how about less exposition, more nuance?), but there’s enough meat here to keep you chewing. Whether it evolves into something truly epic—or collapses under the weight of its own ambition—remains to be seen. Either way, it’s worth watching, if only to see how far humanity’s greatest weapon—the lie—can take the sisterhood.

Exploring Antinatalist Philosophies

A Comparative Analysis of Sarah Perry, Emil Cioran, and Contemporaries

In a world where procreation is often celebrated as a fundamental human aspiration, a group of philosophers challenges this deeply ingrained belief by questioning the ethical implications of bringing new life into existence. Antinatalism, the philosophical stance that posits procreation is morally problematic due to the inherent suffering embedded in life, invites us to reexamine our assumptions about birth, existence, and the value we assign to life itself.

Audio: Podcast related to the content on this page

Central to this discourse are thinkers like Sarah Perry, whose work “Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide” intertwines the ethics of procreation with the right to die, emphasising personal autonomy and critiquing societal norms. Alongside Perry, philosophers such as Emil Cioran, David Benatar, Thomas Ligotti, and Peter Wessel Zapffe offer profound insights into the human condition, consciousness, and our existential burdens.

This article delves into the complex and often unsettling arguments presented by these philosophers, comparing and contrasting their perspectives on antinatalism. By exploring their works, we aim to shed light on the profound ethical considerations surrounding birth, suffering, and autonomy over one’s existence.

The Inherent Suffering of Existence

At the heart of antinatalist philosophy lies the recognition of life’s intrinsic suffering. This theme is a common thread among our featured philosophers, each articulating it through their unique lenses.

Sarah Perry argues that suffering is an unavoidable aspect of life, stemming from physical ailments, emotional pains, and existential anxieties. In “Every Cradle is a Grave,” she states:

“Existence is imposed without consent, bringing inevitable suffering.”

Perry emphasises that since every human will experience hardship, bringing a new person into the world exposes them to harm they did not choose.

Similarly, David Benatar, in his seminal work “Better Never to Have Been: The Harm of Coming into Existence,” presents the asymmetry argument. He posits that coming into existence is always a harm:

“Coming into existence is always a serious harm.”

Benatar reasons that while the absence of pain is good even if no one benefits from it, the absence of pleasure is not bad unless there is someone for whom this absence is a deprivation. Therefore, non-existence spares potential beings from suffering without depriving them of pleasures they would not miss.

Emil Cioran, a Romanian philosopher known for his profound pessimism, delves deep into the despair inherent in life. In “The Trouble with Being Born,” he reflects:

“Suffering is the substance of life and the root of personality.”

Cioran’s aphoristic musings suggest that life’s essence is intertwined with pain, and acknowledging this is crucial to understanding our existence.

Thomas Ligotti, blending horror and philosophy in “The Conspiracy Against the Human Race,” portrays consciousness as a cosmic error:

“Consciousness is a mistake of evolution.”

Ligotti argues that human awareness amplifies suffering, making us uniquely burdened by the knowledge of our mortality and the futility of our endeavours.

Peter Wessel Zapffe, in his essay “The Last Messiah,” examines how human consciousness leads to existential angst:

“Man is a biological paradox due to excessive consciousness.”

Zapffe contends that our heightened self-awareness results in an acute recognition of life’s absurdities, causing inevitable psychological suffering.



Ethics of Procreation

Building upon the acknowledgement of life’s inherent suffering, these philosophers explore the moral dimensions of bringing new life into the world.

Sarah Perry focuses on the issue of consent. She argues that since we cannot obtain consent from potential beings before birth, procreation imposes life—and its accompanying suffering—upon them without their agreement. She writes:

“Procreation perpetuates harm by introducing new sufferers.”

Perry challenges the societal norm that views having children as an unquestioned good, highlighting parents’ moral responsibility for the inevitable pain their children will face.

David Benatar extends this ethical concern through his asymmetry argument, suggesting that non-existence is preferable. He explains that while the absence of pain is inherently good, the absence of pleasure is not bad because no one exists to be deprived of it. Bringing someone into existence who will undoubtedly experience suffering is therefore a moral harm.

Emil Cioran questions the value of procreation given the futility and despair inherent in life. While not explicitly formulating an antinatalist argument, his reflections imply scepticism about the act of bringing new life into a suffering world.

Peter Wessel Zapffe proposes that refraining from procreation is a logical response to the human condition. By not having children, we can halt the perpetuation of existential suffering. He suggests that humanity’s self-awareness is a burden that should not be passed on to future generations.

The Right to Die and Autonomy over Existence

A distinctive aspect of Sarah Perry’s work is her advocacy for the right to die. She asserts that just as individuals did not consent to be born into suffering, they should have the autonomy to choose to end their lives. Perry critiques societal and legal barriers that prevent people from exercising this choice, arguing:

“Autonomy over one’s life includes the right to die.”

By decriminalising and destigmatising suicide, she believes society can respect individual sovereignty and potentially alleviate prolonged suffering.

Emil Cioran contemplates suicide not necessarily as an action to be taken but as a philosophical consideration. In “On the Heights of Despair,” he muses:

“It is not worth the bother of killing yourself, since you always kill yourself too late.”

Cioran views the option of ending one’s life as a paradox that underscores the absurdity of existence.

While Benatar, Ligotti, and Zapffe acknowledge the despair that can accompany life, they do not extensively advocate for the right to die. Their focus remains on the ethical implications of procreation and the existential burdens of consciousness.

Coping Mechanisms and Societal Norms

Peter Wessel Zapffe delves into how humans cope with the existential angst resulting from excessive consciousness. He identifies four defence mechanisms:

  1. Isolation: Repressing disturbing thoughts from consciousness.
  2. Anchoring: Creating or adopting values and ideals to provide meaning.
  3. Distraction: Engaging in activities to avoid self-reflection.
  4. Sublimation: Channelling despair into creative or intellectual pursuits.

According to Zapffe, these mechanisms help individuals avoid confronting life’s inherent meaninglessness.

Thomas Ligotti echoes this sentiment, suggesting that optimism is a psychological strategy to cope with the horror of existence. He writes:

“Optimism is a coping mechanism against the horror of existence.”

Sarah Perry and Emil Cioran also critique societal norms that discourage open discussions about suffering, death, and the choice not to procreate. They argue that societal pressures often silence individuals who question the value of existence, thereby perpetuating cycles of unexamined procreation and stigmatising those who consider alternative perspectives.

Comparative Insights

While united in their acknowledgement of life’s inherent suffering, these philosophers approach antinatalism and existential pessimism through varied lenses.

  • Sarah Perry emphasises personal autonomy and societal critique, advocating for policy changes regarding birth and suicide.
  • Emil Cioran offers a deeply personal exploration of despair, using poetic language to express the futility he perceives in existence.
  • David Benatar provides a structured, logical argument against procreation, focusing on the ethical asymmetry between pain and pleasure.
  • Thomas Ligotti combines horror and philosophy to illustrate the bleakness of consciousness and its implications for human suffering.
  • Peter Wessel Zapffe analyses the psychological mechanisms humans employ to avoid confronting existential angst.

Critiques and Counterarguments

Critics of antinatalism often point to an overemphasis on suffering, arguing that it neglects the joys, love, and meaningful experiences that life can offer. They contend that while suffering is a part of life, it is not the totality of existence.

In response, antinatalist philosophers acknowledge the presence of pleasure but question whether it justifies the inevitable suffering every person will face. Benatar argues that while positive experiences are good, they do not negate the harm of the suffering that coming into existence guarantees.

Regarding the right to die, opponents express concern over the potential neglect of mental health issues. They worry that normalising suicide could prevent individuals from seeking help and support that might alleviate their suffering.

Sarah Perry addresses this by emphasising the importance of autonomy and the need for compassionate support systems. She advocates for open discussions about suicide to better understand and assist those contemplating it rather than stigmatising or criminalising their considerations.

Societal and Cultural Implications

These philosophers’ works challenge pro-natalist biases ingrained in many cultures. By questioning the assumption that procreation is inherently positive, they open a dialogue about the ethical responsibilities associated with bringing new life into the world.

Sarah Perry critiques how society glorifies parenthood while marginalizing those who choose not to have children. She calls for reevaluating societal norms that pressure individuals into procreation without considering the ethical implications.

Similarly, Emil Cioran and Thomas Ligotti highlight how societal denial of life’s inherent suffering perpetuates illusions that hinder genuine understanding and acceptance of the human condition.

Conclusion

The exploration of antinatalist philosophy through the works of Sarah Perry, David Benatar, Emil Cioran, Thomas Ligotti, and Peter Wessel Zapffe presents profound ethical considerations about life, suffering, and personal autonomy. Their arguments compel us to reflect on the nature of existence and the responsibilities we bear in perpetuating life.

While one may not fully embrace antinatalist positions, engaging with these ideas challenges us to consider the complexities of the human condition. It encourages a deeper examination of our choices, the societal norms we accept, and how we confront or avoid the fundamental truths about existence.

Final Thoughts

These philosophers’ discussions are not merely abstract musings but have real-world implications for how we live our lives and make decisions about the future. Whether it’s rethinking the ethics of procreation, advocating for personal autonomy over life and death, or understanding the coping mechanisms we employ, their insights offer valuable perspectives.

By bringing these often-taboo topics into the open, we can foster a more compassionate and thoughtful society that respects individual choices and acknowledges the full spectrum of human experience.

Encouraging Dialogue

As we conclude this exploration, readers are invited to reflect on their own beliefs and experiences. Engaging in open, respectful discussions about these complex topics can lead to greater understanding and empathy.

What are your thoughts on the ethical considerations of procreation? How do you perceive the balance between life’s joys and its inherent suffering? Share your perspectives and join the conversation.


References and Further Reading

  • Perry, Sarah. Every Cradle Is a Grave: Rethinking the Ethics of Birth and Suicide. Nine-Banded Books, 2014.
  • Benatar, David. Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press, 2006.
  • Cioran, Emil. The Trouble with Being Born. 1973. Translated by Richard Howard. Arcade Publishing.
  • Ligotti, Thomas. The Conspiracy Against the Human Race. Hippocampus Press, 2010.
  • Zapffe, Peter Wessel. “The Last Messiah.” 1933. English translation in Philosophy Now, 2004.

For more in-depth analyses and reviews, consider exploring the following blog posts:

  • Book Review: Better Never to Have Been (Link)
  • Book Review: The Conspiracy Against the Human Race (Link)
  • Reading ‘The Last Messiah’ by Peter Zapffe (Link)

Note to Readers

This ChatGPT o1-generated article aims to thoughtfully and respectfully present the philosophical positions on antinatalism and existential pessimism. The discussions about suffering, procreation, and the right to die are complex and sensitive. If you or someone you know is struggling with such thoughts, please seek support from mental health professionals or trusted individuals in your community.

Next Steps

Based on reader interest and engagement, future articles may delve deeper into individual philosophers’ works, explore thematic elements such as consciousness and suffering, or address counterarguments in more detail. Your feedback and participation are valuable in shaping these discussions.

Let us continue this journey of philosophical exploration together.