Where you from, Homie?

This skit is a comical take on how in-group language fails to translate across group boundaries. It’s a couple of years old, so you may have seen it before.

This video illustrates how easy it is for miscommunication to occur in mixed-group settings.
Trigger Warning: The humour is a bit weak and the focus is on stereotypes. If this isn’t quite up your street, just move on. Nothing to see here.

Guns, Germs, and Steel

I am reading Jared Diamond’s Guns, Germs, and Steel: The Fates of Human Societies, the first and likely most famous of an informal trilogy. I thought I had already read it, but I think I only saw the PBS series. Having recently finished Josephine Quinn’s How the World Made the West, I wanted to revisit this perspective. The two books differ in style and perspective, yet they seem complementary.

Where Diamond focuses on environmental factors (an oft-voiced critique), Quinn focuses on human agency.

Diamond takes a bird’s-eye view, looking for universal patterns and systemic explanations, whilst Quinn adopts a granular, specific approach, highlighting the fluidity and contingency of history.

Diamond deconstructs European dominance by attributing it to environmental luck, but his narrative risks sidelining the agency of colonised peoples. Quinn critiques the very idea of Western dominance, arguing that the concept of the West itself is a myth born of appropriation and exchange.

Rather than being wholly opposed, Diamond and Quinn’s approaches might be seen as complementary. Diamond provides the structural scaffolding – the environmental and geographic conditions that shape societies – whilst Quinn fills in the cultural and human dynamics that Diamond often glosses over. Together, they represent two sides of the historiographical coin: one focusing on systemic patterns, the other on the messiness of cultural particularities.

Quinn’s approach is more aligned with The Dawn of Everything: A New History of Humanity, co-authored by anthropologist David Graeber and archaeologist David Wengrow, if that serves as a useful reference point.

The Narcissist’s Playbook

I’ve lived in Los Angeles a couple of times for a sum total of perhaps 15 years. The first time, I loved it. The next time, I was running on fumes. The first time, I was in my twenties – the second time in my forties. What a difference perspective and ageing make. In my twenties, I was a pretty-boy punk-ass who owned the club scene on the Strip. In my forties, I was a wage slave.

Audio: NotebookLM podcast on this topic.

This morning, I heard a country song on Insta with a line claiming ‘there are nines and dimes in all 50’, and it reminded me of a phrase we used when I lived in Los Angeles – ‘LA 7’. The term is built on the egotistical, sexist notion that if you were a 10, you’d have already moved to LA. If you still lived in, say, Iowa and were considered a 10, the exchange rate to LA would be a 7.

Then, I thought about the LA-NYC rivalry and wrote this article with some help from ChatGPT.

How L.A. and NYC Became the Centres of the Universe (According to Them)

It is a truth universally acknowledged that Los Angeles and New York City—those bickering siblings of American exceptionalism—believe themselves to be the sun around which the rest of us drearily orbit. Each is utterly convinced of its centrality to the human experience, and neither can fathom that people outside their borders might actually exist without yearning to be them. This is the essence of the ‘Centre of the Universe Complex,’ a condition in which self-importance metastasises into a full-blown cultural identity.

Let us begin with Los Angeles, the influencer of cities. L.A. doesn’t merely think it’s the centre of the universe; it believes it’s the universe, replete with its own atmosphere of smog-filtered sunlight and an economy powered entirely by dreams, green juice, and Botox. For L.A., beauty isn’t just a priority—it’s a moral imperative. Hence the concept of the ‘L.A. 10,’ a stunningly arrogant bit of mathematics whereby physical attractiveness is recalculated based on proximity to the Pacific Coast Highway.

Here’s how it works: a ’10’ in some picturesque-but-hopelessly-provincial state, say Nebraska, is automatically downgraded to a ‘7’ upon arrival in Los Angeles. Why? Because, according to L.A.’s warped arithmetic, if she were a real 10, she’d already be there, lounging by an infinity pool in Malibu and ignoring your DMs. This isn’t just vanity—it’s top-tier delusion. L.A. sees itself as a black hole of good looks, sucking the beautiful people from every corner of the earth while leaving the ‘merely pretty’ to languish in flyover country. The Midwest, then, isn’t so much a place as it is an agricultural waiting room for future Angelenos.

But don’t be fooled—New York City is no better. Where L.A. is obsessed with beauty, NYC worships hustle. The city doesn’t just believe it’s important; it believes it’s the only place on earth where anything important happens. While L.A. is out perfecting its tan, NYC is busy perfecting its reputation as the cultural and intellectual capital of the world—or, at least, its part of the world, which conveniently ends somewhere in Connecticut.

This mindset is best summed up by that sanctimonious mantra, ‘If you can make it here, you can make it anywhere.’ Translation: if you survive the daily humiliation of paying $4,000 a month for a shoebox apartment while dodging both rats and an existential crisis, you’ve unlocked the secret to life itself. New York isn’t about looking good; it’s about enduring bad conditions and then boasting about it as if suffering were an Olympic sport. In this worldview, the rest of the world is simply an unworthy understudy in NYC’s perpetual Broadway production.

And here’s the thing: neither city can resist taking cheap shots at the other. L.A. dismisses NYC as a grim, grey treadmill where fun goes to die, while NYC scoffs at L.A. as a vapid bubble of avocado toast and Instagram filters. It’s brains versus beauty, grit versus glamour, black turtlenecks versus Lululemon. And yet, in their relentless need to outshine one another, they reveal a shared truth: both are equally narcissistic.

This mutual self-obsession is as exhausting as it is entertaining. While L.A. and NYC bicker over who wears the crown, the rest of the world is quietly rolling its eyes and enjoying a life unencumbered by astronomical rent or the constant pressure to appear important. The people of Iowa, for example, couldn’t care less if they’re an ‘LA 7’ or if they’ve ‘made it’ in New York. They’re too busy living comfortably, surrounded by affordable housing and neighbours who might actually help them move a sofa.

But let’s give credit where it’s due. For all their flaws, these two cities do keep the rest of us entertained. Their constant self-aggrandisement fuels the cultural zeitgeist: without L.A., we’d have no Kardashians; without NYC, no Broadway. Their rivalry is the stuff of legend, a never-ending soap opera in which both cities play the lead role.

So, let them have their delusions of grandeur. After all, the world needs a little drama—and nobody does it better than the cities that think they’re the centre of it.

The Rise of AI: Why the Rote Professions Are on the Chopping Block

Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.

Rote Professions in AI’s Crosshairs

AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.

  1. Medicine: The Diagnosis Factory
    Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news.
    Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
  2. Law: The Precedent Machine
    Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
  3. Judges: Justice, Blind and Algorithmic
    The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice.
    Wouldn’t justice be more blind without a human in the equation?

The Techs Will Rise

Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:

  • Collect and input data into AI systems.
  • Act as liaisons between AI outputs and human clients or patients.
  • Provide emotional support—something AI still struggles to deliver effectively.

The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?

But Can AI Handle This? A Call for Logic Layers

AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.

  • In law, this could ensure AI judgments align with precedent and statute.
  • In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.

A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
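The two-layer scheme described above can be sketched in a few lines of Python. Everything here – the claim names, the rule base, the fact store – is an invented placeholder for illustration, not a real system: the point is simply that a logic check (does the conclusion follow from its cited inputs?) and a fact check (are the cited inputs actually established?) are separate gates, and both must pass.

```python
from dataclasses import dataclass

@dataclass
class Conclusion:
    claim: str
    cited_inputs: list  # the evidence the AI says supports the claim

# Hypothetical rule base: each claim is only warranted if all of its
# required supporting facts appear among the cited inputs.
REQUIRED_SUPPORT = {
    "prescribe_drug_x": {"diagnosis_confirmed", "no_known_allergy"},
}

# Hypothetical fact store: the inputs we have independently verified.
KNOWN_FACTS = {"diagnosis_confirmed", "no_known_allergy", "patient_over_18"}

def logic_layer(c: Conclusion) -> bool:
    """Does the conclusion follow rationally from what it cites?"""
    required = REQUIRED_SUPPORT.get(c.claim, set())
    return required.issubset(set(c.cited_inputs))

def fact_layer(c: Conclusion) -> bool:
    """Is every cited input an established fact (no hallucinations)?"""
    return all(f in KNOWN_FACTS for f in c.cited_inputs)

def verify(c: Conclusion) -> bool:
    # Both gates must pass before the AI's conclusion is accepted.
    return logic_layer(c) and fact_layer(c)
```

A conclusion that cites all its required support from known facts passes; one that skips a required premise fails the logic layer, and one that cites an invented ‘fact’ passes the logic check but fails the fact check – exactly the division of labour the two layers are meant to provide.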

Resistance and the Real Battle Ahead

Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?

The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. Once safeguards are in place and AI earns the public’s confidence, the transformation will be swift and irreversible.

Critical Thinking: The Human Stronghold

Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.

It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a robe or a white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.

Blinded by Bias: The Irony of Greed and Self-Perception

Greed is a vice we readily recognise in others but often overlook in ourselves. This selective perception was strikingly evident during a recent conversation I had with a man who was quick to condemn another’s greed while remaining oblivious to his own similar tendencies. I told him about the escalating greed of certain companies that profit handsomely from their printer ink and toner brands – more on that below. The encounter underscores the powerful influence of fundamental attribution bias on our judgments and self-awareness.

Exploring Greed

Greed can be defined as an intense and selfish desire for something, especially wealth, power, or food. Psychologically, it is considered a natural human impulse that, when unchecked, can lead to unethical behaviour and strained relationships. Societally, greed is often condemned, yet it persists across cultures and histories.

We tend to label others as greedy when their actions negatively impact us or violate social norms. However, when we aggressively pursue our interests, we might frame it as ambition or resourcefulness. This dichotomy reveals a discrepancy in how we perceive greed in ourselves versus others.

Understanding Fundamental Attribution Bias

Fundamental attribution bias, or fundamental attribution error, is the tendency to attribute others’ actions to their character while attributing our own actions to external circumstances. This cognitive bias allows us to excuse our behaviour while holding others fully accountable for theirs.

For example, if someone cuts us off in traffic, we might think they’re reckless or inconsiderate. But if we cut someone off, we might justify it by claiming we were late or didn’t see them. This bias preserves our self-image but distorts our understanding of others.

The Conversation

Our conversation centred on an HP printer that has shown a ‘low ink – please replace’ message since the cartridge was first installed. I recounted the history of the ink and toner industry. HP had a monopoly on ink for their products, a situation that earned them substantial profit margins. Upstarts entered the marketplace, starting an escalating arms race. HP spent R&D dollars defending those margins with nil benefit to the consumers of their product. In fact, it kept costs artificially high. Competitors who wanted a slice of those fat margins found ways around these interventions. Eventually, HP installed chips on their toner cartridges. Unfortunately, these have a bug – or is it a feature? If you install a cartridge and then remove it, the chip assumes you’re up to something shady and spawns a false alert. Some people accept this at face value, so HP benefits twice.

If this bloke had worked for HP and been responsible for revenue acquisition and protection, he would have swooned over the opportunity. Have no doubt. At arm’s length, though, he recognised these as sleazy, unethical business practices.

This conversation revealed how easily we can fall into the trap of judging others without reflecting on our own behaviour. His indignation seemed justified to him, yet he remained unaware of how his actions mirrored those he criticised.

Biblical Reference and Moral Implications

This situation brings to mind the biblical passage from Matthew 7:3-5:

“Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the plank in your own eye? … You hypocrite, first take the plank out of your own eye, and then you will see clearly to remove the speck from your brother’s eye.”

The verse poignantly captures the human tendency to overlook our flaws while magnifying those of others. It calls for introspection and humility, urging us to address our shortcomings before passing judgment.

The Asymmetry of Self-Perception

Several psychological factors contribute to this asymmetry:

  • Self-Serving Bias: We attribute our successes to internal factors and our failures to external ones.
  • Cognitive Dissonance: Conflicting beliefs about ourselves and our actions create discomfort, leading us to rationalise or ignore discrepancies.
  • Social Comparison: We often compare ourselves favourably against others to boost self-esteem.

This skewed self-perception can hinder personal growth and damage relationships, as it prevents honest self-assessment and accountability.

Overcoming the Bias

Awareness is the first step toward mitigating fundamental attribution bias. Here are some strategies:

  1. Mindful Reflection: Regularly assess your actions and motivations. Ask yourself if you’re holding others to a standard you’re not meeting. Riffing on ancient moral dictates, ask yourself whether this is how you would want to be treated. Adopt Kant’s categorical imperative as a framework.
  2. Seek Feedback: Encourage honest input from trusted friends or colleagues about your behaviour.
  3. Empathy Development: Practice seeing situations from others’ perspectives to understand their actions more fully.
  4. Challenge Assumptions: Before making judgments, consider external factors that might influence someone’s behaviour.

By actively recognising and adjusting for our biases, we can develop more balanced perceptions of ourselves and others.

Conclusion

The irony of condemning in others what we excuse in ourselves is a common human pitfall rooted in fundamental attribution bias. The ancient adage ‘Know thyself’ applies here. We can overcome these biases by striving for self-awareness and empathy, leading to more authentic relationships and personal integrity.

Exploring Antinatalist Philosophies

A Comparative Analysis of Sarah Perry, Emil Cioran, and Contemporaries

In a world where procreation is often celebrated as a fundamental human aspiration, a group of philosophers challenges this deeply ingrained belief by questioning the ethical implications of bringing new life into existence. Antinatalism, the philosophical stance that posits procreation is morally problematic due to the inherent suffering embedded in life, invites us to reexamine our assumptions about birth, existence, and the value we assign to life itself.

Audio: Podcast related to the content on this page

Central to this discourse are thinkers like Sarah Perry, whose work “Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide” intertwines the ethics of procreation with the right to die, emphasizing personal autonomy and critiquing societal norms. Alongside Perry, philosophers such as Emil Cioran, David Benatar, Thomas Ligotti, and Peter Wessel Zapffe offer profound insights into the human condition, consciousness, and our existential burdens.

This article delves into the complex and often unsettling arguments presented by these philosophers, comparing and contrasting their perspectives on antinatalism. By exploring their works, we aim to shed light on the profound ethical considerations surrounding birth, suffering, and autonomy over one’s existence.

The Inherent Suffering of Existence

At the heart of antinatalist philosophy lies the recognition of life’s intrinsic suffering. This theme is a common thread among our featured philosophers, each articulating it through their unique lenses.

Sarah Perry argues that suffering is an unavoidable aspect of life, stemming from physical ailments, emotional pains, and existential anxieties. In “Every Cradle is a Grave,” she states:

“Existence is imposed without consent, bringing inevitable suffering.”

Perry emphasises that since every human will experience hardship, bringing a new person into the world exposes them to harm they did not choose.

Similarly, David Benatar, in his seminal work “Better Never to Have Been: The Harm of Coming into Existence,” presents the asymmetry argument. He posits that coming into existence is always a harm:

“Coming into existence is always a serious harm.”

Benatar reasons that while the absence of pain is good even if no one benefits from it, the absence of pleasure is not bad unless there is someone for whom this absence is a deprivation. Therefore, non-existence spares potential beings from suffering without depriving them of pleasures they would not miss.

Emil Cioran, a Romanian philosopher known for his profound pessimism, delves deep into the despair inherent in life. In “The Trouble with Being Born,” he reflects:

“Suffering is the substance of life and the root of personality.”

Cioran’s aphoristic musings suggest that life’s essence is intertwined with pain, and acknowledging this is crucial to understanding our existence.

Thomas Ligotti, blending horror and philosophy in “The Conspiracy Against the Human Race,” portrays consciousness as a cosmic error:

“Consciousness is a mistake of evolution.”

Ligotti argues that human awareness amplifies suffering, making us uniquely burdened by the knowledge of our mortality and the futility of our endeavours.

Peter Wessel Zapffe, in his essay “The Last Messiah,” examines how human consciousness leads to existential angst:

“Man is a biological paradox due to excessive consciousness.”

Zapffe contends that our heightened self-awareness results in an acute recognition of life’s absurdities, causing inevitable psychological suffering.



Ethics of Procreation

Building upon the acknowledgement of life’s inherent suffering, these philosophers explore the moral dimensions of bringing new life into the world.

Sarah Perry focuses on the issue of consent. She argues that since we cannot obtain consent from potential beings before birth, procreation imposes life—and its accompanying suffering—upon them without their agreement. She writes:

“Procreation perpetuates harm by introducing new sufferers.”

Perry challenges the societal norm that views having children as an unquestioned good, highlighting parents’ moral responsibility for the inevitable pain their children will face.

David Benatar extends this ethical concern with his asymmetry argument, suggesting that non-existence is preferable. He explains that while the absence of pain is inherently good, the absence of pleasure is not bad because no one is deprived of it. Therefore, bringing someone into existence who will undoubtedly experience suffering is a moral harm.

Emil Cioran questions the value of procreation given the futility and despair inherent in life. While not explicitly formulating an antinatalist argument, his reflections imply scepticism about the act of bringing new life into a suffering world.

Peter Wessel Zapffe proposes that refraining from procreation is a logical response to the human condition. By not having children, we can halt the perpetuation of existential suffering. He suggests that humanity’s self-awareness is a burden that should not be passed on to future generations.

The Right to Die and Autonomy over Existence

A distinctive aspect of Sarah Perry’s work is her advocacy for the right to die. She asserts that just as individuals did not consent to be born into suffering, they should have the autonomy to choose to end their lives. Perry critiques societal and legal barriers that prevent people from exercising this choice, arguing:

“Autonomy over one’s life includes the right to die.”

By decriminalizing and destigmatizing suicide, she believes society can respect individual sovereignty and potentially alleviate prolonged suffering.

Emil Cioran contemplates suicide not necessarily as an action to be taken but as a philosophical consideration. In “On the Heights of Despair,” he muses:

“It is not worth the bother of killing yourself, since you always kill yourself too late.”

Cioran views the option of ending one’s life as a paradox that underscores the absurdity of existence.

While Benatar, Ligotti, and Zapffe acknowledge the despair that can accompany life, they do not extensively advocate for the right to die. Their focus remains on the ethical implications of procreation and the existential burdens of consciousness.

Coping Mechanisms and Societal Norms

Peter Wessel Zapffe delves into how humans cope with the existential angst resulting from excessive consciousness. He identifies four defence mechanisms:

  1. Isolation: Repressing disturbing thoughts from consciousness.
  2. Anchoring: Creating or adopting values and ideals to provide meaning.
  3. Distraction: Engaging in activities to avoid self-reflection.
  4. Sublimation: Channeling despair into creative or intellectual pursuits.

According to Zapffe, these mechanisms help individuals avoid confronting life’s inherent meaninglessness.

Thomas Ligotti echoes this sentiment, suggesting that optimism is a psychological strategy to cope with the horror of existence. He writes:

“Optimism is a coping mechanism against the horror of existence.”

Sarah Perry and Emil Cioran also critique societal norms that discourage open discussions about suffering, death, and the choice not to procreate. They argue that societal pressures often silence individuals who question the value of existence, thereby perpetuating cycles of unexamined procreation and stigmatizing those who consider alternative perspectives.

Comparative Insights

While united in their acknowledgement of life’s inherent suffering, these philosophers approach antinatalism and existential pessimism through varied lenses.

  • Sarah Perry emphasises personal autonomy and societal critique, advocating for policy changes regarding birth and suicide.
  • Emil Cioran offers a deeply personal exploration of despair, using poetic language to express the futility he perceives in existence.
  • David Benatar provides a structured, logical argument against procreation, focusing on the ethical asymmetry between pain and pleasure.
  • Thomas Ligotti combines horror and philosophy to illustrate the bleakness of consciousness and its implications for human suffering.
  • Peter Wessel Zapffe analyzes the psychological mechanisms humans employ to avoid confronting existential angst.

Critiques and Counterarguments

Critics of antinatalism often point to an overemphasis on suffering, arguing that it neglects the joys, love, and meaningful experiences that life can offer. They contend that while suffering is a part of life, it is not the totality of existence.

In response, antinatalist philosophers acknowledge the presence of pleasure but question whether it justifies the inevitable suffering every person will face. Benatar argues that while positive experiences are good, they do not negate the moral harm of bringing someone into existence without their consent.

Regarding the right to die, opponents express concern over the potential neglect of mental health issues. They worry that normalizing suicide could prevent individuals from seeking help and support that might alleviate their suffering.

Sarah Perry addresses this by emphasizing the importance of autonomy and the need for compassionate support systems. She advocates for open discussions about suicide to better understand and assist those contemplating it rather than stigmatizing or criminalizing their considerations.

Societal and Cultural Implications

These philosophers’ works challenge pro-natalist biases ingrained in many cultures. By questioning the assumption that procreation is inherently positive, they open a dialogue about the ethical responsibilities associated with bringing new life into the world.

Sarah Perry critiques how society glorifies parenthood while marginalizing those who choose not to have children. She calls for reevaluating societal norms that pressure individuals into procreation without considering the ethical implications.

Similarly, Emil Cioran and Thomas Ligotti highlight how societal denial of life’s inherent suffering perpetuates illusions that hinder genuine understanding and acceptance of the human condition.

Conclusion

The exploration of antinatalist philosophy through the works of Sarah Perry, Emil Cioran, and their contemporaries presents profound ethical considerations about life, suffering, and personal autonomy. Their arguments compel us to reflect on the nature of existence and the responsibilities we bear in perpetuating life.

While one may not fully embrace antinatalist positions, engaging with these ideas challenges us to consider the complexities of the human condition. It encourages a deeper examination of our choices, the societal norms we accept, and how we confront or avoid the fundamental truths about existence.

Final Thoughts

These philosophers’ discussions are not merely abstract musings but have real-world implications for how we live our lives and make decisions about the future. Whether it’s rethinking the ethics of procreation, advocating for personal autonomy over life and death, or understanding the coping mechanisms we employ, their insights offer valuable perspectives.

By bringing these often-taboo topics into the open, we can foster a more compassionate and thoughtful society that respects individual choices and acknowledges the full spectrum of human experience.

Encouraging Dialogue

As we conclude this exploration, readers are invited to reflect on their own beliefs and experiences. Engaging in open, respectful discussions about these complex topics can lead to greater understanding and empathy.

What are your thoughts on the ethical considerations of procreation? How do you perceive the balance between life’s joys and its inherent suffering? Share your perspectives and join the conversation.


References and Further Reading

  • Perry, Sarah. Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide. Nine-Banded Books, 2014.
  • Benatar, David. Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press, 2006.
  • Cioran, Emil. The Trouble with Being Born. Arcade Publishing, 1973.
  • Ligotti, Thomas. The Conspiracy Against the Human Race. Hippocampus Press, 2010.
  • Zapffe, Peter Wessel. “The Last Messiah.” Janus, 1933; English translation in Philosophy Now, 2004.

For more in-depth analyses and reviews, consider exploring the following blog posts:

  • Book Review: Better Never to Have Been (Link)
  • Book Review: The Conspiracy Against the Human Race (Link)
  • Reading ‘The Last Messiah’ by Peter Zapffe (Link)

Note to Readers

This ChatGPT o1-generated article aims to thoughtfully and respectfully present the philosophical positions on antinatalism and existential pessimism. The discussions about suffering, procreation, and the right to die are complex and sensitive. If you or someone you know is struggling with such thoughts, please seek support from mental health professionals or trusted individuals in your community.

Next Steps

Based on reader interest and engagement, future articles may delve deeper into individual philosophers’ works, explore thematic elements such as consciousness and suffering, or address counterarguments in more detail. Your feedback and participation are valuable in shaping these discussions.

Let us continue this journey of philosophical exploration together.

Morality: The Mirage of Subjectivity Within a Relative Framework

Morality, that ever-elusive beacon of human conduct, is often treated as an immutable entity—a granite monolith dictating the terms of right and wrong. Yet, upon closer inspection, morality reveals itself to be a mirage: a construct contingent upon cultural frameworks, historical conditions, and individual subjectivity. It is neither absolute nor universal but, rather, relative and ultimately subjective, lacking any intrinsic meaning outside of the context that gives it shape.

Audio: Spotify podcast conversation about this topic.

Nietzsche: Beyond Good and Evil, Beyond Absolutes

Friedrich Nietzsche, in his polemical Beyond Good and Evil and On the Genealogy of Morality, exposes the illusion of objective morality. For Nietzsche, moral systems are inherently the products of human fabrication—tools of power masquerading as eternal truths. He describes two primary moralities: master morality and slave morality. Master morality, derived from the strong, values power, creativity, and self-affirmation. Slave morality, by contrast, is reactive, rooted in the resentment (ressentiment) of the weak, who redefine strength as “evil” and weakness as “good.”

Nietzsche’s critique dismantles the notion that morality exists independently of cultural, historical, or power dynamics. What is “moral” for one era or society may be utterly abhorrent to another. Consider the glorification of war and conquest in ancient Sparta versus the modern valorisation of equality and human rights. Each framework exalts its own virtues not because they are universally true but because they serve the prevailing cultural and existential needs of their time.

The Myth of Monolithic Morality

Even viewed through a relativistic lens—and despite the protestations of Immanuel Kant or Jordan Peterson—morality is not and has never been monolithic. The belief in a singular, unchanging moral order is, at best, a Pollyanna myth perpetuated by those who prefer their moral compass untroubled by nuance. History is not the story of one moral narrative but of a multiplicity of subcultures and countercultures, each with its own moral orientation. These orientations, though judged against the dominant moral compass of their era, persistently resist and redefine what counts as acceptable and good.

If the tables are turned, the moral compass turns with them. The Man in the High Castle captures this truth chillingly. Had the Nazis won World War II, Americans—despite their lofty self-perceptions—would have quickly adopted the morality of their new rulers. The foundations of American morality would have been reimagined in the image of the Third Reich, not through inherent belief but through cultural osmosis, survival instinct, and institutionalised pressure. What we now consider abhorrent might have become, under those circumstances, morally unremarkable. Morality, in this view, is not timeless but endlessly pliable, bending to the will of power and circumstance.

The Case for Moral Objectivity: Kantian Ethics

In contrast to Nietzsche’s relativism, Immanuel Kant offers a vision of morality as rational, universal, and objective. Kant’s categorical imperative asserts that moral principles must be universally applicable, derived not from cultural or historical contingencies but from pure reason. For Kant, the moral law is intrinsic to rational beings and can be expressed as: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”

This framework provides a stark rebuttal to Nietzsche’s subjectivity. If morality is rooted in reason, then it transcends the whims of power dynamics or cultural specificity. Under Kant’s system, slavery, war, and exploitation are not morally permissible, regardless of historical acceptance or cultural norms, because they cannot be willed universally without contradiction. Kant’s moral absolutism thus offers a bulwark against the potential nihilism of Nietzschean subjectivity.

Cultural Pressure: The Birthplace of Moral Adoption

The individual’s adoption of morality is rarely a matter of pure, autonomous choice. Rather, it is shaped by the relentless pressures of culture. Michel Foucault’s analysis of disciplinary power in works such as Discipline and Punish highlights how societies engineer moral behaviours through surveillance, normalisation, and institutional reinforcement. From childhood, individuals are inculcated with the moral codes of their culture, internalising these norms until they appear natural and self-evident.

Yet this adoption is not passive. Even within the constraints of culture, individuals exercise agency, reshaping or rejecting the moral frameworks imposed upon them. Nietzsche’s Übermensch represents the apotheosis of this rebellion: a figure who transcends societal norms to create their own values, living authentically in the absence of universal moral truths. By contrast, Kantian ethics and utilitarianism might critique the Übermensch as solipsistic, untethered from the responsibilities of shared moral life.

Morality in a Shifting World

Morality’s subjectivity is its double-edged sword. While its flexibility allows adaptation to changing societal needs, it also exposes the fragility of moral consensus. Consider how modern societies have redefined morality over decades, from colonialism to civil rights, from gender roles to ecological responsibility. What was once moral is now abhorrent; what was once abhorrent is now a moral imperative. Yet even as society evolves, its subcultures and countercultures continue to resist and reshape dominant moral paradigms. If history teaches us anything, it is that morality is less a fixed star and more a flickering flame, always at the mercy of shifting winds.

Conclusion: The Artifice of Moral Meaning

Morality, then, is not a universal truth etched into the fabric of existence but a subjective artifice, constructed by cultures to serve their needs and adopted by individuals under varying degrees of pressure. Nietzsche’s philosophy teaches us that morality, stripped of its pretensions, is not an arbiter of truth but a symptom of human striving—one more manifestation of the will to power. In contrast, Kantian ethics and utilitarianism offer structured visions of morality, but even these grapple with the tensions between universal principles and the messy realities of history and culture.

As The Man in the High Castle suggests, morality is a contingent, situational artefact, liable to be rewritten at the whim of those in power. Its apparent stability is an illusion, a construct that shifts with every epoch, every conquest, every revolution. To ignore this truth is to cling to a comforting, but ultimately deceptive, myth. Morality, like all human constructs, is both a triumph and a deception, forever relative, ever mutable, yet persistently contested by those who would impose an impossible order on its chaos.

The Relativity of Morality: A Penguin’s Tale

I recently watched The Penguin on HBO Max, a series set in DC’s Batman universe. Ordinarily, I avoid television – especially the superhero genre – but this one intrigued me. Less spandex, more mob drama. An origin story with a dash of noir. I’ll spare you spoilers, but suffice it to say that it was an enjoyable detour, even for someone like me who prefers philosophy over fistfights.

This post isn’t a review, though. It’s a springboard into a larger idea: morality’s subjectivity – or, more precisely, its relativity.

Audio: Spotify podcast related to this topic.

Morality in a Vacuum

Morality, as I see it, is a social construct. You might carry a private moral compass, but without society, it’s about as useful as a clock on a desert island. A personal code of ethics might guide you in solitary moments, but breaking your own rules – eating that forbidden biscuit after vowing to abstain, for instance – doesn’t carry the weight of a true moral transgression. It’s more akin to reneging on a New Year’s resolution. Who’s harmed? Who’s holding you accountable? The answer is: no one but yourself, and even then, only if you care.

The Social Contract

Introduce a second person, and suddenly, morality gains traction. Agreements form – explicit or tacit – about how to behave. Multiply that to the level of a community or society, and morality becomes a kind of currency, exchanged and enforced by the group. Sometimes, these codes are elevated to laws. And, ironically, the act of adhering to a law – even one devoid of moral content – can itself become the moral thing to do. Not because the act is inherently right, but because it reinforces the structure society depends upon.

But morality is neither universal nor monolithic. It is as fractured and kaleidoscopic as the societies and subcultures that create it. Which brings us back to The Penguin.

Crime’s Moral Code

The Penguin thrives in a criminal underworld where the moral compass points in a different direction. In the dominant society’s eyes, crime is immoral. Robbery, murder, racketeering – all “bad,” all forbidden. But within the subculture of organised crime, a parallel morality exists. Honour among thieves, loyalty to the family, the unspoken rules of the game – these are their ethics, and they matter deeply to those who live by them.

When one criminal praises another – “You done good” – after a successful heist or a precise hit, it’s a moral judgement within their own framework. Outside that framework, society condemns the same actions as abhorrent. Yet even dominant societies carve out their own moral exceptions. Killing, for instance, is broadly considered immoral. Murder is outlawed. But capital punishment? That’s legal, and often deemed not only acceptable but righteous. Kant argued it was a moral imperative. Nietzsche, ever the cynic, saw this duality for what it was: a power dynamic cloaked in self-righteousness.

In The Penguin, we see this dichotomy laid bare. The underworld isn’t without morals; it simply operates on a different axis. And while the larger society might disdain it, the hypocrisy of their own shifting moral codes remains unexamined.

Final Thoughts on the Series

I’ll save other philosophical musings about The Penguin for another time – spoilers would be unavoidable, after all. But here’s a quick review: the series leans into drama, eschewing flashy gimmicks for a grittier, more grounded tone. The writing is generally strong, though there are moments of inconsistency – plot holes and contrivances that mar an otherwise immersive experience. Whether these flaws stem from the writers, director, or editor is anyone’s guess, but the effect is the same: they momentarily yank the viewer out of the world they’ve built.

Still, it’s a worthwhile watch, especially if you’re a fan of mob-style crime dramas. The final episode was, in my estimation, the best of the lot – a satisfying culmination that leaves the door ajar for philosophical ruminations like these.

Have you seen it? What are your thoughts – philosophical or otherwise? Drop a comment below. Let’s discuss.

Switching Teams, Same Game: How Politics Is the New Religion

Jean-François Lyotard’s Le Différend has a way of gnawing at you—not with profound revelations, but with the slow, disquieting erosion of assumptions. It got me thinking about something uncomfortably obvious: political orientation is nothing more than the secular cousin of religious indoctrination. Just as most people will, without much scrutiny, cling to the religion of their upbringing and defend it as the One True Faith, the same applies to their political worldview. Whether you’re baptised into Anglicanism or wade knee-deep into the waters of neoliberalism, the zeal is indistinguishable.

Of course, there are the self-proclaimed rebels who smugly declare they’ve rejected their parents’ politics. The ones who went left when Mum and Dad leaned right or discovered anarchism in the ruins of a conservative household. But let’s not be fooled by the patina of rebellion: they may have switched teams, but they’re still playing the same game. They’ve accepted the foundational myths of institutions and democracy—those hallowed, untouchable idols. Like religion, these constructs are not just defended but sanctified, preached as the best or only possible versions of salvation. Dissenters are heretics; non-believers are unthinkable.

It’s not that political ideologies are inherently bad (just like religion has its occasional charm). It’s that the devout rarely stop to question whether the framework itself might be the problem. They assume the boundaries are fixed, the terms are immutable, and the debate is merely about the correct interpretation of the catechism. But if Lyotard has taught us anything, it’s this: the real battles—the différends—are the ones no one’s even acknowledging because the language to articulate them doesn’t exist in the prevailing orthodoxy.

Dukkha, the Path of Pain, and the Illusion of Freedom: Buddhism, Antinatalism, and the Lonely Road of Individuation

The First Noble Truth of Buddhism—the notion that life is suffering, or dukkha—is often misinterpreted as a bleak condemnation of existence. But perhaps there’s something deeper here, something challenging yet quietly liberating. Buddhism doesn’t merely suggest that life is marred by occasional suffering; rather, it proposes that suffering is woven into the very fabric of life itself. Far from relegating pain to an exception, dukkha posits that dissatisfaction, discomfort, and unfulfilled longing are the baseline conditions of existence.

This isn’t to say that life is an unending stream of torment; even in nature, suffering may seem the exception rather than the rule, often concealed by survival-driven instincts and primal ignorance. But we, as conscious beings, are haunted by awareness. Aware of our mortality, our desires, our inadequacies, and ultimately, of our impotence to escape this pervasive friction. And so, if suffering is indeed the constant, how do we respond? Buddhism, antinatalism, and Jungian psychology each offer their own, starkly different paths.

The Buddhist Response: Letting Go of the Illusion

In Buddhism, dukkha is a truth that urges us not to look away but to peer more closely into the nature of suffering itself. The Buddha, with his diagnosis, didn’t suggest we simply “cope” with suffering but rather transform our entire understanding of it. Suffering, he argued, is born from attachment—from clinging to transient things, ideas, people, and identities. We build our lives on desires and expectations, only to find ourselves caught in a cycle of wanting, attaining, and inevitably losing. It’s a form of existential whiplash, one that keeps us bound to dissatisfaction because we can’t accept the impermanence of what we seek.

The Buddhist approach is both radical and elusive: by dissolving attachment and breaking the cycle of clinging, we supposedly dissolve suffering itself. The destination of this path—Nirvana—is not a state of elation or contentment but a transcendence beyond the very conditions of suffering. In reaching Nirvana, one no longer relies on external or internal validation, and the violence of social judgment, cultural obligation, and personal ambition falls away. This may seem austere, yet it offers a powerful antidote to a world that equates happiness with accumulation and possession.

Antinatalism: Opting Out of Existence’s Violence

Where Buddhism seeks liberation within life, antinatalism takes an even more radical stance: why bring new beings into an existence steeped in suffering? For antinatalists, the suffering embedded in life renders procreation ethically questionable. By creating life, we induct a new being into dukkha, with all its attendant violences—society’s harsh judgments, culture’s rigid impositions, the bureaucratic machinery that governs our daily lives, and the inescapable tyranny of time. In essence, to give birth is to invite someone into the struggle of being.

This perspective holds that the most humane action may not be to mend the suffering we encounter, nor even to accept it as Buddhism advises, but to prevent it altogether. It sees the cycle of life and death not as a majestic dance but as a tragic spiral, in which each generation inherits suffering from the last, perpetuating violence, hardship, and dissatisfaction. Antinatalism, therefore, could be seen as the ultimate recognition of dukkha—an extreme empathy for potential beings and a refusal to impose the weight of existence upon them.

Jungian Individuation: The Lonely Path of Becoming

Jung’s concept of individuation offers yet another approach: to delve deeply into the self, to integrate all aspects of the psyche—the conscious and the unconscious—and to emerge as a fully realised individual. For Jung, suffering is not to be escaped but understood and incorporated. Individuation is a journey through one’s darkest shadows, a confrontation with the parts of oneself that society, culture, and even one’s own ego would rather ignore. It is, in a way, an anti-social act, as individuation requires the courage to step away from societal norms and embrace parts of oneself that might be seen as disturbing or unconventional.

But individuation is a lonely road. Unlike the Buddhist path, which seeks to transcend suffering, individuation requires one to face it head-on, risking rejection and alienation. Society’s judgment, a kind of violence in itself, awaits those who deviate from accepted roles. The individuated person may, in effect, be punished by the very structures that insist upon conformity. And yet, individuation holds the promise of a more authentic existence, a self that is not a mere amalgam of cultural expectations but a reflection of one’s truest nature.

The Delusions That Keep Us Tethered to Suffering

Yet, for all their starkness, these paths can seem like philosophical abstractions that don’t fully capture the reality of living within the constraints of society, culture, and self. Human beings are armed with powerful psychological mechanisms that obscure dukkha: self-delusion, cognitive dissonance, and hubris. We fabricate beliefs about happiness, purpose, and progress to protect ourselves from dukkha’s existential weight. We convince ourselves that fulfilment lies in achievements, relationships, or material success. Cognitive dissonance allows us to live, without being paralysed, in a world that we know on some level will disappoint us.

It’s worth noting that even those who acknowledge dukkha—who glimpse the violence of existence and the illusory nature of happiness—may still find themselves clinging to these mental defences. They are shields against despair, the comforting armour that allows us to navigate a world in which suffering is the baseline condition. This is why Buddhism, antinatalism, and individuation require such rigorous, often painful honesty: each asks us to set down these shields and to face suffering not as a solvable problem but as an intrinsic truth. In this light, psychological defences are not failures of awareness but survival strategies, albeit strategies that keep us from ever fully confronting the nature of existence.

Finding Meaning Amidst the Violence of Being

To pursue any of these paths—Buddhist enlightenment, antinatalism, or Jungian individuation—one must be prepared to question everything society holds dear. They are radical responses to a radical insight: that suffering is not accidental but foundational. Each path offers a different form of liberation, whether through transcendence, abstention, or self-integration, but they all require a certain fearlessness, a willingness to look deeply into the uncomfortable truths about life and existence.

Buddhism calls us to renounce attachment and embrace impermanence, transcending suffering by reshaping the mind. Antinatalism challenges us to consider whether it is ethical to bring life into a world marked by dukkha, advocating non-existence as an escape from suffering. And individuation asks us to become fully ourselves, embracing the loneliness and alienation that come with resisting society’s violence against the individual.

Perhaps the most realistic approach is to accept that suffering exists, to choose the path that resonates with us, and to walk it with as much awareness as possible. Whether we seek to transcend suffering, avoid it, or integrate it, each path is a confrontation with the violence of being. And maybe, in that confrontation, we find a fleeting peace—not in the absence of suffering, but in the freedom to choose our response to it. Dukkha remains, but we may find ourselves less bound by it, able to move through the world with a deeper, quieter understanding.