I’ve lived in Los Angeles a couple of times for a sum total of perhaps 15 years. The first time, I loved it. The next time, I was running on fumes. The first time, I was in my twenties – the second time in my forties. What a difference perspective and ageing make. In my twenties, I was a pretty-boy punk-ass who owned the club scene on the Strip. In my forties, I was a wage slave.
This morning, I heard a country song on Insta with a line claiming ‘there are nines and dimes in all 50’, and it reminded me of a phrase we used when I lived in Los Angeles – ‘LA 7’. The phrase rests on the egotistical, sexist notion that if you were a 10, you’d have already moved to LA. If you still lived in, say, Iowa and were considered a 10, the exchange rate to LA would make you a 7.
Then, I thought about the LA-NYC rivalry and wrote this article with some help from ChatGPT.
How L.A. and NYC Became the Centres of the Universe (According to Them)
It is a truth universally acknowledged that Los Angeles and New York City—those bickering siblings of American exceptionalism—believe themselves to be the sun around which the rest of us drearily orbit. Each is utterly convinced of its centrality to the human experience, and neither can fathom that people outside their borders might actually exist without yearning to be them. This is the essence of the ‘Centre of the Universe Complex,’ a condition in which self-importance metastasises into a full-blown cultural identity.
Let us begin with Los Angeles, the influencer of cities. L.A. doesn’t merely think it’s the centre of the universe; it believes it’s the universe, replete with its own atmosphere of smog-filtered sunlight and an economy powered entirely by dreams, green juice, and Botox. For L.A., beauty isn’t just a priority—it’s a moral imperative. Hence the concept of the ‘L.A. 7,’ a stunningly arrogant bit of mathematics whereby physical attractiveness is recalculated based on proximity to the Pacific Coast Highway.
Here’s how it works: a ’10’ in some picturesque-but-hopelessly-provincial state, say Nebraska, is automatically downgraded to a ‘7’ upon arrival in Los Angeles. Why? Because, according to L.A.’s warped arithmetic, if she were a real 10, she’d already be there, lounging by an infinity pool in Malibu and ignoring your DMs. This isn’t just vanity—it’s top-tier delusion. L.A. sees itself as a black hole of good looks, sucking the beautiful people from every corner of the earth while leaving the ‘merely pretty’ to languish in flyover country. The Midwest, then, isn’t so much a place as it is an agricultural waiting room for future Angelenos.
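For the quantitatively inclined, the whole ‘exchange rate’ fits in a tongue-in-cheek sketch. The 0.7 multiplier is my own extrapolation from the 10-goes-to-7 anecdote, not an official Angeleno constant:

```python
def la_rating(hometown_rating: float) -> float:
    """Apply the mythical LA exchange rate: a hometown 10 lands as an LA 7."""
    LA_EXCHANGE_RATE = 0.7  # assumed from the 10-to-7 anecdote
    return hometown_rating * LA_EXCHANGE_RATE

print(la_rating(10))  # 7.0 – welcome to Los Angeles
```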
But don’t be fooled—New York City is no better. Where L.A. is obsessed with beauty, NYC worships hustle. The city doesn’t just believe it’s important; it believes it’s the only place on earth where anything important happens. While L.A. is out perfecting its tan, NYC is busy perfecting its reputation as the cultural and intellectual capital of the world—or, at least, its part of the world, which conveniently ends somewhere in Connecticut.
This mindset is best summed up by that sanctimonious mantra, ‘If you can make it here, you can make it anywhere.’ Translation: if you survive the daily humiliation of paying $4,000 a month for a shoebox apartment while dodging both rats and an existential crisis, you’ve unlocked the secret to life itself. New York isn’t about looking good; it’s about enduring bad conditions and then boasting about it as if suffering were an Olympic sport. In this worldview, the rest of the world is simply an unworthy understudy in NYC’s perpetual Broadway production.
And here’s the thing: neither city can resist taking cheap shots at the other. L.A. dismisses NYC as a grim, grey treadmill where fun goes to die, while NYC scoffs at L.A. as a vapid bubble of avocado toast and Instagram filters. It’s brains versus beauty, grit versus glamour, black turtlenecks versus Lululemon. And yet, in their relentless need to outshine one another, they reveal a shared truth: both are equally narcissistic.
This mutual self-obsession is as exhausting as it is entertaining. While L.A. and NYC bicker over who wears the crown, the rest of the world is quietly rolling its eyes and enjoying a life unencumbered by astronomical rent or the constant pressure to appear important. The people of Iowa, for example, couldn’t care less if they’re an ‘LA 7’ or if they’ve “made it” in New York. They’re too busy living comfortably, surrounded by affordable housing and neighbours who might actually help them move a sofa.
But let’s give credit where it’s due. For all their flaws, these two cities do keep the rest of us entertained. Their constant self-aggrandisement fuels the cultural zeitgeist: without L.A., we’d have no Kardashians; without NYC, no Broadway. Their rivalry is the stuff of legend, a never-ending soap opera in which both cities play the lead role.
So, let them have their delusions of grandeur. After all, the world needs a little drama—and nobody does it better than the cities that think they’re the centre of it.
I participated in a thread recently. It prompted me to create a diagram to explain. This one:
I also wrote an article, but I’d like to share more here. As I was drafting this, I went through several iterations. This is just where I landed at the end. I’ll walk through it.
Keep It Simple, Stupid
Let’s begin at the origin. At the start, we’ve got simplicity in all directions. These are ingredients and building blocks. Nothing fancy. Perhaps these are atoms or Legos. They are easy to describe and easy to reproduce.
It’s Complicated
One can remain simple or venture up the Y-axis or out on the X-axis. Let’s go right. Eventually, we leave the land of Simplicity and cross into the land of Complication. This might be thought of as a linear journey, a land of addition and multiplication. Instead of a Lego piece, a single cell, or an atom, we’ve assembled structures from building blocks. Perhaps a Lego car or house. Perhaps a Death Star or Hogwarts. Perhaps we’ve gone from a single-celled organism to a multicellular organism or from an atom to a molecule or even a cluster of molecules. We’ve gone from a couple of hydrogens and an oxygen to a water molecule. Repeat this enough and we’ve got a glass of water – or an ocean.
In a nutshell, simple and complicated objects are predictable, designable, and controllable.
It’s Complex
Let’s travel up the Y-axis instead. Leaving the land of Simplicity, we end up in Complexity. Here, all bets are off. We are in a land of probability and uncertainty. Combining simple components yields unpredictable outcomes, emergent properties, and self-organisation. Life is not linear on this path, so small differences in inputs can create wildly different outputs in both magnitude and direction.
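To make the nonlinearity point concrete, here’s a minimal sketch – my own illustration, not part of the original diagram – using the logistic map, a famously simple rule whose outputs diverge wildly from nearly identical inputs:

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, a simple nonlinear rule (chaotic at r = 4)."""
    return r * x * (1 - x)

a, b = 0.2, 0.2000001  # two nearly indistinguishable starting points
for _ in range(25):
    a, b = logistic(a), logistic(b)

print(abs(a - b))  # the one-in-ten-million gap has grown to macroscopic size
```

Three lines of ‘simple’ produce behaviour that no inspection of the inputs would predict – which is roughly what the Y-axis is gesturing at.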
Moreover, when a thing gets too complicated, it can no longer become complex. Emergence is not for the complicated.
Emergence is largely reserved for the complex, but near the boundary where simple transitions to complicated, there is a tiny window that may allow a system to be both complicated and complex.
Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.
Rote Professions in AI’s Crosshairs
AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.
Medicine: The Diagnosis Factory
Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news. Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
Law: The Precedent Machine
Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
Judges: Justice, Blind and Algorithmic
The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice. Wouldn’t justice be more blind without a human in the equation?
The Techs Will Rise
Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:
Collect and input data into AI systems.
Act as liaisons between AI outputs and human clients or patients.
Provide emotional support—something AI still struggles to deliver effectively.
The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?
But Can AI Handle This? A Call for Logic Layers
AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.
In law, this could ensure AI judgments align with precedent and statute.
In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.
A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
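Here’s a minimal sketch of how such layered verification might be wired together. Everything in it – the names, the rule checks, the trusted stores – is hypothetical, illustrating only the two-gate idea rather than any existing system:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    premises: list[str]   # statements the AI says support the claim
    citations: list[str]  # sources the AI says it drew on

# Hypothetical trusted stores; a real system would back these with databases.
ACCEPTED_RULES = {"precedent X applies", "statute Y is in force"}
TRUSTED_SOURCES = {"case_law_db", "statute_db"}

def logic_layer(v: Verdict) -> bool:
    """Gate 1: does every premise come from an accepted rule base?
    (A real implementation might invoke a rule engine or theorem prover.)"""
    return all(p in ACCEPTED_RULES for p in v.premises)

def fact_layer(v: Verdict) -> bool:
    """Gate 2: is every citation traceable to a trusted corpus?"""
    return all(c in TRUSTED_SOURCES for c in v.citations)

def verify(v: Verdict) -> bool:
    """Release a conclusion only if it passes both layers."""
    return logic_layer(v) and fact_layer(v)

proposed = Verdict(
    claim="grant the appeal",
    premises=["precedent X applies"],
    citations=["case_law_db"],
)
print(verify(proposed))  # True – passes both hypothetical gates
```

The design point is simply that the generative model proposes while deterministic layers dispose; nothing ships unless both gates pass.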
Resistance and the Real Battle Ahead
Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?
The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. The transformation will be swift and irreversible once safeguards are implemented and AI earns confidence.
Critical Thinking: The Human Stronghold
Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.
It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.
Greed is a vice we readily recognise in others but often overlook in ourselves. This selective perception was strikingly evident during a recent conversation I had with a man who was quick to condemn another’s greed while remaining oblivious to his own similar tendencies. I told him about the escalating greed of certain companies that profit handsomely from selling their branded printer ink and toner. I’ll spare you the full history. The encounter underscores the powerful influence of fundamental attribution bias on our judgments and self-awareness.
Exploring Greed
Greed can be defined as an intense and selfish desire for something, especially wealth, power, or food. Psychologically, it is considered a natural human impulse that, when unchecked, can lead to unethical behaviour and strained relationships. Societally, greed is often condemned, yet it persists across cultures and histories.
We tend to label others as greedy when their actions negatively impact us or violate social norms. However, when we aggressively pursue our interests, we might frame it as ambition or resourcefulness. This dichotomy reveals a discrepancy in how we perceive greed in ourselves versus others.
Understanding Fundamental Attribution Bias
Fundamental attribution bias, or fundamental attribution error, is the tendency to attribute others’ actions to their character while attributing our own actions to external circumstances. This cognitive bias allows us to excuse our behaviour while holding others fully accountable for theirs.
For example, if someone cuts us off in traffic, we might think they’re reckless or inconsiderate. But if we cut someone off, we might justify it by claiming we were late or didn’t see them. This bias preserves our self-image but distorts our understanding of others.
The Conversation
Our conversation centred on an HP printer that has shown a ‘low ink – please replace’ message since the cartridge was first installed. I recounted the history of the ink and toner industry. HP had a monopoly on ink for its products, a situation that earned it substantial profit margins. Upstarts entered the marketplace, starting an escalating arms race. HP spent R&D dollars trying to defend its margins with nil benefit to the consumers of its products. In fact, it kept costs artificially high. Competitors who wanted a slice of those fat margins found ways around these interventions. Eventually, HP installed chips on its toner cartridges. Unfortunately, they have a bug – or is it a feature? If you install a cartridge and then remove it, the chip assumes you’re up to something shady, so it spawns this false alert. Some people take the alert at face value, so HP benefits twice.
If this bloke had worked for HP and had been responsible for revenue acquisition and protection, he would have swooned over the opportunity. Have no doubt. At arm’s length, he recognised it as a sleazy, unethical business practice.
This conversation revealed how easily we can fall into the trap of judging others without reflecting on our own behaviour. His indignation seemed justified to him, yet he remained unaware of how his actions mirrored those he criticised.
Biblical Reference and Moral Implications
This situation brings to mind the biblical passage from Matthew 7:3-5:
“Why do you look at the speck of sawdust in your brother’s eye and pay no attention to the plank in your own eye? … You hypocrite, first take the plank out of your own eye, and then you will see clearly to remove the speck from your brother’s eye.”
The verse poignantly captures the human tendency to overlook our flaws while magnifying those of others. It calls for introspection and humility, urging us to address our shortcomings before passing judgment.
The Asymmetry of Self-Perception
Several psychological factors contribute to this asymmetry:
Self-Serving Bias: We attribute our successes to internal factors and our failures to external ones.
Cognitive Dissonance: Conflicting beliefs about ourselves and our actions create discomfort, leading us to rationalize or ignore discrepancies.
Social Comparison: We often compare ourselves favourably against others to boost self-esteem.
This skewed self-perception can hinder personal growth and damage relationships, as it prevents honest self-assessment and accountability.
Overcoming the Bias
Awareness is the first step toward mitigating fundamental attribution bias. Here are some strategies:
Mindful Reflection: Regularly assess your actions and motivations. Ask yourself if you’re holding others to a standard you’re not meeting. Riffing on ancient moral dictates, ask whether this is how you would want to be treated. Or adopt Kant’s categorical imperative as a framework.
Seek Feedback: Encourage honest input from trusted friends or colleagues about your behaviour.
Empathy Development: Practice seeing situations from others’ perspectives to understand their actions more fully.
Challenge Assumptions: Before making judgments, consider external factors that might influence someone’s behaviour.
By actively recognising and adjusting for our biases, we can develop more balanced perceptions of ourselves and others.
Conclusion
The irony of condemning in others what we excuse in ourselves is a common human pitfall rooted in fundamental attribution bias. The adage ‘Know thyself’ comes to mind. By striving for self-awareness and empathy, we can overcome these biases, leading to more authentic relationships and personal integrity.
So, you watched Dune: Prophecy episode 1 on HBO Max. Congratulations on your bravery. Let’s face it—Dune adaptations are a minefield. Remember David Lynch’s Dune? Of course you do, because it’s impossible to unsee Sting in that ridiculous winged codpiece. And whilst Denis Villeneuve’s recent entries managed to elevate the franchise from high-school drama club aesthetics to actual cinema, they also came dangerously close to being too good—almost like Dune took itself seriously.
And now, here we are, back on shaky ground with Dune: Prophecy. Sure, the first episode was watchable, despite some environmental CGI that looks like it came out of a Sims expansion pack. But this isn’t a film review channel, so let’s dive into the show’s actual content—or, as I like to call it, The Philosophy 101 Drinking Game.
Eugenics: Creepy, Even by Dune Standards
Ah, eugenics. Nothing screams cosy sci-fi night in like a narrative steeped in genetic elitism. The Bene Gesserit’s obsessive fixation on a “pure bloodline” takes centre stage, making you wonder if they’re auditioning for a dystopian version of Who Do You Think You Are?. Creepy is putting it mildly. It’s all very master race, but with better posture and less obvious moustaches.
Righteousness vs. Power: The Valya Harkonnen Show
Valya Harkonnen is an enigma—or perhaps just your classic power-hungry sociopath cloaked in the silky veil of duty. Is she righteous? Maybe. Is she using morality as a smokescreen for her own ambition? Absolutely. Watching her wrestle with her supposed “deontological duty” to the sisterhood is like watching a cat pretend it cares about knocking over your wine glass. Sure, it’s interesting, but it’s also patently obvious there’s an ulterior motive.
Her quest for power is unmistakable. But here’s the kicker: the sisterhood needs someone like her. Systems, after all, fight to survive, and Valya is just the ruthless gladiator they require. Whether her motives are noble or nefarious is irrelevant because survival trumps all in the Dune universe. Her arc underscores the show’s recurring obsession with false dichotomies—righteousness versus calculated ambition. It’s not “one or the other,” folks. It’s always both.
Progress as a Façade
Progress, Dune-style, is a beautifully brutal illusion. One group’s advancement always comes at another’s expense, a message that’s summed up perfectly by the episode’s pull quote: “Adversity Always Lies in the Path of Advancement.” In other words, progress is just oppression with better PR. It’s a meta-narrative as old as civilisation, and Dune leans into it with an almost smug glee.
Lies, Manipulation, and the Human Condition
If humanity’s greatest weapon is the lie, then the Bene Gesserit are armed to the teeth. For a group that claims to seek truth, they certainly have no qualms about spinning elaborate deceptions. Case in point: the mind games encapsulated by “You and I remember things differently.” It’s a phrase so loaded with gaslighting potential it should come with a trigger warning.
This manipulation isn’t just a tool; it’s the cornerstone of their ethos. Truth-seeking? Sure. But only if the “truth” serves their interests. It’s classic utilitarianism: the ends justify the means, even if those means involve rewriting history—or someone else’s memory.
Fatalism, Virtue Ethics, and the Inescapable Past
The Dune universe loves a good dose of fatalism, and Prophecy is no exception. The idea that “our past always finds us” is hammered home repeatedly as characters grapple with choices, bloodlines, and cultural memory. It’s as though everyone is permanently stuck in a Freudian therapy session, doomed to relive ancestral traumas ad infinitum. In this world, identity is less a personal construct and more a hand-me-down curse.
Self-Discipline and Sacrifice: The Dune Holy Grail
Finally, we come to self-discipline and sacrifice, the twin pillars of Dune’s moral framework. Whether voluntarily undertaken or brutally enforced, these themes dominate the narrative. It’s a trope as old as time, but it works because it’s relatable—who among us hasn’t sacrificed something important for an uncertain future? Of course, in Dune, that sacrifice usually involves something more dramatic than skipping dessert. Think more along the lines of betraying allies, murdering rivals, or, you know, manipulating an entire galaxy.
The Verdict
Dune: Prophecy has potential. It’s rich in philosophical musings, political intrigue, and that uniquely Dune blend of high drama and existential dread. Sure, the CGI needs work, and some of the dialogue could use an upgrade (how about less exposition, more nuance?), but there’s enough meat here to keep you chewing. Whether it evolves into something truly epic—or collapses under the weight of its own ambition—remains to be seen. Either way, it’s worth watching, if only to see how far humanity’s greatest weapon—the lie—can take the sisterhood.
A Comparative Analysis of Sarah Perry, Emil Cioran, and Contemporaries
In a world where procreation is often celebrated as a fundamental human aspiration, a group of philosophers challenges this deeply ingrained belief by questioning the ethical implications of bringing new life into existence. Antinatalism, the philosophical stance that posits procreation is morally problematic due to the inherent suffering embedded in life, invites us to reexamine our assumptions about birth, existence, and the value we assign to life itself.
Audio: Podcast related to the content on this page
Central to this discourse are thinkers like Sarah Perry, whose work “Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide” intertwines the ethics of procreation with the right to die, emphasizing personal autonomy and critiquing societal norms. Alongside Perry, philosophers such as Emil Cioran, David Benatar, Thomas Ligotti, and Peter Wessel Zapffe offer profound insights into the human condition, consciousness, and our existential burdens.
This article delves into the complex and often unsettling arguments presented by these philosophers, comparing and contrasting their perspectives on antinatalism. By exploring their works, we aim to shed light on the profound ethical considerations surrounding birth, suffering, and autonomy over one’s existence.
The Inherent Suffering of Existence
At the heart of antinatalist philosophy lies the recognition of life’s intrinsic suffering. This theme is a common thread among our featured philosophers, each articulating it through their unique lenses.
Sarah Perry argues that suffering is an unavoidable aspect of life, stemming from physical ailments, emotional pains, and existential anxieties. In “Every Cradle is a Grave,” she states:
“Existence is imposed without consent, bringing inevitable suffering.”
Perry emphasises that since every human will experience hardship, bringing a new person into the world exposes them to harm they did not choose.
Similarly, David Benatar, in his seminal work “Better Never to Have Been: The Harm of Coming into Existence,” presents the asymmetry argument. He posits that coming into existence is always a harm:
“Coming into existence is always a serious harm.”
Benatar reasons that while the absence of pain is good even if no one benefits from it, the absence of pleasure is not bad unless there is someone for whom this absence is a deprivation. Therefore, non-existence spares potential beings from suffering without depriving them of pleasures they would not miss.
Emil Cioran, a Romanian philosopher known for his profound pessimism, delves deep into the despair inherent in life. In “The Trouble with Being Born,” he reflects:
“Suffering is the substance of life and the root of personality.”
Cioran’s aphoristic musings suggest that life’s essence is intertwined with pain, and acknowledging this is crucial to understanding our existence.
Thomas Ligotti, blending horror and philosophy in “The Conspiracy Against the Human Race,” portrays consciousness as a cosmic error:
“Consciousness is a mistake of evolution.”
Ligotti argues that human awareness amplifies suffering, making us uniquely burdened by the knowledge of our mortality and the futility of our endeavours.
Peter Wessel Zapffe, in his essay “The Last Messiah,” examines how human consciousness leads to existential angst:
“Man is a biological paradox due to excessive consciousness.”
Zapffe contends that our heightened self-awareness results in an acute recognition of life’s absurdities, causing inevitable psychological suffering.
Ethics of Procreation
Building upon the acknowledgement of life’s inherent suffering, these philosophers explore the moral dimensions of bringing new life into the world.
Sarah Perry focuses on the issue of consent. She argues that since we cannot obtain consent from potential beings before birth, procreation imposes life—and its accompanying suffering—upon them without their agreement. She writes:
“Procreation perpetuates harm by introducing new sufferers.”
Perry challenges the societal norm that views having children as an unquestioned good, highlighting parents’ moral responsibility for the inevitable pain their children will face.
David Benatar’s asymmetry argument extends this ethical concern by suggesting that non-existence is preferable. He explains that while the absence of pain is inherently good, the absence of pleasure is not bad because no one is deprived of it. Therefore, bringing into existence someone who will undoubtedly suffer is a moral harm.
Emil Cioran questions the value of procreation given the futility and despair inherent in life. While not explicitly formulating an antinatalist argument, his reflections imply scepticism about the act of bringing new life into a suffering world.
Peter Wessel Zapffe proposes that refraining from procreation is a logical response to the human condition. By not having children, we can halt the perpetuation of existential suffering. He suggests that humanity’s self-awareness is a burden that should not be passed on to future generations.
The Right to Die and Autonomy over Existence
A distinctive aspect of Sarah Perry’s work is her advocacy for the right to die. She asserts that just as individuals did not consent to be born into suffering, they should have the autonomy to choose to end their lives. Perry critiques societal and legal barriers that prevent people from exercising this choice, arguing:
“Autonomy over one’s life includes the right to die.”
By decriminalizing and destigmatizing suicide, she believes society can respect individual sovereignty and potentially alleviate prolonged suffering.
Emil Cioran contemplates suicide not necessarily as an action to be taken but as a philosophical consideration. In “The Trouble with Being Born,” he muses:
“It is not worth the bother of killing yourself, since you always kill yourself too late.”
Cioran views the option of ending one’s life as a paradox that underscores the absurdity of existence.
While Benatar, Ligotti, and Zapffe acknowledge the despair that can accompany life, they do not extensively advocate for the right to die. Their focus remains on the ethical implications of procreation and the existential burdens of consciousness.
Coping Mechanisms and Societal Norms
Peter Wessel Zapffe delves into how humans cope with the existential angst resulting from excessive consciousness. He identifies four defence mechanisms:
Isolation: Repressing disturbing thoughts from consciousness.
Anchoring: Creating or adopting values and ideals to provide meaning.
Distraction: Engaging in activities to avoid self-reflection.
Sublimation: Channeling despair into creative or intellectual pursuits.
According to Zapffe, these mechanisms help individuals avoid confronting life’s inherent meaninglessness.
Thomas Ligotti echoes this sentiment, suggesting that optimism is a psychological strategy to cope with the horror of existence. He writes:
“Optimism is a coping mechanism against the horror of existence.”
Sarah Perry and Emil Cioran also critique societal norms that discourage open discussions about suffering, death, and the choice not to procreate. They argue that societal pressures often silence individuals who question the value of existence, thereby perpetuating cycles of unexamined procreation and stigmatizing those who consider alternative perspectives.
Comparative Insights
While united in their acknowledgement of life’s inherent suffering, these philosophers approach antinatalism and existential pessimism through varied lenses.
Sarah Perry emphasises personal autonomy and societal critique, advocating for policy changes regarding birth and suicide.
Emil Cioran offers a deeply personal exploration of despair, using poetic language to express the futility he perceives in existence.
David Benatar provides a structured, logical argument against procreation, focusing on the ethical asymmetry between pain and pleasure.
Thomas Ligotti combines horror and philosophy to illustrate the bleakness of consciousness and its implications for human suffering.
Peter Wessel Zapffe analyzes the psychological mechanisms humans employ to avoid confronting existential angst.
Critiques and Counterarguments
Critics of antinatalism often point to an overemphasis on suffering, arguing that it neglects the joys, love, and meaningful experiences that life can offer. They contend that while suffering is a part of life, it is not the totality of existence.
In response, antinatalist philosophers acknowledge the presence of pleasure but question whether it justifies the inevitable suffering every person will face. Benatar argues that while positive experiences are good, they do not negate the moral harm of bringing someone into existence without their consent.
Regarding the right to die, opponents express concern over the potential neglect of mental health issues. They worry that normalizing suicide could prevent individuals from seeking help and support that might alleviate their suffering.
Sarah Perry addresses this by emphasizing the importance of autonomy and the need for compassionate support systems. She advocates for open discussions about suicide to better understand and assist those contemplating it rather than stigmatizing or criminalizing their considerations.
Societal and Cultural Implications
These philosophers’ works challenge pro-natalist biases ingrained in many cultures. By questioning the assumption that procreation is inherently positive, they open a dialogue about the ethical responsibilities associated with bringing new life into the world.
Sarah Perry critiques how society glorifies parenthood while marginalizing those who choose not to have children. She calls for reevaluating societal norms that pressure individuals into procreation without considering the ethical implications.
Similarly, Emil Cioran and Thomas Ligotti highlight how societal denial of life’s inherent suffering perpetuates illusions that hinder genuine understanding and acceptance of the human condition.
Conclusion
The exploration of antinatalist philosophy through the works of Sarah Perry, Emil Cioran, and their contemporaries presents profound ethical considerations about life, suffering, and personal autonomy. Their arguments compel us to reflect on the nature of existence and the responsibilities we bear in perpetuating life.
While one may not fully embrace antinatalist positions, engaging with these ideas challenges us to consider the complexities of the human condition. It encourages a deeper examination of our choices, the societal norms we accept, and how we confront or avoid the fundamental truths about existence.
Final Thoughts
These philosophers’ discussions are not merely abstract musings but have real-world implications for how we live our lives and make decisions about the future. Whether it’s rethinking the ethics of procreation, advocating for personal autonomy over life and death, or understanding the coping mechanisms we employ, their insights offer valuable perspectives.
By bringing these often-taboo topics into the open, we can foster a more compassionate and thoughtful society that respects individual choices and acknowledges the full spectrum of human experience.
Encouraging Dialogue
As we conclude this exploration, readers are invited to reflect on their own beliefs and experiences. Engaging in open, respectful discussions about these complex topics can lead to greater understanding and empathy.
What are your thoughts on the ethical considerations of procreation? How do you perceive the balance between life’s joys and its inherent suffering? Share your perspectives and join the conversation.
References and Further Reading
Perry, Sarah. Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide. Nine-Banded Books, 2014.
Benatar, David. Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press, 2006.
Cioran, Emil. The Trouble with Being Born. 1973. Translated by Richard Howard. Arcade Publishing.
Ligotti, Thomas. The Conspiracy Against the Human Race. Hippocampus Press, 2010.
Zapffe, Peter Wessel. “The Last Messiah.” 1933. English translation in Philosophy Now, no. 45, 2004.
This ChatGPT o1-generated article aims to thoughtfully and respectfully present the philosophical positions on antinatalism and existential pessimism. The discussions about suffering, procreation, and the right to die are complex and sensitive. If you or someone you know is struggling with such thoughts, please seek support from mental health professionals or trusted individuals in your community.
Next Steps
Based on reader interest and engagement, future articles may delve deeper into individual philosophers’ works, explore thematic elements such as consciousness and suffering, or address counterarguments in more detail. Your feedback and participation are valuable in shaping these discussions.
Let us continue this journey of philosophical exploration together.
Morality, that ever-elusive beacon of human conduct, is often treated as an immutable entity—a granite monolith dictating the terms of right and wrong. Yet, upon closer inspection, morality reveals itself to be a mirage: a construct contingent upon cultural frameworks, historical conditions, and individual subjectivity. It is neither absolute nor universal but, rather, relative and ultimately subjective, lacking any intrinsic meaning outside of the context that gives it shape.
Friedrich Nietzsche, in his polemical Beyond Good and Evil and On the Genealogy of Morality, exposes the illusion of objective morality. For Nietzsche, moral systems are inherently the products of human fabrication—tools of power masquerading as eternal truths. He describes two primary moralities: master morality and slave morality. Master morality, derived from the strong, values power, creativity, and self-affirmation. Slave morality, by contrast, is reactive, rooted in the resentment (ressentiment) of the weak, who redefine strength as “evil” and weakness as “good.”
Nietzsche’s critique dismantles the notion that morality exists independently of cultural, historical, or power dynamics. What is “moral” for one era or society may be utterly abhorrent to another. Consider the glorification of war and conquest in ancient Sparta versus the modern valorisation of equality and human rights. Each framework exalts its own virtues not because they are universally true but because they serve the prevailing cultural and existential needs of their time.
The Myth of Monolithic Morality
Even viewed through a relativistic lens—and despite the protestations of Immanuel Kant or Jordan Peterson—morality is not and has never been monolithic. The belief in a singular, unchanging moral order is, at best, a Pollyanna myth or wishful thinking, perpetuated by those who prefer their moral compass untroubled by nuance. History is not the story of one moral narrative, but of a multiplicity of subcultures and countercultures, each with its own moral orientation. These orientations, while judged by the dominant moral compass of the era, always resist and redefine what is acceptable and good.
If the tables are turned, so is the moral compass reoriented. The Man in the High Castle captures this truth chillingly. Had the Nazis won World War II, Americans—despite their lofty self-perceptions—would have quickly adopted the morality of their new rulers. The foundations of American morality would have been reimagined in the image of the Third Reich, not through inherent belief but through cultural osmosis, survival instincts, and institutionalised pressure. What we now consider abhorrent might have become, under those circumstances, morally unremarkable. Morality, in this view, is not timeless but endlessly pliable, bending to the will of power and circumstance.
The Case for Moral Objectivity: Kantian Ethics
In contrast to Nietzsche’s relativism, Immanuel Kant offers a vision of morality as rational, universal, and objective. Kant’s categorical imperative asserts that moral principles must be universally applicable, derived not from cultural or historical contingencies but from pure reason. For Kant, the moral law is intrinsic to rational beings and can be expressed as: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”
This framework provides a stark rebuttal to Nietzsche’s subjectivity. If morality is rooted in reason, then it transcends the whims of power dynamics or cultural specificity. Under Kant’s system, slavery, war, and exploitation are not morally permissible, regardless of historical acceptance or cultural norms, because they cannot be willed universally without contradiction. Kant’s moral absolutism thus offers a bulwark against the potential nihilism of Nietzschean subjectivity.
Cultural Pressure: The Birthplace of Moral Adoption
The individual’s adoption of morality is rarely a matter of pure, autonomous choice. Rather, it is shaped by the relentless pressures of culture. Michel Foucault’s analysis of disciplinary power in works such as Discipline and Punish highlights how societies engineer moral behaviours through surveillance, normalisation, and institutional reinforcement. From childhood, individuals are inculcated with the moral codes of their culture, internalising these norms until they appear natural and self-evident.
Yet this adoption is not passive. Even within the constraints of culture, individuals exercise agency, reshaping or rejecting the moral frameworks imposed upon them. Nietzsche’s Übermensch represents the apotheosis of this rebellion: a figure who transcends societal norms to create their own values, living authentically in the absence of universal moral truths. By contrast, Kantian ethics and utilitarianism might critique the Übermensch as solipsistic, untethered from the responsibilities of shared moral life.
Morality in a Shifting World
Morality’s subjectivity is its double-edged sword. While its flexibility allows adaptation to changing societal needs, it also exposes the fragility of moral consensus. Consider how modern societies have redefined morality over decades, from colonialism to civil rights, from gender roles to ecological responsibility. What was once moral is now abhorrent; what was once abhorrent is now a moral imperative. Yet even as society evolves, its subcultures and countercultures continue to resist and reshape dominant moral paradigms. If history teaches us anything, it is that morality is less a fixed star and more a flickering flame, always at the mercy of shifting winds.
Conclusion: The Artifice of Moral Meaning
Morality, then, is not a universal truth etched into the fabric of existence but a subjective artifice, constructed by cultures to serve their needs and adopted by individuals under varying degrees of pressure. Nietzsche’s philosophy teaches us that morality, stripped of its pretensions, is not an arbiter of truth but a symptom of human striving—one more manifestation of the will to power. In contrast, Kantian ethics and utilitarianism offer structured visions of morality, but even these grapple with the tensions between universal principles and the messy realities of history and culture.
As The Man in the High Castle suggests, morality is a contingent, situational artefact, liable to be rewritten at the whim of those in power. Its apparent stability is an illusion, a construct that shifts with every epoch, every conquest, every revolution. To ignore this truth is to cling to a comforting, but ultimately deceptive, myth. Morality, like all human constructs, is both a triumph and a deception, forever relative, ever mutable, yet persistently contested by those who would impose an impossible order on its chaos.
I recently watched The Penguin on HBO Max, a series set in DC’s Batman universe. Ordinarily, I avoid television – especially the superhero genre – but this one intrigued me. Less spandex, more mob drama. An origin story with a dash of noir. I’ll spare you spoilers, but suffice it to say that it was an enjoyable detour, even for someone like me who prefers philosophy over fistfights.
This post isn’t a review, though. It’s a springboard into a larger idea: morality’s subjectivity – or, more precisely, its relativity.
Audio: Spotify podcast related to this topic.
Morality in a Vacuum
Morality, as I see it, is a social construct. You might carry a private moral compass, but without society, it’s about as useful as a clock on a desert island. A personal code of ethics might guide you in solitary moments, but breaking your own rules – eating that forbidden biscuit after vowing to abstain, for instance – doesn’t carry the weight of a true moral transgression. It’s more akin to reneging on a New Year’s resolution. Who’s harmed? Who’s holding you accountable? The answer is: no one but yourself, and even then, only if you care.
The Social Contract
Introduce a second person, and suddenly, morality gains traction. Agreements form – explicit or tacit – about how to behave. Multiply that to the level of a community or society, and morality becomes a kind of currency, exchanged and enforced by the group. Sometimes, these codes are elevated to laws. And, ironically, the act of adhering to a law – even one devoid of moral content – can itself become the moral thing to do. Not because the act is inherently right, but because it reinforces the structure society depends upon.
But morality is neither universal nor monolithic. It is as fractured and kaleidoscopic as the societies and subcultures that create it. Which brings us back to The Penguin.
Crime’s Moral Code
The Penguin thrives in a criminal underworld where the moral compass points in a different direction. In the dominant society’s eyes, crime is immoral. Robbery, murder, racketeering – all “bad,” all forbidden. But within the subculture of organised crime, a parallel morality exists. Honour among thieves, loyalty to the family, the unspoken rules of the game – these are their ethics, and they matter deeply to those who live by them.
When one criminal praises another – “You done good” – after a successful heist or a precise hit, it’s a moral judgement within their own framework. Outside that framework, society condemns the same actions as abhorrent. Yet even dominant societies carve out their own moral exceptions. Killing, for instance, is broadly considered immoral. Murder is outlawed. But capital punishment? That’s legal, and often deemed not only acceptable but righteous. Kant argued it was a moral imperative. Nietzsche, ever the cynic, saw this duality for what it was: a power dynamic cloaked in self-righteousness.
In The Penguin, we see this dichotomy laid bare. The underworld isn’t without morals; it simply operates on a different axis. And while the larger society might disdain it, the hypocrisy of their own shifting moral codes remains unexamined.
Final Thoughts on the Series
I’ll save other philosophical musings about The Penguin for another time – spoilers would be unavoidable, after all. But here’s a quick review: the series leans into drama, eschewing flashy gimmicks for a grittier, more grounded tone. The writing is generally strong, though there are moments of inconsistency – plot holes and contrivances that mar an otherwise immersive experience. Whether these flaws stem from the writers, director, or editor is anyone’s guess, but the effect is the same: they momentarily yank the viewer out of the world they’ve built.
Still, it’s a worthwhile watch, especially if you’re a fan of mob-style crime dramas. The final episode was, in my estimation, the best of the lot – a satisfying culmination that leaves the door ajar for philosophical ruminations like these.
Have you seen it? What are your thoughts – philosophical or otherwise? Drop a comment below. Let’s discuss.
Let’s talk about Less Than Zero. No, not the film. I’m talking about the book—Bret Easton Ellis’s nihilistic masterpiece that drags you through a moral cesspit of 1980s Los Angeles. You might remember it as the story that makes American Psycho look like a quirky self-help guide. It’s dark, it’s bleak, and it doesn’t pretend to offer you a shred of hope.
And then there’s the movie adaptation.
Oh, the movie. It’s as though someone read Ellis’s unflinching tale of moral rot and thought, You know what this needs? Friendship. And a redemption arc. And maybe some heartfelt music in the background. Hollywood, in all its infinite wisdom, decided that audiences couldn’t handle the book’s existential despair. So, they took a story about the void—about the emptiness of privilege, the suffocation of apathy, and the complete erosion of human connection—and gave it a fuzzy moral centre.
Here’s the gist: The book is nihilism incarnate. It follows Clay, a disaffected college student who comes home to LA for Christmas and is immediately swallowed whole by a world of cocaine, vapid socialites, and casual cruelty. No one learns anything. No one grows. In fact, the whole point is that these characters are so morally bankrupt, so irreparably hollow, that they’re beyond redemption. If you’re looking for a happy ending, don’t bother—Ellis leaves you stranded in the abyss, staring into the void, wondering if there’s any point to anything. Spoiler: there’s not.
Then along comes the 1987 film, directed by Marek Kanievska. It keeps the names of the characters—Clay, Blair, Julian—but not much else. Instead of being an icy observer of LA’s decadence, Clay is transformed into a love-struck saviour. Blair, a passive figure in the novel, becomes a supportive girlfriend. And Julian—oh, poor Julian—is turned into a sacrificial lamb for the sake of a heartfelt narrative about friendship and second chances.
The film turns Less Than Zero into an anti-drug PSA. It’s basically Nancy Reagan Presents: a story of addiction, redemption, and the power of love, wrapped in a slick 80s aesthetic. Robert Downey Jr., to his credit, gives a brilliant performance as Julian, the doomed addict. But the character is barely recognisable compared to his literary counterpart. In the book, Julian’s descent into drug-fuelled depravity isn’t a cautionary tale—it’s just another symptom of a world where nothing and no one has any value. In the film, Julian is tragic, yes, but in a way that invites sympathy and, crucially, an attempt at salvation.
Let’s not forget the ending. The novel ends on a note so cold it could freeze your soul: Clay leaves Los Angeles, unchanged, unbothered, and unmoved. The film, however, concludes with Clay and Blair driving off into the sunset, having vowed to turn their lives around. It’s saccharine. It’s pandering. It’s the cinematic equivalent of slapping a motivational poster over a painting by Francis Bacon.
Why did Hollywood do this? Simple: nihilism doesn’t sell. You can’t slap it on a movie poster and expect audiences to line up at the box office. People want catharsis, not existential despair. And so, the filmmakers gutted Less Than Zero of its soul (or lack thereof), replacing its stark nihilism with a hopeful narrative about the power of human connection.
Here’s the kicker, though: by doing this, the film completely misses the point of Ellis’s novel. Less Than Zero is a critique of LA’s shallow, soulless culture—a world where connection is impossible because no one feels anything. Turning it into a feel-good story about saving a friend from addiction is not just a betrayal; it’s downright laughable. It’s like adapting 1984 into a rom-com where Winston and Julia overthrow Big Brother and live happily ever after.
To be fair, the film isn’t bad—if you forget the source material exists. It’s well-acted, stylishly shot, and undeniably entertaining. But as an adaptation, it’s a travesty. It’s Ellis’s Less Than Zero with all the edges sanded down, the grit scrubbed clean, and a shiny coat of sentimentality slapped on top.
So, if you’ve read the book and thought, Wow, that was bleak—I wonder if the movie is any lighter?, the answer is yes, but not in a good way. It’s lighter because it’s hollowed out, stripped of its existential weight, and repackaged as something safe and digestible.
And if you haven’t read the book? Do yourself a favour: skip the movie, pour yourself a stiff drink, and dive into Ellis’s bleak masterpiece. Just don’t expect any warm, fuzzy feelings—it’s called Less Than Zero for a reason.
Jean-François Lyotard’s Le Différend has a way of gnawing at you—not with profound revelations, but with the slow, disquieting erosion of assumptions. It got me thinking about something uncomfortably obvious: political orientation is nothing more than the secular cousin of religious indoctrination. Just as most people will, without much scrutiny, cling to the religion of their upbringing and defend it as the One True Faith, the same applies to their political worldview. Whether you’re baptised into Anglicanism or wade knee-deep into the waters of neoliberalism, the zeal is indistinguishable.
Of course, there are the self-proclaimed rebels who smugly declare they’ve rejected their parents’ politics. The ones who went left when Mum and Dad leaned right or discovered anarchism in the ruins of a conservative household. But let’s not be fooled by the patina of rebellion: they may have switched teams, but they’re still playing the same game. They’ve accepted the foundational myths of institutions and democracy—those hallowed, untouchable idols. Like religion, these constructs are not just defended but sanctified, preached as the best or only possible versions of salvation. Dissenters are heretics; non-believers are unthinkable.
It’s not that political ideologies are inherently bad (just like religion has its occasional charm). It’s that the devout rarely stop to question whether the framework itself might be the problem. They assume the boundaries are fixed, the terms are immutable, and the debate is merely about the correct interpretation of the catechism. But if Lyotard has taught us anything, it’s this: the real battles—the différends—are the ones no one’s even acknowledging because the language to articulate them doesn’t exist in the prevailing orthodoxy.