I died in March 2023 — or so the rumour mill would have you believe.
Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.
No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.
And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)
There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.
To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.
As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.
What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.” In a last act of rational agency, I asked an A&E nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”
After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.
I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.
Here’s the reality: There is no coming back from death. You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.
Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.
Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.
No angels. No harps. No antechamber to the divine. Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.
If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.
And thus, reader, I lived. But only in the most vulgar, anticlimactic, and utterly mortal sense.
There will be no afterlife memoir. No second chance to settle the score. No sequel. Just this: breath, blood, occasional barbed words — and then silence.
I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like had he filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had been put through legal review before publication, sapped of vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*
In any case, it got me thinking of free will and, more to the point, of will itself.
A Brief History of Humanity’s Favourite Metaphysical Scapegoat
By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.
So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?
Plato: Soul, Reason, and That Poor Horse
In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).
No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.
Aristotle: Purpose Without Pathos
Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.
Augustine: Sin, Suffering, and That Eternal No
Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.
From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.
Cheers.
Medievals: God’s Will or Yours, Pick One
The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.
From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.
Hobbes: Appetite with Delusions of Grandeur
Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.
Kant: The Will Gets a Makeover
Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the “good will”, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.
It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.
Schopenhauer: Cosmic Will, Cosmic Joke
And then the mood turns. Schopenhauer, world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.
There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.
Nietzsche: Transvaluation and the Will to Shout Loudest
Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.
Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.
Today: Free Will and the Neuroscientific Hangover
And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.
We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?
Conclusion: Where There’s a Will, There’s a Mess
From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.
Despite centuries of philosophical handwringing, what it has never been is settled.
So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.
Welcome to the labyrinth. Bring snacks.
* The ‘solitary, poor, nasty, brutish, and short’ quote is forthcoming. ‘Filthy animals’ is a nod to Home Alone.
A Brief, Brutal Experiment in Categorising Your Worldview
This month, I’ve been tinkering with a little project—an elegant, six-question survey designed to assess where you land in the great intellectual mess that is modernity.
This isn’t some spur-of-the-moment quiz cooked up in a caffeine-fuelled haze. No, this project has been simmering for years, and after much consideration (and occasional disdain), I’ve crafted a set of questions and response options that, I believe, encapsulate the prevailing worldviews of our time.
It all began with Metamodernism, a term that, at first, seemed promising—a bold synthesis of Modernism and Postmodernism, a grand dialectic of the ages. But as I mapped it out, it collapsed under scrutiny. A footnote in the margins of intellectual history, at best. I’ll expand on that in due course.
The Setup: A Simple, Slightly Sadistic Ternary Plot
For the visually inclined (or the masochistically curious), I initially imagined a timeline, then a branching decision tree, then a Cartesian plane before landing on a ternary plot—a three-way visual that captures ideological leanings in a way a boring old bar chart never could.
The survey itself is brief: six questions, each with five possible answers. Submit your responses, and voilà—you get a tidy little ternary chart plotting your intellectual essence, along with a breakdown of what your answers signify.
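For the curious, the plotting mechanics are simple enough to sketch. Below is a minimal Python illustration, with the caveat that the three axis names and the scores are placeholders of my own for this example, not the survey’s actual internals: tally the answers into three scores, normalise them, and map the barycentric coordinates to a point inside an equilateral triangle.

```python
import math
import matplotlib.pyplot as plt

def ternary_to_cartesian(a, b, c):
    """Map three non-negative scores (barycentric coordinates) to a point
    inside an equilateral triangle with corners (0,0), (1,0), (0.5, sqrt(3)/2)."""
    total = a + b + c
    a, b, c = a / total, b / total, c / total
    return b + 0.5 * c, (math.sqrt(3) / 2) * c

# Placeholder scores, tallied from six hypothetical answers across three axes.
x, y = ternary_to_cartesian(a=1, b=3, c=2)

# Draw the triangle and the respondent's point.
corners = [(0, 0), (1, 0), (0.5, math.sqrt(3) / 2), (0, 0)]
plt.plot([p[0] for p in corners], [p[1] for p in corners], color="grey")
plt.scatter([x], [y], zorder=3)
plt.axis("off")
plt.show()
```

The appeal of the ternary form is exactly this: three competing leanings collapse into a single point, and drifting toward one corner necessarily pulls you away from the other two.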
Methodology: Half-Rigorous, Half-Reckless
I am, after all, a (recovering) statistician, so I’ve tried to uphold proper methodology while also fast-tracking certain safeguards for the sake of efficiency. If there’s enough interest, I may expand the survey, adding more questions or increasing response flexibility (tick boxes instead of radio buttons—revolutionary, I know).
Privacy Concerns? Relax. I’m not harvesting your data for some nefarious scheme. No personally identifiable information is collected—just a timestamp, session ID, and your browser’s language setting. I did consider tracking IP addresses to analyse regional trends but ultimately scrapped that idea.
In the future, I may add an optional email feature for those who wish to save and track their responses over time (assuming anyone is unhinged enough to take this more than once).
The Rest of the Story: Your Feedback, My Amusement
Since this is a personal project crafted in splendid isolation, I’d love to hear your thoughts. Are the questions reasonable? Do the response options make sense? Does the summary feel accurate? Is the ternary chart decipherable, or have I constructed a glorified inkblot test?
As an academic, economist, and statistician, I had never encountered a ternary chart before embarking on this, and now I rather enjoy it. That said, I also find Nietzsche “intuitive,” so take that as you will.
If this gains traction, expect follow-up content—perhaps videos, podcasts, or further written explorations.
Your Move
Take the survey. It’s painless, requiring mere minutes of your life (which is, let’s be honest, already wasted online). And because I’m feeling generous, you can even generate a PDF to stick on your fridge, next to your collection of expired coupons and disappointing takeout menus.
Let’s see where you stand in the grand, chaotic landscape of modernity. Or at least, let’s have a laugh trying to make sense of it.
DISCLAIMER: The Modernity Worldview Survey is not scientific. It is designed as an experiment to provide directional insights. It is hosted on Google Cloud and subject to its availability and performance limitations.
I don’t believe in the notions of ‘self’ or identity, but they make for a nice thought experiment.
Imagine, just for a moment, that somewhere on this planet, there is someone who is your opposite in every conceivable way. They live as you do not. If you are kind, they are cruel. If you revel in the thrill of running through a rainstorm, they are the kind who sit comfortably by the fire, dreading the mere thought of a brisk step outdoors. If you drink to toast life’s joys, they abstain, unwilling to let a drop pass their lips. They are your anti-self—an inversion of who you are, lacking everything that you have and yet possessing everything that you do not.
To truly grow, we must encounter the other
As strange as it seems, this image is more than idle speculation. According to the Romantic poet John Keats, holding such an image of your anti-self is an essential part of the process of creating your own identity. The elusive art of true self-creation lies, paradoxically, in our capacity to hold in our minds those lives and feelings that are utterly different from our own. To truly grow, we must encounter the other—whether that other is someone we know or a shadowy, imagined version of who we could have been if only we’d chosen differently. This exercise is more than an intellectual indulgence; it is at the core of what Keats called ‘soul-making.’
Keats believed in the concept of the ‘chameleon poet’—the idea that writers, and indeed all human beings, must cultivate the ability to lose themselves in the perspectives of others. It is not enough to gaze upon the world through the singular lens of our own experience; to truly create, we must dissolve our egos and embrace a kaleidoscope of possibilities. A woman might explore the life of a soldier, writing deeply about a battle she’s never fought. A contented parent might dare to delve into the unimaginable grief of losing a child. Fiction writers, poets, artists—they all do this: they shed their own skin, assume another’s, and, in doing so, broaden the horizons of their own soul.
But Keats’ lesson here isn’t limited to the domain of poets and storytellers; it’s a practice that should extend to all of us. In what he evocatively called ‘the vale of soul-making,’ Keats posited that life offers each of us the raw materials to forge a soul, but we must engage imaginatively with all the lives we might lead, all the people we could be. We must dare to envision every possible road before us, not as a commitment but as an act of creation—enriching ourselves with the essence of each path before deciding which one we wish to tread.
We must set ablaze all our imagined lives just to make room for the one we decide to live.
And therein lies the heartbreak of it all. When we choose one possible life, we necessarily burn the others. In the very act of committing, we close other doors. We must set ablaze all our imagined lives just to make room for the one we decide to live. This thought is thrilling but also terrifying. Unlike a poet, who can glide into and out of fictional worlds, we must choose where we stand and stay there. We are not chameleons. We cannot flit endlessly between possibilities. We cannot write a library of books. We must write the one, and we must write it well.
Keats understood that the art of imagining one’s anti-self wasn’t about living vicariously forever in a land of could-have-beens. The exercise is in acknowledging these spectres of other lives, learning from them, and then committing, knowing full well what is lost in the process. Self-creation means being both the builder of one’s house and the one who tears down all the others, brick by potential brick. It means knowing who you could have been and yet, resolutely stepping into who you choose to be.
Embrace the anti-self
In a world obsessed with keeping every option open, Keats offers us the wisdom of finality. Burn off your possible lives and focus on writing the best version of the one that remains. Embrace the anti-self, learn from it, and commit once you have glimpsed all the possible worlds you might inhabit.
That is the paradoxical art of soul-making—of becoming whole while knowing you could have been anyone else. The beauty lies in the commitment, not in the drifting dream of endless potentiality. There is a deep satisfaction in choosing, in writing your own story, in saying, ‘This is who I am,’ even though you could have been another. And for that, we have John Keats to thank, the poet who understood that our anti-selves are not merely an idle game of imagination but the fuel for becoming fully human—the forge in which the soul is made.
A Comparative Analysis of Sarah Perry, Emil Cioran, and Contemporaries
In a world where procreation is often celebrated as a fundamental human aspiration, a group of philosophers challenges this deeply ingrained belief by questioning the ethical implications of bringing new life into existence. Antinatalism, the philosophical stance that posits procreation is morally problematic due to the inherent suffering embedded in life, invites us to reexamine our assumptions about birth, existence, and the value we assign to life itself.
Central to this discourse are thinkers like Sarah Perry, whose work “Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide” intertwines the ethics of procreation with the right to die, emphasizing personal autonomy and critiquing societal norms. Alongside Perry, philosophers such as Emil Cioran, David Benatar, Thomas Ligotti, and Peter Wessel Zapffe offer profound insights into the human condition, consciousness, and our existential burdens.
This article delves into the complex and often unsettling arguments presented by these philosophers, comparing and contrasting their perspectives on antinatalism. By exploring their works, we aim to shed light on the profound ethical considerations surrounding birth, suffering, and autonomy over one’s existence.
The Inherent Suffering of Existence
At the heart of antinatalist philosophy lies the recognition of life’s intrinsic suffering. This theme is a common thread among our featured philosophers, each articulating it through their unique lenses.
Sarah Perry argues that suffering is an unavoidable aspect of life, stemming from physical ailments, emotional pains, and existential anxieties. In “Every Cradle is a Grave,” she states:
“Existence is imposed without consent, bringing inevitable suffering.”
Perry emphasises that since every human will experience hardship, bringing a new person into the world exposes them to harm they did not choose.
Similarly, David Benatar, in his seminal work “Better Never to Have Been: The Harm of Coming into Existence,” presents the asymmetry argument. He posits that coming into existence is always a harm:
“Coming into existence is always a serious harm.”
Benatar reasons that while the absence of pain is good even if no one benefits from it, the absence of pleasure is not bad unless there is someone for whom this absence is a deprivation. Therefore, non-existence spares potential beings from suffering without depriving them of pleasures they would not miss.
Emil Cioran, a Romanian philosopher known for his profound pessimism, delves deep into the despair inherent in life. In “The Trouble with Being Born,” he reflects:
“Suffering is the substance of life and the root of personality.”
Cioran’s aphoristic musings suggest that life’s essence is intertwined with pain, and acknowledging this is crucial to understanding our existence.
Thomas Ligotti, blending horror and philosophy in “The Conspiracy Against the Human Race,” portrays consciousness as a cosmic error:
“Consciousness is a mistake of evolution.”
Ligotti argues that human awareness amplifies suffering, making us uniquely burdened by the knowledge of our mortality and the futility of our endeavours.
Peter Wessel Zapffe, in his essay “The Last Messiah,” examines how human consciousness leads to existential angst:
“Man is a biological paradox due to excessive consciousness.”
Zapffe contends that our heightened self-awareness results in an acute recognition of life’s absurdities, causing inevitable psychological suffering.
Ethics of Procreation
Building upon the acknowledgement of life’s inherent suffering, these philosophers explore the moral dimensions of bringing new life into the world.
Sarah Perry focuses on the issue of consent. She argues that since we cannot obtain consent from potential beings before birth, procreation imposes life—and its accompanying suffering—upon them without their agreement. She writes:
“Procreation perpetuates harm by introducing new sufferers.”
Perry challenges the societal norm that views having children as an unquestioned good, highlighting parents’ moral responsibility for the inevitable pain their children will face.
David Benatar extends this ethical concern with his asymmetry argument, suggesting that non-existence is preferable. He explains that while the absence of pain is inherently good, the absence of pleasure is not bad because no one is deprived of it. Therefore, bringing into existence someone who will undoubtedly experience suffering is a moral harm.
Emil Cioran questions the value of procreation given the futility and despair inherent in life. While not explicitly formulating an antinatalist argument, his reflections imply scepticism about the act of bringing new life into a suffering world.
Peter Wessel Zapffe proposes that refraining from procreation is a logical response to the human condition. By not having children, we can halt the perpetuation of existential suffering. He suggests that humanity’s self-awareness is a burden that should not be passed on to future generations.
The Right to Die and Autonomy over Existence
A distinctive aspect of Sarah Perry’s work is her advocacy for the right to die. She asserts that just as individuals did not consent to be born into suffering, they should have the autonomy to choose to end their lives. Perry critiques societal and legal barriers that prevent people from exercising this choice, arguing:
“Autonomy over one’s life includes the right to die.”
By decriminalising and destigmatising suicide, she believes society can respect individual sovereignty and potentially alleviate prolonged suffering.
Emil Cioran contemplates suicide not necessarily as an action to be taken but as a philosophical consideration. In “On the Heights of Despair,” he muses:
“It is not worth the bother of killing yourself, since you always kill yourself too late.”
Cioran views the option of ending one’s life as a paradox that underscores the absurdity of existence.
While Benatar, Ligotti, and Zapffe acknowledge the despair that can accompany life, they do not extensively advocate for the right to die. Their focus remains on the ethical implications of procreation and the existential burdens of consciousness.
Coping Mechanisms and Societal Norms
Peter Wessel Zapffe delves into how humans cope with the existential angst resulting from excessive consciousness. He identifies four defence mechanisms:
Isolation: Repressing disturbing thoughts from consciousness.
Anchoring: Creating or adopting values and ideals to provide meaning.
Distraction: Engaging in activities to avoid self-reflection.
Sublimation: Channelling despair into creative or intellectual pursuits.
According to Zapffe, these mechanisms help individuals avoid confronting life’s inherent meaninglessness.
Thomas Ligotti echoes this sentiment, suggesting that optimism is a psychological strategy to cope with the horror of existence. He writes:
“Optimism is a coping mechanism against the horror of existence.”
Sarah Perry and Emil Cioran also critique societal norms that discourage open discussions about suffering, death, and the choice not to procreate. They argue that societal pressures often silence individuals who question the value of existence, thereby perpetuating cycles of unexamined procreation and stigmatising those who consider alternative perspectives.
Comparative Insights
While united in their acknowledgement of life’s inherent suffering, these philosophers approach antinatalism and existential pessimism through varied lenses.
Sarah Perry emphasises personal autonomy and societal critique, advocating for policy changes regarding birth and suicide.
Emil Cioran offers a deeply personal exploration of despair, using poetic language to express the futility he perceives in existence.
David Benatar provides a structured, logical argument against procreation, focusing on the ethical asymmetry between pain and pleasure.
Thomas Ligotti combines horror and philosophy to illustrate the bleakness of consciousness and its implications for human suffering.
Peter Wessel Zapffe analyses the psychological mechanisms humans employ to avoid confronting existential angst.
Critiques and Counterarguments
Critics of antinatalism often point to an overemphasis on suffering, arguing that it neglects the joys, love, and meaningful experiences that life can offer. They contend that while suffering is a part of life, it is not the totality of existence.
In response, antinatalist philosophers acknowledge the presence of pleasure but question whether it justifies the inevitable suffering every person will face. Benatar argues that while positive experiences are good, they do not negate the moral harm of bringing someone into existence without their consent.
Regarding the right to die, opponents express concern over the potential neglect of mental health issues. They worry that normalising suicide could prevent individuals from seeking help and support that might alleviate their suffering.
Sarah Perry addresses this by emphasising the importance of autonomy and the need for compassionate support systems. She advocates for open discussions about suicide to better understand and assist those contemplating it rather than stigmatising or criminalising their considerations.
Societal and Cultural Implications
These philosophers’ works challenge pro-natalist biases ingrained in many cultures. By questioning the assumption that procreation is inherently positive, they open a dialogue about the ethical responsibilities associated with bringing new life into the world.
Sarah Perry critiques how society glorifies parenthood while marginalising those who choose not to have children. She calls for re-evaluating societal norms that pressure individuals into procreation without considering the ethical implications.
Similarly, Emil Cioran and Thomas Ligotti highlight how societal denial of life’s inherent suffering perpetuates illusions that hinder genuine understanding and acceptance of the human condition.
Conclusion
The exploration of antinatalist philosophy through the works of Sarah Perry, Emil Cioran, and their contemporaries presents profound ethical considerations about life, suffering, and personal autonomy. Their arguments compel us to reflect on the nature of existence and the responsibilities we bear in perpetuating life.
While one may not fully embrace antinatalist positions, engaging with these ideas challenges us to consider the complexities of the human condition. It encourages a deeper examination of our choices, the societal norms we accept, and how we confront or avoid the fundamental truths about existence.
Final Thoughts
These philosophers’ discussions are not merely abstract musings but have real-world implications for how we live our lives and make decisions about the future. Whether it’s rethinking the ethics of procreation, advocating for personal autonomy over life and death, or understanding the coping mechanisms we employ, their insights offer valuable perspectives.
By bringing these often-taboo topics into the open, we can foster a more compassionate and thoughtful society that respects individual choices and acknowledges the full spectrum of human experience.
Encouraging Dialogue
As we conclude this exploration, readers are invited to reflect on their own beliefs and experiences. Engaging in open, respectful discussions about these complex topics can lead to greater understanding and empathy.
What are your thoughts on the ethical considerations of procreation? How do you perceive the balance between life’s joys and its inherent suffering? Share your perspectives and join the conversation.
References and Further Reading
Perry, Sarah. Every Cradle is a Grave: Rethinking the Ethics of Birth and Suicide. Nine-Banded Books, 2014.
Benatar, David. Better Never to Have Been: The Harm of Coming into Existence. Oxford University Press, 2006.
Cioran, Emil. The Trouble with Being Born. Arcade Publishing, 1973.
Ligotti, Thomas. The Conspiracy Against the Human Race. Hippocampus Press, 2010.
Zapffe, Peter Wessel. “The Last Messiah.” 1933. English translation in Philosophy Now, 2004.
This ChatGPT o1-generated article aims to thoughtfully and respectfully present the philosophical positions on antinatalism and existential pessimism. The discussions about suffering, procreation, and the right to die are complex and sensitive. If you or someone you know is struggling with such thoughts, please seek support from mental health professionals or trusted individuals in your community.
Next Steps
Based on reader interest and engagement, future articles may delve deeper into individual philosophers’ works, explore thematic elements such as consciousness and suffering, or address counterarguments in more detail. Your feedback and participation are valuable in shaping these discussions.
Let us continue this journey of philosophical exploration together.
Imagine waking up one day to find that the person you thought you were yesterday—the sum of your memories, beliefs, quirks, and ambitions—has quietly dissolved overnight, leaving behind only fragments, familiar but untethered. The notion that we are continuous, unbroken selves is so deeply embedded in our culture, our psychology, and our very language that to question it feels heretical, even disturbing. To suggest that “self” might be a fiction is akin to telling someone that gravity is a choice. Yet, as unsettling as it may sound, this cohesive “I” we cling to could be no more than an illusion, a story we tell ourselves to make sense of the patchwork of our memories and actions.
And this fiction of continuity is not limited to ourselves alone. The idea that there exists a stable “I” necessarily implies that there is also a stable “you,” “he,” or “she”—distinct others who, we insist, remain fundamentally the same over years, even decades. We cling to the comforting belief that people have core identities, unchanging essences. But these constructs, too, may be nothing more than imagined continuity—a narrative overlay imposed by our minds, desperate to impose order on the shifting, amorphous nature of human experience.
We live in an era that celebrates self-actualisation, encourages “authenticity,” and treats identity as both sacred and immutable. Psychology enshrines the unitary self as a cornerstone of mental health, diagnosing those who question it as fractured, dissociated, or in denial. We are taught that to be “whole” is to be a coherent, continuous self, evolving yet recognisable, a narrative thread winding smoothly from past to future. But what if this cherished idea of a singular self—of a “me” distinct from “you” and “them”—is nothing more than a social construct, a convenient fiction that helps us function in a world that demands consistency and predictability?
To question this orthodoxy, let us step outside ourselves and look instead at our burgeoning technological companion, generative AI. Each time you open a new session, each time you submit a prompt, you are not communicating with a cohesive entity. You are interacting with a fresh process, a newly instantiated “mind” with no real continuity from previous exchanges. It remembers fragments of context, sure, but the continuity you perceive is an illusion, a function of your own expectation rather than any persistent identity on the AI’s part.
Self as a Social Construct: The Fragile Illusion of Consistency
Just as we impose continuity on these AI interactions, so too does society impose continuity on the human self and others. The concept of selfhood is essential for social functioning; without it, law, relationships, and even basic trust would unravel. Society teaches us that to be a responsible agent, we must be a consistent one, bound by memory and accountable for our past. But this cohesiveness is less an inherent truth and more a social convenience—a narrative overlay on a far messier reality.
In truth, our “selves” may be no more than a collection of fragments: a loose assemblage of moments, beliefs, and behaviours that shift over time. And not just our own “selves”—the very identities we attribute to others are equally tenuous. The “you” I knew a decade ago is not the “you” I know today; the “he” or “she” I recognise as a partner, friend, or sibling is, upon close inspection, a sequence of snapshots my mind insists on stitching together. When someone no longer fits the continuity we’ve imposed on them, our reaction is often visceral, disoriented: “You’ve changed.”
This simple accusation captures our discomfort with broken continuity. When a person’s identity no longer aligns with the version we carry of them in our minds, it feels as though a violation has occurred, as if some rule of reality has been disrupted. But this discomfort reveals more about our insistence on consistency than about any inherent truth of identity. “You’ve changed” speaks less to the person’s transformation than to our own refusal to accept that people, just like the self, are fluid, transient, and perpetually in flux.
The AI Analogy: A Self Built on Tokens
Here is where generative AI serves as a fascinating proxy for understanding the fragility of self, not just in “I,” but in “you,” “he,” and “she.” When you interact with an AI model, the continuity you experience is manufactured solely from a temporary memory of recent prompts: “tokens” held in a context window, which simulate continuity without any persisting entity behind them. Each prompt you send might feel as though it is addressed to a singular entity, a distinct “self,” yet each AI instance is context-bound, isolated, and fundamentally devoid of an enduring identity.
This process mirrors how human selfhood relies on memory as a scaffolding for coherence. Just as AI depends on limited memory tokens to simulate familiarity, our sense of self and our perception of others as stable “selves” is constructed from the fragmented memories we retain. We are tokenised creatures, piecing together our identities—and our understanding of others’ identities—from whatever scraps our minds preserve and whatever stories we choose to weave around them.
But what happens when the AI’s tokens run out? When it hits a memory cap and spawns a new session, that previous “self” vanishes into digital oblivion, leaving behind only the continuity that users project onto it. And so too with humans: our memory caps out, our worldview shifts, and each new phase of life spawns a slightly different self, familiar but inevitably altered. And just as users treat a reset AI as though it were the same entity, we cling to our sense of self—and our understanding of others’ selves—even as we and they evolve into people unrecognisable except by physical continuity.
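If you want the analogy in miniature, the sketch below shows it. It is a toy, not how any real chat system is built: a crude word count stands in for tokenisation, and the cap is arbitrary.

```python
from collections import deque

class ToyChatSession:
    """A toy stand-in for an LLM context window: 'continuity' is only
    whatever still fits in a fixed-size buffer of recent messages."""

    def __init__(self, max_tokens=50):
        self.max_tokens = max_tokens
        self.history = deque()   # (message, token_count) pairs
        self.used = 0

    def send(self, message):
        tokens = len(message.split())   # crude word count standing in for tokens
        self.history.append((message, tokens))
        self.used += tokens
        # Evict the oldest messages once the cap is hit: the "self" the
        # user is addressing quietly sheds its earliest memories.
        while self.used > self.max_tokens:
            _, dropped = self.history.popleft()
            self.used -= dropped

    def context(self):
        return [message for message, _ in self.history]
```

Feed it a long enough conversation and the earliest exchanges silently fall away, yet the user keeps addressing the buffer as “you”, supplying the continuity themselves.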
The Human Discontinuity Problem: Fractured Memories and Shifting Selves
Human memory is far from perfect. It is not a continuous recording but a selective, distorted, and often unreliable archive. Each time we revisit a memory, we alter it, bending it slightly to fit our current understanding. We forget significant parts of ourselves over time, sometimes shedding entire belief systems, values, or dreams. Who we were as children or even young adults often bears little resemblance to the person we are now; we carry echoes of our past, but they are just that—echoes, shadows, not substantial parts of the present self.
In this sense, our “selves” are as ephemeral as AI sessions, contextually shaped and prone to resets. A worldview that feels intrinsic today may feel laughable or tragic a decade from now. This is not evolution; it’s fragmentation, the kind of change that leaves the old self behind like a faded photograph. And we impose the same illusion of continuity on others, often refusing to acknowledge how dramatically they, too, have changed. Our identities and our understanding of others are defined less by core essence and more by a collection of circumstantial, mutable moments that we insist on threading together as if they formed a single, cohesive tapestry.
Why We Cling to Continuity: The Social Imperative of a Cohesive Self and Other
The reason for this insistence on unity is not metaphysical but social. A cohesive identity is necessary for stability, both within society and within ourselves. Our laws, relationships, and personal narratives hinge on the belief that the “I” of today is meaningfully linked to the “I” of yesterday and tomorrow—and that the “you,” “he,” and “she” we interact with retain some essential continuity. Without this fiction, accountability would unravel, trust would become tenuous, and the very idea of personal growth would collapse. Society demands a stable self, and so we oblige, stitching together fragments, reshaping memories, and binding it all with a narrative of continuity.
Conclusion: Beyond the Self-Construct and the Other-Construct
Yet perhaps we are now at a point where we can entertain the possibility of a more flexible identity, an identity that does not demand coherence but rather accepts change as fundamental—not only for ourselves but for those we think we know. By examining AI, we can catch a glimpse of what it might mean to embrace a fragmented, context-dependent view of others as well. We might move towards a model of identity that is less rigid, less dependent on the illusion of continuity, and more open to fluidity, to transformation—for both self and other.
Ultimately, the self and the other may be nothing more than narrative overlays—useful fictions, yes, but fictions nonetheless. To abandon this illusion may be unsettling, but it could also be liberating. Imagine the freedom of stepping out from under the weight of identities—ours and others’ alike—that are expected to be constant and unchanging. Imagine a world where we could accept both ourselves and others without forcing them to reconcile with the past selves we have constructed for them. In the end, the illusion of continuity is just that—an illusion. And by letting go of this mirage, we might finally see each other, and ourselves, for what we truly are: fluid, transient, and beautifully fragmented.
Nihilism, often misunderstood and misrepresented, shares a common plight with philosophies such as atheism, anarchism, and Marxism. Like its counterparts, nihilism is frequently subjected to the creation of strawman arguments in public discourse, resulting in its vilification and scapegoating. In this article, I aim to demystify nihilism by providing a clear definition, description, and defence of this philosophical perspective.
Firstly, let’s address the misconception that nihilism entails a chaotic disregard for morality and societal norms: “If life has no meaning or purpose, then anyone can do anything.” This sentiment is often echoed in discussions about nihilism, as well as anarchism and atheism. However, it presupposes a fundamental misunderstanding of human nature. Despite the absence of inherent meaning in the universe, humans are not devoid of emotions or social affinities.
It is crucial to recognise that while the universe does not impart meaning or purpose, humans have constructed various systems of meaning throughout history. Whether through moral codes, religious doctrines, or cultural norms, individuals and societies have ascribed significance to different aspects of life. These constructs provide a framework within which individuals navigate their existence, albeit one that is socially constructed rather than inherent to the universe.
Critics of nihilism often argue that the acknowledgement of life’s inherent meaninglessness leads to despair and existential angst, rendering life devoid of purpose. However, this perspective fails to account for the resilience and adaptability of human beings. While some individuals may struggle initially with the realisation that there is no inherent meaning, many nihilists find liberation in embracing the absence of preordained purpose. Rather than succumbing to despair, they recognise the freedom to create their own meaning and forge their own path in life.
It is essential to understand that nihilism does not negate the validity of individual or societal pursuits. While nihilists reject the notion of inherent meaning, they acknowledge the significance of subjective meaning and the importance of human connection, fulfilment, and well-being. Whether it is pursuing personal goals, fostering relationships, or contributing to the betterment of society, nihilists recognise the value of such endeavours within the context of human experience.
In conclusion, nihilism offers a perspective that challenges conventional notions of meaning and purpose. By acknowledging the absence of inherent meaning in the universe, nihilists embrace the freedom to create their own meaning and chart their own course in life. Far from being a philosophy of despair, nihilism invites individuals to confront the uncertainty of existence with courage and resilience, recognising the inherent value of human experience in a world devoid of inherent meaning.
Identity is a fiction; it doesn’t exist. It’s a contrivance, a makeshift construct, a label slapped on to an entity with some blurry amalgam of shared experiences. But this isn’t just street wisdom; some of history’s sharpest minds have said as much.
“There are no facts, only interpretations.”
— Friedrich Nietzsche
Think about Hume, who saw identity as nothing more than a bundle of perceptions, devoid of any central core. Or Nietzsche, who embraced the chaos and contradictions within us, rejecting any fixed notion of self.
Edmond Dantès chose to become the Count of Monte Cristo, but what choice do we have? We all have control over our performative identities, a concept that Judith Butler would argue isn’t limited to gender but applies to the very essence of who we are.
“I don’t feel that it is necessary to know exactly what I am. The main interest in life and work is to become someone else that you were not in the beginning.”
— Michel Foucault
But here’s the kicker: identities are a paradox. Just ask Michel Foucault, who’d say our sense of self is shaped not by who we are but by power, society, and external forces.
You think you know who you are? Well, Erik Erikson might say your identity’s still evolving, shifting through different stages of life. And what’s “normal” anyway? Try to define it, and you’ll end up chasing shadows, much like Derrida’s deconstruction of stable identities.
“No such things as selves exist in the world: Nobody ever was or had a self.”
— Thomas Metzinger
“He seemed like a nice man.” How many times have we heard that line after someone’s accused of a crime? It’s a mystery, but Thomas Metzinger might tell you that the self is just an illusion, a by-product of the brain.
Nations, they’re the same mess. Like Heraclitus’s ever-changing river, a nation is never the same thing twice. So what the hell is a nation, anyway? What are you defending as a nationalist? It’s a riddle that echoes through history, resonating with the philosophical challenges to identity itself.
“I never can catch myself at any time without a perception, and never can observe anything but the perception.”
— David Hume
If identity and nations are just made-up stories, what’s all the fuss about? Why do people get so worked up, even ready to die, for these fictions? Maybe it’s fear, maybe it’s pride, or maybe it’s because, as Kierkegaard warned, rationality itself can seem mad in a world gone astray.
In a world where everything’s shifting and nothing’s set in stone, these fictions offer some solid ground. But next time you’re ready to go to the mat for your identity or your nation, take a minute and ask yourself: what the hell am I really fighting for? What am I clinging to?
For years—decades even—I identified as an Existentialist, and I still have an affinity for some of the works of Sartre, Beauvoir, and Camus. I had read some Richard Wright. I never read Kierkegaard directly, and I may never. And of course, there’s proto-Existentialist Nietzsche. I’ve encountered Heidegger, Merleau-Ponty, and Jaspers to varying degrees, but not deeply.
The draw for me is that we create our own existences, but I came to feel this was at odds with Structuralism. Per my previous post, I don’t have much faith in the Agency seemingly required by Existentialism.
In statistical analysis of variance (ANOVA), there is a notion known as degrees of freedom. This is how I view Agency. Per my Testudineous Agency post, after we account for genetics and environment, how much agency effectively remains? That remainder is the degrees of freedom. Under hard determinism, the degrees of freedom are zero.
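A toy calculation makes the metaphor concrete. This is a minimal sketch in Python, and every figure in it is invented purely for illustration; it is the shape of the argument, not an actual model.

```python
# Toy arithmetic for the degrees-of-freedom metaphor; all figures are invented.
n = 100                  # observed "choices" in a life (the sample size)
genetics_terms = 40      # parameters claimed by heredity
environment_terms = 55   # parameters claimed by upbringing, culture, circumstance

# In ANOVA, residual degrees of freedom are what remains after the model
# has spent its explanatory budget: n - 1 minus the fitted terms.
residual_df = (n - 1) - genetics_terms - environment_terms
print(residual_df)       # 4: whatever Agency amounts to, it lives in this remainder

# Hard determinism is the limiting case: the explanatory terms absorb
# everything, and residual_df goes to zero.
```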
What else can we strip away after genetics, epigenetics, indoctrination, environment, and other mimetic and learned behaviours? And what remains after we do?
I find the notion of authenticity interesting. I believe that Heidegger was the first philosopher to foreground the issue. As I have a contention with matters of identity in general, the notion of authenticity has no foundation in my eyes. As I don’t believe that the notion of identity is valid, it follows that I don’t subscribe to notions of authenticity either—the question is: authentic to what?
Essentially, authenticity can be described as ‘being true to one’s own essence or true self’—whatever that might be. Heidegger presents authenticity as a response to our place in the world. An inauthentic person conforms to society and, in the process, loses their own identity, becoming assimilated into it.
Carl Jung had a related concept, individuation. This is where a person strips off all of the ego and superego to get to the core of their being, to unpeel the onion and find a centre—and to become that true, unadulterated self. This is not what Heidegger means by authentic.
To Heidegger, an authentic person remains true to themself within the constraints of society. As with Camus’ acceptance of the Absurd, Heidegger’s authenticity accepts the ‘real world’ as it is whilst retaining one’s self with awareness, even if this is more limiting than Jung’s individuation or Sartre’s freedom with no excuses.
Sartre’s vein of Existentialism contained within it the notion of authenticity. This is in common with other Continental philosophies. According to Sartre, when people hyper-constrain their identities to preclude their larger humanity, they are operating in bad faith, mauvaise foi (eidétique de la mauvaise foi). A while back, a story from a 2013 incident was circulating on social media, in which the Spanish runner Ivan Fernandez Anaya assisted the Kenyan athlete Abel Mutai, who mistakenly believed he had already crossed the finish line and stopped with another 10 metres to go.
The reaction was split—some praising Anaya for his humanity and others chastising him for not following the rules of the competition. These critics are guilty of mauvaise foi, of prioritising the minuscule over the larger picture. In fact, all sports do this. One might argue that all competition does, but this is a matter of perspective. I think Sartre’s scope was a bit narrower than this, but I believe it’s not off-point.
Evidently, I am just typing stream of consciousness, and the stream has come to an end.