Watching Sam Harris ruminate on the nature of political lies (still believing, poor lamb, that reason might one day triumph) reminds me of something more sinister: lies today are not attempts at persuasion. They are shibboleths — tribal passwords, loyalty oaths, secret handshakes performed in the broad light of day.
Video: Sam Harris tells us why Trump and his ilk lie.
Forget “alternative facts.” That charming euphemism was merely a decoy, a jangling set of keys to distract the infantile media. The real game was always deeper: strategic distortion, the deliberate blurring of perception not to deceive the outsider, but to identify the insider.
Audio: NotebookLM podcast on this topic.
When Trump — or any other post-truth demagogue — proclaims that penguins are, in fact, highly trained alien operatives from the Andromeda galaxy, the objective is not persuasion. The point is to force a choice: will you, standing before this glistening absurdity, blink and retreat into reason, stammering something about ornithology? Or will you step forward, clasp the hand of madness, and mutter, ‘Yes, my liege, the penguins have been among us all along’?
Those who demur, those who scoff or gasp or say ‘You’re an idiot,’ have failed the loyalty test. They have outed themselves as enemy combatants in the epistemic war. Truth, in this brave new world, is not a destination; it is an allegiance. To speak honestly is to wage rebellion.
Orwell, who tried very hard to warn us, understood this dynamic well: the real triumph of Big Brother was not merely to compel you to lie but to compel you to believe the lie. Koestler, another battered prophet of the age, charted how political movements sink into ritualistic unreason, demanding not conviction but performance. Swift, for his part, knew it was all hilarious if you tilted your head just right.
The bigger the lie, the better the shibboleth. Claim that two and two make five, and you catch out the weak-willed rationalists. Claim that penguins are extraterrestrials, and you find the truly devoted, the ones willing to build altars from ice and sacrifice to their feathery overlords.
It’s no accident that modern political theatre resembles a deranged initiation ritual. Each day brings a new absurdity, a fresh madness to affirm: ‘Men can become women by declaration alone!’ ‘Billionaires are victims of systemic oppression!’ ‘The penguins are amongst us, plotting!’ Each claim a little more grotesque than the last, each compliance a little more degrading, a little more irreversible.
And oh, how eagerly the initiates rush forward! Clap for the penguins, or be cast out into the howling wilderness! Better to bend the knee to absurdity than be marked as an unbeliever. Better to humiliate yourself publicly than to admit that the Emperor’s penguin suit is just a costume.
Meanwhile, the opposition — earnest, naive — keeps trying to argue, to rebut, to point out that penguins are terrestrial flightless birds. How quaint. How pathetic. They do not understand that the moment they say, “You’re an idiot,” they’ve broken the spell, declared themselves apostates, and rendered themselves politically irrelevant.
The shibboleth, once uttered, divides the world cleanly: the believers, who will say anything, do anything, believe anything, provided it marks them safe from exile; and the infidels, who cling stupidly to reality.
The future belongs, not to the true, but to the loyal. Not to the rational, but to the ritualistic. The more extravagant the lie, the greater the proof of your faith.
So raise a glass to the penguins, ye of faint heart, and prepare your soul for abasement. Or stand firm, if you dare, and be prepared to be eaten alive by those who traded reason for the rapture of belonging.
After all, in the land of the blind, the one-eyed man is not king. He’s a heretic.
I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like had he filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had to pass through a legal review before being published, sapped of vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*
Audio: NotebookLM podcast conversation on this topic.
In any case, it got me thinking of free will and, more to the point, of will itself.
A Brief History of Humanity’s Favourite Metaphysical Scapegoat
By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.
So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?
Plato: Soul, Reason, and That Poor Horse
In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).
No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.
Aristotle: Purpose Without Pathos
Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.
Augustine: Sin, Suffering, and That Eternal No
Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.
From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.
Cheers.
Medievals: God’s Will or Yours, Pick One
The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.
From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.
Hobbes: Appetite with Delusions of Grandeur
Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.
Kant: The Will Gets a Makeover
Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the “good will”, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.
It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.
Schopenhauer: Cosmic Will, Cosmic Joke
And then the mood turns. Schopenhauer, world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.
There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.
Nietzsche: Transvaluation and the Will to Shout Loudest
Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.
Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.
Today: Free Will and the Neuroscientific Hangover
And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.
We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?
Conclusion: Where There’s a Will, There’s a Mess
From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.
Despite centuries of philosophical handwringing, what it has never been is settled.
So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.
Welcome to the labyrinth. Bring snacks.
* The solitary, poor, nasty, brutish, and short quote is forthcoming. Filthy animals is a nod to Home Alone.
If you’re reading this, chances are you’re mortal. Bummer. Even worse, you may not be maximising your odds of wringing every last drop out of your limited lifespan. But fear not! Science has some answers. And the answer, at least in the United States, is shockingly unsecular: religious people, on average, live longer than their non-religious counterparts. They also tend to be happier. But don’t rush to your nearest house of worship just yet—because it’s not God, the afterlife, or divine intervention at work. It’s something far more mundane: people.
Audio: NotebookLM podcast on this topic.
The Religion-Longevity Link: A Holy Miracle or Just Good Networking?
Multiple studies have confirmed what might seem an inconvenient truth for secular folks like myself: religious participation is associated with longer lifespans. A 2018 study published in JAMA Internal Medicine found that attending religious services more than once a week was associated with a roughly 33% lower risk of mortality. That’s a pretty solid statistical incentive to at least pretend to enjoy Sunday sermons.
Why the boost in longevity? No, it’s not divine reward points. It boils down to a few key factors:
Community and Social Support: Regularly showing up to church, temple, mosque, or synagogue means interacting with the same people repeatedly, forming strong social bonds. When life gets tough, these people tend to notice and lend support.
Healthier Lifestyles: Many religious traditions frown upon self-destructive behaviours like smoking, heavy drinking, and drug use.
Lower Stress Levels: Religious belief systems provide coping mechanisms for hardship, instilling a sense of meaning and reducing existential dread.
Volunteerism and Purpose: Many religious folks engage in community service, which has been linked to greater happiness and longevity.
The Not-So-Spiritual Catch: Why Atheists and the “Spiritual but Not Religious” Miss Out
Here’s the kicker: it’s not belief in a deity that grants these benefits. It’s participation in a structured, tight-knit community. That’s why merely identifying as “spiritual” doesn’t deliver the same effects—without a committed social framework, spirituality becomes a solo endeavour. And whilst atheists can certainly find meaning in other ways, they often lack equivalent institutions providing routine, real-world social engagement.
To put it bluntly, God isn’t keeping people alive longer. Other people are. Having a tribe that notices when you don’t show up, checks in when you’re sick, and nags you into a healthier lifestyle has tangible benefits.
The Scandinavian Exception: Thriving Without Religion
“But wait,” you may say, “what about those blissfully secular Scandinavian countries? They’re barely religious, yet they consistently rank among the happiest and longest-living people on Earth.” Good point. The key difference? They have successfully replaced the social function of religion with other strong communal institutions.
Nordic nations boast robust social safety nets, well-funded public spaces, and a culture prioritising collective well-being. They don’t need church groups to function as makeshift welfare systems because the state ensures no one falls through the cracks. They also have thriving clubs, hobby groups, and worker associations that provide built-in social support.
Conclusion: What This Means for Longevity-Seeking Atheists and Introverts
If you, like me, are an atheist and also an introvert who prefers solitude, writing, and the company of generative AI, this presents a bit of a conundrum. How does one reap the benefits of social integration without enduring the horror of group activities?
The lesson here isn’t that you need to feign religious belief or force yourself into suffocating social obligations. But if you want to maximise your lifespan and well-being, some form of consistent, meaningful connection with others is essential. Whether that’s through a socialist co-op, a local philosophy club, a structured hobby group, or even just a tight circle of like-minded misanthropes, the key is to avoid total isolation.
Religion isn’t the magic ingredient—it’s just a well-tested delivery system. And in a society where other forms of community are fraying, it’s not surprising that religious folks seem to be winning the longevity lottery. The real takeaway? Find your people. Even if you’d rather be alone.
The more I watch the news and engage with social media, the more I can’t help but feel that the Western World™ has gone mad. It’s also disconcerting because, as a subjectivist, I should find this unsurprising. I guess I miss the days when it was better wrapped in relativism. Perhaps it’s just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can’t bear the load. I prompted seven generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.
Audio: NotebookLM podcast of this content.
Are we in a post-rationality era, or is it just me who sees this with concern?
ChatGPT
Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.
We now live in a world where feelings trump evidence, where expertise is met with a sneer and a “Well, I heard on TikTok that…” Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing—just trust the meme with the most retweets.
But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up—it’s only going to get weirder.
DeepSeek
The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.
Several factors contribute to this perception:
Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.
However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”
If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.
Meta AI
You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.
Factors contributing to this trend
Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.
Concerns and implications
Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.
What can be done?
Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.
You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.
Claude
I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:
The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
“Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.
At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.
As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.
These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.
Perplexity
There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].
Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].
However, it’s important to note that:
Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].
Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.
Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].
You’re certainly not alone in your concern. There’s been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.
It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.
Why do you think we might be in a post-rationality era?
¬◇ Not Diamond
The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:
Loss of Rational Certainty
The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein’s general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].
Post-Rationalism
Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].
Post-Rationalist Nihilism
Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].
Postmodern and Post-Christian Era
There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].
Practical Implications
While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].
In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.
NB: Interestingly, these ¬◇ footnotes don’t link to anywhere.
I was chatting with some Catholics on social media, responding to a post about the need to grow their base. I sympathised that it must be difficult because the product of organised religion was not exactly tailor-fit. Protestants have provision for roll-your-own, but not Catholics. This led me to a discussion about what Jung wrote about rituals – that it doesn’t matter whether the underlying rationale is remembered, because some people like ritual for its own sake – candles, incense, wafers, whatever.
(I studied Jung intensely in the mid-’90s, and while I still find him useful as metaphor, I don’t take his broader metaphysics seriously anymore – too mystical for my tastes. But this particular insight about ritual sticks.)
What follows are the unedited results.
Yes, and Jung was being quietly subversive about it. He was pointing to something most modern institutions would rather not hear.
For Carl Jung, ritual doesn’t need to be true in the propositional sense to be effective in the psychological one. Its power doesn’t reside in correct belief, historical accuracy, or even conscious assent. It resides in repetition, form, symbol, and the way those elements bypass the rational gatekeeper and go straight for the nervous system.
Which is precisely why ritual unnerves modernity.
Once you accept Jung’s claim, you’re forced to concede that meaning is not the primary engine of human orientation. Function is. A ritual can continue to stabilise, contain, and organise psychic life long after its original metaphysics has decayed, been forgotten, or become frankly unbelievable. The candles still work even if God has quietly slipped out the back.
This is deeply inconvenient for Enlightenment-descended sensibilities, which insist that symbols must justify themselves, that practices require reasons, and that belief precedes action. Jung flips that on its head. We act first. We kneel first. We chant first. Understanding, if it arrives at all, comes later as a kind of aftercare.
Which helps explain your mate’s conversion without invoking any sudden metaphysical awakening.
Catholic ritual functions even when belief wobbles. Protestantism, having stripped ritual back to avoid superstition, leaves belief carrying all the weight. Once belief falters, there’s nothing left to hold the structure together. No choreography. No inherited muscle memory. No symbolic surplus. Just a sermon competing with the rest of the week’s content.
Jung saw this coming. He understood that modern people would continue to need ritual even as they lost faith in the stories that once justified it. His worry wasn’t that people would abandon religion. It was that they would abandon ritual competence and then wonder why anxiety, fragmentation, and compensatory myth-making flooded in through the cracks.
From that angle, Catholicism’s current crisis looks different again.
The conservatives think ritual must be defended because it encodes truth. The progressives think ritual can be loosened because truth can be updated. Jung would have said both are missing the point.
Ritual persists because it does work that language cannot do. It orders time. It disciplines the body. It makes the invisible feel locally real. Strip away the incense, the repetition, the inherited forms, and you don’t get a purer faith. You get an exposed psyche scrambling for substitutes.
So when someone says, “I’m drawn to Catholicism because of the candles and incense,” they’re not being shallow. They’re being honest in a way theologians and sociologists often aren’t. They’re admitting that humans are not primarily convinced creatures.
They are conditioned ones.
Jung didn’t romanticise that fact. He warned about it. But he understood it well enough to say the thing modern institutions still resist admitting: you can drain a ritual of meaning and it will still shape lives long after the doctrine has stopped persuading anyone.
The Structure of Scientific Revolutions was published in 1962. Written by Thomas Kuhn, it introduced the world to the concept of paradigm shifts in science — and, as it turns out, elsewhere. As I mentioned recently, I experienced a mishap, confounding it with Paul Feyerabend’s Against Method, first published in 1975. Both of these should be required reading for year 10 – or at least taught in summary.
I had read Feyerabend years ago but was only familiar with Kuhn from a distance. I’m glad we’ve become more intimate. These authors take different approaches yet at times arrive in the same place. Kuhn takes a Modernist approach that he critiques and modifies; Feyerabend takes a Postmodernist path that sometimes crosses Kuhn’s.
Ah, the delightful dance of paradigms and anarchism in the hallowed halls of science! Let’s delve deeper into the intellectual pas de deux between Thomas Kuhn and Paul Feyerabend, those audacious thinkers who dared to challenge the sanctity of scientific methodology.
Kuhn’s Paradigm Shifts: The Scientific Waltz
Thomas Kuhn, in his seminal work The Structure of Scientific Revolutions, introduced us to the concept of paradigm shifts—a term now so overused that even corporate PowerPoint presentations aren’t spared. Kuhn posited that science doesn’t progress through a linear accumulation of knowledge but rather through a series of revolutionary upheavals. These upheavals occur when the prevailing scientific framework, or “paradigm,” becomes as outdated as last season’s fashion, unable to account for emerging anomalies. In Kuhn’s view, the scientific community clings to its paradigms with the tenacity of a dog to its bone, until the weight of anomalies forces a collective epiphany, leading to a paradigm shift. This cyclical process propels scientific advancement, albeit in a manner reminiscent of a drunken sailor’s stagger rather than a straight path.
Feyerabend’s Epistemological Anarchism: The Punk Rock of Science
Enter Paul Feyerabend, the enfant terrible of the philosophy of science, with his provocative manifesto Against Method. Feyerabend gleefully dismantled the notion of a universal scientific method, advocating for “epistemological anarchism.” He argued that the rigid adherence to methodological rules is about as useful as a chocolate teapot, stifling creativity and hindering progress. In Feyerabend’s anarchic utopia, “anything goes” in the pursuit of knowledge, and the scientific method is more of a loose suggestion than a strict protocol. His critique was not just a call for methodological diversity but a full-blown rebellion against the tyranny of scientific dogmatism.
A Comparative Analysis: Method to the Madness
While Kuhn and Feyerabend both challenged the orthodox views of scientific progress, their approaches were as different as chalk and cheese. Kuhn’s analysis was rooted in historical case studies, portraying scientific revolutions as communal shifts in perspective, akin to a collective midlife crisis. Feyerabend, on the other hand, took a more radical stance, suggesting that the very idea of a fixed scientific method is as mythical as unicorns. Where Kuhn saw periods of “normal science” punctuated by revolutionary shifts, Feyerabend saw a chaotic free-for-all, where progress is made not by following rules but by breaking them.
Implications for Scientific Practice: Order in Chaos
The implications of their critiques are profound. Kuhn’s work suggests that scientists should remain open to paradigm shifts, lest they become as obsolete as Betamax in a Netflix era. Feyerabend’s anarchism, while controversial, serves as a reminder that innovation often requires the audacity to defy convention. Together, they paint a picture of science not as a monolithic quest for truth but as a dynamic, often tumultuous, human endeavour.
Conclusion: The Legacy of Intellectual Rebellion
In conclusion, the works of Kuhn and Feyerabend invite us to view science through a more sceptical lens, questioning the sanctity of its methods and the rigidity of its paradigms. Their critiques serve as a clarion call for intellectual flexibility, urging us to embrace the chaos and complexity inherent in the pursuit of knowledge. After all, in the grand theatre of science, it’s often the most unconventional performances that leave a lasting impact.
I’ve written about this topic before. Metamodernism has been heralded as the great reconciler of Modernism and Postmodernism, a dialectical triumph that purports to synthesise these two oppositional paradigms. On the one hand, Modernism clings to its belief in objective truths, rationality, and universal principles. On the other, Postmodernism dismantles those certainties, exposing them as fragile constructs, rooted as much in ideology as in reason. The promise of metamodernism is to bridge this divide, to create a space where the objectivity of Modernism and the relativism of Postmodernism can coexist. But can it?
Audio: NotebookLM Podcast about this topic.
Spoiler alert: it cannot. In fact, metamodernism doesn’t even attempt to fulfil its stated goal. Instead, what it really does—intentionally or not—is meld Modernism’s objective framework with Pre-Enlightenment mysticism, offering a regressive concoction that romanticises the past while pretending to chart a bold new future. This isn’t synthesis; it’s nostalgia masquerading as innovation.
The Unbridgeable Divide: Objective vs. Relative
To understand why metamodernism’s claimed synthesis is untenable, we need to examine the fundamental incompatibility of its supposed components. Modernism rests on the firm foundation of objectivity: truth is universal, reason is supreme, and progress is inevitable. Postmodernism, however, thrives in the cracks of that foundation, pointing out that these so-called universal truths are culturally and historically contingent, and that “progress” often serves as a euphemism for domination or erasure.
Reconciling these two positions is like trying to mix oil and water. Modernism’s faith in absolutes cannot coexist with Postmodernism’s celebration of ambiguity and multiplicity without reducing one to a mere aesthetic flourish for the other. The result is not a synthesis but a superficial oscillation, an endless back-and-forth that achieves neither clarity nor coherence.
The Real Agenda: A Fusion of Objectivities
What metamodernism actually achieves is something quite different. Instead of bridging the gap between Modernism and Postmodernism, it fuses Modernism’s objective certainties with the equally objective but pre-rational framework of Pre-Enlightenment mysticism. In doing so, it abandons the critical lens of Postmodernism altogether, retreating to a worldview that is comfortingly familiar but intellectually regressive.
Consider the resurgence of myth, spirituality, and transcendence in metamodernist discourse. These elements hark back to a time when objective truths were dictated by divine authority or cosmological narratives rather than scientific inquiry. By incorporating these pre-modern ideas into its framework, metamodernism sidesteps the hard questions posed by Postmodernism, offering a fusion that is plausible only because both Modernism and Pre-Enlightenment mysticism share a common belief in absolute truths.
Plausible but Retrograde
This melding of Modernist and Pre-Enlightenment frameworks might seem plausible because, in truth, many Moderns never fully abandoned their mystical roots. The Enlightenment’s project of replacing religious dogma with reason was always incomplete; its foundational assumptions about universality and objectivity often carried an unspoken theological residue. Metamodernism taps into this latent nostalgia, offering a vision of the world that feels grounded and comforting, but at the cost of intellectual progress.
The problem is that this vision is fundamentally retrograde. By retreating to the certainties of the past, metamodernism ignores the most valuable insight of Postmodernism: that all frameworks, whether Modern or mystical, are ultimately constructed and contingent. To move forward, we need to grapple with this contingency, not escape from it.
Conclusion: Nostalgia in Disguise
Far from being a dialectical synthesis, metamodernism is a retreat. It cloaks itself in the language of progress while recycling old patterns of thought. Its attempt to reconcile Modernism and Postmodernism collapses into a fusion of Modernist objectivity and Pre-Enlightenment mysticism, leaving the critical insights of Postmodernism by the wayside.
If we are to truly progress, we must resist the siren song of metamodernism’s nostalgia. Instead, we should embrace the challenge of living without absolutes, grappling with the ambiguity and multiplicity that define our postmodern condition. Anything less is not synthesis but surrender.
The assassination of Brian Thompson, CEO of UnitedHealth, has electrified public discourse. In the court of public opinion—and particularly on social media—the assailant has been lionised, hailed as a hero who slayed a corporate leviathan. Yet the metaphorical beast is no simple predator; it’s a hydra. Slice off one head, and two grow back.
Still, this act has stirred the waters. It forces us to reckon with a clash of titans: the corporate machine versus the rogue idealist. Both are acting rationally, but neither is acting morally—at least not in the conventional sense. The question, then, is whether the assassin’s actions might occupy the higher moral ground, particularly through the lens of Consequentialist ethics.
The Hydra: UnitedHealth and the Systemic Beast
To understand the morality of the act, we must first confront the monster. UnitedHealth didn’t invent the healthcare system; it merely exploited its flaws with cold, clinical efficiency. Thompson’s leadership was emblematic of an industry that sees human lives as variables in a profit-maximising equation. Claims denial, inflated premiums, and labyrinthine bureaucracy are not bugs—they’re features. And for every life saved by healthcare, countless others are destroyed by its financial and emotional toll.
Rational? Certainly. Morally defensible? Hardly. Yet from the corporation’s perspective, these actions are the logical byproducts of a system designed to prioritise shareholder value above all else. Blame the player, yes—but blame the game more.
The Assassin: Vigilante Justice or Trolley Ethics?
Now consider the assassin, who embodies a grimly utilitarian logic: sacrifice one life to spare the misery of thousands. It’s a brutal, visceral iteration of the trolley problem—or perhaps the “baby Hitler problem,” only carried out decades too late. This wasn’t mindless violence; it was a calculated act of symbolic retribution.
From a Consequentialist perspective, the act raises uncomfortable questions. If Thompson’s death leads to systemic reform—if it forces even one profit-hungry executive to hesitate before denying care—does the assassin’s action gain moral weight? In utilitarian terms, the calculus seems clear: one life traded for a net reduction in suffering.
But that’s a dangerous game. Symbolism doesn’t always translate to change, and the hydra analogy looms large. The industry won’t topple because one CEO fell. The machinery grinds on, indifferent to the blood spilled in Manhattan. Worse, the system might grow even more resilient, using Thompson’s death as justification for tighter security, greater secrecy, and more aggressive self-preservation.
Rationality vs. Morality
What makes this clash so compelling is the cold rationality on both sides. UnitedHealth’s actions, reprehensible as they are, make sense within a capitalist framework. The assassin’s actions, though violent and morally fraught, also make sense if viewed as a desperate attempt to restore balance to a world that prioritises profit over human life.
The difference lies in their moral standing. The corporation’s rationality is underpinned by greed; its actions perpetuate suffering. The assassin’s rationality, however misguided, is rooted in outrage at injustice. If morality is determined by intent and consequence, the assassin might indeed occupy higher moral ground—not because killing is inherently justifiable, but because the system left no other path for redress.
The Symbolism and the Hydra
The tragedy is that this act of violence, however symbolic, won’t solve the problem. The hydra will grow another head, as corporations close ranks and reform remains elusive. Yet the act remains a potent reminder of the power of individual resistance. Perhaps it will force a moment of reflection, a hesitation before the next denial stamp hits the desk. Or perhaps it will simply serve as another chapter in the grim saga of a system that turns suffering into profit.
The Final Question
In this clash of titans, one side wields institutional power and systemic exploitation; the other wields desperation and bullets. Both are rational. Neither is fully moral. But perhaps the assassin’s act—brutal, symbolic, and imperfect—offers a glimpse of what happens when systemic injustice pushes people past the breaking point.
The real question is whether this singular act of defiance will lead to change—or whether the hydra will simply grow stronger, hungrier, and more entrenched.
Jean-François Lyotard’s Le Différend has a way of gnawing at you—not with profound revelations, but with the slow, disquieting erosion of assumptions. It got me thinking about something uncomfortably obvious: political orientation is nothing more than the secular cousin of religious indoctrination. Just as most people will, without much scrutiny, cling to the religion of their upbringing and defend it as the One True Faith, the same applies to their political worldview. Whether you’re baptised into Anglicanism or wade knee-deep into the waters of neoliberalism, the zeal is indistinguishable.
Of course, there are the self-proclaimed rebels who smugly declare they’ve rejected their parents’ politics. The ones who went left when Mum and Dad leaned right or discovered anarchism in the ruins of a conservative household. But let’s not be fooled by the patina of rebellion: they may have switched teams, but they’re still playing the same game. They’ve accepted the foundational myths of institutions and democracy—those hallowed, untouchable idols. Like religion, these constructs are not just defended but sanctified, preached as the best or only possible versions of salvation. Dissenters are heretics; non-believers are unthinkable.
It’s not that political ideologies are inherently bad (just like religion has its occasional charm). It’s that the devout rarely stop to question whether the framework itself might be the problem. They assume the boundaries are fixed, the terms are immutable, and the debate is merely about the correct interpretation of the catechism. But if Lyotard has taught us anything, it’s this: the real battles—the différends—are the ones no one’s even acknowledging because the language to articulate them doesn’t exist in the prevailing orthodoxy.
Welcome to Part 4 of a Week-Long Series on the Evolution and Limits of Language! This article is part of a seven-day exploration into the fascinating and often flawed history of language—from its primitive roots to its tangled web of abstraction, miscommunication, and modern chaos. Each day, we uncover new layers of how language shapes (and fails to shape) our understanding of the world.
If you haven’t yet, be sure to check out the other posts in this series for a full deep dive into why words are both our greatest tool and our biggest obstacle. Follow the journey from “flamey thing hot” to the whirlwind of social media and beyond!
The Written Word: Making Things Permanent (and Permanently Confusing)
So far, we’ve been dealing with spoken language—the slippery, ever-changing, context-dependent jumble of sounds we toss around in hopes that someone, somewhere, might understand what we’re trying to say. But what happens when we decide to make those words permanent? Welcome to the era of the written word, where all our linguistic problems got carved into stone—literally.
Let’s rewind a bit. Long before we had books or Twitter threads, ancient humans figured out that spoken words disappear into the air. They needed a way to preserve information, and voilà—writing was born. First came simple marks on clay tablets, because nothing says “let’s communicate important ideas” like scratching symbols into mud. But hey, at least it was a start.
The beauty of writing was that it gave us a way to record language—no more relying on memory to remember which berries were bad or who owed you a goat. But there was a downside too: once those words were written down, they became permanent. If you thought miscommunication was bad when words were floating in the air, just wait until you try to interpret a clay tablet left behind by someone who died thousands of years ago. Good luck figuring out what they meant by “justice.”
And it didn’t stop there. As writing developed into full-fledged scripts, we gained the ability to record more complex ideas. That meant abstract nouns like “truth” and “freedom” were no longer just things you debated around the campfire—they could now be written down and preserved for future generations to also argue about. Nothing says “progress” like ensuring centuries of philosophical bickering.
But the real revolution came later. Fast forward to the 15th century, and along comes Johannes Gutenberg with his shiny new printing press. Suddenly, words—once limited to painstakingly hand-copied manuscripts—could be mass-produced. Books, pamphlets, and flyers could be printed in quantities never before imagined. Ideas could spread like wildfire.
And what ideas they were. Philosophers, theologians, and politicians alike jumped on the opportunity to get their words in front of as many people as possible. The written word wasn’t just a way to record information anymore—it became a tool for shaping societies, sparking revolutions, and (of course) stirring up endless debates about everything.
Of course, there was a catch. The printing press didn’t make language any clearer—it just gave us more of it to misunderstand. People could now read the same text and come away with completely different interpretations. What one person saw as a treatise on “freedom,” another saw as a justification for tyranny. What one reader thought was “truth,” another deemed blasphemy.
With the written word and the printing press, we managed to take the problems of spoken language and make them permanent. Miscommunication wasn’t just an unfortunate accident anymore—it was printed in ink, distributed en masse, and immortalised for future generations to argue over. If Wittgenstein had been alive during Gutenberg’s time, he probably would have thrown his hands in the air and said, “See? I told you words don’t mean what you think they mean.”
But hey, at least we were consistent. From clay tablets to printed books, the written word gave us the power to preserve language—and all its glorious inadequacies—for all time.
The Printing Press: Mass-Producing Confusion
The printing press was hailed as one of the greatest inventions in history. And sure, it was. It democratised knowledge, empowered literacy, and paved the way for all sorts of wonderful progress. But let’s be real—it also democratised miscommunication. Now, instead of one person misunderstanding you in conversation, hundreds—or thousands—could read your words and completely miss the point. Progress!
Gutenberg’s press took the words that were once fleeting and made them indelible. No more clarifying in real-time. No more adding context or adjusting your message on the fly. Once it was in print, that was it. You’d better hope your readers were playing the same “language game” as you, or things could go downhill fast.
Take Martin Luther, for example. He nailed his 95 Theses to the church door in 1517, and thanks to the printing press, those words spread all over Europe. What he intended as a call for reform turned into a revolution that spiralled far beyond his control. People read the same text and took wildly different meanings from it—some saw it as a plea for theological discussion, others as a call to burn down the nearest cathedral.
But it didn’t stop there. Luther’s seemingly clear ideas splintered into countless interpretations, and over time, what began as a movement for reform became the launchpad for hundreds of Protestant denominations. Each group interpreted Luther’s message (and the Bible) in their own unique way. From Lutheranism to Calvinism to the Baptists, Methodists, and beyond, the Protestant Reformation exploded into a thousand branches, all claiming to have grasped the “true” meaning of Luther’s words.
And this? This is the power – and the peril – of the written word. Once something is printed and distributed, it takes on a life of its own. Luther might have had one specific vision for his reforms, but as soon as those ideas hit the printing press, they fractured into countless interpretations, each with its own twist on “truth.” It’s a linguistic free-for-all, with everyone holding the same text and coming to completely different conclusions.
The printing press didn’t just give us more words—it gave us more misunderstandings. Suddenly, philosophical debates, political manifestos, and theological treatises were flying off the presses, each one ready to be misinterpreted by whoever happened to pick it up. And once it was printed, there was no going back. No retractions. No take-backs. Just page after page of linguistic uncertainty.
So while the printing press undoubtedly transformed society, it also multiplied the number of ways we could miscommunicate with each other. Because if there’s one thing we’re good at, it’s misunderstanding words – especially when they’re written down for all eternity.