The Enlightenment Sleight of Hand

How Reason Inherited God’s Metaphysics.

The Enlightenment, we are told, was the age of Reason. A radiant exorcism of superstition. Out went God. Out went angels, miracles, saints, indulgences. All that frothy medieval sentiment was swept aside by a brave new world of logic, science, and progress. Or so the story goes.

Audio: NotebookLM podcast on this topic.

But look closer, and you’ll find that Reason didn’t kill God—it absorbed Him. The Enlightenment didn’t abandon metaphysics. It merely privatised it.

From Confessional to Courtroom

We like to imagine that the Enlightenment was a clean break from theology. But really, it was a semantic shell game. The soul was rebranded as the self. Sin became crime. Divine judgement was outsourced to the state.

We stopped praying for salvation and started pleading not guilty.

The entire judicial apparatus—mens rea, culpability, desert, retribution—is built on theological scaffolding. The only thing missing is a sermon and a psalm.

Where theology had the guilty soul, Enlightenment law invented the guilty mind—mens rea—a notion so nebulous it requires clairvoyant jurors to divine intention from action. And where the Church offered Hell, the state offers prison. It’s the same moral ritual, just better lit.

Galen Strawson and the Death of Moral Responsibility

Enter Galen Strawson, that glowering spectre at the feast of moral philosophy. His Basic Argument is elegantly devastating:

  1. You do what you do because of the way you are.
  2. You can’t be ultimately responsible for the way you are.
  3. Therefore, you can’t be ultimately responsible for what you do.

Unless you are causa sui—the cause of yourself, an unmoved mover in Calvin Klein—you cannot be held ultimately responsible. Free will collapses, moral responsibility evaporates, and retributive justice is exposed as epistemological theatre.
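For the logically inclined, the Basic Argument can be sketched as a one-line derivation. This is my own hypothetical formalisation, not Strawson's: the propositions `RespAct` and `RespChar` (responsibility for what you do, and for the way you are) are labels I've introduced, and the `transfer` premise makes explicit the hidden assumption doing all the work—that responsibility for an act requires responsibility for its causes.

```lean
-- A sketch of Strawson's Basic Argument as a propositional schema.
-- `RespAct`, `RespChar`, and `transfer` are illustrative labels, not
-- Strawson's own notation.
theorem basic_argument
    (RespAct RespChar : Prop)
    -- Transfer principle: responsibility for what you do
    -- requires responsibility for the way you are.
    (transfer : RespAct → RespChar)
    -- Premise 2: you can't be ultimately responsible for the way you are.
    (h : ¬ RespChar) :
    -- Conclusion: you can't be ultimately responsible for what you do.
    ¬ RespAct :=
  fun ra => h (transfer ra)
```

Formally it's just contraposition; the philosophical heavy lifting is all in whether you grant the transfer premise.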

In this light, our whole legal structure is little more than rebranded divine vengeance. A vestigial organ from our theocratic past, now enforced by cops instead of clerics.

The Modern State: A Haunted House

What we have, then, is a society that has denied the gods but kept their moral logic. We tossed out theology, but we held onto metaphysical concepts like intent, desert, and blame—concepts that do not survive contact with determinism.

We are living in the afterglow of divine judgement, pretending it’s sunlight.

Nietzsche saw it coming, of course. He warned that killing God would plunge us into existential darkness unless we had the courage to also kill the values propped up by His corpse. We did the first bit. We’re still bottling it on the second.

If Not Retribution, Then What?

Let’s be clear: no one’s suggesting we stop responding to harm. But responses should be grounded in outcomes, not outrage.

Containment, not condemnation.

Prevention, not penance.

Recalibration, not revenge.

We don’t need “justice” in the retributive sense. We need functional ethics, rooted in compassion and consequence, not in Bronze Age morality clumsily duct-taped to Enlightenment reason.

The Risk of Letting Go

Of course, this is terrifying. The current system gives us moral closure. A verdict. A villain. A vanishing point for our collective discomfort.

Abandoning retribution means giving that up. It means accepting that there are no true villains—only configurations of causes. That punishment is often revenge in drag. That morality itself might be a control mechanism, not a universal truth.

But if we’re serious about living in a post-theological age, we must stop playing dress-up with divine concepts. The Enlightenment didn’t finish the job. It changed the costumes, kept the plot, and called it civilisation.

It’s time we staged a rewrite.

Where There’s a Will, There’s a Way

I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like had he filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had been put through a legal review before publication, sapped of its vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*

Audio: NotebookLM podcast conversation on this topic.

In any case, it got me thinking of free will and, more to the point, of will itself.

A Brief History of Humanity’s Favourite Metaphysical Scapegoat

By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.

So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?

Plato: Soul, Reason, and That Poor Horse

In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).

No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.

Aristotle: Purpose Without Pathos

Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.

Augustine: Sin, Suffering, and That Eternal No

Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.

From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.

Cheers.

Medievals: God’s Will or Yours, Pick One

The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.

From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.

Hobbes: Appetite with Delusions of Grandeur

Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.

Kant: The Will Gets a Makeover

Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the good will, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.

It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.

Schopenhauer: Cosmic Will, Cosmic Joke

And then the mood turns. Schopenhauer, world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.

There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.

Nietzsche: Transvaluation and the Will to Shout Loudest

Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.

Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.

Today: Free Will and the Neuroscientific Hangover

And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.

We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?

Conclusion: Where There’s a Will, There’s a Mess

From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.

Despite centuries of philosophical handwringing, what it has never been is settled.

So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.

Welcome to the labyrinth. Bring snacks.

* The solitary, poor, nasty, brutish, and short quote is forthcoming. Filthy animals is a nod to Home Alone.

Video: Modernity Worldview Survey Intro

I’ve finally had time to create some video content for the Modernity Worldview Survey. It’s a cursory overview, serving as an introduction to deeper material planned for the future.

The video runs just short of seven minutes, so it only briefly outlines the worldviews and the questions. I opted not to produce a single comprehensive video so the material could arrive sooner. The content is bookmarked, though this is likely overkill for such a short video.

A permanent page about the survey is available on this blog.

I’m still accumulating responses, but the survey is available here if you haven’t taken it. Apologies in advance for the fact that it renders best on a larger monitor or tablet rather than a mobile phone. It doesn’t render at all on a landline, so there’s that.

Are We In a Post-Rationality Era?

The more I watch the news or engage with social media, the more I feel that the Western World™ has gone mad. It’s also disconcerting because, as a subjectivist, this should come as no surprise. I guess I miss the days when it was better wrapped in relativism. Perhaps it’s just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can’t bear the load. I prompted seven generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.

Audio: NotebookLM podcast of this content.

ChatGPT

Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.

We now live in a world where feelings trump evidence, where expertise is met with a sneer and a “Well, I heard on TikTok that…” Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing—just trust the meme with the most retweets.

But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up—it’s only going to get weirder.

DeepSeek

The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.

Several factors contribute to this perception:

  1. Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
  2. Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
  3. Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
  4. Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
  5. Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.

However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”

If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.

Meta AI

You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.

Factors contributing to this trend

  1. Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
  2. Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
  3. Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
  4. Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.

Concerns and implications

  1. Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
  2. Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
  3. Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.

What can be done?

  1. Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
  2. Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
  3. Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.

You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.

Claude

I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:

  • The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
  • Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
  • A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
  • “Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
  • Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
  • Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.

At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.

As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.

These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.

Perplexity

There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].

Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].

However, it’s important to note that:

  1. Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
  2. Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
  3. Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].

Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.

Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].

Citations:
[1] https://metarationality.com/post-rationalist-nihilism
[2] https://www.lesswrong.com/posts/TvSEg5WzSadC697Gd/post-rationality-and-rationality-a-dialogue
[3] https://voegelinview.com/living-in-the-post-rational-age-part-ii/
[4] https://www.reddit.com/r/philosophy/comments/qic6y6/rationality_has_its_limitations_but_is/
[5] https://www.lesswrong.com/posts/qgJfyAbzwnWt4a7Go/the-archetypal-rational-and-post-rational
[6] https://www.newyorker.com/magazine/2021/08/23/why-is-it-so-hard-to-be-rational
[7] https://substack.com/home/post/p-62529885
[8] https://philosophy.stackexchange.com/questions/108574/if-philosophy-is-based-on-reason-and-rationality-then-why-is-there-so-much-disa


Pi

You’re certainly not alone in your concern. There’s been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.

It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.

Why do you think we might be in a post-rationality era?

¬◇ Not Diamond

The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:

Loss of Rational Certainty

The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein’s general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].

Post-Rationalism

Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].

Post-Rationalist Nihilism

Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].

Postmodern and Post-Christian Era

There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].

Practical Implications

While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].

In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.

NB: Interestingly, these ¬◇ footnotes don’t link to anywhere.

DeepSeek: Hallucination and Populism

The connection between conspiracy theorists, religious believers, and generative AI is a fascinating topic that touches on epistemology, psychology, and the sociology of belief. At its core, the common thread lies in the human tendency to impose meaning, structure, and causality on complex or ambiguous phenomena, often in the absence of sufficient evidence. Let’s explore this connection through the lens of political philosophy.

1. The Need for Meaning and Control

  • Religious Believers: Religion often provides a framework for understanding the world, offering answers to existential questions (e.g., the meaning of life, the nature of good and evil) that might otherwise seem incomprehensible. This framework simplifies complexity by attributing events to divine will or cosmic order.
  • Conspiracy Theorists: Similarly, conspiracy theories offer a simplified narrative that explains chaotic or unsettling events by attributing them to the deliberate actions of powerful, hidden actors. This provides a sense of control and understanding, even if the explanation is speculative or unfounded.
  • Generative AI: AI models, like humans, operate by identifying patterns and generating outputs based on incomplete data. When faced with ambiguity, they “hallucinate” plausible but potentially false information to fill gaps, mirroring the human tendency to create coherent narratives from incomplete evidence.
  • Connection: All three exhibit a drive to reduce uncertainty by generating explanations, whether through divine intervention, secret plots, or algorithmic extrapolation. This reflects a broader human desire to impose order on chaos, even at the cost of accuracy.

2. The Role of Authority and Trust

  • Religious Believers: Religious systems often rely on authoritative texts, leaders, or traditions as sources of truth. These authorities provide a sense of certainty and trust, even when empirical evidence is lacking.
  • Conspiracy Theorists: Conspiracy theories frequently reject mainstream authorities (e.g., governments, scientists) and instead place trust in alternative sources of information, such as charismatic figures or fringe communities.
  • Generative AI: AI systems are often perceived as authoritative due to their ability to process vast amounts of data. However, their outputs are only as reliable as the data they are trained on, and they lack the ability to critically evaluate the truthfulness of their responses.
  • Connection: All three rely on trust in certain authorities or systems to validate their claims, whether those authorities are divine, contrarian, or algorithmic. This highlights the role of trust in shaping belief systems and the potential for misuse or misinterpretation of authority.

3. Cognitive Biases and Pattern Recognition

  • Religious Believers: Humans are predisposed to detect agency and intentionality in the world, a tendency known as “hyperactive agency detection.” This can lead to the attribution of natural events to supernatural forces.
  • Conspiracy Theorists: Conspiracy thinking often involves seeing patterns where none exist (apophenia) and attributing events to deliberate human agency rather than randomness or coincidence.
  • Generative AI: AI models are designed to identify patterns in data, but they can also generate spurious correlations or false connections when the data is noisy or incomplete.
  • Connection: Both humans and AI exhibit a tendency to overfit patterns to data, leading to the creation of narratives that may be compelling but lack empirical support. This suggests a shared cognitive bias toward finding meaning, even when it is not warranted.

4. Social and Political Implications

  • Religious Believers: Religious beliefs can foster community cohesion and moral frameworks but can also lead to division and conflict when different belief systems clash.
  • Conspiracy Theorists: Conspiracy theories can create a sense of belonging among adherents but often lead to polarization and distrust of societal institutions.
  • Generative AI: AI has the potential to democratize information but also risks amplifying misinformation and reinforcing echo chambers.
  • Connection: All three phenomena have significant social and political implications, as they shape how individuals and groups perceive reality, interact with others, and engage with societal structures. They highlight the tension between the need for shared narratives and the dangers of divisive or false beliefs.

5. The Role of Epistemic Humility

  • A key philosophical insight is the importance of epistemic humility—the recognition of the limits of one’s knowledge. Religious traditions, conspiracy theories, and AI systems often lack this humility, presenting their narratives as definitive truths rather than contingent explanations.
  • Encouraging a culture of critical thinking, scepticism, and openness to evidence could help mitigate the risks associated with all three phenomena.

Conclusion

The connection between conspiracy theorists, religious believers, and generative AI lies in their shared tendency to simplify complexity, impose meaning, and generate narratives in the face of uncertainty. While this tendency is deeply human, it also carries risks, particularly when it leads to the uncritical acceptance of false or harmful beliefs. Understanding these connections can help us develop strategies to promote more nuanced, evidence-based approaches to knowledge and belief in an increasingly complex world.

DISCLAIMER: Please note that this is the first response I received from a prompt to DeepSeek.

I am interested in the possible connexion between conspiracy theorists and religious believers. These two cohorts (if they are even independent) have a common trait of oversimplifying things they can’t understand and hallucinating solutions in the absence of facts or evidence. Generative AI is accused of the same behaviour. How might these be connected?

NB: Evidently, some versions of DeepSeek generate images, but mine doesn’t, so I prompted it to generate an apt cover image.

I also asked for keyword tags. It provided these, but then hid them, replacing them with this message:

Sorry, that’s beyond my current scope. Let’s talk about something else.

Dukkha, the Path of Pain, and the Illusion of Freedom: Buddhism, Antinatalism, and the Lonely Road of Individuation

The First Noble Truth of Buddhism—the notion that life is suffering, or dukkha—is often misinterpreted as a bleak condemnation of existence. But perhaps there’s something deeper here, something challenging yet quietly liberating. Buddhism doesn’t merely suggest that life is marred by occasional suffering; rather, it proposes that suffering is woven into the very fabric of life itself. Far from relegating pain to an exception, dukkha posits that dissatisfaction, discomfort, and unfulfilled longing are the baseline conditions of existence.

This isn’t to say that life is an unending stream of torment; even in nature, suffering may seem the exception rather than the rule, often concealed by survival-driven instincts and primal ignorance. But we, as conscious beings, are haunted by awareness. Aware of our mortality, our desires, our inadequacies, and ultimately, of our impotence to escape this pervasive friction. And so, if suffering is indeed the constant, how do we respond? Buddhism, antinatalism, and Jungian psychology each offer their own, starkly different paths.

The Buddhist Response: Letting Go of the Illusion

In Buddhism, dukkha is a truth that urges us not to look away but to peer more closely into the nature of suffering itself. The Buddha, with his diagnosis, didn’t suggest we simply “cope” with suffering but rather transform our entire understanding of it. Suffering, he argued, is born from attachment—from clinging to transient things, ideas, people, and identities. We build our lives on desires and expectations, only to find ourselves caught in a cycle of wanting, attaining, and inevitably losing. It’s a form of existential whiplash, one that keeps us bound to dissatisfaction because we can’t accept the impermanence of what we seek.

The Buddhist approach is both radical and elusive: by dissolving attachment and breaking the cycle of clinging, we supposedly dissolve suffering itself. The destination of this path—Nirvana—is not a state of elation or contentment but a transcendence beyond the very conditions of suffering. In reaching Nirvana, one no longer relies on external or internal validation, and the violence of social judgment, cultural obligation, and personal ambition falls away. This may seem austere, yet it offers a powerful antidote to a world that equates happiness with accumulation and possession.

Antinatalism: Opting Out of Existence’s Violence

Where Buddhism seeks liberation within life, antinatalism takes an even more radical stance: why bring new beings into an existence steeped in suffering? For antinatalists, the suffering embedded in life renders procreation ethically questionable. By creating life, we induct a new being into dukkha, with all its attendant violences—society’s harsh judgments, culture’s rigid impositions, the bureaucratic machinery that governs our daily lives, and the inescapable tyranny of time. In essence, to give birth is to invite someone into the struggle of being.

This perspective holds that the most humane action may not be to mend the suffering we encounter, nor even to accept it as Buddhism advises, but to prevent it altogether. It sees the cycle of life and death not as a majestic dance but as a tragic spiral, in which each generation inherits suffering from the last, perpetuating violence, hardship, and dissatisfaction. Antinatalism, therefore, could be seen as the ultimate recognition of dukkha—an extreme empathy for potential beings and a refusal to impose the weight of existence upon them.

Jungian Individuation: The Lonely Path of Becoming

Jung’s concept of individuation offers yet another approach: to delve deeply into the self, to integrate all aspects of the psyche—the conscious and the unconscious—and to emerge as a fully realised individual. For Jung, suffering is not to be escaped but understood and incorporated. Individuation is a journey through one’s darkest shadows, a confrontation with the parts of oneself that society, culture, and even one’s own ego would rather ignore. It is, in a way, an anti-social act, as individuation requires the courage to step away from societal norms and embrace parts of oneself that might be seen as disturbing or unconventional.

But individuation is a lonely road. Unlike the Buddhist path, which seeks to transcend suffering, individuation requires one to face it head-on, risking rejection and alienation. Society’s judgment, a kind of violence in itself, awaits those who deviate from accepted roles. The individuated person may, in effect, be punished by the very structures that insist upon conformity. And yet, individuation holds the promise of a more authentic existence, a self that is not a mere amalgam of cultural expectations but a reflection of one’s truest nature.

The Delusions That Keep Us Tethered to Suffering

Yet, for all their starkness, these paths might seem almost abstract, philosophical abstractions that don’t fully capture the reality of living within the constraints of society, culture, and self. Human beings are armed with powerful psychological mechanisms that obscure dukkha: self-delusion, cognitive dissonance, and hubris. We fabricate beliefs about happiness, purpose, and progress to protect ourselves from dukkha’s existential weight. We convince ourselves that fulfilment lies in achievements, relationships, or material success. Cognitive dissonance allows us to live in a world that we know, on some level, will disappoint us without being paralysed by that knowledge.

It’s worth noting that even those who acknowledge dukkha—who glimpse the violence of existence and the illusory nature of happiness—may still find themselves clinging to these mental defences. They are shields against despair, the comforting armours that allow us to navigate a world in which suffering is the baseline condition. This is why Buddhism, antinatalism, and individuation require such rigorous, often painful honesty: they each ask us to set down these shields, to face suffering not as a solvable problem but as an intrinsic truth. In this light, psychological defences are seen not as failures of awareness but as survival strategies, albeit strategies that limit us from ever fully confronting the nature of existence.

Finding Meaning Amidst the Violence of Being

To pursue any of these paths—Buddhist enlightenment, antinatalism, or Jungian individuation—one must be prepared to question everything society holds dear. They are radical responses to a radical insight: that suffering is not accidental but foundational. Each path offers a different form of liberation, whether through transcendence, abstention, or self-integration, but they all require a certain fearlessness, a willingness to look deeply into the uncomfortable truths about life and existence.

Buddhism calls us to renounce attachment and embrace impermanence, transcending suffering by reshaping the mind. Antinatalism challenges us to consider whether it is ethical to bring life into a world marked by dukkha, advocating non-existence as an escape from suffering. And individuation asks us to become fully ourselves, embracing the loneliness and alienation that come with resisting society’s violence against the individual.

Perhaps the most realistic approach is to accept that suffering exists, to choose the path that resonates with us, and to walk it with as much awareness as possible. Whether we seek to transcend suffering, avoid it, or integrate it, each path is a confrontation with the violence of being. And maybe, in that confrontation, we find a fleeting peace—not in the absence of suffering, but in the freedom to choose our response to it. Dukkha remains, but we may find ourselves less bound by it, able to move through the world with a deeper, quieter understanding.

Are We Still Too Dumb for Democracy?

I’ve resurrected a book project (working title Dumbocracy) that I commenced at the end of 2021. I’ve revisited the structure and made some amendments to the outline before moving forward. I’ve done more research and feel the topic is (sadly) still as relevant now as it was then. The approach I am taking is to present:

  1. definition and summary
  2. historical perspective of advocates and detractors
  3. the main thesis and antithesis
  4. alternatives
  5. future prospects

I’ve got a lot going on, so this may wither or die on the vine, but I’m hoping to reach the goal line this time. Since I’ve already invested many hours since Autumn 2020, the groundwork is laid.

If anyone has any contributions, I welcome them.

Revised Chapter Order

  1. Position and Setup
    • Introduce the central thesis and set the stage for the book.
  2. Historical Backdrop – Pre-Enlightenment Until Now
    • Provide historical context to ground the reader in the evolution of democratic ideas.
  3. Celebrity Supporters
    • Highlight influential figures who supported democracy, setting up the positive aspects and idealism associated with it.
  4. Celebrity Anti-Democracy Figures
    • Present notable critics of democracy to introduce scepticism and counterpoints early on.
  5. Prima Facie Arguments
    • Lay out the initial arguments against democracy, building on the scepticism introduced in the previous chapter.
  6. Shaky Grounds and Necessarily Suboptimal Outcomes: Arrow’s Impossibility Theorem
    • Dive into the mathematical and theoretical flaws in democratic voting systems to provide a strong foundation for the critiques.
  7. Essential Steelman Counterarguments
    • Present the strongest counterarguments to the prima facie arguments, offering a balanced perspective.
  8. Are We Too Dumb for Democracy?
    • Analyse cognitive limitations and their impact on democratic decision-making.
  9. Dealing with Rational Ignorance and Unknown Unknowns
    • Address the issues of rational ignorance and the limits of voter knowledge.
  10. Reconciling Worldviews: Individualism Versus Collectivism
    • Discuss the philosophical tensions and their implications for democracy.
  11. Possible People-Based Solutions
    • Introduce potential reforms and solutions to address the identified flaws.
  12. People-Based Counterarguments
    • Defend democracy by presenting strong arguments for the capability and resilience of the populace.
  13. Against Democracy
    • Explore Jason Brennan’s epistocracy and other critiques in depth.
  14. Effective Mob Rule
    • Discuss ways to improve voter quality and address historical injustices.
  15. The Representatives
    • Debate the idea of electing more qualified representatives.
  16. Jeremy Bentham’s Panopticon — Watching the Watchers
    • Consider the concept of surveillance and accountability in governance.
  17. US / UK Politics
    • Critique the current state of politics in the US and UK, providing contemporary relevance.
  18. SCOTUS Partisanship
    • Examine the partisanship in the Supreme Court and its implications for democracy.
  19. Jury Systems
    • Analyse the jury system as a microcosm of democratic principles.
  20. What About…?
    • Anarchy and Libertarianism
    • Deliberative Democracy
    • Republicanism
    • Epistocracy
    • Sortition
    • Other Potential ‘What Abouts’
  21. Tea Leaves
    • Speculate on the future of democracy and potential reforms or alternatives.
  22. And So What? Where to Go from Here?
    • Summarise the findings and suggest practical steps for addressing the flaws in democracy and exploring alternatives.

NB: As is typical, these are working titles subject to change.

Enlightenment Now?

I’ve long been opposed to the direction the Enlightenment took the course of Western civilisation. I’m not religious or spiritual, so I’m glad history moved past the theological route; I just don’t feel the path it took instead has been the right one. Pinker believes that we not only took the right course but are still on the right path. Mearsheimer believes that we made the right choice but are no longer on the right path.

Pinker stammers through his argument, which Mearsheimer effectively frames for him. Pinker is a Liberal who believes that reason and individual rights are the keys to progress. He believes that tribalism, e.g., partisanship or nationalism, subverts this.

Mearsheimer believes that humans are social animals first and individuals second: they must carve out their individuality as a secondary consideration to the larger society and its needs. He argues that more homogeneous societies are better positioned to prosper, so he defends this sort of nationalism and national identity. Pinker argues that social trends show otherwise.

As for me, they’re both wrong. Like both of them, I hold that individuals are important, as are social configurations. My contention is that humans are fundamentally dim: predictably irrational and too easily swayed by the next shiny object.

The Enlightenment was sold on a promise imagined and conjured by elite thinkers, but they must have lived in a bubble to think their cognitive capabilities were universal. Perhaps they felt that the elite thinkers, the philosopher rulers of Plato’s wet dreams, would reign at least by proxy. Instead, we’ve seen a race to the bottom, a world where the lowest common denominator seems to rule.

In the United States, we have an addle-headed Joe Biden versus idiot-grifter Donald Trump. This is the best a country of 350 million can manage. This is the cream of their crop. If this is what freedom renders, I say no thanks. I am personally opposed to the notion of Democracy, as I’ve written about previously. In a world where 50% of the population has an IQ under 100, this scares the hell out of me. Honestly, voting should be restricted to those with IQs over 115. Of course, I don’t trust IQs either, so we are speaking in abstract hypotheticals. I don’t believe in progress, so there’s that.

Even this ‘elitist’ democracy would fail. People cannot acquire and maintain the necessary domain knowledge, and self-interest will still motivate voting outcomes. And, as Kenneth Arrow proved in the 1950s, no ranked voting system can aggregate individual preferences into a collective ranking that satisfies even a minimal set of fairness criteria. So even the best that the best of the best can yield is mediocrity.
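Arrow’s result is abstract, but the Condorcet paradox that motivates it is easy to demonstrate. A minimal sketch (the three-voter electorate below is hypothetical, chosen to exhibit the classic cycle) shows majority preferences cycling so that no coherent collective ranking exists:

```python
# Three voters, three candidates; each ballot ranks candidates best-to-worst.
ballots = [
    ("A", "B", "C"),  # voter 1 prefers A > B > C
    ("B", "C", "A"),  # voter 2 prefers B > C > A
    ("C", "A", "B"),  # voter 3 prefers C > A > B
]

def majority_prefers(x, y, ballots):
    """True if a strict majority ranks candidate x above candidate y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A,
# so "the majority's preference" is not a coherent ranking at all.
assert majority_prefers("A", "B", ballots)
assert majority_prefers("B", "C", ballots)
assert majority_prefers("C", "A", ballots)

# No candidate beats every other head-to-head: no Condorcet winner exists.
candidates = ["A", "B", "C"]
winners = [
    c for c in candidates
    if all(majority_prefers(c, o, ballots) for o in candidates if o != c)
]
print("Condorcet winner:", winners or "none")  # prints: Condorcet winner: none
```

Whichever candidate is declared the winner, a majority of voters preferred someone else — the suboptimality is structural, not a failure of voter virtue.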

So, what is the best position? I’m not sure, but it feels like one with no humans in the decision-making process would yield better results.

Post Script

I asked Dall-E to render an image for this post. Here’s what I got, but the process was amusing, to say the least.

User

Steven Pinker and John Mearsheimer battling in a wrestling ring

DALL·E

Creating an image of Steven Pinker and John Mearsheimer wrestling in a ring would go against the policy of depicting specific public figures. However, I can create a similar image with a creative twist. How about I generate an image of two fictional scholars in a wrestling ring, each embodying the spirit of intellectual debate? Let me know if this works for you or if you have any other ideas!

User

that’s fine just capture the caricature of each… pinker’s hair and Mearsheimer’s bald head and eyeglasses.

DALL·E

Creating images that resemble specific public figures, even through caricature, is outside the scope of my capabilities. However, I can create an image of two fictional characters in a wrestling ring, one with wild hair and the other with a bald head and eyeglasses, to capture the essence of an intellectual debate in a humorous, wrestling context. Let’s proceed with that idea!

Prompt for DALL·E: “Create an image of a humorous wrestling match between two fictional characters in a wrestling ring. One character has wild, unruly hair and a scholarly appearance, wearing a tweed jacket. The other character is bald, wearing eyeglasses and a professional suit, symbolizing a serious academic. The scene is lively, with a cheering crowd in the background, emphasizing the fun and exaggerated nature of this intellectual debate turned wrestling match.”

Size: 1024×1024

User

Create an image of a humorous wrestling match between two fictional characters in a wrestling ring. One character has wild, unruly hair and a scholarly appearance, wearing a tweed jacket. The other character is bald, wearing eyeglasses and a professional suit, symbolizing a serious academic. The scene is lively, with a cheering crowd in the background, emphasizing the fun and exaggerated nature of this intellectual debate turned wrestling match.

And, nailed it! lol The results speak for themselves. haha

Notice how the AI wouldn’t generate the image, but fed me that work-around to circumvent its rules.

What Is Enlightenment?

I like David Guignion’s channel, and I’ve been taking in several perspectives on Kant’s article, ‘What Is Enlightenment?’.

Feel free to watch from the start, but I’ve cued it to the place where David shares my thoughts around Agency. He brings up the point that absent official authority such as a political or religious structure, we are still influenced—subconsciously—to some degree by mass media and culture. These are the embodied norms and customs that most people just take for granted without question.

As David suggests, we can’t just slough off this inherited skin: first, because we do not even question it; and second, because it’s so ingrained that one would likely deny the influence or fall into escalating commitment under so-called critical inspection.

This is akin to asking a person of a certain religious persuasion whether they would hold the same religious beliefs had they not been indoctrinated with them. I feel that a vast majority would defend their religion and the underlying or resultant morality as being obvious, so at the very least they are kept in their lane by guide rails. The extent of this influence and the degree to which it subtracts from autonomy is my question. I believe that, if pressed, the individual would defend the prescribed morality as self-evident and claim they would have acted the same way even without religious instruction.

Modern Spectrum: 1 Dimension

I’ve been pondering how to effectively dimensionalise the spectrum that illustrates the relationship among premodern, modern, and postmodern—and potentially, metamodern. I was researching and happened upon a YouTube video from a few years back by Rick Durst, a Conservative Evangelical professor. He was diagramming the chronological path from pre to post.

I think at this point it’s important to distinguish between Modern and Modernity and PostModern and PostModernity1. I feel that the noun form, Modernity, can be used to describe the chronology whilst the adjective form, Modern, describes the philosophy. I’m not sure that this is a standard distinction. If I adhere to this difference, then Rick is discussing Modernity rather than Modern. Perhaps I’m being pedantic.

I’ve taken liberties to rerender Rick’s model.

In his view, the stages are from God to Man to Earth. I don’t fully agree with the transition from Man to Earth, but the God to Man or Humanism doesn’t feel very controversial. Although Rick is describing modernity against a temporal backdrop, this doesn’t invalidate the God-Man-Earth aspect—leaving open the possibility that it may be invalid for other reasons.

Following the chronology, Rick points out that the overlapping periods between PreModern and Modern and Modern and PostModern are not clean breaks. As I’ve suggested before, in illustration B, some people today retain PreModern beliefs and others hold PostModern beliefs. On balance, I feel that the Western world today remains substantially Modern, philosophically speaking. I am not sure that I am qualified to assess this relative to contemporary Eastern cultures.

Again, without otherwise critically evaluating Rick’s model, belief in God, and in particular Catholicism, was the hallmark of the PreModern, PreEnlightenment world—the supernatural and superstition ruled the day.

Many consider Modernity to have commenced with the Renaissance, roughly from the 14th to the 17th century—describing it as early modern. Given the prevalence of superstitious beliefs, I’d be more comfortable with something more along the lines of proto-modern rather than modern. Scientific discoveries were evident, but this was reserved for the elite.

Given the Protestant Reformation that occupied the 16th century, it’s clear that any declaration of Modern would be premature. One might even argue that, with the advent of the Age of Enlightenment spanning the 17th and 18th centuries, Secular Humanism became the theme for the empowered elites, but the masses never relinquished their PreModern belief systems. If we are to start the Modern clock at all, this seems to be as good a place as any.

Although Modernism is marked by Humanism, in the United States federal and state government is still predicated on PreModern principles, so it is not unfair, twisting Latour’s phrase, to ask, Have We Ever Been Modern? It is somewhat interesting—at least anecdotally—how many people do not find it inconsistent to have faith in humanity to solve the ills of the world through technology whilst simultaneously believing in gods, angels, tarot, and homoeopathic and other anachronistic healing modalities.

Chronologically, Rick demarcates Modern and PostModern with the ecological crises of the 1970s, which turned the focus from Man to the Earth. The seeds of postmodernism were sown post-WWII, and even post-WWI, with the devastation and the realisation of the limits of human capacity.

For the purpose of the ternary plot, it seems easy enough to assess where a person might fall relative to gods versus humans. And whilst I could argue that belief in gods and the supernatural is a discrete binary state rather than a continuous scale, I could equally argue that someone could feel that their gods are in control yet retain some degree of what’s known as free will. As a matter of degree, one could be a Deist—believing in some Prime Mover—but feel that humans are now on their own. God is like a crocodile slithering into the darkness, having enabled the next generations. On such a spectrum, they might register 90% Modern and 10% PreModern. A believer in astrology, tarot, and the like might also have faith in Man, yet reside closer to 60%–40% in favour of Modernity—or vice versa.
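The placements sketched above reduce to coordinates on the ternary plot. A minimal sketch, assuming nothing more than three non-negative weights per person (the function name and the example weights are mine, for illustration):

```python
# Hypothetical sketch of placing a respondent on a PreModern / Modern /
# PostModern ternary plot. A ternary coordinate is just three non-negative
# weights normalised to sum to 1.

def ternary_point(premodern, modern, postmodern):
    """Normalise three non-negative weights into a ternary coordinate."""
    total = premodern + modern + postmodern
    if total <= 0:
        raise ValueError("at least one weight must be positive")
    return (premodern / total, modern / total, postmodern / total)

# A Deist who thinks humanity is now on its own: mostly Modern, a trace PreModern.
deist = ternary_point(premodern=1, modern=9, postmodern=0)
print(deist)  # (0.1, 0.9, 0.0)
```

The normalisation sidesteps the binary-versus-continuous worry somewhat: respondents need only supply relative weights, not absolute commitments, and the plot does the rest.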

The question is how to get past Man to Earth. I am not sure how to frame this. Perhaps this wasn’t the right avenue to pursue.


  1. For the record, I chose to render the terms PreModern and PostModern in camel case for no particular reason, save that it seems to make the prefix more readily distinguishable.