The Red Flag of Truth

Nothing says “I’ve stopped thinking” quite like someone waving the banner of Truth. The word itself, when capitalised and flapped about like a holy relic, isn’t a signal of wisdom but of closure. A red flag.

Video: The short video by Jonny Thompson that inspired this post.

Those who proclaim to “speak the Truth” or “know the Truth” rarely mean they’ve stumbled upon a tentative insight awaiting refinement. No, what they mean is: I have grasped reality in its totality, and—surprise!—it looks exactly like my prejudices. It’s the epistemic equivalent of a toddler declaring ownership of the playground by drooling on the swings.

The Fetish of Objectivity

The conceit is that Truth is singular, objective, eternal, a monolithic obelisk towering over human folly. But history’s scrapyard is full of such obelisks, toppled and broken: phlogiston, bloodletting, Manifest Destiny, “the market will regulate itself.” Each was once trumpeted as capital-T Truth. Each is now embarrassing clutter for the dustbin.

Still, the zealots never learn. Every generation delivers its own batch of peddlers, flogging their version of Truth as if it were snake oil guaranteed to cure ignorance and impotence. (Side effects may include dogmatism, authoritarianism, and an inability to read the room.)

Why It’s a Red Flag

When someone says, “It’s just the truth,” what they mean is, “I am not listening,” like the parent who argues, “because I said so.” Dialogue is dead; curiosity cremated. Truth, in their hands, is less a lantern than a cosh. It is wielded not to illuminate, but to bludgeon.

Ralph Waldo Emerson’s voice breaks in, urging us to trust ourselves and to think for ourselves. Nothing is more degrading than to borrow another’s convictions wholesale and parade them as universal law. Better to err in the wilderness of one’s own reason than to be shepherded safely into another man’s paddock of certainties.

A Better Alternative

Rather than fetishising Truth, perhaps we ought to cultivate its neglected cousins: curiosity, provisionality, and doubt. These won’t look as good on a placard, admittedly. Picture a mob waving banners emblazoned with Ambiguity! – not exactly the stuff of revolutions. But infinitely more honest, and infinitely more humane.

So when you see someone waving the flag of Truth, don’t salute. Recognise it for what it is: a warning sign. Proceed with suspicion, and for God’s sake, bring Emerson.

Semantic Drift: When Language Outruns the Science

Science has a language problem. Not a lack of it – if anything, a surfeit. But words, unlike test tubes, do not stay sterile. They evolve, mutate, and metastasise. They get borrowed, bent, misused, and misremembered. And when the public discourse gets hold of them, particularly on platforms like TikTok, it’s the language that gets top billing. The science? Second lead, if it’s lucky.

Semantic drift is at the centre of this: the gradual shift in meaning of a word or phrase over time. It’s how “literally” came to mean “figuratively,” how “organic” went from “carbon-based” to “morally superior,” and how “theory” in science means robust explanatory framework but in the public square means vague guess with no homework.

In short, semantic drift lets rhetoric masquerade as reason. Once a word acquires enough connotation, you can deploy it like a spell. No need to define your terms when the vibe will do.
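Drift can even be measured, crudely. Here is a toy sketch, entirely my own and purely illustrative: score a word’s drift by how much its co-occurring neighbours change between two eras. The corpora and numbers are stand-ins, not research data.

```python
# Toy illustration: quantify "semantic drift" as the change in a word's
# co-occurrence neighbours between two eras. Corpora are stand-ins.
from collections import Counter

def neighbours(corpus, target, window=2, top=5):
    """Return the target word's most frequent context words."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for i, w in enumerate(words):
            if w == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(words[lo:i] + words[i + 1:hi])
    return {w for w, _ in counts.most_common(top)}

def drift(corpus_then, corpus_now, target):
    """1 - Jaccard overlap of neighbour sets: 0 = stable, 1 = total drift."""
    a, b = neighbours(corpus_then, target), neighbours(corpus_now, target)
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

era_1900 = ["organic matter is carbon based chemistry",
            "organic compounds contain carbon atoms"]
era_2020 = ["organic food is natural and healthy",
            "buy organic produce it is pure and natural"]

print(drift(era_1900, era_2020, "organic"))  # high score: the meaning has moved
```

On these toy corpora, “organic” scores high: it keeps almost none of its old company. That is drift in miniature.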

Audio: NotebookLM podcast on this topic.

When “Vitamin” No Longer Means Vitamin

Take the word vitamin. It sounds objective. Authoritative. Something codified in the genetic commandments of all living things. (reference)

But it isn’t.

A vitamin is simply a substance that an organism needs but cannot synthesise internally, and must obtain through its diet. That’s it. It’s a functional definition, not a chemical one.

So:

  • Vitamin C is a vitamin for humans, but not for dogs, cats, or goats. They make their own. We lost the gene. Tough luck.
  • Vitamin D, meanwhile, isn’t a vitamin at all. It’s a hormone, synthesised when sunlight hits your skin. Its vitamin status is a historical relic – named before we knew better, and now marketed too profitably to correct.
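A minimal sketch of the point, assuming nothing beyond the functional definition above: being a vitamin is a relation between a nutrient and a species, not a property of the molecule. The data table is illustrative, not a nutritional reference.

```python
# "Vitamin" as a relation between nutrient and species, not a property
# of the molecule itself. Illustrative data only.

ESSENTIAL_IN_DIET = {
    "human": {"ascorbic acid"},  # we lost the gene; dietary vitamin C it is
    "dog": set(),                # dogs synthesise their own ascorbic acid
    "goat": set(),
}

def is_vitamin(nutrient: str, species: str) -> bool:
    """True only if the species needs the nutrient but cannot make it,
    and so must obtain it through the diet."""
    return nutrient in ESSENTIAL_IN_DIET.get(species, set())

print(is_vitamin("ascorbic acid", "human"))  # True
print(is_vitamin("ascorbic acid", "dog"))    # False: same molecule, no vitamin status
```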

But in the land of TikTok and supplement shelves, these nuances evaporate. “Vitamin” has drifted from scientific designation to halo term – a linguistic fig leaf draped over everything from snake oil to ultraviolet-induced steroidogenesis.

The Rhetorical Sleight of Hand

This linguistic slippage is precisely what allows the rhetorical shenanigans to thrive.

In one video, a bloke claims a burger left out for 151 days neither moulds nor decays, and therefore, “nature won’t touch it.” From there, he leaps (with Olympic disregard for coherence) into talk of sugar spikes, mood swings, and “metabolic chaos.” You can almost hear the conspiratorial music rising.

The science here is, let’s be generous, circumstantial. But the language? Oh, the language is airtight.

Words like “processed,” “chemical,” and “natural” are deployed like moral verdicts, not descriptive categories. The implication isn’t argued – it’s assumed, because the semantics have been doing quiet groundwork for years. “Natural” = good. “Chemical” = bad. “Vitamin” = necessary. “Addiction” = no agency.

By the time the viewer blinks, they’re nodding along to a story told by words in costume, not facts in context.

The Linguistic Metabolism of Misunderstanding

This is why semantic drift isn’t just an academic curiosity – it’s a vector. A vector by which misinformation spreads, not through outright falsehood, but through weaponised ambiguity.

A term like “sugar crash” sounds scientific. It even maps onto a real physiological process: postprandial hypoglycaemia. But when yoked to vague claims about mood, willpower, and “chemical hijacking,” it becomes a meme with lab coat cosplay. And the science, if mentioned at all, is there merely to decorate the argument, not drive it.

That’s the crux of my forthcoming book, The Language Insufficiency Hypothesis: that our inherited languages, designed for trade, prayer, and gossip, are woefully ill-equipped for modern scientific clarity. They lag behind our knowledge, and worse, they often distort it.

Words arrive first. Definitions come limping after.

In Closing: You Are What You Consume (Linguistically)

The real problem isn’t that TikTokers get the science wrong. The problem is that they get the words right – right enough to slip past your critical filters. Rhetoric wears the lab coat. Logic gets left in the locker room.

If vitamin C is a vitamin only for some species, and vitamin D isn’t a vitamin at all, then what else are we mislabelling in the great nutritional theatre? What other linguistic zombies are still wandering the scientific lexicon?

Language may be the best tool we have, but don’t mistake it for a mirror. It’s a carnival funhouse – distorting, framing, and reflecting what we expect to see. And until we fix that, science will keep playing second fiddle to the words pretending to explain it.

Artificial Intelligence Isn’t Broken

Rather than recreate a recent post from my business site, LinkedIn, I’ll simply point you to the original.

(Warning: contains traces of logic, satire, and uncomfortable truths. But you knew that.)

Audio: NotebookLM podcast on the linked topic.

It’s just refusing to cosplay as your idealised fantasy of “human” cognition.

While pundits at the Wall Street Journal lament that AI thinks with “bags of heuristics” instead of “true models,” they somehow forget that humans themselves are kludged-together Rube Goldberg disasters, lurching from cognitive bias to logical fallacy with astonishing grace.

In my latest piece, I take a flamethrower to the myth of human intellectual purity, sketch a real roadmap for modular AI evolution, and suggest (only partly in jest) that the machines are becoming more like us every day — messy, contradictory, and disturbingly effective.

Let’s rethink what “thinking” actually means. Before the machines do it for us.

The Dubious Art of Reasoning: Why Thinking Is Harder Than It Looks

The Illusion of Clarity in a World of Cognitive Fog

Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.

Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.

Deduction: The Tyrant of Certainty

Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.

Example:

All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.

Strength: Valid conclusions from valid premises.
Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically sound.
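To make “garbage in, garbage out” concrete, here is a toy sketch, my own illustration rather than anyone’s formal system, of deduction as blind rule-following:

```python
# Deduction as mechanical rule application: the syllogism is applied
# blindly, so a false premise yields a "logically sound" absurdity.

def barbara(all_a_are_b, x_is_a):
    """Modus Barbara: from 'all A are B' and 'x is A', conclude 'x is B'.
    Validity is guaranteed; truth is not."""
    (a, b), (x, a2) = all_a_are_b, x_is_a
    assert a == a2, "premises must share the middle term"
    return (x, b)

print(barbara(("men", "mortal"), ("Socrates", "men")))
# ('Socrates', 'mortal') -- sound, because the premises happen to be true

print(barbara(("swans", "white"), ("this black swan", "swans")))
# ('this black swan', 'white') -- valid, yet nonsense: garbage in, garbage out
```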

Induction: The Gambler’s Gospel

Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.

Example:

The sun has risen every morning of my life.
Therefore, the sun will rise tomorrow.

Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.

Strength: Empirically rich and adaptive.
Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.
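The same fragility, sketched in a few illustrative lines:

```python
# Induction, and its detonation, in miniature.
observed_swans = ["white", "white", "white", "white"]
all_swans_are_white = all(s == "white" for s in observed_swans)
print(all_swans_are_white)   # True: a proud Enlightenment generalisation

observed_swans.append("black")  # Australia coughs one up
all_swans_are_white = all(s == "white" for s in observed_swans)
print(all_swans_are_white)   # False: one exception, and the whole law is gone
```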

Abduction: Sherlock Holmes’ Drug of Choice

Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.

Example:

The lawn is wet this morning.
If it rained overnight, the lawn would be wet.
Therefore, it probably rained overnight.

It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.

Strength: Useful in messy, real-world contexts.
Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
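Abduction can be mechanised, after a fashion, by scoring candidate explanations by prior plausibility times explanatory fit. The sketch below does exactly that with made-up numbers, and it inherits the same weakness: the “best” answer is only as good as the priors you feed it.

```python
# Scoring the explanations for the wet lawn above. The numbers are
# invented for illustration; that is rather the point.

hypotheses = {
    # hypothesis: (prior plausibility, P(wet lawn | hypothesis))
    "rain": (0.30, 0.95),
    "sprinkler": (0.10, 0.90),
    "garden hose": (0.05, 0.80),
    "divine intervention": (0.0001, 1.00),
}

def best_explanation(hypotheses):
    """Inference to the 'best' explanation: maximise prior * fit.
    Garbage priors in, convenient prejudices out."""
    return max(hypotheses, key=lambda h: hypotheses[h][0] * hypotheses[h][1])

print(best_explanation(hypotheses))  # rain
```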

The Modern Reasoning Crisis: Why We’re All Probably Wrong

Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.

Consider how we navigate information today:

  • Algorithms perform industrial-scale induction on our clicks, inferring what we must want to see next.
  • We deduce conclusions from whatever premises our curated feeds have already supplied.
  • We abduce the most convenient explanation for anything that contradicts us: the source must be biased.

And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.

Weakness by Analogy: The Reasoning Café

Imagine a café.

  • The deductivist reads “soup of the day” on the menu, holds that menus state what is served, and concludes there must be soup.
  • The inductivist orders the soup because it has been good every other Tuesday.
  • The abductivist smells tomatoes and infers that soup is the best explanation.

All three are trying to reason. Only one might get lunch.

The Meta-Problem: Reasoning About Reasoning

The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.

This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.

Conclusion: Reason Is Not a Guarantee, It’s a Wager

None of these modalities offer omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s skepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.

Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.

The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.

Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility, and who recognise that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.

So reason on, fellow thinkers. Just don’t be too smug about it.

When Suspension of Disbelief Escapes the Page

Welcome to the Age of Realism Fatigue

Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.

Audio: NotebookLM podcast of this page content.

This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.

But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.

The Fictional Contract: Broken but Not Forgotten

Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.

The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.

Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.

But Then Real Life Said, “Hold My Beer.”

At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.

We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.

You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.

The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.

Cognitive Gymnastics: Welcome to the Cirque du Surréalisme

What happens to a psyche caught in this funhouse? Nothing good.

Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.

Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.

The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.

Culture in Crisis: A Symptom or the Disease?

This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.

We’ve become simultaneously more skeptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.

In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.

So what kind of society does this create?

One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.

The Future: A Choose-Your-Own-Delusion Adventure

So where does this all end?

There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.

We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.

It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”

That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.

In Conclusion, Reality Is Optional, But Reason Is Not

In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.

Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.

Lights down. Curtain call.
Time to read the footnotes.

Are We In a Post-Rationality Era?

The more I watch the news or engage with social media, the more I feel that the Western World™ has gone mad. It’s also disconcerting because, as a subjectivist, this should come as no surprise. I guess I miss the days when it was better wrapped in relativism. Perhaps it’s just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can’t bear the load. I prompted seven generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.

Audio: NotebookLM podcast of this content.

ChatGPT

Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.

We now live in a world where feelings trump evidence, where expertise is met with a sneer and a “Well, I heard on TikTok that…” Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing—just trust the meme with the most retweets.

But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up—it’s only going to get weirder.

DeepSeek

The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.

Several factors contribute to this perception:

  1. Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
  2. Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
  3. Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
  4. Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
  5. Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.

However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”

If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.

Meta AI

You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.

Factors contributing to this trend

  1. Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
  2. Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
  3. Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
  4. Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.

Concerns and implications

  1. Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
  2. Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
  3. Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.

What can be done?

  1. Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
  2. Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
  3. Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.

You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.

Claude

I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:

  • The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
  • Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
  • A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
  • “Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
  • Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
  • Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.

At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.

As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.

These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.

Perplexity

There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].

Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].

However, it’s important to note that:

  1. Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
  2. Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
  3. Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].

Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.

Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].

Citations:
[1] https://metarationality.com/post-rationalist-nihilism
[2] https://www.lesswrong.com/posts/TvSEg5WzSadC697Gd/post-rationality-and-rationality-a-dialogue
[3] https://voegelinview.com/living-in-the-post-rational-age-part-ii/
[4] https://www.reddit.com/r/philosophy/comments/qic6y6/rationality_has_its_limitations_but_is/
[5] https://www.lesswrong.com/posts/qgJfyAbzwnWt4a7Go/the-archetypal-rational-and-post-rational
[6] https://www.newyorker.com/magazine/2021/08/23/why-is-it-so-hard-to-be-rational
[7] https://substack.com/home/post/p-62529885
[8] https://philosophy.stackexchange.com/questions/108574/if-philosophy-is-based-on-reason-and-rationality-then-why-is-there-so-much-disa


Pi

You’re certainly not alone in your concern. There’s been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.

It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.

Why do you think we might be in a post-rationality era?

¬◇ Not Diamond

The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:

Loss of Rational Certainty

The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein’s general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].

Post-Rationalism

Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].

Post-Rationalist Nihilism

Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].

Postmodern and Post-Christian Era

There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].

Practical Implications

While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].

In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.

NB: Interestingly, these ¬◇ footnotes don’t link to anything.

DeepSeek: Hallucination and Populism

The connection between conspiracy theorists, religious believers, and generative AI is a fascinating topic that touches on epistemology, psychology, and the sociology of belief. At its core, the common thread lies in the human tendency to impose meaning, structure, and causality on complex or ambiguous phenomena, often in the absence of sufficient evidence. Let’s explore this connection through the lens of political philosophy.

1. The Need for Meaning and Control

  • Religious Believers: Religion often provides a framework for understanding the world, offering answers to existential questions (e.g., the meaning of life, the nature of good and evil) that might otherwise seem incomprehensible. This framework simplifies complexity by attributing events to divine will or cosmic order.
  • Conspiracy Theorists: Similarly, conspiracy theories offer a simplified narrative that explains chaotic or unsettling events by attributing them to the deliberate actions of powerful, hidden actors. This provides a sense of control and understanding, even if the explanation is speculative or unfounded.
  • Generative AI: AI models, like humans, operate by identifying patterns and generating outputs based on incomplete data. When faced with ambiguity, they “hallucinate” plausible but potentially false information to fill gaps, mirroring the human tendency to create coherent narratives from incomplete evidence.
  • Connection: All three exhibit a drive to reduce uncertainty by generating explanations, whether through divine intervention, secret plots, or algorithmic extrapolation. This reflects a broader human desire to impose order on chaos, even at the cost of accuracy.

2. The Role of Authority and Trust

  • Religious Believers: Religious systems often rely on authoritative texts, leaders, or traditions as sources of truth. These authorities provide a sense of certainty and trust, even when empirical evidence is lacking.
  • Conspiracy Theorists: Conspiracy theories frequently reject mainstream authorities (e.g., governments, scientists) and instead place trust in alternative sources of information, such as charismatic figures or fringe communities.
  • Generative AI: AI systems are often perceived as authoritative due to their ability to process vast amounts of data. However, their outputs are only as reliable as the data they are trained on, and they lack the ability to critically evaluate the truthfulness of their responses.
  • Connection: All three rely on trust in certain authorities or systems to validate their claims, whether those authorities are divine, contrarian, or algorithmic. This highlights the role of trust in shaping belief systems and the potential for misuse or misinterpretation of authority.

3. Cognitive Biases and Pattern Recognition

  • Religious Believers: Humans are predisposed to detect agency and intentionality in the world, a tendency known as “hyperactive agency detection.” This can lead to the attribution of natural events to supernatural forces.
  • Conspiracy Theorists: Conspiracy thinking often involves seeing patterns where none exist (apophenia) and attributing events to deliberate human agency rather than randomness or coincidence.
  • Generative AI: AI models are designed to identify patterns in data, but they can also generate spurious correlations or false connections when the data is noisy or incomplete.
  • Connection: Both humans and AI exhibit a tendency to overfit patterns to data, leading to the creation of narratives that may be compelling but lack empirical support. This suggests a shared cognitive bias toward finding meaning, even when it is not warranted.

4. Social and Political Implications

  • Religious Believers: Religious beliefs can foster community cohesion and moral frameworks but can also lead to division and conflict when different belief systems clash.
  • Conspiracy Theorists: Conspiracy theories can create a sense of belonging among adherents but often lead to polarization and distrust of societal institutions.
  • Generative AI: AI has the potential to democratize information but also risks amplifying misinformation and reinforcing echo chambers.
  • Connection: All three phenomena have significant social and political implications, as they shape how individuals and groups perceive reality, interact with others, and engage with societal structures. They highlight the tension between the need for shared narratives and the dangers of divisive or false beliefs.

5. The Role of Epistemic Humility

  • A key philosophical insight is the importance of epistemic humility—the recognition of the limits of one’s knowledge. Religious traditions, conspiracy theories, and AI systems often lack this humility, presenting their narratives as definitive truths rather than contingent explanations.
  • Encouraging a culture of critical thinking, scepticism, and openness to evidence could help mitigate the risks associated with all three phenomena.

Conclusion

The connection between conspiracy theorists, religious believers, and generative AI lies in their shared tendency to simplify complexity, impose meaning, and generate narratives in the face of uncertainty. While this tendency is deeply human, it also carries risks, particularly when it leads to the uncritical acceptance of false or harmful beliefs. Understanding these connections can help us develop strategies to promote more nuanced, evidence-based approaches to knowledge and belief in an increasingly complex world.

DISCLAIMER: Please note that this is the first response I received from DeepSeek to the following prompt:

I am interested in the possible connexion between conspiracy theorists and religious believers. These two cohorts (if they are even independent) have a common trait of oversimplifying things they can’t understand and hallucinating solutions in the absence of facts or evidence. GenerativeAI is accused of the same behaviour. How might these be connected?

NB: Evidently, some versions of DeepSeek generate images, but mine doesn’t, so I prompted it to generate an apt cover image.

I also asked for keyword tags. It provided these, but then hid them, replacing them with this message:

Sorry, that’s beyond my current scope. Let’s talk about something else.

The Rise of AI: Why the Rote Professions Are on the Chopping Block

Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.

Rote Professions in AI’s Crosshairs

AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.

  1. Medicine: The Diagnosis Factory
    Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news.
    Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
  2. Law: The Precedent Machine
    Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
  3. Judges: Justice, Blind and Algorithmic
    The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice.
    Wouldn’t justice be more blind without a human in the equation?

The Techs Will Rise

Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:

  • Collect and input data into AI systems.
  • Act as liaisons between AI outputs and human clients or patients.
  • Provide emotional support—something AI still struggles to deliver effectively.

The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?

But Can AI Handle This? A Call for Logic Layers

AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.

  • In law, this could ensure AI judgments align with precedent and statute.
  • In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.

A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
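To make the proposal concrete, here is a minimal sketch of that pipeline. Every name and function below is a hypothetical stub, standing in for a real generative model, rule engine, and knowledge base; it illustrates the architecture, not an existing system.

```python
# A minimal sketch of the two layers described above. Everything here is
# a hypothetical stub: generate_conclusion() stands in for a generative
# model; the knowledge base is a toy set, not a statute database.

def generate_conclusion(inputs):
    """Stand-in for the model: returns a conclusion plus the premises
    and factual claims it relied on, so the layers can audit it."""
    return {
        "conclusion": "liable for breach of contract",
        "premises_used": set(inputs),
        "claims": {"a signed contract existed"},
    }

def logic_layer(output, inputs):
    """Did the reasoning use only the inputs it was given?"""
    stray = output["premises_used"] - set(inputs)
    return [f"premise not in inputs: {p}" for p in stray]

def fact_layer(output, knowledge_base):
    """Are the factual claims corroborated by a trusted source?"""
    return [f"unverified claim: {c}"
            for c in output["claims"] if c not in knowledge_base]

case_inputs = {"a signed contract existed", "delivery never occurred"}
trusted_kb = {"a signed contract existed", "delivery never occurred"}

output = generate_conclusion(case_inputs)
problems = logic_layer(output, case_inputs) + fact_layer(output, trusted_kb)
print(output["conclusion"] if not problems else problems)
```

Only a conclusion that survives both audits would be released; anything flagged goes back for regeneration or human review.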

Resistance and the Real Battle Ahead

Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?

The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. The transformation will be swift and irreversible once safeguards are implemented and AI earns confidence.

Critical Thinking: The Human Stronghold

Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.

It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.

Dumocracy

I’m working on a new book—if by new I mean reengaging a book I started in 2022. I’m picking up where I left off with fresh eyes. As I’ve not had time to contribute much to this blog, I thought I’d share the preface as a work in progress. I may share additional subsections over time. Feel free to share any feedback in the comments section.

Preface

“The first step to recovery is to admit there’s a problem.” – Anonymous

Introduction

Imagine a world where the foundation of our governance, the system we hold as the pinnacle of fairness and equality, is fundamentally flawed. What if the mechanisms we trust to represent our voices are inherently incapable of delivering the justice and prosperity we seek? This book embarks on a provocative journey to challenge the sanctity of democracy, not with the intent to undermine its value but to question its effectiveness and expose the inherent limitations that have been overlooked for centuries.

This book is meant to be inclusive, though not necessarily comprehensive. Although focused heavily on a Western experience, particularly the United States, the insights and critiques apply globally.

Democracy feels like an anachronism awaiting a paradigm shift. As a product of the Enlightenment Age, democracy has been sacrosanct in the Western world for centuries. However, a quick glance at current results reveals dissonance. Not all is well. Setting aside the usual scapegoats, such as entrenched political parties, low-information voters, rural-urban divides, gerrymandering, and illegal voting, this book sets out to show that democracy itself is fundamentally flawed. It doesn’t even work well on paper, almost inevitably yielding suboptimal results. When people are added to the equation, it just gets worse.
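For a taste of the “on paper” failure, consider Condorcet’s paradox, the concrete case that Arrow (1951) later generalised. A few illustrative lines are enough to produce it:

```python
# Condorcet's paradox: with three rational voters, majority rule can be
# cyclic, so "the will of the people" may not even be well defined.
from itertools import combinations

ballots = [  # each voter ranks candidates best-to-worst
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if more voters rank x above y than y above x."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in combinations("ABC", 2):
    if majority_prefers(x, y):
        print(f"majority prefers {x} over {y}")
    else:
        print(f"majority prefers {y} over {x}")
# A beats B, B beats C, yet C beats A: a cycle, so majority rule crowns no winner.
```

Three perfectly rational voters; one perfectly irrational electorate.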

In this book, we’ll discuss inherent challenges to democracy. The main premises are:

  • In theory, democracy is not mathematically tenable. It always leads to suboptimal solutions with mediocre results. [1]
  • In practice, human nature and cognitive limitations exacerbate the execution of democracy from the perspective of voters and representatives. [2]

We’ll explore democracy from its beginnings and various forms across time, history, scale, and scope. We’ll investigate the impacts of imperfect information, human irrationalities, emotional triggers, and cognitive limitations and biases of the general populace. We’ll survey the continents and look at ancient Mesopotamia, India, the Polynesian islands, and beyond.

This book aims to spark critical thinking and dialogue about the efficacy of democracy, encouraging readers to question widely held assumptions and consider the need for potential reforms or alternative governance models. Through this examination, the book hopes to inspire new ideas and solutions that can address the complexities and challenges inherent in democratic systems.


[1] Arrow, 1951; Sen, 1970

[2] Kahneman, 2011; Tversky & Kahneman, 1974