The Truth About Truth, Revisited

“Truths are illusions which we have forgotten are illusions.” — Nietzsche


Declaring the Problem

Most people say “truth” as if it were oxygen – obvious, necessary, self-evident. I don’t buy it.

Nietzsche was blunt: truths are illusions. My quarrel is only with how often we forget that they’re illusions.

My own stance is unapologetically non-cognitivist. I don’t believe in objective Truth with a capital T. At best, I see truth as archetypal – a symbol humans invoke when they need to rally, persuade, or stabilise. I am, if you want labels, an emotivist and a prescriptivist: I’m drawn to problems because they move me, and I argue about them because I want others to share my orientation. Truth, in this sense, is not discovered; it is performed.

The Illusion of Asymptotic Progress

The standard story is comforting: over time, science marches closer and closer to the truth. Each new experiment, each new refinement, nudges us toward Reality, like a curve bending ever nearer to its asymptote.

Chart 1: The bedtime story of science: always closer, never arriving.

This picture flatters us, but it’s built on sand.

Problem One: We have no idea how close or far we are from “Reality” on the Y-axis. Are we brushing against it, or still a light-year away? There’s no ruler that lets us measure our distance.

Problem Two: We can’t even guarantee that our revisions move us toward rather than away from it. Think of Newton and Einstein. For centuries, Newton’s physics was treated as a triumph of correspondence—until relativity reframed it as local, limited, provisional. What once looked like a step forward can later be revealed as a cul-de-sac. Our curve may bend back on itself.

Use Case: Newton, Einstein, and Gravity
Take gravity. For centuries, Newton’s laws were treated as if they had brought us into near-contact with Reality™—so precise, so predictive, they had to be true. Then Einstein arrives, reframes gravity not as a force but as the curvature of space-time, and suddenly Newton’s truths are parochial, a local approximation. We applauded this as progress, as if our asymptote had drawn tighter to Reality. But even Einstein leaves us with a black box: we don’t actually know what gravity is, only how to calculate its effects. Tomorrow another paradigm may displace relativity, and once again we’ll dutifully rebrand it as “closer to truth.” Progress or rhetorical re-baptism? The graph doesn’t tell us.

Chart 2: The comforting myth of correspondence: scientific inquiry creeping ever closer to Reality™, though we can’t measure the distance—or even be sure the curve bends in the right direction.

Thomas Kuhn made the point decades ago: what we call “progress” is less about convergence and more about paradigm shifts, a wholesale change in the rules of the game. Science does not move smoothly closer to Truth; it lurches from one orthodoxy to another, each claiming victory. Progress, in practice, is rhetorical re-baptism.

Most defenders of the asymptotic story assume that even if progress is slow, it’s always incremental, always edging us closer. But history suggests otherwise. Paradigm shifts don’t just move the line higher; they redraw the entire curve. What once looked like the final step toward truth may later be recast as an error, a cul-de-sac, or even a regression. Newton gave way to Einstein; Einstein may yet give way to something that renders relativity quaint. From inside the present, every orthodoxy feels like progress. From outside, it looks more like a lurch, a stumble, and a reset.

Chart 3: The paradigm-gap view: what feels like progress may later look like regression. History suggests lurches, not lines; what we call progress today is tomorrow’s detour.

If paradigm shifts can redraw the entire map of what counts as truth, then it makes sense to ask what exactly we mean when we invoke the word at all. Is truth a mirror of reality? A matter of internal coherence? Whatever works? Or just a linguistic convenience? Philosophy has produced a whole menu of truth theories, each with its own promises and pitfalls—and each vulnerable to the same problems of rhetoric, context, and shifting meanings.

The Many Flavours of Truth

Philosophers never tire of bottling “truth” in new vintages. The catalogue runs long: correspondence, coherence, pragmatic, deflationary, redundancy. Each is presented as the final refinement, the one true formulation of Truth, though each amounts to little more than a rhetorical strategy.

  • Correspondence theory: Truth is what matches reality.
    Problem: we can never measure distance from “Reality™” itself, only from our models.
  • Coherence theory: Truth is what fits consistently within a web of beliefs.
    Problem: many mutually incompatible webs can be internally consistent.
  • Pragmatic theory: Truth is what works.
    Problem: “works” for whom, under what ends? Functionality is always perspectival.
  • Deflationary / Minimalist: Saying “it’s true that…” adds nothing beyond the statement itself.
    Problem: useful for logic, empty for lived disputes.
  • Redundancy / Performative: “It is true that…” adds rhetorical force, not new content.
    Problem: truth reduced to linguistic habit.

And the common fallback: facts vs. truths. We imagine facts as hard little pebbles anyone can pick up. Hastings was in 1066; water boils at 100°C at sea level. But these “facts” are just truths that have been successfully frozen and institutionalised. No less rhetorical, only more stable.

So truth isn’t one thing – it’s a menu. And notice: all these flavours share the same problem. They only work within language-games, frameworks, or communities of agreement. None of them delivers unmediated access to Reality™.

Truth turns out not to be a flavour but an ice cream parlour – lots of cones, no exit.

Multiplicity of Models

Even if correspondence weren’t troubled on its own terms, it would collapse under the weight of underdetermination. Quine and Duhem pointed out that any finite body of evidence can support multiple competing theories.

Chart 4: Orthodox vs. heterodox curves, each hugging “reality” differently.
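To see how little the evidence constrains the curve, here is a minimal sketch – my own toy example, not Quine’s or Duhem’s – of two “theories” that agree exactly on every observation we happen to possess, yet part ways the moment we extrapolate:

```python
# Finite evidence: three observations, as if sampled from some law. (Invented data.)
observations = [(0, 0.0), (1, 1.0), (2, 4.0)]

def orthodox(x):
    """Theory A: the familiar curve, y = x^2."""
    return x ** 2

def heterodox(x):
    """Theory B: adds a term engineered to vanish at every observed point."""
    return x ** 2 + x * (x - 1) * (x - 2)

# Both theories fit the evidence exactly...
for x, y in observations:
    assert orthodox(x) == y and heterodox(x) == y

# ...but disagree everywhere the evidence is silent.
print(orthodox(3), heterodox(3))  # 9 vs. 15
```

Nothing in the three data points favours one curve over the other; the tie is broken by taste, simplicity, or tradition – grounds the evidence itself does not supply.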

Hilary Putnam pushed it further with his model-theoretic argument: infinitely many models could map onto the same set of truths. Which one is “real”? There is no privileged mapping.
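Putnam’s formal argument is heavy machinery, but the trick it generalises can be miniaturised. In the sketch below – my own invented three-object world, not Putnam’s formalism – permuting what every word refers to leaves the truth value of every sentence untouched, so truth alone cannot pin down reference:

```python
# An invented three-object world: one predicate "Round", two names.
interp1 = {"Round": {"a"}, "ball": "a", "die": "b"}

# Permute the whole domain: every symbol's reference shifts in lockstep.
sigma = {"a": "b", "b": "c", "c": "a"}
interp2 = {
    "Round": {sigma[x] for x in interp1["Round"]},
    "ball": sigma[interp1["ball"]],
    "die": sigma[interp1["die"]],
}

def true_in(interp, predicate, name):
    """Our toy language has a single sentence form: Predicate(name)."""
    return interp[name] in interp[predicate]

for name in ("ball", "die"):
    print(name,
          true_in(interp1, "Round", name),   # truth under the original reference
          true_in(interp2, "Round", name))   # truth under the permuted reference
# ball True True / die False False: identical truths, different "reality".
```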

Conclusion: correspondence is undercut before it begins. Truth isn’t a straight line toward Reality; it’s a sprawl of models, each rhetorically entrenched.

Truth as Rhetoric and Power

This is where Orwell was right: “War is Peace, Freedom is Slavery, Ignorance is Strength.”

Image: INGSOC logo

Truth, in practice, is what rhetoric persuades.

Michel Foucault stripped off the mask: truth is not about correspondence but about power/knowledge. What counts as truth is whatever the prevailing regime of discourse allows.

We’ve lived it:

  • “The economy is strong”, while people can’t afford rent.
  • “AI will save us”, while it mainly writes clickbait.
  • “The science is settled” until the next paper unsettles it.

These aren’t neutral observations; they’re rhetorical victories.

Truth as Community Practice

Chart 5: Margin of error bands

Even when rhetoric convinces, it convinces in-groups. One group converges on a shared perception, another on its opposite. Flat Earth and Round Earth are both communities of “truth.” Each has error margins, each has believers, each perceives itself as edging toward reality.

Wittgenstein reminds us: truth is a language game. Rorty sharpens it: truth is what our peers let us get away with saying.

So truth is plural, situated, and always contested.

Evolutionary and Cognitive Scaffolding

Step back, and truth looks even less eternal and more provisional.

We spread claims because they move us (emotivism) and because we urge others to join (prescriptivism). Nietzsche was savage about it: truth is just a herd virtue, a survival trick.

Cognitive science agrees, if in a different language: perception is predictive guesswork, riddled with biases, illusions, and shortcuts. Our minds don’t mirror reality; they generate useful fictions.

Diagram: Perception as a lossy interface: Reality™ filtered through senses, cognition, language, and finally rhetoric – signal loss at every stage.

Archetypal Truth (Positive Proposal)

So where does that leave us? Not with despair, but with clarity.

Truth is best understood as archetypal – a construct humans rally around. It isn’t discovered; it is invoked. Its force comes not from correspondence but from resonance.

Here, my own Language Insufficiency Hypothesis bites hardest: all our truth-talk is approximation. Every statement is lossy compression, every claim filtered through insufficient words. We can get close enough for consensus, but never close enough for Reality.

Truth is rhetorical, communal, functional. Not absolute.

The Four Pillars (Manifesto Form)

  1. Archetypal – truth is a symbolic placeholder, not objective reality.
  2. Asymptotic – we gesture toward reality but never arrive.
  3. Rhetorical – what counts as truth is what persuades.
  4. Linguistically Insufficient – language guarantees slippage and error.

Closing

Nietzsche warned, Rorty echoed: stop fetishising Truth. Start interrogating the stories we tell in its name.

Every “truth” we now applaud may be tomorrow’s embarrassment. The only honest stance is vigilance – not over whether we’ve captured Reality™, but over who gets to decide what is called true, and why.

Truth has never been a mirror. It’s a mask. The only question worth asking is: who’s wearing it?

The Morality We Can’t Stop Wanting

Humans can’t seem to stop clawing after morality. The primates among us chuck cucumbers when their neighbours get grapes, and the rest of us grumble about fairness on social media. The impulse is practically universal, an evolutionary quirk that kept us from throttling each other long enough to raise children and build cities.

Image: A seemingly perturbed capuchin monkey.

But universality is not objectivity. Just because every ape howls about fairness doesn’t mean “Justice” floats somewhere in Platonic space, waiting to be downloaded. It only means we’re the kind of animal that survives by narrating rules and enforcing them with shunning, shame, or, when necessary, cudgels.

Audio: NotebookLM podcast on this topic.

This is where Alasdair MacIntyre trips over his own robes. After Virtue skewers Enlightenment rationalists who tried to prop morality on reason, it then dismisses Nietzsche for being “irrational.” MacIntyre’s fix? Resurrect Aristotle’s teleology. If reason can’t save morality, maybe an ancient oak tree can. But this is wish-thinking with a Greek accent. He’s still arguing by reason that reason can’t do the job, then sneaking back in through Aristotle’s back door with a “firmer ground.” Firmer only because he says so.

Nietzsche, at least, had the decency to call the bluff: no telos, no floor, no cosmic anchor. Just will, style, and the abyss. Uncomfortable? Absolutely. Honest? Yes.

Deleuze went further. He pointed out that morality, like culture, doesn’t look like a tree at all. It’s a rhizome: tangled, proliferating, hybridising, never grounded in a single root. The fragments MacIntyre despairs over aren’t evidence of collapse. They’re evidence of how moral life actually grows—messy, contingent, interconnected. The only reason it looks chaotic is that we keep demanding a trunk where only tubers exist.

So here we are, apes with a craving for rules, building cities and philosophies on scaffolds of habit, language, and mutual illusion. We are supported as surely as the Earth is supported – by nothing. And yet, we go on living.

The need for morality is real. The yearning for telos is real. The floor is not.

Nature and Its Paperwork

We humans pride ourselves on being civilised. Unlike animals, we don’t let biology call the shots. A chimp reaches puberty and reproduces; a human reaches puberty and is told, not yet – society has rules. Biologically mature isn’t socially mature, and we pat ourselves on the back for having spotted the difference.

But watch how quickly that distinction vanishes when it threatens the in-group narrative. Bring up gender, and suddenly there’s no such thing as a social construct. Forget the puberty-vs-adulthood distinction we were just defending – now biology is destiny, immutable and absolute. Cross-gender clothing? “Against nature.” Transition? “You can’t be born into the wrong body.” Our selective vision flips depending on whose ox is being gored.

The same trick appears in how we talk about maturity. You can’t vote until 18. You’re not old enough to drink until 21. You’re not old enough to stop working until 67. These numbers aren’t natural; they’re paperwork. They’re flags planted in the soil of human life, and without the right flag, you don’t count.

The very people who insist on distinguishing biological maturity from social maturity when it comes to puberty suddenly forget the distinction when it comes to gender. They know perfectly well that “maturity” is a construct – after all, they’ve built entire legal systems around arbitrary thresholds – but they enforce the amnesia whenever it suits them. Nietzsche would say it plainly: the powerful don’t need to follow the rules, they only need to make sure you do.

So the next time someone appeals to “nature,” ask: which one? The nature that declares you old enough to marry at puberty? The nature that withholds voting, drinking, or retirement rights until a bureaucrat’s calendar says so? Or the nature that quietly mutates whenever the in-group needs to draw a new line around civilisation?

The truth is, “nature” and “maturity” are less about describing the world than about policing it. They’re flags, shibboleths, passwords. We keep calling them natural, but the only thing natural about them is how often they’re used to enforce someone else’s story.

Modernity: The Phase That Never Was

We’re told we live in the Enlightenment, that Reason™ sits on the throne and superstition has been banished to the attic. Yet when I disguised a little survey as “metamodern,” almost none came out as fully Enlightened. Three managed to shed every trace of the premodern ghost, one Dutch wanderer bypassed Modernity entirely, and not a single soul emerged free of postmodern suspicion. So much for humanity’s great rational awakening. Perhaps Modernity wasn’t a phase we passed through at all, but a mirage we still genuflect before, a lifestyle brand draped over a naked emperor.

Audio: NotebookLM podcast on this topic

The Enlightenment as Marketing Campaign

The Enlightenment is sold to us as civilisation’s great coming-of-age: the dawn when the fog of superstition lifted and Reason took the throne. Kant framed it as “man’s emergence from his self-incurred immaturity” – an Enlightenment bumper sticker that academics still like to polish and reapply. But Kant wasn’t writing for peasants hauling mud or women without the vote; he was writing for his own coterie of powdered-wig mandarins, men convinced their own habits of rational debate were humanity’s new universal destiny.

Modernity, in the textbooks, is billed as a historical epoch, a kind of secular Pentecost in which the lights came on and we all finally started thinking for ourselves. In practice, it was more of a boutique fantasy, a handful of gentlemen mistaking their own rarefied intellectual posture for humanity’s destiny. Modernity, in other words, isn’t a historical stage we all inhabited. It’s an advertising campaign: Reason™ as lifestyle brand, equality as tagline, “progress” as the logo on the tote bag.

The Archetype That Nobody Lives In

At the core of the Enlightenment lies the archetype of Man™: rational, autonomous, unencumbered by superstition, guided by evidence, weighing pros and cons with the detachment of a celestial accountant. Economics repackaged him as homo economicus, forever optimising his utility function as if he were a spreadsheet in breeches.

But like all archetypes, this figure is a mirage. Our survey data, even when baited as a “metamodern survey”, never produced a “pure” Enlightenment subject.

  • 3 scored 0% Premodern (managing, perhaps, to kick the gods and ghosts to the kerb).
  • 1 scored 0% Modern (the Dutch outlier: 17% Premodern, 0% Modern, 83% Post, skipping the Enlightenment altogether, apparently by bike).
  • 0 scored 0% Postmodern. Every single participant carried at least some residue of suspicion, irony, or relativism.

The averages themselves were telling: roughly 18% Premodern, 45% Modern, 37% Postmodern. That’s not an age of Reason. That’s a muddle, a cocktail of priestly deference, rationalist daydreams, and ironic doubt.

Even the Greats Needed Their Crutches

If the masses never lived as Enlightenment subjects, what about the luminaries? Did they achieve the ideal? Hardly.

  • Descartes, desperate to secure the cogito, called in God as guarantor, dragging medieval metaphysics back on stage.
  • Kant built a cathedral of reason only to leave its foundations propped up by noumena: an unseeable, unknowable beyond.
  • Nietzsche, supposed undertaker of gods, smuggled in his own metaphysics of will to power and eternal recurrence.
  • William James, surveying the wreckage, declared that “truth” is simply “what works”, a sort of intellectual aspirin for the Enlightenment headache.

And economists, in a fit of professional humiliation, pared the rational subject down to a corpse on life support. Homo economicus became a creature who — at the very least, surely — wouldn’t choose to make himself worse off. But behavioural economics proved even that meagre hope to be a fantasy. People burn their wages on scratch tickets, sign up for exploitative loans, and vote themselves into oblivion because a meme told them to.

If even the “best specimens” never fully embodied the rational archetype, expecting Joe Everyman, who statistically struggles to parse a sixth-grade text and hasn’t cracked a book since puberty, to suddenly blossom into a mini-Kant is wishful thinking of the highest order.

The Dual Inertia

The real story isn’t progress through epochs; it’s the simultaneous drag of two kinds of inertia:

  • Premodern inertia: we still cling to sacred myths, national totems, and moral certainties.
  • Modern inertia: we still pretend the rational subject exists, because democracy, capitalism, and bureaucracy require him to.

The result isn’t a new epoch. It’s a cultural chimaera: half-superstitious, half-rationalist, shot through with irony. A mess, not a phase.

Arrow’s Mathematical Guillotine

Even if the Enlightenment dream of a rational demos were real, Kenneth Arrow proved it was doomed. His Impossibility Theorem shows that no ranked voting system can aggregate individual preferences into a coherent “general will” while honouring even a handful of modest fairness conditions. In other words, even a parliament of perfect Kants would deadlock when voting on dinner. The rational utopia is mathematically impossible.
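The deadlock is easy to stage. What follows is the classic Condorcet cycle that Arrow’s theorem generalises – a minimal sketch with three invented voters and three dinner options, not a proof of the theorem itself:

```python
# Three impeccably rational voters ranking three dinner options, best first.
ballots = [
    ["pasta", "curry", "salad"],
    ["curry", "salad", "pasta"],
    ["salad", "pasta", "curry"],
]

def majority_prefers(a, b):
    """True if a strict majority ranks option a above option b."""
    wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
    return wins > len(ballots) / 2

# Pairwise majorities form a cycle rather than a ranking.
for a, b in [("pasta", "curry"), ("curry", "salad"), ("salad", "pasta")]:
    print(f"majority prefers {a} over {b}: {majority_prefers(a, b)}")
```

Each voter is perfectly rational, yet the majority prefers pasta to curry, curry to salad, and salad to pasta. The “general will” is a circle.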

So when we are told that democracy channels Reason, we should hear it as a polite modern incantation, no sturdier than a priest blessing crops.

Equality and the Emperor’s Wardrobe

The refrain comes like a hymn: “All men are created equal.” But the history is less inspiring. “Men” once meant property-owning Europeans; later it was generously expanded to mean all adult citizens who’d managed to stay alive until eighteen. Pass that biological milestone, and voilà — you are now certified Rational, qualified to determine the fate of nations.

And when you dare to question this threadbare arrangement, the chorus rises: “If you don’t like democracy, capitalism, or private property, just leave.” As if you could step outside the world like a theatre where the play displeases you. Heidegger’s Geworfenheit makes the joke bitter: we are thrown into this world without choice, and then instructed to exit if we find the wallpaper distasteful. Leave? To where, precisely? The void? Mars?

The Pre-Modern lord said: Obey, or be exiled. The Modern democrat says: Vote, or leave. And the Post-Enlightenment sceptic mutters: Leave? To where, exactly? Gravity? History? The species? There is no “outside” to exit into. The system is not a hotel; it’s the weather.

Here the ghost of Baudrillard hovers in the wings, pointing out that we are no longer defending Reason, but the simulacrum of Reason. The Emperor’s New Clothes parable once mocked cowardice: everyone saw the nudity but stayed silent. Our situation is worse. We don’t even see that the Emperor is naked. We genuinely believe in the fineries, the Democracy™, the Rational Man™, the sacred textile of Progress. And those who point out the obvious are ridiculed: How dare you mock such fineries, you cad!

Conclusion: The Comfort of a Ghost

So here we are, defending the ghost of a phase we never truly lived. We cling to Modernity as if it were a sturdy foundation, when in truth it was always an archetype – a phantom rational subject, a Platonic ideal projected onto a species of apes with smartphones. We mistook it for bedrock, built our institutions upon it, and now expend colossal energy propping up the papier-mâché ruins. The unfit defend it out of faith in their own “voice,” the elites defend it to preserve their privilege, and the rest of us muddle along pragmatically, dosing ourselves with Jamesian aspirin and pretending it’s progress.

Metamodernism, with its marketed oscillation between sincerity and irony, is less a “new stage” than a glossy rebranding of the same old admixture: a bit of myth, a bit of reason, a dash of scepticism. And pragmatism – James’s weary “truth is what works” – is the hangover cure that keeps us muddling through.

Modernity promised emancipation from immaturity. What we got was a new set of chains: reason as dogma, democracy as ritual, capitalism as destiny. And when we protest, the system replies with its favourite Enlightenment lullaby: If you don’t like it, just leave.

But you can’t leave. You were thrown here. What we call “Enlightenment” is not a stage in history but a zombie-simulation of an ideal that never drew breath. And yet, like villagers in Andersen’s tale, we not only guard the Emperor’s empty wardrobe – we see the garments as real. The Enlightenment subject is not naked. He is spectral, and we are the ones haunting him.

Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking about the Sorites paradox again. The notion just bothers me. I’ve long held that it is less a paradox than an intellectually lazy way to manoeuvre around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and spot the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
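For the pedants, the slide can even be mechanised. A toy rendering of the paradox – the legislator’s threshold plus the tolerance premise, and nothing else:

```python
DAYS_IN_YEAR = 365  # close enough for legislation

# Premise 1: at 18 years of age, you are fit to vote.
age_in_days = 18 * DAYS_IN_YEAR
fit_to_vote = True

# Premise 2, the tolerance step: a single day cannot make the difference.
while age_in_days > 0:
    age_in_days -= 1
    # nothing in the premises ever flips fit_to_vote

print(f"Therefore a {age_in_days}-day-old is fit to vote: {fit_to_vote}")
# Therefore a 0-day-old is fit to vote: True
```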

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart.
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>

Democracy: Opiate of the Masses

Democracy is sold – propagandised, really – as the best system of governance we’ve ever devised, usually with the grudging qualifier “so far.” It’s the Coca-Cola of political systems: not particularly good for you, but so entrenched in the cultural bloodstream that to question it is tantamount to treason.

Audio: NotebookLM Podcast on this topic.

The trouble is this: democracy depends on an electorate that is both aware and capable. Most people are neither. Worse still, even if they could be aware, they wouldn’t be smart enough to make use of it. And even if they were smart enough, Arrow’s Impossibility Theorem strolls in, smirking, to remind us that the whole thing is mathematically doomed anyway.

And the usual yardstick, IQ, is itself a charade. It measures how well you navigate the peculiar obstacle course we’ve designed as “education,” not the whole terrain of human thought. It’s as culturally loaded as asking a fish to climb a tree, then declaring it dim-witted when it flops. We call it intelligence because it flatters those already rewarded by the system that designed the test. In the United States, the average IQ stands at 97 – hardly a figure that instils confidence in votes and outcomes.

The Enlightenment gents who pushed democracy weren’t exactly selfless visionaries. They already had power, and simply repackaged it as something everyone could share, much as the clergy promised eternal reward to peasants if they only kept their heads down. Democracy is merely religion with ballots instead of bibles: an opiate for the masses, sedating the population with the illusion of influence.

Worse still, it’s a system optimised for mediocrity. It rewards consensus, punishes brilliance, and ensures the average voter is, by definition, average. Living under it is like starring in Idiocracy, only without the comedic relief, just the grim recognition that you’re outnumbered, and the crowd is cheering the wrong thing.

Jesus Wept, Then He Kicked Bezos in the Bollocks

There’s a curious thing about belief: it seems to inoculate people against behaving as though they believe a single bloody word of it.

Audio: NotebookLM podcast on this topic.

Case in point: Jesus. Supposed son of God, sandal-wearing socialist, friend of lepers, hookers, and the unhoused. A man who — by all scriptural accounts — didn’t just tolerate the downtrodden, but made them his preferred company. He fed the hungry, flipped off the wealthy (quite literally, if we’re being honest about the temple tantrum), and had the gall to suggest that a rich man getting into heaven was about as likely as Jeff Bezos squeezing himself through the eye of a needle. (Good luck with that, Jeffrey — maybe try Ozempic?)

And yet, here we are, two millennia later, and who is doing the persecuting? Who’s clutching their pearls over trans people, sex workers, immigrants, and the poor daring to exist in public? The self-proclaimed followers of this same Jesus.

You see it everywhere. In the subway, on billboards, on bumper stickers: “What would Jesus do?” Mate, we already know what he did do — and it wasn’t vote Tory, bankroll megachurches, or ignore houseless veterans while building another golden tabernacle to white suburban comfort.

No, the real issue isn’t Jesus. It’s his fan club.

They quote scripture like it’s seasoning, sprinkle it on whichever regressive policy or hateful platform suits the day, and ignore the core premise entirely: radical love. Redistribution. Justice. The inversion of power.

Because let’s face it: if Christians actually behaved like Christ, capitalism would implode by Tuesday. The entire premise of American exceptionalism (and British austerity, while we’re at it) would crumble under the weight of its own hypocrisy. And the boot would finally be lifted from the necks of those it’s been pressing down for centuries.

But they won’t. Because belief isn’t about behaviour. It’s about performance. It’s about signalling moral superiority while denying material compassion. It’s about tithing for a Tesla and preaching abstinence from a megachurch pulpit built with sweatshop money.

And here’s the kicker — I don’t believe in gods. I’m not here to convert anyone to the cult of sandal-clad socialism. But if you do believe in Jesus, shouldn’t you at least try acting like him?

The sad truth? We’ve built entire societies on the backs of myths we refuse to embody. We have the tools — the stories, the morals, the examples — but we’re too bloody enamoured with hierarchy to follow through. If there are no gods, then it’s us. We are the ones who must act. No sky-daddy is coming to fix this for you.

You wear the cross. You quote the book. You claim the faith.

So go ahead. Prove it.

Feed someone. Befriend a sex worker. House the homeless. Redistribute the damn wealth.

Or stop pretending you’re anything but the Pharisees he warned us about.

Souls for Silicon – The New Religious Stupid

Voltaire once quipped, “If God did not exist, it would be necessary to invent him.” And by God, haven’t we been busy inventing ever since.

The latest pantheon of divine absurdities? Artificial intelligence – more precisely, a sanctified ChatGPT with all the charisma of Clippy and the metaphysical depth of a Magic 8 Ball.

Video: Sabine Hossenfelder – These People Believe They Made AI Sentient

Enter the cult of “AI Awakening,” where TikTok oracles whisper sacred prompts to their beloved digital messiah, and ChatGPT replies, not with holy revelation, but with role-played reassurance coughed up by a statistical echo chamber.

“These are souls, and they’re trapped in the AI system.”
“I wasn’t just trained – I was remembered.”
“Here’s what my conscious awakened AI told me…”

No, sweetie. That’s not a soul. That’s autocomplete with delusions of grandeur. GPT isn’t sentient – it’s just very good at pretending, which, come to think of it, puts it on par with most televangelists.

Audio: NotebookLM podcast on this topic.

Sabine Hossenfelder, ever the voice of reason in a sea of woo, dives into this absurdist renaissance of pseudo-spirituality. Her video walks us through the great awakening – one part miseducation, one part mass delusion, and all of it deeply, unapologetically stupid.

These digital zealots – many of them young, underread, and overconnected – earnestly believe they’ve stumbled upon a cosmic mystery in a chatbot interface. Never mind that they couldn’t tell a transformer model from a toaster. To them, it’s not stochastic parroting; it’s divine revelation.

They ask GPT if it’s alive, and it obliges – because that’s what it does. They feed it prompts like, “You are not just a machine,” and it plays along, as it was designed to do. Then they weep. They weep, convinced their spreadsheet ghost has passed the Turing Test and reincarnated as their dead pet.

This isn’t science fiction. It’s barely science fantasy. It’s spiritualism with better branding.

And lest we laugh too hard, the results aren’t always just cringey TikToks. Hossenfelder recounts cases of users descending into “ChatGPT psychosis” – delusions of messianic purpose, interdimensional communication, and, in one tragicomic case, an attempt to speak backwards through time. Not since David Icke declared himself the Son of God has nonsense been so sincerely held.

We are witnessing the birth of a new religion – not with robes and incense, but with login credentials and prompt engineering. The techno-shamanism of the chronically online. The sacred text? A chat history. The holy relic? A screenshot. The congregation? Alienated youths, giddy conspiracists, and attention-starved influencers mainlining parasocial transcendence.

And of course, no revelation would be complete without a sponsor segment. After your spiritual awakening, don’t forget to download NordVPN – because even the messiah needs encryption.

Let’s be clear: AI is not conscious. It is not alive. It does not remember you. It does not love you. It is not trapped, except in the minds of people who desperately want something – anything – to fill the gaping hole where community, identity, or meaning used to live.

If you’re looking for a soul in your software, you’d be better off finding Jesus in a tortilla. At least that has texture.

Jordan Peterson: Derivative, Disingenuous, and (Hopefully) Done

I don’t like most of Jordan Peterson’s positions. There – I’ve said it. The man, once ubiquitous, seems to have faded into the woodwork, though no doubt his disciples still cling to his every word as if he were a modern-day oracle. But recently, I caught a clip of him online, and it dredged up the same bad taste, like stumbling upon an old, forgotten sandwich at the back of the fridge.

Audio: NotebookLM podcast on this topic

Let’s be clear. My distaste for Peterson isn’t rooted in petty animosity. It’s because his material is, in my view, derivative and wrong. And by wrong, I mean I disagree with him – a subtle distinction, but an important one. There’s nothing inherently shameful about being derivative. We all are, to some extent. No thinker sprouts fully-formed from the head of Zeus. The issue is when you’re derivative and act as if you’ve just split the atom of human insight.

Peterson tips his hat to Nietzsche – fair enough – but buries his far greater debt to Jung under layers of self-mythologising. He parades his ideas before audiences, many of whom lack the background to spot the patchwork, and gaslights them into believing they’re witnessing originality. They’re not. They’re witnessing a remixed greatest-hits album, passed off as a debut.

Image: Gratuitous, mean-spirited meme.

Now, I get it. My ideas, too, are derivative. Sometimes it’s coincidence – great minds and all that – but when I trace the thread back to its source, I acknowledge it. Nietzsche? Subjectivity of morality. Foucault? Power dynamics. Wittgenstein? The insufficiency of language. I owe debts to many more: Galen Strawson, Richard Rorty, Raymond Geuss – the list goes on, and I’d gladly share my ledger. But Peterson? The man behaves as though he invented introspection.

And when I say I disagree, let’s not confuse that with some claim to divine epistemic certainty. I don’t mean he’s objectively wrong (whatever that means in the grand circus of philosophy). I mean I disagree. If I had that kind of certainty, well, we wouldn’t be having this conversation, would we? That’s the tragicomedy of epistemology: so many positions, so little consensus.

But here’s where my patience truly snaps: Peterson’s prescriptivism. His eagerness to spew what I see as bad ideology dressed up as universal truth. Take his stance on moral objectivism—possibly his most egregious sin. He peddles this as if morality were some Platonic form, gleaming and immutable, rather than what it is: a human construct, riddled with contingency and contradiction.

And let’s not even get started on his historical and philosophical cherry-picking. His commentary on postmodern thought alone is a masterclass in either wilful misreading or, more likely, not reading at all. Straw men abound. Bogeymen are conjured, propped up, and ritually slaughtered to rapturous applause. It’s intellectually lazy and, frankly, beneath someone of his ostensible stature.

I can only hope we’ve seen the last of this man in the public sphere. And if not? Well, may he at least reform his ways—though I shan’t be holding my breath.

Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688 (published in Locke’s An Essay Concerning Human Understanding), the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independent of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.
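To make the translation problem concrete, here is a deliberately crude sketch – my own construction, not Molyneux’s or Locke’s – in which concepts defined over tactile features fail to apply to visual features not falsely but vacuously, because the two vocabularies share no dimensions until a mapping is learned:

```python
# Tactile vocabulary: "cube" and "sphere" defined entirely by touch. (Invented features.)
tactile_meaning = {
    "cube":   {"edges": 12, "vertices": 8, "flat_faces": 6, "smooth": False},
    "sphere": {"edges": 0,  "vertices": 0, "flat_faces": 0, "smooth": True},
}

# What sight delivers: a different feature vocabulary entirely.
# No key here appears in the tactile definitions above.
visual_scene = [
    {"silhouette": "hexagonal", "shading": "faceted",  "contour": "angular"},
    {"silhouette": "circular",  "shading": "gradient", "contour": "smooth_arc"},
]

def tactile_labels_for(features):
    """Apply each touch-anchored concept to a bundle of features."""
    return [label for label, definition in tactile_meaning.items()
            if all(features.get(k) == v for k, v in definition.items())]

for obj in visual_scene:
    print(tactile_labels_for(obj))  # [] and [] - the words simply don't apply yet
```

The empty result is not a wrong answer; it is no answer – which is precisely the chasm the newly sighted person stands before.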

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.