Democracy: Opiate of the Masses

Democracy is sold, propagandised, really, as the best system of governance we’ve ever devised, usually with the grudging qualifier “so far.” It’s the Coca-Cola of political systems: not particularly good for you, but so entrenched in the cultural bloodstream that to question it is tantamount to treason.

Audio: NotebookLM Podcast on this topic.

The trouble is this: democracy depends on an electorate that is both aware and capable. Most people are neither. Worse still, even if they were aware, most wouldn’t be equipped to make use of that awareness. And even if they were, Arrow’s Impossibility Theorem strolls in, smirking, to remind us that the whole thing is mathematically doomed anyway.
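Arrow’s theorem itself is a formal result about aggregating ranked preferences, but the flavour of the problem is easy to show with the related Condorcet paradox: majority voting can go round in circles. A minimal sketch, with a hypothetical three-voter, three-candidate electorate:

```python
# Hypothetical electorate: each voter ranks candidates A, B, C best-to-worst.
ballots = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A.
assert majority_prefers("A", "B")
assert majority_prefers("B", "C")
assert majority_prefers("C", "A")
```

No candidate can win a round-robin: whatever the electorate “decides”, a majority would rather have someone else.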

In the United States, the average IQ stands at 97 – hardly a figure that instils confidence in votes and outcomes. But even this number is a charade. IQ measures how well you navigate the peculiar obstacle course we’ve designed as “education,” not the whole terrain of human thought. It’s as culturally loaded as asking a fish to climb a tree, then declaring it dim-witted when it flops. We call it intelligence because it flatters those already rewarded by the system that designed the test.

The Enlightenment gents who pushed democracy weren’t exactly selfless visionaries. They already had power, and simply repackaged it as something everyone could share, much as the clergy promised eternal reward to peasants if they only kept their heads down. Democracy is merely religion with ballots instead of bibles: an opiate for the masses, sedating the population with the illusion of influence.

Worse still, it’s a system optimised for mediocrity. It rewards consensus, punishes brilliance, and ensures the average voter is, by definition, average. Living under it is like starring in Idiocracy, only without the comedic relief, just the grim recognition that you’re outnumbered, and the crowd is cheering the wrong thing.

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. At the risk of poking the bear, I have formulated a response. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument.

Audio: NotebookLM podcast on this topic.

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The Causa Sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour, it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.

The Enlightenment: A Postmortem

Or: How the Brightest Ideas in Europe Got Us into This Bloody Mess

Disclaimer: This output is entirely ChatGPT 4o from a conversation on the failure and anachronism of Enlightenment promises. I’m trying to finish editing my next novel, so I can’t justify taking much more time to share what are ultimately my thoughts as expounded upon by generative AI. I may comment personally in future. Until then, this is what I have to share.

AI haters, leave now, or abandon all hope, ye who remain.


The Enlightenment promised us emancipation from superstition, authority, and ignorance. What we got instead was bureaucracy, colonialism, and TED Talks. We replaced divine right with data dashboards and called it progress. And like any good inheritance, the will was contested, and most of us ended up with bugger-all.

Below, I take each Enlightenment virtue, pair it with its contemporary vice, and offer a detractor who saw through the Enlightenment’s powder-wigged charade. Because if we’re going down with this ship, we might as well point out the dry rot in the hull.


1. Rationalism

The Ideal: Reason shall lead us out of darkness.
The Reality: Reason led us straight into the gas chambers—with bureaucratic precision.

Detractor: Max Horkheimer & Theodor Adorno

“Enlightenment is totalitarian.”
Dialectic of Enlightenment (1944)

Horkheimer and Adorno saw what reason looks like when it slips off its leash. Instrumental rationality, they warned, doesn’t ask why—it only asks how efficiently. The result? A world where extermination is scheduled, costs are optimised, and ethics are politely filed under “subjective.”


2. Empiricism

The Ideal: Observation and experience will uncover truth.
The Reality: If it can’t be measured, it can’t be real. (Love? Not statistically significant.)

Detractor: Michel Foucault

“Truth isn’t outside power… truth is a thing of this world.”
Power/Knowledge (1977)

Foucault dismantled the whole edifice. Knowledge isn’t neutral; it’s an instrument of power. Empiricism becomes just another way of disciplining the body—measuring skulls, classifying deviants, and diagnosing women with “hysteria” for having opinions.


3. Individualism

The Ideal: The sovereign subject, free and self-determining.
The Reality: The atomised consumer, trapped in a feedback loop of self-optimisation.

Detractor: Jean Baudrillard

“The individual is no longer an autonomous subject but a terminal of multiple networks.”
Simulacra and Simulation (1981)

You wanted autonomy? You got algorithms. Baudrillard reminds us that the modern “individual” is a brand in search of market validation. You are free to be whoever you want, provided it fits within platform guidelines and doesn’t disrupt ad revenue.


4. Secularism

The Ideal: Liberation from superstition.
The Reality: We swapped saints for STEMlords and called it even.

Detractor: Charles Taylor

“We are now living in a spiritual wasteland.”
A Secular Age (2007)

Taylor—perhaps the most polite Canadian apocalypse-whisperer—reminds us that secularism didn’t replace religion with reason; it replaced mystery with malaise. We’re no longer awed, just “motivated.” Everything is explainable, and yet somehow nothing means anything.


5. Progress

The Ideal: History is a forward march toward utopia.
The Reality: History is a meat grinder in a lab coat.

Detractor: Walter Benjamin

“The storm irresistibly propels him into the future to which his back is turned.”
Theses on the Philosophy of History (1940)

Benjamin’s “angel of history” watches helplessly as the wreckage piles up—colonialism, genocide, climate collapse—all in the name of progress. Every step forward has a cost, but we keep marching, noses in the spreadsheet, ignoring the bodies behind us.


6. Universalism

The Ideal: One humanity, under Reason.
The Reality: Enlightenment values, brought to you by cannon fire and Christian missionaries.

Detractor: Gayatri Chakravorty Spivak

“White men are saving brown women from brown men.”
Can the Subaltern Speak? (1988)

Universalism was always a bit… French, wasn’t it? Spivak unmasks it as imperialism in drag—exporting “rights” and “freedom” to people who never asked for them, while ignoring the structural violence built into the Enlightenment’s own societies.


7. Tolerance

The Ideal: Let a thousand opinions bloom.
The Reality: Tolerance, but only for those who don’t threaten the status quo.

Detractor: Karl Popper

“Unlimited tolerance must lead to the disappearance of tolerance.”
The Open Society and Its Enemies (1945)

Popper, bless him, thought tolerance needed a firewall. But in practice, “tolerance” has become a smug liberal virtue signalling its own superiority while deplatforming anyone who makes the dinner party uncomfortable. We tolerate all views—except the unseemly ones.


8. Scientific Method

The Ideal: Observe, hypothesise, repeat. Truth shall emerge.
The Reality: Publish or perish. Fund or flounder.

Detractor: Paul Feyerabend

“Science is not one thing, it is many things.”
Against Method (1975)

Feyerabend called the whole thing a farce. There is no single “method,” just a bureaucratic orthodoxy masquerading as objectivity. Today, science bends to industry, cherry-picks for grants, and buries null results in the backyard. Peer review? More like peer pressure.


9. Anti-Authoritarianism

The Ideal: Smash the throne! Burn the mitre!
The Reality: Bow to the data analytics team.

Detractor: Herbert Marcuse

“Free election of masters does not abolish the masters or the slaves.”
One-Dimensional Man (1964)

Marcuse skewered the liberal illusion of choice. We may vote, but we do so within a system that already wrote the script. Authority didn’t vanish; it just became procedural, faceless, algorithmic. Bureaucracy is the new monarchy—only with more forms.


10. Education and Encyclopaedism

The Ideal: All knowledge, accessible to all minds.
The Reality: Behind a paywall. Written in impenetrable prose. Moderated by white men with tenure.

Detractor: Ivan Illich

“School is the advertising agency which makes you believe that you need the society as it is.”
Deschooling Society (1971)

Illich pulls the curtain: education isn’t emancipatory; it’s indoctrinatory. The modern university produces not thinkers but credentialed employees. Encyclopaedias are replaced by Wikipedia, curated by anonymous pedants and revision wars. Truth is editable.


Postscript: Picking through the Rubble

So—has the Enlightenment failed?

Not exactly. It succeeded too literally. It was taken at its word. Its principles, once radical, were rendered banal. It’s not that reason, progress, or rights are inherently doomed—it’s that they were never as pure as advertised. They were always products of their time: male, white, bourgeois, and utterly convinced of their own benevolence.

If there’s a path forward, it’s not to restore Enlightenment values, but to interrogate them—mercilessly, with irony and eyes open.

After all, the problem was never darkness. It was the people with torches who thought they’d found the only path.

From Thesaurus to Thoughtcrime: The Slippery Slope of Authorial Purity

I had planned to write about Beauvoir’s The Second Sex, but this has been on my mind lately.

There’s a certain breed of aspiring author, let’s call them the Sacred Scribes, who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.

Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”

The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by chatbot, of course.

But let’s pause and look at the logic here. Or rather, the lack thereof.

By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?

And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.

Let’s call this what it is: purity fetishism in prose form.

But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, outsource our thinking.

Sound familiar?

Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age:
“If you let ChatGPT restructure your second paragraph, you’re no longer the author.”
Nonsense. You were never the sole author. Not even close.

Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.

Here’s the rub: You’ve always been collaborating.

With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.

The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?

Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.

Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.

The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.

Faithful to the Salt: Idioms, Interference, and the Philosophy of Flavour

Don’t get salty with me when I tell you I asked AI to write this for me. I was thinking that “take it with a grain of salt” or “take it with a pinch of salt” in English did not share the same meaning as “mettre son grain de sel” en français, so I asked ChatGPT for other uses of salt. This is why it doesn’t follow my usual style, if one can call it that.

🧂 Salt: That Most Misunderstood Metaphor

Salt has an image problem.

Despite being one of the most ancient and revered substances in human civilisation—once used as currency, treaty-sealer, and god-bait—it somehow gets dragged through the metaphorical gutter in modern idiom. In English, to take something “with a grain of salt” is to doubt it. To “add your grain of salt,” per the French idiom mettre son grain de sel, is to interrupt uninvited. Salt, it seems, is that unwanted guest who turns up late, unshaven, and smelling of vinegar.

And yet, salt is also life. Necessary. Essential. Literal. So what gives?

Let’s do what the internet never does and look at context.


🏴‍☠️ English: Cynicism in a Crystal

The English expression “take it with a grain of salt” (or, in older form, a pinch) comes from Latin cum grano salis, which likely implied adding a figurative preservative to dubious claims—treat this as you would old meat. In other words, don’t fully trust it unless you like dysentery.

We also say “he’s a bit salty” to mean grumpy, caustic, or prone to verbal cutlery. “Adding your two cents” is bad enough, but adding your grain of salt implies that what you’re contributing is both unsolicited and probably irritating.

Put simply, English idioms treat salt as if it’s the person in the meeting who thinks they’re clever. There’s a faint whiff of Protestantism here—suspicious of flavour, pleasure, and expressive enthusiasm. Plain oatmeal, plain truths, no seasoning required. Salt is vice.


🇫🇷 French: Salty Saboteurs

The French mettre son grain de sel is more or less the same: to butt in. To lob your unwanted opinion into someone else’s stew. Not unlike “putting in your two penn’orth” in British English—but somehow meaner, as if your salt is not just annoying, but wrong.

Salt, in this idiom, doesn’t enrich—it ruins. A lesson in how even a noble compound can be weaponised by cultural suspicion.


🏺 Hindi: Loyalty Seasoned with Honour

Contrast this with Hindi: namak harām — literally “unfaithful to salt.” This is a powerful accusation. It means you’ve betrayed someone who fed you, someone who sustained you. You’ve taken their salt and spat in their dish.

Conversely, namak halāl is a compliment: someone loyal, trustworthy, faithful to the hand that seasoned them. Salt is the symbol of obligation and honour—not interference.

It is covenantal.


🗾 Japanese: Salt as Mercy

塩を送る (shio o okuru) – “to send salt” – is a Japanese idiom meaning to help your enemy in their time of need. Based on a historical moment when Uesugi Kenshin sent salt to his rival, Takeda Shingen, when the latter’s supply was blockaded.

Salt, here, transcends enmity. It’s noble. A tool of ethics.

In short: send salt, don’t throw it.


🇩🇪 German & 🇪🇸 Spanish: Flavour as Personality

The Germans say “das Salz in der Suppe sein”—to be the salt in the soup. You’re what makes life interesting. Without you, it’s just… wet nutrition.

In Spanish, “ser la sal de la vida” means to be the zest of existence. Without salt, life is dull, bland, morally beige.

In these idioms, salt is essential. A little dangerous, maybe, but necessary. Just like any compelling person.


🇹🇷 Turkish: The Dry Salt of Privilege

The Turkish idiom “tuzu kuru” (lit. “dry salt”) means you’re doing fine. Perhaps too fine. You’re unaffected, aloof, in your tower of comfort while others stew.

Dry salt is privilege: unbothered, unsalted tears. An idiom with side-eye built in.


🕊️ Christianity: Salt of the Earth

The Gospels famously commend the righteous as “the salt of the earth.” Not merely good people, but the ones who preserve and season the whole damn world. And yet, “if salt loses its savour,” says Matthew 5:13, “wherewith shall it be salted?” A warning to remain vital. Relevant. Useful.

Even Jesus had thoughts about flavour fatigue.


⚖️ So… Is Salt Praised or Pitied?

Depends who you ask.

  • For some, salt is civic virtue (Hindi).
  • For others, it’s moral generosity (Japanese).
  • Sometimes it’s life’s spark (German, Spanish).
  • Sometimes it’s trouble in a shaker (English, French).

But the ambivalence is the point. Salt is essential—but easily overdone. Too little, and life is bland. Too much, and it’s ruined.

Like language, then: salt mediates between flavour and clarity. Add carefully. Stir well.


🧂 Final Sprinkle

Before you disparage someone for being “a bit salty,” ask yourself whether they’re really interfering—or simply adding what your grey little broth lacked all along.

And for heaven’s sake, be faithful to the salt you’ve eaten.

The Ethics of Feedback in an Algorithmic Age


We’ve entered an era where machines tell us how we’re doing, whether it’s an AI app rating our résumé, a model reviewing our fiction, or an algorithm nudging our attention with like-shaped carrots.

Full story here, from the Ridley side: Needle’s Edge: Scene Feedback 01

Recently, I ran a brutally raw scene through a few AI platforms. The kind of scene that’s meant to unsettle, not entertain. One of them responded with effusive praise: “Devastating, but masterfully executed.”

Was it honest?

Was it useful?

Or was it merely reflecting my own aesthetic back at me, polished by a thousand reinforcement-learning smiles?

This is the ethical dilemma: If feedback is always flattering, what good is it? If criticism is only tolerated when couched in praise, how do we grow? And when machine feedback mimics the politeness of a mid-level manager with performance anxiety, we risk confusing validation with truth.

There’s a difference between signal and applause. Between understanding and affirmation.

The danger isn’t that AI flatters us. The danger is that we start to believe it and forget that art, inquiry, and ethics thrive on friction.

The Heuristic Self: On Persona, Identity, and Character

“Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth.”
— Oscar Wilde

Identity is an illusion—but a necessary one. It’s a shortcut. A heuristic, evolved not for truth but for coherence. We reduce ourselves and others to fixed traits to preserve continuity—psychological, social, narrative.

Audio: NotebookLM podcast on this topic. (Direct)

Audio: NotebookLM podcast on this topic. (Spotify)

In the latest post on RidleyPark.blog, we meet Sarah—a woman who survives by splintering. She has three names, three selves, three economies of interaction. Each persona—Sarah, Stacey, and Pink—fulfils a role. Each protects her in a system that punishes complexity.

Identity Is Compression

Cognitive science suggests that we don’t possess a self—we perform one. Our so-called identity is assembled post-hoc from memory, context, and social cues. It’s recursive. It’s inferred.

We are not indivisible atoms of identity. We are bundled routines, personae adapted to setting and audience.

From Performance to Survival

In Needle’s Edge, Sarah doesn’t use aliases to deceive. She uses them to survive contradictions:

  • Stacey is desirable, stable, and profitable—so long as she appears clean and composed.
  • Pink is a consumer, invisible, stripped of glamour but allowed access to the block.
  • Sarah is the residue, the name used by those who once knew her—or still believe they do.

Each persona comes with scripts, limitations, and permissions. Sarah isn’t being dishonest. She’s practising domain-specific identity. This is no different from how professionals code-switch at work, or how people self-edit on social media.

The Literary Echo

In character development, we often demand “depth,” by which we mean contradiction. We want to see a character laugh and break. Love and lie. But Sarah shows us that contradiction isn’t depth—it’s baseline reality. Any singular identity would be a narrative failure.

Characters like Sarah expose the poverty of reduction. They resist archetype. They remind us that fiction succeeds when it reflects the multiple, the shifting, the incompatible—which is to say, the real.

What Else Might We Say?

  • That authenticity is a myth: “Just be yourself” presumes you know which self to be.
  • That moral judgment often stems from a failure to see multiple selves in others.
  • That trauma survivors often fracture not because they’re broken, but because fracturing is adaptive.
  • That in a capitalist framework, the ability to fragment and role-play becomes a survival advantage.
  • That fiction is one of the few spaces where we can explore multiple selves without collapse.

The Missing Link

For a concrete, narrative reflection of these ideas, this post on RidleyPark.blog explores how one woman carries three selves to survive three worlds—and what it costs her.

On Predictive Text, Algebra, and the Ghost of Markov

Before I was a writer, before I was a management consultant, before I was an economist, and before I was a statistician, I was a student.

Video: Veritasium piece on Markov chains and more.

Back then, when dinosaurs roamed the chalkboards, I fell for a rather esoteric field: stochastic processes, specifically, Markov chains and Monte Carlo simulations. These weren’t just idle fascinations. They were elegant, probabilistic odes to chaos, dressed up in matrix notation. I’ll not bore you with my practical use of linear algebra.

So imagine my surprise (feigned, of course) when, decades later, I find myself confronted by the same concepts under a different guise—this time in the pocket-sized daemon we all carry: predictive text.

If you’ve not watched it yet, this excellent explainer by Veritasium demystifies how Markov chains can simulate plausible language. In essence, if you’ve ever marvelled at your phone guessing the next word in your sentence, you can thank a Russian mathematician and a few assumptions about memoryless transitions.
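The mechanism is almost embarrassingly simple. A first-order Markov chain just counts which word tends to follow which, then samples from those counts; a minimal sketch on a toy corpus (a real keyboard trains on vastly more text, and modern ones use neural models rather than raw chains):

```python
import random
from collections import defaultdict

# Toy corpus; the Bard will forgive us.
corpus = "to be or not to be that is the question".split()

# First-order Markov chain: record every observed successor of each word.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def predict(word):
    """Guess the next word by sampling from observed successors."""
    options = transitions.get(word)
    return random.choice(options) if options else None

# "to" was followed by "be" both times it appeared, so this is certain:
print(predict("to"))  # prints "be"
```

After “be”, though, the chain is genuinely torn between “or” and “that”: memoryless transitions know the previous word and nothing else, which is precisely why the suggestions go off the rails so quickly.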

But here’s the rub. The predictive text often gets it hilariously wrong. Start typing “to be or not to—” and it offers you “schedule a meeting.” Close, but existentially off. This isn’t just clunky programming; it’s probabilistic dementia.

This leads me to a pet peeve: people who smugly proclaim they’ve “never used algebra” since high school. I hear this a lot. It’s the battle cry of the proudly innumerate. What they mean, of course, is they’ve never recognised algebra in the wild. They think if they’re not solving for x with a number 2 pencil, it doesn’t count. Meanwhile, their phone is doing a polynomial dance just to autocorrect their butchery of the English language.

It’s a classic case of not recognising the water in which we’re swimming. Algebra is everywhere. Markov chains are everywhere. And Monte Carlo simulations are probably calculating your credit risk as we speak. Just because the interface is clean and the maths is hidden behind a swipeable veneer doesn’t mean the complexity has vanished. It’s merely gone incognito.
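For the unconvinced, here is the entire Monte Carlo idea in a dozen lines: not actual credit-risk code, obviously, just the textbook illustration of estimating a quantity (here π) by throwing random darts and counting the hits. Swap the quarter-circle test for a loan-default model and you have, in spirit, what the bank is running:

```python
import random

def monte_carlo_pi(samples=100_000, seed=42):
    """Estimate pi by sampling random points in the unit square
    and counting how many land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        1
        for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    # Quarter-circle area / square area = pi/4.
    return 4 * inside / samples

print(monte_carlo_pi())  # roughly 3.14
```

The estimate wobbles around the true value and tightens as the sample count grows, which is the whole trick: trade exactness for repetition.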

As someone who has used maths across various fields – software development, data analysis, policy modelling – I can tell you that I use less of it than a physicist, but probably more than your average lifestyle coach. I say this not to flex but to point out that even minimal exposure to mathematical literacy grants one the ability to notice when the machines are quietly doing cartwheels behind the curtain.

So the next time your phone offers you a sentence completion that reads like it’s been dropped on its head, spare a thought for Markov. He’s doing his best, bless him. It’s just that probability doesn’t always align with meaning.

Or as the algorithms might say: “To be or not to – subscribe for updates.”

Derrida’s Deconstruction Summarised

David Guignion describes Derrida’s Deconstruction in under three minutes.

Video: YouTube short on Derrida’s notion of deconstruction.

The confusion he mentions is why I chose a different term – dis-integration – to describe “deconstructing” communication to discover underlying metanarratives.

I am busy editing my next novel, so that’s all the time I want to allocate to this matter, but David is a trusted resource of mine. Meantime, check out my deconstructed cover image.

Jesus Wept, Then He Kicked Bezos in the Bollocks

There’s a curious thing about belief: it seems to inoculate people against behaving as though they believe a single bloody word of it.

Audio: NotebookLM podcast on this topic.

Case in point: Jesus. Supposed son of God, sandal-wearing socialist, friend of lepers, hookers, and the unhoused. A man who — by all scriptural accounts — didn’t just tolerate the downtrodden, but made them his preferred company. He fed the hungry, flipped off the wealthy (quite literally, if we’re being honest about the temple tantrum), and had the gall to suggest that a rich man getting into heaven was about as likely as Jeff Bezos squeezing himself through the eye of a needle. (Good luck with that, Jeffrey — maybe try Ozempic?)

And yet, here we are, two millennia later, and who is doing the persecuting? Who’s clutching their pearls over trans people, sex workers, immigrants, and the poor daring to exist in public? The self-proclaimed followers of this same Jesus.

You see it everywhere. In the subway, on billboards, on bumper stickers: “What would Jesus do?” Mate, we already know what he did do — and it wasn’t vote Tory, bankroll megachurches, or ignore houseless veterans while building another golden tabernacle to white suburban comfort.

No, the real issue isn’t Jesus. It’s his fan club.

They quote scripture like it’s seasoning, sprinkle it on whichever regressive policy or hateful platform suits the day, and ignore the core premise entirely: radical love. Redistribution. Justice. The inversion of power.

Because let’s face it: if Christians actually behaved like Christ, capitalism would implode by Tuesday. The entire premise of American exceptionalism (and British austerity, while we’re at it) would crumble under the weight of its own hypocrisy. And the boot would finally be lifted from the necks of those it’s been pressing down for centuries.

But they won’t. Because belief isn’t about behaviour. It’s about performance. It’s about signalling moral superiority while denying material compassion. It’s about tithing for a Tesla and preaching abstinence from a megachurch pulpit built with sweatshop money.

And here’s the kicker — I don’t believe in gods. I’m not here to convert anyone to the cult of sandal-clad socialism. But if you do believe in Jesus, shouldn’t you at least try acting like him?

The sad truth? We’ve built entire societies on the backs of myths we refuse to embody. We have the tools — the stories, the morals, the examples — but we’re too bloody enamoured with hierarchy to follow through. If there are no gods, then it’s us. We are the ones who must act. No sky-daddy is coming to fix this for you.

You wear the cross. You quote the book. You claim the faith.

So go ahead. Prove it.

Feed someone. Befriend a sex worker. House the homeless. Redistribute the damn wealth.

Or stop pretending you’re anything but the Pharisees he warned us about.