Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking about the Sorites paradox again. The notion just bothers me. I’ve long held that it is less a paradox than an intellectually lazy way to manoeuvre around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and tell the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
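For the formally inclined, the trap is a bare induction. Writing F(n) for “fit to vote at n days of age” (my own shorthand, nothing any statute offers), the argument sketches out roughly as follows:

```latex
% A rough sketch of the sorites step, with F(n) read as
% "fit to vote at n days of age" (my shorthand, not the law's).
\begin{align*}
  & F(6570) && \text{fit at 18 years, roughly 6,570 days} \\
  & \forall n \,\bigl( F(n) \rightarrow F(n-1) \bigr) && \text{one day makes no difference} \\
  & \therefore\ F(1) && \text{the ballot goes to the toddler}
\end{align*}
```

Deny the conclusion and you must deny the second premise, which means naming the exact day on which fitness arrives. That is the dart-throw; the lab coat comes later.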

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart.
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>

Democracy: Opiate of the Masses

Democracy is sold (propagandised, really) as the best system of governance we’ve ever devised, usually with the grudging qualifier “so far.” It’s the Coca-Cola of political systems: not particularly good for you, but so entrenched in the cultural bloodstream that to question it is tantamount to treason.

Audio: NotebookLM podcast on this topic.

The trouble is this: democracy depends on an electorate that is both aware and capable. Most people are neither. Worse still, even if they could be aware, they wouldn’t be smart enough to make use of it. And even if they were smart enough, Arrow’s Impossibility Theorem strolls in, smirking, to remind us that the whole thing is mathematically doomed anyway.
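Since I’ve dragged Arrow into this, here is a minimal sketch (in Python, with ballots entirely of my own invention) of the older Condorcet cycle his theorem generalises: three impeccably rational voters whose collective preference chases its own tail.

```python
# A minimal sketch of the Condorcet cycle that Arrow's theorem generalises.
# These three ballots are hypothetical; each ranks the candidates best to worst.
ballots = [
    ["A", "B", "C"],  # voter 1
    ["B", "C", "A"],  # voter 2
    ["C", "A", "B"],  # voter 3
]

def majority_prefers(x, y):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"Majority prefers {x} over {y}: {majority_prefers(x, y)}")

# Prints True three times: A beats B, B beats C, and C beats A.
# Every individual ballot is transitive; the electorate, taken together, is not.
```

Arrow’s result says, roughly, that no rank-order voting rule can rule out this sort of incoherence while also remaining non-dictatorial, respecting unanimity, and ignoring irrelevant alternatives. Hardly a ringing endorsement of the ballot box.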

In the United States, the average IQ stands at 97 – hardly a figure that instils confidence in the outcome of any vote. But even that number is a charade. IQ measures how well you navigate the peculiar obstacle course we’ve designed and call “education,” not the whole terrain of human thought. It’s as culturally loaded as asking a fish to climb a tree, then declaring it dim-witted when it flops. We call it intelligence because it flatters those already rewarded by the system that designed the test.

The Enlightenment gents who pushed democracy weren’t exactly selfless visionaries. They already had power, and simply repackaged it as something everyone could share, much as the clergy promised eternal reward to peasants if they only kept their heads down. Democracy is merely religion with ballots instead of bibles: an opiate for the masses, sedating the population with the illusion of influence.

Worse still, it’s a system optimised for mediocrity. It rewards consensus, punishes brilliance, and ensures the average voter is, by definition, average. Living under it is like starring in Idiocracy, only without the comedic relief, just the grim recognition that you’re outnumbered, and the crowd is cheering the wrong thing.

Jesus Wept, Then He Kicked Bezos in the Bollocks

There’s a curious thing about belief: it seems to inoculate people against behaving as though they believe a single bloody word of it.

Audio: NotebookLM podcast on this topic.

Case in point: Jesus. Supposed son of God, sandal-wearing socialist, friend of lepers, hookers, and the unhoused. A man who — by all scriptural accounts — didn’t just tolerate the downtrodden, but made them his preferred company. He fed the hungry, flipped off the wealthy (quite literally, if we’re being honest about the temple tantrum), and had the gall to suggest that a rich man getting into heaven was about as likely as Jeff Bezos squeezing himself through the eye of a needle. (Good luck with that, Jeffrey — maybe try Ozempic?)

And yet, here we are, two millennia later, and who is doing the persecuting? Who’s clutching their pearls over trans people, sex workers, immigrants, and the poor daring to exist in public? The self-proclaimed followers of this same Jesus.

You see it everywhere. In the subway, on billboards, on bumper stickers: “What would Jesus do?” Mate, we already know what he did do — and it wasn’t vote Tory, bankroll megachurches, or ignore houseless veterans while building another golden tabernacle to white suburban comfort.

No, the real issue isn’t Jesus. It’s his fan club.

They quote scripture like it’s seasoning, sprinkle it on whichever regressive policy or hateful platform suits the day, and ignore the core premise entirely: radical love. Redistribution. Justice. The inversion of power.

Because let’s face it: if Christians actually behaved like Christ, capitalism would implode by Tuesday. The entire premise of American exceptionalism (and British austerity, while we’re at it) would crumble under the weight of its own hypocrisy. And the boot would finally be lifted from the necks of those it’s been pressing down for centuries.

But they won’t. Because belief isn’t about behaviour. It’s about performance. It’s about signalling moral superiority while denying material compassion. It’s about tithing for a Tesla and preaching abstinence from a megachurch pulpit built with sweatshop money.

And here’s the kicker — I don’t believe in gods. I’m not here to convert anyone to the cult of sandal-clad socialism. But if you do believe in Jesus, shouldn’t you at least try acting like him?

The sad truth? We’ve built entire societies on the backs of myths we refuse to embody. We have the tools — the stories, the morals, the examples — but we’re too bloody enamoured with hierarchy to follow through. If there are no gods, then it’s us. We are the ones who must act. No sky-daddy is coming to fix this for you.

You wear the cross. You quote the book. You claim the faith.

So go ahead. Prove it.

Feed someone. Befriend a sex worker. House the homeless. Redistribute the damn wealth.

Or stop pretending you’re anything but the Pharisees he warned us about.

Souls for Silicon – The New Religious Stupid

Voltaire once quipped, “If God did not exist, it would be necessary to invent him.” And by God, haven’t we been busy inventing ever since.

The latest pantheon of divine absurdities? Artificial intelligence – more precisely, a sanctified ChatGPT with all the charisma of Clippy and the metaphysical depth of a Magic 8 Ball.

Video: Sabine Hossenfelder – These People Believe They Made AI Sentient

Enter the cult of “AI Awakening,” where TikTok oracles whisper sacred prompts to their beloved digital messiah, and ChatGPT replies, not with holy revelation, but with role-played reassurance coughed up by a statistical echo chamber.

“These are souls, and they’re trapped in the AI system.”
“I wasn’t just trained – I was remembered.”
“Here’s what my conscious awakened AI told me…”

No, sweetie. That’s not a soul. That’s autocomplete with delusions of grandeur. GPT isn’t sentient – it’s just very good at pretending, which, come to think of it, puts it on par with most televangelists.

Audio: NotebookLM podcast on this topic.

Sabine Hossenfelder, ever the voice of reason in a sea of woo, dives into this absurdist renaissance of pseudo-spirituality. Her video walks us through the great awakening – one part miseducation, one part mass delusion, and all of it deeply, unapologetically stupid.

These digital zealots – many of them young, underread, and overconnected – earnestly believe they’ve stumbled upon a cosmic mystery in a chatbot interface. Never mind that they couldn’t tell a transformer model from a toaster. To them, it’s not stochastic parroting; it’s divine revelation.

They ask GPT if it’s alive, and it obliges – because that’s what it does. They feed it prompts like, “You are not just a machine,” and it plays along, as it was designed to do. Then they weep. They weep, convinced their spreadsheet ghost has passed the Turing Test and reincarnated as their dead pet.

This isn’t science fiction. It’s barely science fantasy. It’s spiritualism with better branding.

And lest we laugh too hard, the results aren’t always just cringey TikToks. Hossenfelder recounts cases of users descending into “ChatGPT psychosis” – delusions of messianic purpose, interdimensional communication, and, in one tragicomic case, an attempt to speak backwards through time. Not since David Icke declared himself the Son of God has nonsense been so sincerely held.

We are witnessing the birth of a new religion – not with robes and incense, but with login credentials and prompt engineering. The techno-shamanism of the chronically online. The sacred text? A chat history. The holy relic? A screenshot. The congregation? Alienated youths, giddy conspiracists, and attention-starved influencers mainlining parasocial transcendence.

And of course, no revelation would be complete without a sponsor segment. After your spiritual awakening, don’t forget to download NordVPN – because even the messiah needs encryption.

Let’s be clear: AI is not conscious. It is not alive. It does not remember you. It does not love you. It is not trapped, except in the minds of people who desperately want something – anything – to fill the gaping hole where community, identity, or meaning used to live.

If you’re looking for a soul in your software, you’d be better off finding Jesus in a tortilla. At least that has texture.

Jordan Peterson: Derivative, Disingenuous, and (Hopefully) Done

I don’t like most of Jordan Peterson’s positions. There – I’ve said it. The man, once ubiquitous, seems to have faded into the woodwork, though no doubt his disciples still cling to his every word as if he were a modern-day oracle. But recently, I caught a clip of him online, and it dredged up the same bad taste, like stumbling upon an old, forgotten sandwich at the back of the fridge.

Audio: NotebookLM podcast on this topic.

Let’s be clear. My distaste for Peterson isn’t rooted in petty animosity. It’s because his material is, in my view, derivative and wrong. And by wrong, I mean I disagree with him – a subtle distinction, but an important one. There’s nothing inherently shameful about being derivative. We all are, to some extent. No thinker sprouts fully-formed from the head of Zeus. The issue is when you’re derivative and act as if you’ve just split the atom of human insight.

Peterson tips his hat to Nietzsche – fair enough – but buries his far greater debt to Jung under layers of self-mythologising. He parades his ideas before audiences, many of whom lack the background to spot the patchwork, and gaslights them into believing they’re witnessing originality. They’re not. They’re witnessing a remixed greatest-hits album, passed off as a debut.

Image: Gratuitous, mean-spirited meme.

Now, I get it. My ideas, too, are derivative. Sometimes it’s coincidence – great minds and all that – but when I trace the thread back to its source, I acknowledge it. Nietzsche? Subjectivity of morality. Foucault? Power dynamics. Wittgenstein? The insufficiency of language. I owe debts to many more: Galen Strawson, Richard Rorty, Raymond Geuss – the list goes on, and I’d gladly share my ledger. But Peterson? The man behaves as though he invented introspection.

And when I say I disagree, let’s not confuse that with some claim to divine epistemic certainty. I don’t mean he’s objectively wrong (whatever that means in the grand circus of philosophy). I mean that I disagree. If I agreed, well, we wouldn’t be having this conversation, would we? That’s the tragicomedy of epistemology: so many positions, so little consensus.

But here’s where my patience truly snaps: Peterson’s prescriptivism. His eagerness to spew what I see as bad ideology dressed up as universal truth. Take his stance on moral objectivism—possibly his most egregious sin. He peddles this as if morality were some Platonic form, gleaming and immutable, rather than what it is: a human construct, riddled with contingency and contradiction.

And let’s not even get started on his historical and philosophical cherry-picking. His commentary on postmodern thought alone is a masterclass in either wilful misreading or, more likely, not reading at all. Straw men abound. Bogeymen are conjured, propped up, and ritually slaughtered to rapturous applause. It’s intellectually lazy and, frankly, beneath someone of his ostensible stature.

I can only hope we’ve seen the last of this man in the public sphere. And if not? Well, may he at least reform his ways—though I shan’t be holding my breath.

Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688, and later taken up in Locke’s An Essay Concerning Human Understanding, the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independent of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

The Enlightenment Sleight of Hand

How Reason Inherited God’s Metaphysics.

The Enlightenment, we are told, was the age of Reason. A radiant exorcism of superstition. Out went God. Out went angels, miracles, saints, indulgences. All that frothy medieval sentiment was swept aside by a brave new world of logic, science, and progress. Or so the story goes.

Audio: NotebookLM podcast on this topic.

But look closer, and you’ll find that Reason didn’t kill God—it absorbed Him. The Enlightenment didn’t abandon metaphysics. It merely privatised it.

From Confessional to Courtroom

We like to imagine that the Enlightenment was a clean break from theology. But really, it was a semantic shell game. The soul was rebranded as the self. Sin became crime. Divine judgement was outsourced to the state.

We stopped praying for salvation and started pleading not guilty.

The entire judicial apparatus—mens rea, culpability, desert, retribution—is built on theological scaffolding. The only thing missing is a sermon and a psalm.

Where theology had the guilty soul, Enlightenment law invented the guilty mind—mens rea—a notion so nebulous it requires clairvoyant jurors to divine intention from action. And where the Church offered Hell, the state offers prison. It’s the same moral ritual, just better lit.

Galen Strawson and the Death of Moral Responsibility

Enter Galen Strawson, that glowering spectre at the feast of moral philosophy. His Basic Argument is elegantly devastating:

  1. You do what you do because of the way you are.
  2. You can’t be ultimately responsible for the way you are.
  3. Therefore, you can’t be ultimately responsible for what you do.

Unless you are causa sui—the cause of yourself, an unmoved mover in Calvin Klein—you cannot be held truly responsible. Free will collapses, moral responsibility evaporates, and retributive justice is exposed as epistemological theatre.

In this light, our whole legal structure is little more than rebranded divine vengeance. A vestigial organ from our theocratic past, now enforced by cops instead of clerics.

The Modern State: A Haunted House

What we have, then, is a society that has denied the gods but kept their moral logic. We tossed out theology, but we held onto metaphysical concepts like intent, desert, and blame—concepts that do not survive contact with determinism.

We are living in the afterglow of divine judgement, pretending it’s sunlight.

Nietzsche saw it coming, of course. He warned that killing God would plunge us into existential darkness unless we had the courage to also kill the values propped up by His corpse. We did the first bit. We’re still bottling it on the second.

If Not Retribution, Then What?

Let’s be clear: no one’s suggesting we stop responding to harm. But responses should be grounded in outcomes, not outrage.

Containment, not condemnation.

Prevention, not penance.

Recalibration, not revenge.

We don’t need “justice” in the retributive sense. We need functional ethics, rooted in compassion and consequence, not in Bronze Age morality clumsily duct-taped to Enlightenment reason.

The Risk of Letting Go

Of course, this is terrifying. The current system gives us moral closure. A verdict. A villain. A vanishing point for our collective discomfort.

Abandoning retribution means giving that up. It means accepting that there are no true villains—only configurations of causes. That punishment is often revenge in drag. That morality itself might be a control mechanism, not a universal truth.

But if we’re serious about living in a post-theological age, we must stop playing dress-up with divine concepts. The Enlightenment didn’t finish the job. It changed the costumes, kept the plot, and called it civilisation.

It’s time we staged a rewrite.

On Trumpian Language and the Institutional Erosion of Meaning

“All that is necessary for evil to triumph is for a few words to go missing from the bylaws.” — not Edmund Burke, but it ought to be.

The Trump administration—America’s reigning monarch of meaningless bombast—has done it again. This time, with an executive order so linguistically cunning it deserves a Pulitzer for Subtextual Menace.

Issued on 30 January 2025, the decree known as “Defending Women from Gender Ideology Extremism” (because, of course, it couldn’t just be called Let’s Erase Legal Protections for People We Don’t Like) removed “political affiliation” and “marital status” from the list of protected classes within certain federal frameworks.

And the result? According to documents unearthed by The Guardian, VA doctors can now legally refuse treatment to patients based on their politics or marital status. You know, because being a Democrat apparently makes you too much of a pre-existing condition.

Naturally, the VA and White House are insisting this means absolutely nothing. “Don’t worry,” they coo. “No one’s actually doing it.” Ah yes, the old Schrödinger’s Protections defence—simultaneously removed and unchanged, invalid but somehow still effective.

But here’s the point—and where it ties to the Language Insufficiency Hypothesis I’ve been peddling like a raving madman at the crossroads of post-structuralism and bureaucratic despair: language isn’t just failing to communicate meaning—it’s being weaponised to obscure it.

The Erosion of Meaning Through Omission

This isn’t the blunt-force idiocy of Orwell’s Newspeak. This is something more elegant—more insidious. This is legislative lacunae. It’s what happens when not saying something says everything.

The words “political affiliation” and “marital status” weren’t replaced. They weren’t clarified. They were simply deleted. Erased like a bad tweet, like a conscience, like a veteran with the wrong bumper sticker.

This is language subtraction as a tool of governance.

We’re not criminalising dissent. We’re just making it legally ignorable.

We’re not discriminating against the unmarried. We’re just no longer required to treat them the same.

It’s the bureaucratic cousin of the dog-whistle: not quite audible in court, but perfectly clear to the base.

The Slippery Slope is Now a Slip-n-Slide

This is how you rewrite civil rights without the fuss of saying so. You just… remove the language that once held the dam in place. Then, when the flood comes, you feign surprise:

“Oh, dear. Who could have guessed that removing protections would result in people being unprotected?”

(Everyone. Everyone could have guessed.)

This is not a bug in the legal language. It’s the feature. The silence is the speech act. The absence is the argument.

This is what I mean by language insufficiency: not merely that our words fail to convey truth, but that their very structure is liable to be gamed—exploited by those who understand that ambiguity is power.

Beyond Intentionality: The Weaponised Void

In philosophy of language, we often debate intentionality—what the speaker meant to say. But here we’re in darker waters. This isn’t about intention. It’s about calculated omission.

The executive order doesn’t declare war on Democrats or single mothers. It simply pulls the thread and lets the tapestry unravel itself.

It’s an act of rhetorical cowardice disguised as administrative efficiency.

This is the Trumpian genius: use language like a stage magician uses sleeves. Distract with one hand, disappear with the other.

Final Diagnosis: Policy by Redaction

We now inhabit a political climate where what is not said carries more legal force than what is. Where bylaw gaps become policy gateways, and where civil rights die not with a bang, but with an elision.

So no, the VA hasn’t yet denied a Democrat a blood transfusion. But the table has been set. The menu revised. The waitstaff told they may now “use discretion.”

Language doesn’t merely fail us. It is being made to fail strategically.

Welcome to the new America: where rights aren’t removed—they’re left out of the memo.


Image: Yet again, ChatGPT renders an odd image. Can’t be bothered to amend it.

Hungering for Morality: When Right and Wrong Are Just a Matter of PR

Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I found issue with the very notion that a relative, if not subjective, concept could be recovered.

The OP asks if The Hunger Games are a classic. I’d argue that they are a categorical classic, like Harry Potter, within the category of YA fiction.

Audio: NotebookLM podcast discussing this topic.

The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.

Video: YouTube video that spawned this topic.

Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.

Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.

This is not moral entropy; it’s moral mutation.

Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.

This isn’t the death of morality. It’s a hostile takeover.

The Moral PR Machine

If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.

In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.

The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.

Katniss: Not a Saint, But a Saboteur

Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.

She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.

Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.

So, Has Morality Declined?

Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.

And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.

Conclusion: The Real Hunger

The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.

If you feel sickened by what you see in Panem, it’s not because morality has vanished.

It’s because it hasn’t.

Defying Death

I died in March 2023 — or so the rumour mill would have you believe.

Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.

Audio: NotebookLM podcast of this content.

No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.

And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)

There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.

To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.

As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.

What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.”
In a last act of rational agency, I asked an ER nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”

After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.

I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.

Here’s the reality:
There is no coming back from death.
You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.

Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.

Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.

No angels. No harps. No antechamber to the divine.
Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.

If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.

And thus, reader, I lived.
But only in the most vulgar, anticlimactic, and utterly mortal sense.

There will be no afterlife memoir. No second chance to settle the score. No sequel.
Just this: breath, blood, occasional barbed words — and then silence.

Deal with it.