Semantic Drift: When Language Outruns the Science

Science has a language problem. Not a lack of it – if anything, a surfeit. But words, unlike test tubes, do not stay sterile. They evolve, mutate, and metastasise. They get borrowed, bent, misused, and misremembered. And when the public discourse gets hold of them, particularly on platforms like TikTok, it’s the language that gets top billing. The science? Second lead, if it’s lucky.

Semantic drift is at the centre of this: the gradual shift in meaning of a word or phrase over time. It’s how “literally” came to mean “figuratively,” how “organic” went from “carbon-based” to “morally superior,” and how “theory” in science means robust explanatory framework but in the public square means vague guess with no homework.

In short, semantic drift lets rhetoric masquerade as reason. Once a word acquires enough connotation, you can deploy it like a spell. No need to define your terms when the vibe will do.

Audio: NotebookLM podcast on this topic.

When “Vitamin” No Longer Means Vitamin

Take the word vitamin. It sounds objective. Authoritative. Something codified in the genetic commandments of all living things.

But it isn’t.

A vitamin is simply a substance that an organism needs in small amounts but cannot synthesise internally, and so must obtain through its diet. That’s it. It’s a functional definition, not a chemical one.

So:

  • Vitamin C is a vitamin for humans, but not for dogs, cats, or goats. They make their own. We lost the gene. Tough luck.
  • Vitamin D, meanwhile, isn’t a vitamin at all. It’s a hormone, synthesised when sunlight hits your skin. Its vitamin status is a historical relic – named before we knew better, and now marketed too profitably to correct.
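
Because the definition is functional, “vitamin” is really a relation between a substance and a species, not a property of the substance itself. Here is a minimal Python sketch of that idea; the species data is an illustrative placeholder, not a nutritional reference:

```python
# "Vitamin" as a relation, not a property. Illustrative data only.

NEEDED = {"ascorbic acid", "cholecalciferol"}  # required by every species here

CAN_SYNTHESISE = {
    "human": {"cholecalciferol"},  # made in the skin, given sunlight
    "dog":   {"ascorbic acid"},    # dogs make their own vitamin C
}

def is_vitamin(substance: str, species: str) -> bool:
    """A substance is a vitamin FOR a species iff the species needs it
    but cannot synthesise it internally."""
    return substance in NEEDED and substance not in CAN_SYNTHESISE.get(species, set())

print(is_vitamin("ascorbic acid", "human"))    # True:  a vitamin for us
print(is_vitamin("ascorbic acid", "dog"))      # False: not a vitamin for dogs
print(is_vitamin("cholecalciferol", "human"))  # False: synthesised, hence a hormone
```

Note that vitamin D falls out of the predicate for humans precisely because we synthesise it. The label survives on the bottle, not in the definition.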

But in the land of TikTok and supplement shelves, these nuances evaporate. “Vitamin” has drifted from scientific designation to halo term – a linguistic fig leaf draped over everything from snake oil to ultraviolet-induced steroidogenesis.

The Rhetorical Sleight of Hand

This linguistic slippage is precisely what allows the rhetorical shenanigans to thrive.

In one video, a bloke claims a burger left out for 151 days neither moulds nor decays, and therefore, “nature won’t touch it.” From there, he leaps (with Olympic disregard for coherence) into talk of sugar spikes, mood swings, and “metabolic chaos.” You can almost hear the conspiratorial music rising.

The science here is, let’s be generous, circumstantial. But the language? Oh, the language is airtight.

Words like “processed,” “chemical,” and “natural” are deployed like moral verdicts, not descriptive categories. The implication isn’t argued – it’s assumed, because the semantics have been doing quiet groundwork for years. “Natural” = good. “Chemical” = bad. “Vitamin” = necessary. “Addiction” = no agency.

By the time the viewer blinks, they’re nodding along to a story told by words in costume, not facts in context.

The Linguistic Metabolism of Misunderstanding

This is why semantic drift isn’t just an academic curiosity – it’s a vector. A vector by which misinformation spreads, not through outright falsehood, but through weaponised ambiguity.

A term like “sugar crash” sounds scientific. It even maps onto a real physiological process: postprandial hypoglycaemia. But when yoked to vague claims about mood, willpower, and “chemical hijacking,” it becomes a meme with lab coat cosplay. And the science, if mentioned at all, is there merely to decorate the argument, not drive it.

That’s the crux of my forthcoming book, The Language Insufficiency Hypothesis: that our inherited languages, designed for trade, prayer, and gossip, are woefully ill-equipped for modern scientific clarity. They lag behind our knowledge, and worse, they often distort it.

Words arrive first. Definitions come limping after.

In Closing: You Are What You Consume (Linguistically)

The real problem isn’t that TikTokers get the science wrong. The problem is that they get the words right – right enough to slip past your critical filters. Rhetoric wears the lab coat. Logic gets left in the locker room.

If vitamin C is a vitamin only for some species, and vitamin D isn’t a vitamin at all, then what else are we mislabelling in the great nutritional theatre? What other linguistic zombies are still wandering the scientific lexicon?

Language may be the best tool we have, but don’t mistake it for a mirror. It’s a carnival funhouse – distorting, framing, and reflecting what we expect to see. And until we fix that, science will keep playing second fiddle to the words pretending to explain it.

Sustenance: A Book About Aliens, Language, and Everything You’re Getting Wrong

Image: Violet aliens on a farm.

So, I wrote a book and published it under Ridley Park, the pseudonym I use for fiction.

It has aliens. But don’t get excited—they’re not here to save us, probe us, or blow up the White House. They’re not even here for us.

Which is, frankly, the point.

Audio: NotebookLM podcast on this topic.

The book’s called Sustenance, and while it’s technically speculative fiction, it’s more about us than them. Or rather, it’s about how we can’t stop making everything about us—even when it shouldn’t be. Especially when it shouldn’t be.

Let’s talk themes. And yes, we’re using that word like academics do: as a smokescreen for saying uncomfortable things abstractly.

Language: The Original Scam

Language is the ultimate colonial tool. We call it communication, but it’s mostly projection. You speak. You hope. You assume. You superimpose meaning on other people like a cling film of your own ego.

Sustenance leans into this—not by showing a breakdown of communication, but by showing what happens when communication was never mutual in the first place. When the very idea of “meaning” has no purchase. It’s not about mishearing—it’s about misbeing.

Culture: A Meme You Were Born Into

Culture is the software you didn’t choose to install, and probably can’t uninstall. Most people treat it like a universal law—until they meet someone running a different OS. Cue confusion, arrogance, or violence.

The book explores what happens when cultural norms aren’t shared, and worse, aren’t even legible. Imagine trying to enforce property rights on beings who don’t understand “ownership.” It’s like trying to baptise a toaster.

Sex/Gender: You Keep Using Those Words…

One of the quiet joys of writing non-human characters is discarding human assumptions about sex and gender—and watching readers squirm.

What if sex wasn’t about power, pleasure, or identity? What if it was just a biological procedure, like cell division or pruning roses? Would you still be interested? Would you still moralise about it?

We love to believe our sex/gender constructs are inevitable. They’re not. They’re habits—often bad ones.

Consent: Your Framework Is Showing

Consent, as we use it, assumes mutual understanding, shared stakes, and equivalent agency. Remove any one of those and what’s left?

Sustenance doesn’t try to solve this—it just shows what happens when those assumptions fall apart. Spoiler: it’s not pretty, but it is honest.

Projection: The Mirror That Lies

Humans are deeply committed to anthropocentrism. If it walks like us, or flinches like us, it must be us. This is why we get so disoriented when faced with the truly alien: it won’t dance to our tune, and we’re left staring at ourselves in the funhouse mirror.

This isn’t a book about aliens.

It’s a book about the ways we refuse to see what’s not us.

Memory: The Autobiography of Your Justifications

Memory is not a record. It’s a defence attorney with a narrative licence. We rewrite the past to make ourselves look consistent, or innocent, or right.

In Sustenance, memory acts less as a tether to truth and more as a sculpting tool—a way to carve guilt into something manageable. Something you can live with. Until you can’t.

In Summary: It’s Not About Them. It’s About You.

If that sounds bleak, good. It’s meant to.

But it’s also a warning: don’t get too comfortable in your own categories. They’re only universal until you meet someone who doesn’t share them.

Like I said, it’s not really about the aliens.

It’s about us.


If you enjoy fiction that’s more unsettling than escapist, more question than answer, you might be interested in Sustenance. It’s live on Kindle now for the cost of a regrettable coffee:

📘 Sustenance on Amazon US
Also available in the UK, DE, FR, ES, IT, NL, JP, BR, CA, MX, AU, and IN—because alienation is a universal language.

Unwilling Steelman, Part IV

A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.

Audio: NotebookLM podcast on the topic.

“It’s not just that you’re a hallucination of yourself.
It’s that everyone else is hallucinating you, too — through their own fog.”

The Feedback Loop of False Selves

You are being judged — by others who are also compromised

If you are a chemically modulated, state-dependent, narrativising automaton, then so is everyone who evaluates you. The moral courtroom — society, the law, the dinner table — is just a gathering of biased systems confidently misreading each other.

We are taught to believe in things like:

  • “Good character”
  • “Knowing someone”
  • “Getting a read on people”

But these are myths of stability, rituals of judgment, and cognitive vanity projects. There is no fixed you — and there is no fixed them to do the judging.

Judging the Snapshot, Not the Self

Let’s say you act irritable. Or generous. Or quiet.
An observer sees this and says:

“That’s who you are.”

But which version of you are they observing?

  • The you on two hours of sleep?
  • The you on SSRIs?
  • The you grieving, healing, adjusting, masking?

They don’t know. They don’t ask.
They just flatten the moment into character.

One gesture becomes identity.
One expression becomes essence.

This isn’t judgment.
It’s snapshot essentialism — moral conclusion by convenience.

The Observer Is No Less Biased

Here’s the darker truth: they’re compromised, too.

  • If they’re stressed, you’re rude.
  • If they’re lonely, you’re charming.
  • If they’re hungry, you’re annoying.

What they’re perceiving is not you — it’s their current chemistry’s reaction to your presentation, filtered through their history, memory, mood, and assumptions.

It’s not a moral lens.
It’s a funhouse mirror, polished with certainty.

Mutual Delusion in a Moral Marketplace

The tragedy is recursive:

  • You act based on internal constraints.
  • They judge based on theirs.
  • Then you interpret their reaction… and adjust accordingly.
  • And they, in turn, react to your adjustment…

And on it goes — chemical systems calibrating against each other, mistaking interaction for insight, familiarity for truth, coherence for character.

Identity isn’t formed.
It’s inferred, then reinforced.
By people who have no access to your internal states and no awareness of their own.
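
The recursion is mechanical enough to simulate. Here is a minimal sketch, with invented moods and made-up adjustment weights, of two agents who can only read each other through their own internal weather:

```python
import random

random.seed(42)  # the weather inside the skull, made reproducible

def judge(behaviour: float, observer_mood: float) -> float:
    """The observer never sees you: they see your behaviour
    refracted through their own current state."""
    return behaviour + observer_mood

def step(beh_a, beh_b, mood_a, mood_b):
    verdict_on_b = judge(beh_b, mood_a)  # A's reading of B
    verdict_on_a = judge(beh_a, mood_b)  # B's reading of A
    # Each adjusts toward the verdict just passed on them...
    beh_a += 0.5 * (verdict_on_a - beh_a)
    beh_b += 0.5 * (verdict_on_b - beh_b)
    # ...while moods drift on their own schedule: sleep, hunger, luck.
    return beh_a, beh_b, random.uniform(-1, 1), random.uniform(-1, 1)

beh_a = beh_b = 0.0
mood_a, mood_b = 0.3, -0.4
for t in range(6):
    beh_a, beh_b, mood_a, mood_b = step(beh_a, beh_b, mood_a, mood_b)
    print(f"t={t}: A presents {beh_a:+.2f}, B presents {beh_b:+.2f}")
```

Run it and neither behaviour ever settles: each “self” is just the running sum of the other’s moods. Inference, then reinforcement, with no ground floor.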

The Myth of the Moral Evaluator

This has massive implications:

  • Justice assumes objectivity.
  • Culture assumes shared moral standards.
  • Relationships assume “knowing” someone.

But all of these are built on the fantasy that moral evaluation is accurate, stable, and earned.

It is not.

It is probabilistic, state-sensitive, and mutually confabulatory.

You are being judged by the weather inside someone else’s skull.

TL;DR: Everyone’s Lying to Themselves About You

  • You behave according to contingent states.
  • Others judge you based on their own contingent states.
  • Both of you invent reasons to justify your interpretations.
  • Neither of you has access to the full picture.
  • The result is a hall of mirrors with no ground floor.

So no — you’re not “being seen.”
You’re being misread, reinterpreted, and categorised
— by people who are also misreading themselves.

📅 Coming Tomorrow

You Cannot Originate Yourself

The causa sui argument, and the final collapse of moral responsibility.

The Emperor’s New Models: Box, Lawson, and the Death of Truth

We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.

Box: The Gentle Assassin of Certainty

George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” But that is not quite what appears in his 1976 paper Science and Statistics. What he wrote there was starker:

“Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration.”

The “some are useful” flourish came later; Box himself supplied it in a 1979 paper, once the bitter pill needed sweetening for public consumption. Nevertheless, he deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. They are provisional guesses, finger-paintings smeared across the rough surface of the unknown.
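
Box’s point is easy to stage. Below is a minimal sketch, assuming nothing beyond numpy, of a model that is wrong by construction yet operational: a straight line fitted to a world that is secretly curved. The coefficients and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "world": a gently curved process we pretend not to know.
x = np.linspace(0, 10, 200)
truth = 0.5 * x + 0.03 * x**2
y = truth + rng.normal(scale=0.3, size=x.size)

# The model: a straight line. Wrong by construction, since the world is curved.
slope, intercept = np.polyfit(x, y, deg=1)
pred = slope * x + intercept

# Useful anyway: over this working range the line predicts tolerably well.
rmse = np.sqrt(np.mean((pred - truth) ** 2))
print(f"linear fit: y = {slope:.2f}x + {intercept:.2f}; RMSE vs truth = {rmse:.2f}")
```

False as description, serviceable as instrument; push the fit past its working range and the usefulness quietly expires, which is precisely the warning.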

Audio: NotebookLM podcast on this topic.

Lawson: The Arsonist Who Burned the Map

Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. For Lawson, the problem is not that our models fail to fit reality; it is that there is no final reality waiting to be fitted.

Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.

Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.

The Case of the Celestial Spheres

Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.

Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.

Summary: Lawson’s View on Models and Truth

  • Truth: a ghost; there is no final, mind-independent truth for models to capture.
  • Models: closures; temporary, pragmatic structures we erect to intervene in the world.
  • Reality: openness; endlessly unfolding, resistant to total capture.
  • Utility: the only remaining test; models are judged by what they let us do, not by correspondence.

Conclusion

Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.

We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.

Unwilling Steelman, Part III

A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.

Manipulability as Disproof

If your will can be altered without your consent, was it ever truly yours?

“If a button on the outside of your skull can change your morality,
then where, exactly, is your autonomy hiding?”

Audio: NotebookLM podcast on this topic.

We’ve heard it all before:

“Sure, I’m influenced — but at the end of the day, I choose.”
But what happens when that influence isn’t influence, but modulation?
What if your very sense of right and wrong — your willingness to forgive, to punish, to empathise — can be dialled like a radio station?

And what if you never know it’s happening?

Your Morality Is Neurochemical

Studies using Transcranial Magnetic Stimulation (TMS) and Transcranial Direct Current Stimulation (tDCS) have shown that moral judgments can be shifted by stimulating the dorsolateral prefrontal cortex (DLPFC).

  • Turn it up: the subject becomes more utilitarian.
  • Turn it down: the subject becomes more emotionally reactive.
  • They make different decisions in the exact same scenarios, depending on which neural pathway is dominant.

The kicker?

They always explain their choices as though they had made them deliberately.

There is no awareness of the manipulation.
Only a retrospective illusion of authorship.

A|B Testing the Soul

Let’s run a thought experiment.

Scenario A: You’re well-fed, calm, unprovoked.
Scenario B: You’re hungry, cortisol-spiked, primed with images of threat.

Same moral dilemma. Different choice.

Query both versions of you, and both will offer coherent post hoc justifications.
Neither suspects that their “will” was merely a biochemical condition in drag.

If both versions feel authentic, then neither can claim authority.

Your will is not sovereign.
It’s state-dependent.
And if it changes without your knowledge, it was never really yours to begin with.
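
The A|B experiment can be made embarrassingly concrete. A minimal sketch, with invented states, thresholds, and canned justifications, of a chooser whose story never mentions its inputs:

```python
# Same dilemma, state-dependent choice, state-blind justification.
# All states and thresholds here are invented for illustration.

def decide(dilemma: str, state: dict) -> tuple[str, str]:
    """Returns (choice, post-hoc justification). The justification is
    generated AFTER the choice and never mentions the state."""
    # Note: the dilemma itself is never consulted; only the state is.
    stress = state["cortisol"] + (1.0 if state["hungry"] else 0.0)
    if stress > 1.0:
        choice = "punish"
        story = "I weighed it carefully; people must face consequences."
    else:
        choice = "forgive"
        story = "I weighed it carefully; compassion was clearly right."
    return choice, story

dilemma = "a colleague took credit for your work"
scenario_a = {"cortisol": 0.2, "hungry": False}  # fed, calm, unprovoked
scenario_b = {"cortisol": 0.9, "hungry": True}   # hungry, cortisol-spiked

for label, state in (("A", scenario_a), ("B", scenario_b)):
    choice, story = decide(dilemma, state)
    print(f'Scenario {label}: {choice:7s} - "{story}"')
```

Both runs speak with the same first-person confidence, and neither story cites cortisol. That asymmetry is the whole argument.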

Even the Observer Is a Variable

To make matters worse: the person judging your decision is just as susceptible.

An irritated observer sees you as difficult.
A relaxed one sees you as generous.
The same action — different verdict.

And yet both observers think they are the neutral party.
They are not.
They are chemically calibrated hallucinations, mistaking their reaction for objective truth.

You’re a Vending Machine, Not a Virtuoso

This isn’t metaphor. It’s architecture.

  • You input a stimulus.
  • The brain processes it using pre-loaded scripts, shaped by hormones, past trauma, fatigue, blood sugar, social context.
  • An output emerges.
  • Then the brain rationalises it, like a PR firm cleaning up after a CEO’s impulse tweet.

Reason follows emotion.
Emotion is involuntary.
Therefore, your reasoning is not yours. It’s a post-event explanation for something you didn’t choose to feel.

TL;DR: If It Can Be Tweaked, It’s Not Yours

  • Your “moral core” can be adjusted without your awareness.
  • You justify manipulated choices with total confidence.
  • Your assessors are equally chemically biased.
  • There is no neutral version of “you” — just shifting states with internal coherence.
  • And if your choices depend on state, and your state can be altered, then freedom is a costume worn by contingency.

📅 Coming Tomorrow

The Feedback Loop of False Selves

You are being judged — by others who are also compromised.

Unwilling: The Neuroscience Against Free Will

Why the cherished myth of human autonomy dissolves under the weight of our own biology

We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.

But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?

Audio: NotebookLM podcast on this topic.

I. The Myth of Autonomy: A Beautiful Delusion

Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?

Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:

The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.

The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.

Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.

Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.

Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.

You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.

II. The DLPFC: Puppet Master of Moral Choice

The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.

And yet this supposed seat of autonomy can be manipulated with disturbing ease.

When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.

Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.

If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?

III. Hormonal Puppetmasters: The Will in Your Bloodstream

Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.

Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”

Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.

These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.

IV. The Accident of Will: Uncomfortable Conclusions

If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?

Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.

What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.

This lens forces unsettling questions:

  • How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
  • How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
  • And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
  • Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?

A Compatibilist Rebuttal? Not So Fast.

Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.

If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.

What’s Left?

Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.

The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.

For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.

Hungering for Morality: When Right and Wrong Are Just a Matter of PR

Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I took issue with the very notion that a relative, if not subjective, concept could be recovered.

The OP asks whether The Hunger Games is a classic. I’d argue that it is a categorical classic, like Harry Potter, within the category of YA fiction.

Audio: NotebookLM podcast discussing this topic.

The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.

Video: YouTube video that spawned this topic.

Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.

Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.

This is not moral entropy; it’s moral mutation.

Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.

This isn’t the death of morality. It’s a hostile takeover.

The Moral PR Machine

If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.

In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.

The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.

Katniss: Not a Saint, But a Saboteur

Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.

She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.

Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.

So, Has Morality Declined?

Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.

And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.

Conclusion: The Real Hunger

The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.

If you feel sickened by what you see in Panem, it’s not because morality has vanished.

It’s because it hasn’t.

The Tyranny of “Human Nature”

There is a kind of political necromancy afoot in modern discourse—a dreary chant murmured by pundits, CEOs, and power-drunk bureaucrats alike: “It’s just human nature.” As if this incantation explains, excuses, and absolves all manner of violent absurdities. As if, by invoking the mystic forces of evolution or primal instinct, one can justify the grotesque state of things. Income inequality? Human nature. War? Human nature. Corporate psychopathy? Oh, sweetie, it’s just how we’re wired.

What a convenient mythology.

Audio: NotebookLM podcast on this topic.

If “human nature” is inherently brutish and selfish, then resistance is not only futile, it is unnatural. The doctrine of dominance gets sanctified, the lust to rule painted as destiny rather than deviance. Meanwhile, the quiet, unglamorous yearning of most people—to live undisturbed, to coöperate rather than conquer—is dismissed as naïve, childish, and unrealistic. How curious that the preferences of the vast majority are always sacrificed at the altar of some aggressive minority’s ambitions.

Let us dispense with this dogma. The desire to dominate is not a feature of human nature writ large; it is a glitch exploited by systems that reward pathological ambition. Most of us would rather not be ruled, and certainly not managed by glorified algorithms in meat suits. The real human inclination, buried beneath centuries of conquest and control, is to live in peace, tend to our gardens, and perhaps be left the hell alone.

And yet, we are not. Because there exists a virulent cohort—call them oligarchs, executives, generals, kings—whose raison d’être is the acquisition and consolidation of power. Not content to build a life, they must build empires. Not content to share, they must extract. They regard the rest of us as livestock: occasionally troublesome, but ultimately manageable.

To pacify us, they offer the Social Contract™—a sort of ideological bribe that says, “Give us your freedom, and we promise not to let the wolves in.” But what if the wolves are already inside the gates, wearing suits and passing legislation? What if the protection racket is the threat itself?

So no, it is not “human nature” that is the problem. Cancer is natural, too, but we don’t celebrate its tenacity. We treat it, research it, and fight like hell to survive it. Likewise, we must treat pathological power-lust not as an inevitability to be managed but as a disease to be diagnosed and dismantled.

The real scandal isn’t that humans sometimes fail to coöperate. It’s that we’re constantly told we’re incapable of it by those whose power depends on keeping it that way.

Let the ruling classes peddle their myths. The rest of us might just choose to write new ones.

Defying Death

I died in March 2023 — or so the rumour mill would have you believe.

Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.

Audio: NotebookLM podcast on this topic.

No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.

And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)

There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.

To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.

As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.

What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.”
In a last act of rational agency, I asked an ER nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”

After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.

I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.

Here’s the reality:
There is no coming back from death.
You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.

Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.

Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.

No angels. No harps. No antechamber to the divine.
Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.

If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.

And thus, reader, I lived.
But only in the most vulgar, anticlimactic, and utterly mortal sense.

There will be no afterlife memoir. No second chance to settle the score. No sequel.
Just this: breath, blood, occasional barbed words — and then silence.

Deal with it.

Questioning Traditional Families

I neither champion nor condemn tradition—whether it’s marriage, family, or whatever dusty relic society is currently parading around like a prize marrow at a village fête.

Audio: NotebookLM podcast on traditional families.

In a candid group conversation recently, I met “Jenny”, who declared she would have enjoyed her childhood much more had her father not “ruined everything” simply by existing. “Marie” countered that it was her mother who had been the wrecker-in-chief. Then “Lulu” breezed in, claiming, “We had a perfect family — we practically raised ourselves.”

Now, here’s where it gets delicious:

Each of these women, bright-eyed defenders of “traditional marriage” and “traditional family” (cue the brass band), had themselves ticked every box on the Modern Chaos Bingo Card: children out of wedlock? Check. Divorces? Check. Performative, cold-marriage pantomimes? Absolutely—and scene.
Their definition of “traditional marriage” is the vintage model: one cis-male, one cis-female, Dad brings home the bacon, Mum weeps quietly into the washing-up. Standard.

Let’s meet the players properly:

Jenny sprang from a union of two serial divorcées, each dragging along the tattered remnants of previous families. She was herself a “love child,” born out of wedlock and “forcing” another reluctant stroll down the aisle. Her father? A man of singular achievements: he paid the bills and terrorised the household. Jenny now pays a therapist to untangle the psychological wreckage.

Marie, the second of two daughters, was the product of a more textbook “traditional family”—if by textbook you mean a Victorian novel where everyone is miserable but keeps a stiff upper lip about it. Her mother didn’t want children but acquiesced to her husband’s demands (standard operating procedure at the time). Marie’s childhood was a kingdom where Daddy was a demigod and Mummy was the green-eyed witch guarding the gates of hell.

Lulu grew up in a household so “traditional” that it might have been painted by Hogarth: an underemployed, mostly useless father and a mother stretched thinner than the patience of a British Rail commuter. Despite—or because of—the chaos, Lulu claims it was “perfect,” presumably redefining the word in a way the Oxford English Dictionary would find hysterical. She, too, had a child out of wedlock, with the explicit goal of keeping feckless men at bay.

And yet—and yet—all three women cling, white-knuckled, to the fantasy of the “traditional family.” They did not achieve stability. Their families of origin were temples of dysfunction. But somehow, the “traditional family” remains the sacred cow, lovingly polished and paraded on Sundays.

Why?

Because what they’re chasing isn’t “tradition” at all — it’s stability, that glittering chimera. It’s nostalgia for a stability they never actually experienced. A mirage constructed from second-hand dreams, glossy 1950s propaganda, and whatever leftover fairy tales their therapists hadn’t yet charged them £150 an hour to dismantle.

Interestingly, none of them cared two figs about gay marriage, though opinions about gay parenting varied wildly—a kettle of fish I’ll leave splashing outside this piece.

Which brings us back to the central conundrum:

If lived experience tells you that “traditional family” equals trauma, neglect, and thinly-veiled loathing, why in the name of all that’s rational would you still yearn for it?

Societal pressure, perhaps. Local customs. Generational rot. The relentless cultural drumbeat that insists that marriage (preferably heterosexual and miserable) is the cornerstone of civilisation.

Still, it’s telling that Jenny and Marie were both advised by therapists to cut ties with their toxic families—yet in the same breath urged to create sturdy nuclear families for their own children. It was as if summoning a functional household from the smoking ruins of dysfunction were a simple matter of willpower and a properly ironed apron.

Meanwhile, Lulu—therapy-free and stubbornly independent—declares that raising oneself in a dysfunctional mess is not only survivable but positively idyllic. One can only assume her standards of “perfect” are charmingly flexible.

As the title suggests, this piece questions traditional families. I offer no solutions—only a raised eyebrow and a sharper question:

What is the appeal of clinging to a fantasy so thoroughly at odds with reality?
Your thoughts, dear reader? I’d love to hear your defences, your protests, or your own tales from the trenches.