A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.
Audio: NotebookLM podcast on the topic.
“It’s not just that you’re a hallucination of yourself. It’s that everyone else is hallucinating you, too — through their own fog.”
The Feedback Loop of False Selves
You are being judged — by others who are also compromised
If you are a chemically modulated, state-dependent, narrativising automaton, then so is everyone who evaluates you. The moral courtroom — society, the law, the dinner table — is just a gathering of biased systems confidently misreading each other.
We are taught to believe in things like:
“Good character”
“Knowing someone”
“Getting a read on people”
But these are myths of stability, rituals of judgment, and cognitive vanity projects. There is no fixed you — and there is no fixed them to do the judging.
Judging the Snapshot, Not the Self
Let’s say you act irritable. Or generous. Or quiet. An observer sees this and says:
“That’s who you are.”
But which version of you are they observing?
The you on two hours of sleep?
The you on SSRIs?
The you grieving, healing, adjusting, masking?
They don’t know. They don’t ask. They just flatten the moment into character.
One gesture becomes identity. One expression becomes essence.
This isn’t judgment. It’s snapshot essentialism — moral conclusion by convenience.
The Observer Is No Less Biased
Here’s the darker truth: they’re compromised, too.
If they’re stressed, you’re rude.
If they’re lonely, you’re charming.
If they’re hungry, you’re annoying.
What they’re perceiving is not you — it’s their current chemistry’s reaction to your presentation, filtered through their history, memory, mood, and assumptions.
It’s not a moral lens. It’s a funhouse mirror, polished with certainty.
Mutual Delusion in a Moral Marketplace
The tragedy is recursive:
You act based on internal constraints.
They judge based on theirs.
Then you interpret their reaction… and adjust accordingly.
And they, in turn, react to your adjustment…
And on it goes — chemical systems calibrating against each other, mistaking interaction for insight, familiarity for truth, coherence for character.
Identity isn’t formed. It’s inferred, then reinforced. By people who have no access to your internal states and no awareness of their own.
The Myth of the Moral Evaluator
This has massive implications:
Justice assumes objectivity.
Culture assumes shared moral standards.
Relationships assume “knowing” someone.
But all of these are built on the fantasy that moral evaluation is accurate, stable, and earned.
It is not.
It is probabilistic, state-sensitive, and mutually confabulatory.
You are being judged by the weather inside someone else’s skull.
TL;DR: Everyone’s Lying to Themselves About You
You behave according to contingent states.
Others judge you based on their own contingent states.
Both of you invent reasons to justify your interpretations.
Neither of you has access to the full picture.
The result is a hall of mirrors with no ground floor.
So no — you’re not “being seen.” You’re being misread, reinterpreted, and categorised — by people who are also misreading themselves.
We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.
Box: The Gentle Assassin of Certainty
George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” That compact phrasing came later, though. What he wrote in his 1976 paper Science and Statistics was:
“Since all models are wrong, the scientist must be alert to what is importantly wrong.”
George Box
The “some are useful” flourish arrived only in 1979, when Box himself sweetened the bitter pill in a later paper on model building. Either way, he deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. They are provisional guesses, finger-paintings smeared across the rough surface of the unknown.
Audio: NotebookLM podcast on this topic.
Lawson: The Arsonist Who Burned the Map
Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. In a recent lecture, Lawson declared:
“You don’t need truth to have a usable model.”
Hilary Lawson
Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.
Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.
The Case of the Celestial Spheres
Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.
Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.
Summary Table: Lawson’s View on Models and Truth
| Aspect | Lawson’s Position |
| --- | --- |
| Role of models | Tools/metaphors for intervention, not truth |
| Truth | Not required for usefulness |
| Refinement | Models are improved for practical effectiveness |
| Reality | Fundamentally open, never fully captured by models |
| Implication | Focus on utility and adaptability, not final truth |
Conclusion
Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.
We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.
A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.
Manipulability as Disproof
If your will can be altered without your consent, was it ever truly yours?
“If a button on the outside of your skull can change your morality, then where, exactly, is your autonomy hiding?”
Audio: NotebookLM podcast on this topic.
We’ve heard it all before:
“Sure, I’m influenced — but at the end of the day, I choose.” But what happens when that influence isn’t influence, but modulation? What if your very sense of right and wrong — your willingness to forgive, to punish, to empathise — can be dialled like a radio station?
And what if you never know it’s happening?
Your Morality Is Neurochemical
Studies using Transcranial Magnetic Stimulation (TMS) and Transcranial Direct Current Stimulation (tDCS) have shown that moral judgments can be shifted by stimulating the dorsolateral prefrontal cortex (DLPFC).
Turn it up: the subject becomes more utilitarian.
Turn it down: the subject becomes more emotionally reactive.
They make different decisions in the exact same scenarios, depending on which neural pathway is dominant.
The kicker?
They always explain their choices as though they had made them deliberately.
There is no awareness of the manipulation. Only a retrospective illusion of authorship.
A|B Testing the Soul
Let’s run a thought experiment.
Scenario A: You’re well-fed, calm, unprovoked.
Scenario B: You’re hungry, cortisol-spiked, primed with images of threat.
Same moral dilemma. Different choice.
Query both versions of you, and both will offer coherent post hoc justifications. Neither suspects that their “will” was merely a biochemical condition in drag.
If both versions feel authentic, then neither can claim authority.
Your will is not sovereign. It’s state-dependent. And if it changes without your knowledge, it was never really yours to begin with.
Even the Observer Is a Variable
To make matters worse: the person judging your decision is just as susceptible.
An irritated observer sees you as difficult. A relaxed one sees you as generous. The same action — different verdict.
And yet both observers think they are the neutral party. They are not. They are chemically calibrated hallucinations, mistaking their reaction for objective truth.
You’re a Vending Machine, Not a Virtuoso
This isn’t metaphor. It’s architecture.
You input a stimulus.
The brain processes it using pre-loaded scripts, shaped by hormones, past trauma, fatigue, blood sugar, social context.
An output emerges.
Then the brain rationalises it, like a PR firm cleaning up after a CEO’s impulse tweet.
Reason follows emotion. Emotion is involuntary. Therefore, your reasoning is not yours. It’s a post-event explanation for something you didn’t choose to feel.
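To make the architecture concrete, here is a minimal sketch in Python, echoing the A|B thought experiment above. Everything in it is invented for illustration: the state variables, the weights, the canned justifications. It models nothing about real neurochemistry; it simply shows the shape of the claim, that one input routed through two states yields two outputs, each with a confident story attached.

```python
# A toy of the vending-machine picture. All names, weights, and
# justifications are invented; nothing here models real neurochemistry.

def decide(harm: float, gain: float, state: dict) -> tuple[str, str]:
    """Return a (choice, post-hoc justification) pair for a given state."""
    utilitarian_pull = gain * state["dlpfc_activity"]   # cool deliberation
    emotional_pull = harm * state["cortisol"]           # hot reactivity
    if utilitarian_pull > emotional_pull:
        return "act", "I weighed the outcomes and acted for the greater good."
    return "refrain", "It would simply have been wrong. I could never do it."

dilemma = {"harm": 1.0, "gain": 1.2}                    # the same dilemma, twice
scenario_a = {"dlpfc_activity": 1.0, "cortisol": 0.6}   # well-fed, calm
scenario_b = {"dlpfc_activity": 0.7, "cortisol": 1.5}   # hungry, cortisol-spiked

for label, state in (("A", scenario_a), ("B", scenario_b)):
    choice, story = decide(**dilemma, state=state)
    print(f'Scenario {label}: {choice} -- "{story}"')
```

Run it and Scenario A acts while Scenario B refrains, and both print a justification that sounds entirely self-authored.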
TL;DR: If It Can Be Tweaked, It’s Not Yours
Your “moral core” can be adjusted without your awareness.
You justify manipulated choices with total confidence.
Your assessors are equally chemically biased.
There is no neutral version of “you” — just shifting states with internal coherence.
And if your choices depend on state, and your state can be altered, then freedom is a costume worn by contingency.
A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.
Continuity Is Not Identity
You are not who you were — you are what you’ve become
“A river doesn’t remember yesterday’s curve. But we point to it and say: ‘Look, it’s still the same.’”
Audio: NotebookLM podcast on this topic.
The compatibilist move here is subtler — less grandiose than autonomy, more domesticated than sovereignty. It says: Even if you didn’t choose your traits, your past, your preferences — you’re still you. That self has endured. And moral responsibility flows from that continuity.
But this, too, is sleight of hand.
Because continuity is a pattern, not a person. And a pattern that evolves without origin is not accountable. It’s just happening.
A River Is Not a Moral Agent
Yes, you are still “you” — but only in the loosest, most cosmetic sense. The fact that your behaviour follows a recognisable pattern does not imply authorship. It merely confirms that systems tend toward stability, or path dependence, or neural canalisation.
You can be stable. You can even be consistent. But that doesn’t mean you’re choosing anything.
Continuity is not control. It’s inertia dressed up as identity.
Predictability Is Not Ownership
We mistake persistence for personhood. If someone acts one way for long enough, we assume that behaviour is theirs — that it reflects their values, their self. But all it really reflects is probability.
“You’re still you,” we say. But which you?
The one shaped by sleep deprivation?
The one under hormonal flux?
The one shaped by language, trauma, and cultural myopia?
Every iteration of “you” is a snapshot — a chemical event disguised as character.
You’re Not Rebuilding — You’re Accreting
The recursive defence — “I can change who I am” — also crumbles here. Because you don’t change yourself from nowhere. You change because something changed you. And that change, too, emerges from your condition.
Growth, reflection, habit formation — these aren’t proofs of freedom. They’re signs that adaptive systems accumulate structure.
You are not shaping clay. You are sediment, layered by time.
Character Is Compulsion in Costume
We love stories about people who “showed their true colours.” But this is narrative bias — we flatten a life’s complexity into a myth of revelation.
Yet even our finest moments — courage, restraint, sacrifice — may be nothing more than compulsions coded as character. You didn’t choose to be brave. You just were.
The brave person says: “I had no choice.” The coward says the same.
Who gets the medal is irrelevant to the question of freedom.
TL;DR: Continuity Doesn’t Mean You Own It
The self is a pattern of events, not a stable agent.
Continuity is not agency — it’s habit.
Predictability doesn’t prove ownership.
Even your finest moments might be involuntary.
And if you’re not choosing your changes, you’re just being changed.
So, no — you are not who you were. You are what you’ve become. And what you’ve become was never yours to shape freely.
A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.
Audio: NotebookLM podcast discussing this topic.
Constraint Is Not Freedom
The ergonomic cage of compatibilist comfort
“You are not playing the piano. You are the piano, playing itself — then applauding.”
Compatibilists — those philosophical locksmiths determined to keep the myth of free will intact — love to say that constraint doesn’t contradict freedom. That a system can still be “free” so long as it is coherent, self-reflective, and capable of recursive evaluation.
In this view, freedom doesn’t require being uncaused — it only requires being causally integrated. You don’t need to be sovereign. You just need to be responsive.
“The pianist may not have built the piano — but she still plays it.”
It sounds lovely.
It’s also false.
You Are the Piano
This analogy fails for a simple reason: there is no pianist. No ghost in the gears. No homunculus seated behind the cortex, pulling levers and composing virtue. There is only the piano — complex, self-modulating, exquisitely tuned — but self-playing nonetheless.
The illusion of choice is merely the instrument responding to its state: to its internal wiring, environmental inputs, and the accumulated sediment of prior events. What feels like deliberation is often delay. What feels like freedom is often latency.
Recursive ≠ Free
Ah, but what about reflection? Don’t we revise ourselves over time?
We do. But that revision is itself conditioned. You didn’t choose the capacity to reflect. You didn’t choose your threshold for introspection. If you resist a bias, it’s because you were predisposed — by some cocktail of education, temperament, or trauma — to resist it.
A thermostat that updates its own algorithm is still a thermostat.
It doesn’t become “free” by being self-correcting. It becomes better adapted. Likewise, human introspection is just adaptive determinism wearing a philosophical hat.
Why the cherished myth of human autonomy dissolves under the weight of our own biology
We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.
But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?
Audio: NotebookLM podcast on this topic.
I. The Myth of Autonomy: A Beautiful Delusion
Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?
Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:
The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.
The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.
Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.
Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.
Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.
You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.
II. The DLPFC: Puppet Master of Moral Choice
The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.
And yet this supposed seat of autonomy can be manipulated with disturbing ease.
When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.
Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.
If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?
III. Hormonal Puppetmasters: The Will in Your Bloodstream
Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.
Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”
Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.
These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.
IV. The Accident of Will: Uncomfortable Conclusions
If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?
Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.
What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.
This lens forces unsettling questions:
How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?
A Compatibilist Rebuttal? Not So Fast.
Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.
If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.
What’s Left?
Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.
The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.
For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.
Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I took issue with the very notion that a relative, if not subjective, concept could be recovered.
The OP asks if The Hunger Games are a classic. I’d argue that they are a categorical classic, like Harry Potter, within the category of YA fiction.
Audio: NotebookLM podcast discussing this topic.
The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.
Video: YouTube video that spawned this topic.
Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.
Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.
This is not moral entropy; it’s moral mutation.
Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.
This isn’t the death of morality. It’s a hostile takeover.
The Moral PR Machine
If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.
In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.
The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.
Katniss: Not a Saint, But a Saboteur
Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.
She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.
Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.
So, Has Morality Declined?
Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.
And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.
Conclusion: The Real Hunger
The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.
If you feel sickened by what you see in Panem, it’s not because morality has vanished. It’s because it’s alive and well, and this is what it has chosen to look like.
The Illusion of Clarity in a World of Cognitive Fog
Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.
Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.
Deduction: The Tyrant of Certainty
Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.
Example:
All humans are mortal.
Socrates is human.
Therefore, Socrates is mortal.
Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.
Strength: Guarantees true conclusions from true premises. Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically valid.
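For the code-minded, a minimal sketch of the point: deduction as mechanical rule application. This toy forward-chainer is invented for illustration (not any canonical implementation), and it neither knows nor cares whether its premises are true.

```python
# Deduction as mechanical rule application. Rules and facts are toy
# examples; validity here is purely structural.

rules = [("human", "mortal")]        # "All humans are mortal"
facts = {("human", "socrates")}      # "Socrates is human"

def deduce(rules, facts):
    """Close the fact set under the rules (repeated modus ponens)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            for predicate, subject in list(derived):
                if predicate == antecedent and (consequent, subject) not in derived:
                    derived.add((consequent, subject))
                    changed = True
    return derived

print(deduce(rules, facts))                  # ('mortal', 'socrates') appears
print(deduce([("human", "teapot")], facts))  # garbage in: Socrates the teapot
```

The second call is the whole weakness in one line: the machinery happily cooks with whatever it is given.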
Induction: The Gambler’s Gospel
Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.
Example:
The sun rose today.
It rose yesterday.
It has risen every day I’ve been alive.
Therefore, the sun will rise tomorrow.
Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.
Strength: Empirically rich and adaptive. Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.
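The same fragility, sketched in code with invented data: naive enumerative induction generalises when every observation agrees, and one counterexample undoes a thousand confirmations.

```python
# Enumerative induction: generalise when all samples agree, and watch
# a single counterexample detonate the rule. Data invented.

observations = ["white"] * 1000      # every swan seen so far

def induce_colour(observations):
    """If all samples agree, return the generalisation; otherwise None."""
    colours = set(observations)
    return colours.pop() if len(colours) == 1 else None

print(induce_colour(observations))   # 'white': "all swans are white"

observations.append("black")         # Australia enters the record
print(induce_colour(observations))   # None: the generalisation is gone
```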
Abduction: Sherlock Holmes’ Drug of Choice
Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.
Example:
The lawn is wet.
It probably rained.
It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.
Strength: Useful in messy, real-world contexts. Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
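Abduction, too, can be sketched, here as naive Bayesian scoring. Every number below is invented, which is rather the point: “inference to the best explanation” is only as honest as the priors you feed it.

```python
# Abduction as naive Bayesian scoring: pick the hypothesis with the
# highest prior * likelihood. All numbers are invented.

explanations = {
    #             (prior,  P(wet lawn | explanation))
    "rain":       (0.30,   0.95),
    "sprinkler":  (0.20,   0.90),
    "hose":       (0.10,   0.80),
    "divine act": (0.0001, 1.00),
}

def best_explanation(candidates):
    """Return the candidate maximising prior times likelihood."""
    return max(candidates, key=lambda h: candidates[h][0] * candidates[h][1])

print(best_explanation(explanations))   # 'rain': plausible, never proven
```

Nudge the prior on “sprinkler” past “rain” and the “best” explanation changes, with no new evidence about the lawn whatsoever.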
The Modern Reasoning Crisis: Why We’re All Probably Wrong
Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.
Consider how we navigate information today:
We encounter a headline that confirms our worldview
We accept it without scrutiny (deductive failure)
We see similar headlines repeatedly (inductive trap)
We conclude our worldview is objectively correct (abductive collapse)
And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.
Weakness by Analogy: The Reasoning Café
Imagine a café.
Deduction is the customer who checks the menu and confidently orders “Soup of the Day,” because the chalkboard says “Today’s Soup is Tomato,” and she trusts chalkboards.
Induction is the one who has had tomato soup every Wednesday for months and assumes it’ll be tomato today again—until it isn’t, and now he’s wearing bisque.
Abduction sees the waiter carrying bowls of red liquid to every table and infers it’s probably tomato soup, orders it, and gets… gazpacho. Ice-cold disappointment.
All three are trying to reason. Only one might get lunch.
The Meta-Problem: Reasoning About Reasoning
The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.
This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.
Conclusion: Reason Is Not a Guarantee, It’s a Wager
None of these modalities offer omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s scepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.
Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.
The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.
Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility, and who recognise that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.
So reason on, fellow thinkers. Just don’t be too smug about it.
Post-COVID, we’re told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.
Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?
Audio: NotebookLM podcast discussing this topic.
Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let’s not be so credulous. This isn’t merely a crisis of trust—it’s a crisis of theatre.
“The Science” as Ventriloquism
Let’s begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It’s not a holy writ passed down by lab-coated Levites. It’s a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.
A paper in Nature Human Behaviour (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What they fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?
Trust didn’t just erode. It was actively incinerated in a bonfire of institutional vanity.
A Crisis of Influence, Not Integrity
The second Nature commentary (2025) wrings its hands over “why trust in science is declining,” as if the populace has suddenly turned flat-Earth overnight. But the real story isn’t a decline in trust per se; it’s a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes—exhausted citizens armed with Wi-Fi and anxiety—have joined the fray.
Science hasn’t lost truth—it’s lost control. And frankly, perhaps it shouldn’t have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple—especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.
Technocracy’s Fatal Hubris
What we saw wasn’t science failing—it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing happened. Orwell would have a field day.
This wasn’t a failure of scientific method. It was a failure of scientific messaging—an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.
Toward a Post-Scientistic Public Sphere
So where does that leave us? Perhaps we need to kill the idol of “The Science” to resurrect a more mature relationship with scientific discourse—one that tolerates ambiguity, embraces dissent, and admits when the data isn’t in. Science, done properly, is the art of saying “we don’t know… yet.”
The pandemic didn’t erode trust in science. It exposed how fragile our institutional credibility scaffolding really is—how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.
The answer isn’t more science communication—it’s less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.
Conclusion
Trust in science isn’t dead. But trust in those who claim to speak for science? That’s another matter. Perhaps it’s time to separate the two.
I died in March 2023 — or so the rumour mill would have you believe.
Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.
Audio: NotebookLM podcast of this content.
No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.
And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)
There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.
To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.
As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.
What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.” In a last act of rational agency, I asked an A&E nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”
After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.
I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.
Here’s the reality: There is no coming back from death. You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.
Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.
Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.
No angels. No harps. No antechamber to the divine. Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.
If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.
And thus, reader, I lived. But only in the most vulgar, anticlimactic, and utterly mortal sense.
There will be no afterlife memoir. No second chance to settle the score. No sequel. Just this: breath, blood, occasional barbed words — and then silence.