Unwilling: The Neuroscience Against Free Will

Why the cherished myth of human autonomy dissolves under the weight of our own biology

We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.

But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?

Audio: NotebookLM podcast on this topic.

I. The Myth of Autonomy: A Beautiful Delusion

Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?

Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:

The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.

The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.

Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.

Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.

Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.

You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.

II. The DLPFC: Puppet Master of Moral Choice

The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.

And yet this supposed seat of autonomy can be manipulated with disturbing ease.

When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.

Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.

If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?

III. Hormonal Puppetmasters: The Will in Your Bloodstream

Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.

Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”

Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.

These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.

IV. The Accident of Will: Uncomfortable Conclusions

If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?

Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: to be ultimately morally responsible for your actions, you would have to be the cause of yourself, yet no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.

What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.

This lens forces unsettling questions:

  • How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
  • How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
  • And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
  • Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?

A Compatibilist Rebuttal? Not So Fast.

Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.

If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.

What’s Left?

Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.

The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.

For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.

Hungering for Morality: When Right and Wrong Are Just a Matter of PR

Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer claims that Katniss saves, or recovers, a deteriorated morality. Me being me, I took issue with the very notion that a relative, if not subjective, concept could be recovered.

The OP asks whether The Hunger Games is a classic. I’d argue that it is a categorical classic, like Harry Potter, within the category of YA fiction.

Audio: NotebookLM podcast discussing this topic.

The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.

Video: YouTube video that spawned this topic.

Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.

Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.

This is not moral entropy; it’s moral mutation.

Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.

This isn’t the death of morality. It’s a hostile takeover.

The Moral PR Machine

If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.

In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.

The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.

Katniss: Not a Saint, But a Saboteur

Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.

She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.

Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.

So, Has Morality Declined?

Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.

And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.

Conclusion: The Real Hunger

The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.

If you feel sickened by what you see in Panem, it’s not because morality has vanished.

It’s because it hasn’t.

The Tyranny of “Human Nature”

There is a kind of political necromancy afoot in modern discourse—a dreary chant murmured by pundits, CEOs, and power-drunk bureaucrats alike: “It’s just human nature.” As if this incantation explains, excuses, and absolves all manner of violent absurdities. As if, by invoking the mystic forces of evolution or primal instinct, one can justify the grotesque state of things. Income inequality? Human nature. War? Human nature. Corporate psychopathy? Oh, sweetie, it’s just how we’re wired.

What a convenient mythology.

Audio: NotebookLM podcast on this topic.

If “human nature” is inherently brutish and selfish, then resistance is not only futile, it is unnatural. The doctrine of dominance gets sanctified, the lust to rule painted as destiny rather than deviance. Meanwhile, the quiet, unglamorous yearning of most people—to live undisturbed, to coöperate rather than conquer—is dismissed as naïve, childish, and unrealistic. How curious that the preferences of the vast majority are always sacrificed at the altar of some aggressive minority’s ambitions.

Let us dispense with this dogma. The desire to dominate is not a feature of human nature writ large; it is a glitch exploited by systems that reward pathological ambition. Most of us would rather not be ruled, and certainly not managed by glorified algorithms in meat suits. The real human inclination, buried beneath centuries of conquest and control, is to live in peace, tend to our gardens, and perhaps be left the hell alone.

And yet, we are not. Because there exists a virulent cohort—call them oligarchs, executives, generals, kings—whose raison d’être is the acquisition and consolidation of power. Not content to build a life, they must build empires. Not content to share, they must extract. They regard the rest of us as livestock: occasionally troublesome, but ultimately manageable.

To pacify us, they offer the Social Contract™—a sort of ideological bribe that says, “Give us your freedom, and we promise not to let the wolves in.” But what if the wolves are already inside the gates, wearing suits and passing legislation? What if the protection racket is the threat itself?

So no, it is not “human nature” that is the problem. Cancer is natural, too, but we don’t celebrate its tenacity. We treat it, research it, and fight like hell to survive it. Likewise, we must treat pathological power-lust not as an inevitability to be managed but as a disease to be diagnosed and dismantled.

The real scandal isn’t that humans sometimes fail to coöperate. It’s that we’re constantly told we’re incapable of it by those whose power depends on keeping it that way.

Let the ruling classes peddle their myths. The rest of us might just choose to write new ones.

The Dubious Art of Reasoning: Why Thinking Is Harder Than It Looks

The Illusion of Clarity in a World of Cognitive Fog

Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.

Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.

Deduction: The Tyrant of Certainty

Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.

Example:

All humans are mortal.
Socrates is a human.
Therefore, Socrates is mortal.

Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.

Strength: Valid conclusions from valid premises.
Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically sound.

Induction: The Gambler’s Gospel

Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.

Example:

Every swan I’ve ever seen is white.
Therefore, all swans are white.

Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.

Strength: Empirically rich and adaptive.
Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.

Abduction: Sherlock Holmes’ Drug of Choice

Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.

Example:

The grass is wet this morning.
Rain would explain wet grass.
Therefore, it probably rained.

It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.

Strength: Useful in messy, real-world contexts.
Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
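To make the contrast concrete, here’s a toy rendering of all three modes in Python. The examples mirror the ones above; the scoring inside the abduction function is my crude stand-in for whatever “best explanation” means on a given day.

```python
# Deduction: apply a rule. Validity is guaranteed; truth only if the premise holds.
def deduce(rule, fact):
    premise, conclusion = rule  # rule encodes 'if P then Q'
    return conclusion if fact == premise else None

# Induction: generalise from observed cases. One counterexample detonates it.
def induct(observations):
    kinds = set(observations)
    return kinds.pop() if len(kinds) == 1 else "no clean generalisation"

# Abduction: back the hypothesis that explains the most evidence.
def abduct(evidence, hypotheses):
    return max(hypotheses, key=lambda h: len(evidence & hypotheses[h]))

print(deduce(("is human", "is mortal"), "is human"))   # -> is mortal
print(induct(["white swan"] * 12))                     # -> white swan (until Australia)
print(abduct({"wet grass", "wet pavement"}, {
    "rain":      {"wet grass", "wet pavement"},
    "sprinkler": {"wet grass"},
}))                                                    # -> rain
```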

The Modern Reasoning Crisis: Why We’re All Probably Wrong

Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.

Consider how we navigate information today:

  • An algorithm inducts, from a handful of clicks, what you must believe, and serves you more of it.
  • You deduce, with perfect internal validity, from the bonkers premises it supplies.
  • You abduct the “best” explanation, which is reliably the one your feed already endorsed, and click again.

And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.

Weakness by Analogy: The Reasoning Café

Imagine a café.

  • The deductive customer reasons: all cafés serve lunch; this is a café; therefore it serves lunch.
  • The inductive customer reasons: every visit so far has produced soup; there will be soup today.
  • The abductive customer smells garlic and infers that the kitchen is probably open.

All three are trying to reason. Only one might get lunch.

The Meta-Problem: Reasoning About Reasoning

The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk to grade their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.

This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.

Conclusion: Reason Is Not a Guarantee, It’s a Wager

None of these modalities offers omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s scepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.

Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.

The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.

Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility, and who recognise that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.

So reason on, fellow thinkers. Just don’t be too smug about it.

Welcome to the Casino of Justice

Welcome to the Grand Casino of Justice, where the chips are your civil liberties, the roulette wheel spins your fate, and the house—ever-smug in its powdered wig of procedural decorum—always wins.

Step right up, citizens! Marvel at the dazzling illusions of “science” as performed by your local constabulary: the sacred polygraph, that magnificent artefact of 1920s snake oil, still trotted out in back rooms like a séance at a nursing home. Never mind that it measures stress, not deception. Never mind that it’s been dismissed by any scientist with a functioning prefrontal cortex. It’s not there to detect truth—it’s there to extract confession. Like a slot machine that only pays out when you agree you’re guilty.

Audio: NotebookLM podcast on this topic.

And oh, the forensic pageantry! The blacklight! The dramatic swabs! The breathless invocations of “trace evidence,” “blood spatter patterns,” and—ooh! ahh!—fingerprints, those curly little whorls of manufactured certainty. You’ve been told since childhood that no two are alike, that your prints are your identity. Rubbish. Human fingerprint examiners disagree with themselves when presented with the same print twice. In blind tests. And yes—this bears repeating with appropriate incredulity—koalas have fingerprints so uncannily similar to ours they’ve confused human forensic analysts. Somewhere, a marsupial walks free while a teenager rots in remand.

You see, it’s not about justice. It’s about control. Control through performance. The legal system, like a casino, isn’t interested in fairness—it’s interested in outcome. It needs to appear impartial, all robes and solemnity, while tipping the odds ever so slightly, perpetually, in its own favour. This is jurisprudence as stagecraft, science as set-dressing, and truth as a collateral casualty.

And who are the croupiers of this great charade? Not scientists, no. Scientists are too cautious, too mired in uncertainty, too concerned with falsifiability and statistical error margins. No, your case will be handled by forensic technicians with just enough training to speak jargon, and just enough institutional loyalty to believe they’re doing the Lord’s work. Never mind that many forensic methods—bite mark analysis, tool mark “matching,” even some blood spatter interpretations—are about as scientifically robust as a horoscope printed on a cereal box.

TV crime dramas, of course, have done their bit to embalm these myths in the cultural subconscious. “CSI” isn’t a genre—it’s a sedative, reassuring the public that experts can see the truth in a hair follicle or the angle of a sneeze. In reality, most convictions hinge on shoddy analysis, flawed assumptions, and a little prosecutorial sleight of hand. But the juries are dazzled by the sciencey buzzwords, and the judges—God bless their robes—rarely know a confidence interval from a cornflake.

So, what do you do when accused in the great Casino of Justice? Well, if you’re lucky, you lawyer up. If you’re not, you take a plea deal, because 90% of cases never reach trial. Why? Because the system is designed not to resolve guilt, but to process bodies. It is a meat grinder that must keep grinding, and your innocence is but a small bone to be crushed underfoot.

This isn’t justice. It’s a theatre of probability management, where the goal is not truth but resolution. Efficiency. Throughput. The house keeps the lights on by feeding the machine, and forensic science—real or imagined—is merely the window dressing. The roulette wheel spins, the dice tumble, and your future hangs on the angle of a smudge or the misreading of a galvanic skin response.

Just don’t expect the koalas to testify. They’re wise enough to stay in the trees.

Varoufakis Solves Zeno’s Paradox

Having finished Technofeudalism, I’ve moved on to Society of the Spectacle, which has me thinking.

They say no one escapes the Spectacle. Guy Debord made sure of that. His vision was airtight, his diagnosis terminal: we are all spectators now, alienated from our labour, our time, our own damn lives. It was a metaphysical mugging—existence held hostage by images, by commodities dressed in drag. The future was a feedback loop, and we were all doomed to applaud.

Audio: NotebookLM podcast on this topic. Apologies in advance for the narrators’ mangling of the pronunciation of ‘Guy Debord’.

But what if the loop could be hacked?
What if the infinitely halved distances of motionless critique—Zeno’s Paradox by way of Marx—could finally be crossed?

Enter: Yanis Varoufakis.
Economist, ex-finance minister, techno-Cassandra with a motorbike and a vendetta.
Where Debord filmed the catastrophe in black-and-white, Varoufakis showed up with the source code.

Debord’s Limbo

Debord saw it all coming. The substitution of reality with its photogenic simulacrum. The slow death of agency beneath the floodlights of consumption. But like Zeno’s paradox, he could only gesture toward the end without ever reaching it. Each critique halved the distance to liberation but never arrived. The Spectacle remained intact, omnipresent, and self-replicating—like an ontological screensaver.

He gave us no path forward, only a beautiful, ruinous analysis. A Parisian shrug of doom.
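A mathematical footnote, for what it’s worth: Zeno’s halved distances always did sum to a finite whole,

\[ \sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1, \]

so the distance does get crossed; the steps simply never announce the arrival. Hold that thought, because it is precisely the move Varoufakis makes next.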

Varoufakis’ Shortcut

But then comes Varoufakis, breaking through the digital labyrinth not by philosophising the Spectacle, but by naming its successor: Technofeudalism.

See, Debord was chasing a moving target—a capitalism that morphed from industrial to financial to semiotic faster than his prose could crystallise. But Varoufakis caught it mid-mutation. He pinned it to the slab and sliced it open. What spilled out wasn’t capital anymore—it was rent. Platform rent. Algorithmic tolls. Behavioural taxes disguised as convenience. This isn’t the market gone mad—it’s the market dissolved, replaced by code-based fiefdoms.

The paradox is resolved not by reaching utopia, but by realising we’ve already crossed the line—we just weren’t told. The market isn’t dying; it’s already dead, and we’re still paying funeral costs in monthly subscriptions and attention metrics.

From Spectacle to Subjugation

Debord wanted to unmask the performance.
Varoufakis realised the theatre had been demolished and replaced with a server farm.

You don’t watch the Spectacle anymore. It watches you.
It optimises you.
It learns your keystrokes, your pulse rate, your browsing history.
Welcome to feudal recursion, where Amazon is your landlord, Google your priest, and Meta your confessor.

Solving Zeno the Varoufakis Way

So how does one cross the infinite regress of alienation?
Simple. You call it what it is. You reclassify the terrain.

“This is not capitalism,” Varoufakis says, in the tone of a man pulling a mask off a Scooby-Doo villain.
“It’s technofeudalism. Capital didn’t win. It went feudal. Again.”

By doing so, he bypasses the academic ballet that has critics forever inching closer to the truth without touching it. He calls the system new, not to sell books, but to make strategy possible. Because naming a beast is the first step in slaying it.

In Conclusion: Debord Dreamed, Varoufakis Drives

Debord haunts the museum.
Varoufakis raids the server room.
Both are essential. But only one gives us a new map.

The Spectacle hypnotised us.
Technofeudalism enslaves us.
And if there’s a way out, it won’t be through slogans spray-painted on Parisian walls. It will be built in code, deployed across decentralised networks, and carried forward by those who remember what it meant to be not watched.

Let Debord whisper. Let Varoufakis roar.
And let the rest of us sharpen our blades.

What’s Missing? Trust or Influence

Post-COVID, we’re told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.

Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?

Audio: NotebookLM podcast discussing this topic.

Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let’s not be so credulous. This isn’t merely a crisis of trust—it’s a crisis of theatre.

“The Science” as Ventriloquism

Let’s begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It’s not a holy writ passed down by lab-coated Levites. It’s a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.

A Nature Human Behaviour paper (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What the authors fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?

Trust didn’t just erode. It was actively incinerated in a bonfire of institutional vanity.

A Crisis of Influence, Not Integrity

The second Nature commentary (2025) wrings its hands over “why trust in science is declining,” as if the populace had suddenly gone flat-Earther overnight. But the real story isn’t a decline in trust per se; it’s a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes—exhausted citizens armed with Wi-Fi and anxiety—have joined the fray.

Science hasn’t lost truth—it’s lost control. And frankly, perhaps it shouldn’t have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple—especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.

Technocracy’s Fatal Hubris

What we saw wasn’t science failing—it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing happened. Orwell would have a field day.

This wasn’t a failure of scientific method. It was a failure of scientific messaging—an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.

Toward a Post-Scientistic Public Sphere

So where does that leave us? Perhaps we need to kill the idol of “The Science” to resurrect a more mature relationship with scientific discourse—one that tolerates ambiguity, embraces dissent, and admits when the data isn’t in. Science, done properly, is the art of saying “we don’t know… yet.”

The pandemic didn’t erode trust in science. It exposed how fragile our institutional credibility scaffolding really is—how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.

The answer isn’t more science communication—it’s less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.

Conclusion

Trust in science isn’t dead. But trust in those who claim to speak for science? That’s another matter. Perhaps it’s time to separate the two.

Technofeudalism: It’s a Wrap

By the time we reach Chapter Seven of Technofeudalism: What Kills Capitalism, Yanis Varoufakis drops the ledger sheets and spreadsheets and starts sketching utopia in crayon. Entitled Escape from Technofeudalism, it proposes—brace yourself—a workplace democracy. It’s aspirational, yes. Compelling? Not particularly. Especially if, like me, you’ve long since stopped believing that democracy is anything more than a feel-good placebo for structural impotence.

Audio: NotebookLM podcast discussing this topic.

To be clear: the preceding chapters, particularly the first six, are sharp, incisive, and frankly, blistering in their indictment of today’s economic disfiguration. But Chapter Seven? It’s less an escape plan, more a group therapy session masquerading as an operational model.

So let’s take his proposal for Democratised Companies apart, one charming layer at a time.

The first plank: every member of the firm holds a single share and a single vote. Splendid. One person, one vote. Adorable.

Hiring, naturally, becomes a collective affair. Because there’s nothing more efficient than a hiring committee of thirty engineers, two janitors, a receptionist, and Steve from Accounts, whose main contribution is passive-aggressive sighing.

Every decision is debated and balloted on the company’s digital platform. Marvellous. We’ve now digitised the tyranny of the majority and can timestamp every idiotic decision for posterity.

And the money? The collective votes on how post-tax revenue is divided, and we’re assured it all works. A relief. Until it doesn’t.

Here, dear reader, is where the cake collapses. Why, precisely, should a randomly assembled group of employees—with wildly varying financial literacy—be entrusted to divide post-tax revenue like it’s a birthday cake at a toddler’s party?

And how often are these slices recalibrated? Each fiscal year? Every time someone is hired or fired? Do we amend votes quarterly or wait until the economic ship has already struck an iceberg?

Varoufakis does suggest preference voting to tackle allocation disputes.

Fine. In theory, algorithmic voting procedures sound neat. But they presume voters are rational, informed, and cooperative. If you’ve ever seen a corporate Slack thread devolve into emoji warfare, you’ll know that this is fiction on par with unicorns and meritocracy.
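Since the book doesn’t pin down a mechanism, here is one plausible stand-in: a Borda count, the workhorse of preference voting, sketched in Python. The ballots and options are invented for illustration.

```python
from collections import defaultdict

def borda_count(ballots):
    # Each ballot ranks k options: first place earns k-1 points,
    # second place k-2, and so on down to zero.
    scores = defaultdict(int)
    for ranking in ballots:
        k = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += k - 1 - position
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Three staff rank what to do with this quarter's surplus.
ballots = [
    ["bonuses", "r_and_d", "reserves"],
    ["r_and_d", "reserves", "bonuses"],
    ["r_and_d", "bonuses", "reserves"],
]
print(borda_count(ballots))
# -> [('r_and_d', 5), ('bonuses', 3), ('reserves', 1)]
```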

Then comes the pay: a flat, equal basic wage for all. Ah yes, the ‘equality’ bit. Equal pay, unequal contribution. This isn’t egalitarianism—it’s enforced mediocrity. It might work in a monastery. Less so in a competitive tech firm where innovation requires both vision and differentiated incentive.

Now, on to bonuses, which are democratically determined by tokens: each employee receives an identical allotment to bestow on whichever colleagues they deem deserving, with the bonus pot split in proportion to tokens received.

Welcome to Black Mirror: Workplace Edition. This is less economics, more playground politics. Who gets tokens? The charismatic chatterbox in the break room? The person who shared their lunch? The ghost employee who never shows up but emails back promptly?

And how, pray tell, does one evaluate the receptionist’s contribution relative to the lead engineer’s or the janitor’s? This isn’t peer review—it’s populism with a smiley face.

We’ve all seen “Teacher of the Year” competitions turn into contests of who had the cutest class poster or best cupcakes. Now imagine your livelihood depending on it.
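The arithmetic, at least as I’ve glossed the scheme above, is trivial; here’s a sketch in Python. The names, the pot, and the no-self-awarding rule are my assumptions, not the book’s.

```python
def split_bonus_pool(pool, grants):
    # grants maps each giver to the tokens they hand out to colleagues.
    received = {}
    for giver, allocation in grants.items():
        for recipient, tokens in allocation.items():
            if recipient != giver:  # assume self-awarding is disallowed
                received[recipient] = received.get(recipient, 0) + tokens
    total = sum(received.values())
    # Bonuses are pro rata to tokens received.
    return {person: pool * tokens / total for person, tokens in received.items()}

grants = {
    "engineer":     {"receptionist": 40, "janitor": 60},
    "receptionist": {"engineer": 70, "janitor": 30},
    "janitor":      {"engineer": 50, "receptionist": 50},
}
print(split_bonus_pool(10_000, grants))
# -> {'receptionist': 3000.0, 'janitor': 3000.0, 'engineer': 4000.0}
```

A popularity contest with decimal places, in other words.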

In summary, democracy in the workplace may sound noble, but in practice, it’s the bureaucratic equivalent of herding caffeinated cats. It doesn’t even work in small groups, let alone an organisation of hundreds. Democracy—when applied to every function of an enterprise—is not liberation; it’s dilution. It’s design-by-committee, strategy-by-consensus, and ultimately, excellence-by-accident.

Escape from Technofeudalism? Perhaps. But not by replacing corporate lords with intranet polls and digital tokens. That’s not an exit strategy—it’s a cosplay of collectivism.

Defying Death

I died in March 2023 — or so the rumour mill would have you believe.

Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.

Audio: NotebookLM podcast of this content.

No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.

And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)

There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.

To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.

As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.

What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.”
In a last act of rational agency, I asked an ER nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”

After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.

I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.

Here’s the reality:
There is no coming back from death.
You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.

Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.

Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.

No angels. No harps. No antechamber to the divine.
Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.

If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.

And thus, reader, I lived.
But only in the most vulgar, anticlimactic, and utterly mortal sense.

There will be no afterlife memoir. No second chance to settle the score. No sequel.
Just this: breath, blood, occasional barbed words — and then silence.

Deal with it.

Clouds, Crowns, and Clowns

How the West Bungled Its Way into Technofeudalism

History doesn’t repeat itself, but, my God, it certainly rhymes — badly, and in the case of America’s self-immolation vis-à-vis China, completely off-key.

Yanis Varoufakis’ brutal dissection in Technofeudalism reads like a coronial report on the West’s terminal idiocy:
We’re not watching the rise of a “new China threat” — we’re watching the dying spasms of a clownish empire losing to its own creation: cloud capital.

Audio: NotebookLM podcast on this topic. NB: The announcers mistook my commentary on Varoufakis for my own ideas; I am merely summarising and editorialising.

A Recap for the Attention-Deficit West:

Once upon a time (i.e., post-WWII), America ran a magnificent scam: sell the world things — aeroplanes, refrigerators, good old-fashioned stuff — in exchange for gold. When America became a deficit country (buying far more than it sold), it pivoted brilliantly:
“No more gold, peasants. Here, have an IOU instead.”
Thus was born the Dollar Empire: a global system where America got to run enormous deficits, foreigners got paper promises, and everyone smiled through gritted teeth.

Fast-forward: Japan, Korea, China — they got in line. They built things, exported things, grew rich — and recycled all their lovely profits straight into American property, debt, and Wall Street snake oil.
Win-win!
(Except for the workers on both sides, who were flogged like medieval peasants, but who’s counting?)

The Minotaur Has a Stroke

Then came 2008: America’s financial system committed hara-kiri on live television.
China stepped up to save global capitalism (yes, really), jacking up investment to absurd levels, buying up Western assets, and quietly building something far more dangerous than steelworks and solar panels: cloud finance.

While the West was still dry-humping neoliberal fantasies about “free markets,” China fused Big Tech and Big Brother into a seamless, sprawling surveillance-finance-entertainment-behavioural-modification apparatus.
Think Facebook, Amazon, Citibank, your GP, your car insurance, and your government — all rolled into one app.
Welcome to WeChat World™ — population: everyone.

The New Cold War: Idiots vs Strategists

Enter Trump. And Biden. And the bipartisan realisation, delivered with all the subtlety of a pub brawl, that China’s Big Tech wasn’t just mimicking Silicon Valley — it was obliterating it.
TikTok wasn’t just teenagers dancing. It was dollar extraction without the need for US trade deficits or dollar supremacy.

Cue blind panic. Ban Huawei! Ban TikTok! Ban chips! Ban thought!
Meanwhile, Beijing smiled, nodded, and built its own chips, its own cloud, its own digital currency.
When the US froze Russian central bank assets in 2022, it unwittingly told every finance minister from Delhi to Dakar:
“Your money isn’t safe with us.”
Oops.

The Chinese digital yuan — a once quaint science project — suddenly looked like the lifeboat on a burning ship.
Guess which way the rats are swimming?

Europe: Toasted, Buttered, and Eaten

As for Europe? Bless them. Still fantasising about “strategic autonomy” while chained to America’s collapsing empire like a loyal spaniel.
Europe lacks cloud capital, lacks industrial capacity, and now — post-Ukraine, post-energy crisis — lacks even the pretence of relevance.
Germany, France, the Netherlands: mere franchisees of American technofeudal overlords.

Brussels’ vision for the future?
“Please sir, may we remain a respectable vassal state?”

The Global South: Choose Your Feudal Lord

The so-called “developing world” faces an even grimmer menu:

  • Pledge allegiance to Washington’s dying dollar-based cloud fief?
  • Or become serfs under Beijing’s emerging digital rentier aristocracy?

Either way, the crops are taxed, the wells are privatised, and the commons are torched.

Development? Don’t make me laugh. The South has been invited to another game of “Heads they win, tails you starve.”

Technofeudalism: A Lovable New Hell

Meanwhile, back in the heartlands of Empire, cloudalists — Google, Amazon, Tencent, Alibaba — are fencing off reality itself.
You will own nothing, subscribe to everything, and feed their algorithms while praying for a dopamine hit.
Democracy?
A charming relic, like powdered wigs and carrier pigeons.

In a final, cosmically ironic twist, it turns out that the only force keeping China’s cloudalists in check is… the Chinese Communist Party itself.
Yes, dear liberals: the last faint flicker of “people power” resides under authoritarian rule, while the “free world” rolls over like a half-seduced Victorian maiden.