Rather than recreate a recent post from my business site, here's the LinkedIn version.
(Warning: contains traces of logic, satire, and uncomfortable truths. But you knew that.)
Audio: NotebookLM podcast on the linked topic.
It's just refusing to cosplay as your idealised fantasy of “human” cognition.
While pundits at the Wall Street Journal lament that AI thinks with “bags of heuristics” instead of “true models,” they somehow forget that humans themselves are kludged-together Rube Goldberg disasters, lurching from cognitive bias to logical fallacy with astonishing grace.
In my latest piece, I take a flamethrower to the myth of human intellectual purity, sketch a real roadmap for modular AI evolution, and suggest (only partly in jest) that the machines are becoming more like us every day — messy, contradictory, and disturbingly effective.
Let's rethink what “thinking” actually means. Before the machines do it for us.
We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.
Box: The Gentle Assassin of Certainty
George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” However, Box himself never uttered this precise phrase. What he did say, in his 1976 paper Science and Statistics, was:
“Since all models are wrong, the scientist must be alert to what is importantly wrong.”
George Box
The “some are useful” flourish was added later by a public desperate to sweeten the bitter pill. Nevertheless, Box deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. They are provisional guesses, finger-paintings smeared across the rough surface of the unknown.
Audio: NotebookLM podcast on this topic.
Lawson: The Arsonist Who Burned the Map
Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. In a recent lecture, Lawson declared:
“You donât need truth to have a usable model.”
Hilary Lawson
Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.
Lawson's view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson's framing, is an “openness”: endlessly unfolding, resistant to total capture.
The Case of the Celestial Spheres
Take Aristotle's model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.
Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow's celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.
Summary Table: Lawson’s View on Models and Truth
Role of Models: Tools/metaphors for intervention, not truth
Truth: Not required for usefulness
Refinement: Models are improved for practical effectiveness
Reality: Fundamentally open, never fully captured by models
Implication: Focus on utility and adaptability, not final truth
Conclusion
Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.
We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.
A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning.

This is a follow-up to a recent post on the implausibility of free will.
Manipulability as Disproof
If your will can be altered without your consent, was it ever truly yours?
“If a button on the outside of your skull can change your morality, then where, exactly, is your autonomy hiding?”
Audio: NotebookLM podcast on this topic.
We've heard it all before:
“Sure, I'm influenced — but at the end of the day, I choose.” But what happens when that influence isn't influence, but modulation? What if your very sense of right and wrong — your willingness to forgive, to punish, to empathise — can be dialled like a radio station?
And what if you never know it's happening?
Your Morality Is Neurochemical
Studies using Transcranial Magnetic Stimulation (TMS) and Transcranial Direct Current Stimulation (tDCS) have shown that moral judgments can be shifted by stimulating the dorsolateral prefrontal cortex (DLPFC).
Turn it up: the subject becomes more utilitarian.
Turn it down: the subject becomes more emotionally reactive.
They make different decisions in the exact same scenarios, depending on which neural pathway is dominant.
The kicker?
They always explain their choices as though they had made them deliberately.
There is no awareness of the manipulation. Only a retrospective illusion of authorship.
A|B Testing the Soul
Let's run a thought experiment.
Scenario A: You're well-fed, calm, unprovoked.
Scenario B: You're hungry, cortisol-spiked, primed with images of threat.
Same moral dilemma. Different choice.
Query both versions of you, and both will offer coherent post hoc justifications. Neither suspects that their “will” was merely a biochemical condition in drag.
If both versions feel authentic, then neither can claim authority.
Your will is not sovereign. It's state-dependent. And if it changes without your knowledge, it was never really yours to begin with.
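The A|B experiment can be run as a toy function. This is entirely my own framing, not a model of any real neuroscience; the `cortisol` knob is simply a stand-in for Scenario A versus Scenario B.

```python
# A toy sketch of the A|B thought experiment above. Nothing here models
# real biology; the point is structural: same dilemma in, state-dependent
# verdict out, plus an equally confident first-person story either way.

def moral_choice(dilemma: str, cortisol: float) -> tuple[str, str]:
    """Return a (verdict, post hoc justification) pair for a dilemma."""
    if cortisol > 0.5:  # Scenario B: hungry, threat-primed
        verdict = "punish"
    else:               # Scenario A: well-fed, calm
        verdict = "forgive"
    # The justification is identical in form regardless of the state.
    story = f"I chose to {verdict} because it was obviously the right thing to do."
    return verdict, story

print(moral_choice("stranger takes your seat", cortisol=0.2)[0])  # forgive
print(moral_choice("stranger takes your seat", cortisol=0.9)[0])  # punish
```

Both calls present the same dilemma; only the state differs, and each version of "you" would report its answer as a deliberate choice.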
Even the Observer Is a Variable
To make matters worse: the person judging your decision is just as susceptible.
An irritated observer sees you as difficult. A relaxed one sees you as generous. The same action — different verdict.
And yet both observers think they are the neutral party. They are not. They are chemically calibrated hallucinations, mistaking their reaction for objective truth.
You're a Vending Machine, Not a Virtuoso
This isn't metaphor. It's architecture.
You input a stimulus.
The brain processes it using pre-loaded scripts, shaped by hormones, past trauma, fatigue, blood sugar, social context.
An output emerges.
Then the brain rationalises it, like a PR firm cleaning up after a CEO's impulse tweet.
Reason follows emotion. Emotion is involuntary. Therefore, your reasoning is not yours. It's a post-event explanation for something you didn't choose to feel.
TL;DR: If It Can Be Tweaked, It's Not Yours
Your “moral core” can be adjusted without your awareness.
You justify manipulated choices with total confidence.
Your assessors are equally chemically biased.
There is no neutral version of “you” — just shifting states with internal coherence.
And if your choices depend on state, and your state can be altered, then freedom is a costume worn by contingency.
This fits rather nicely into a recent theme I've been dissecting — The Dubious Art of Reasoning: Why Thinking Is Harder Than It Looks — particularly regarding the limitations of deductive logic built upon premises that are, shall we say, a tad suspect. So what's actually happening in Harris's tidy moral meat grinder?
Audio: NotebookLM podcast on this topic.
Let us begin at the root, the hallowed dogma no one dares blaspheme: the belief that life has value. Not just any value, mind you, but a sacred, irrefutable, axiomatic kind of value — the sort of thing whispered in holy tones and enshrined in constitutions, as though handed down by divine courier.
But let's not genuflect just yet. “Value” is not some transcendent essence; it's an economic artefact. Value, properly speaking, is something tested in a marketplace. So, is there a market for human life?
Historically, yes — but one doubts Harris is invoking the Atlantic slave trade or Victorian child labour auctions. No, what he's tapping into is a peculiarly modern, unexamined metaphysical presumption: that human beings possess inherent worth because, well, they simply must. We've sentimentalised supply and demand.
Now, this notion of worth — where does it come from? Let us not mince words: it's theological. It is the residue of religious metaphysics, the spiritual afterbirth of the soul. We're told that all souls are precious. All life is sacred. Cue the soft lighting and trembling organ chords. But if you strip away the divine scaffolding — and I suggest we do — then this “value” collapses like a soufflé in a thunderstorm. Without God, there is no soul; without soul, there is no sacredness. Without sacredness? Just meat. Glorified offal.
So what are we left with?
Null values. A society of blank spreadsheets, human lives as rows with no data in the ‘Value’ column. A radical equality of the meaningless.
Now let's take a darker turn — because why not, since we're already plumbing the ethical abyss. The anti-natalists, those morose prophets of philosophical pessimism, tell us not only that life lacks positive value, but that it is intrinsically a burden. A cosmic mistake. A raw deal. The moment one is born, the suffering clock starts ticking.
Flip the moral equation in The Death Lottery, and what you get is this: saving three lives is not a moral victory — it's a net increase in sentient suffering. If you kill one to save three, you've multiplied misery. Congratulations. You've created more anguish with surgical efficiency. And yet we call this a triumph of compassion?
According to this formulation, the ethical choice is not to preserve the many at the cost of the few. It is to accelerate the great forgetting. Reduce the volume of suffering, not its distribution.
But here's the deeper problem — and it's a trick of philosophical stagecraft: this entire thought experiment only becomes a “dilemma” if you first accept the premises. That life has value. That death is bad. That ethics is a numbers game. That morality can be conducted like a cost-benefit spreadsheet in a celestial boardroom.
Yet why do we accept these assumptions? Tradition? Indoctrination? Because they sound nice on a Hallmark card? These axioms go unexamined not because they are true, but because they are emotionally convenient. They cradle us in the illusion that we are important, that our lives are imbued with cosmic significance, that our deaths are tragedies rather than banal statistical certainties.
But the truth — the unvarnished, unmarketable truth — is that The Death Lottery is not a test of morality, but a test of credulity. A rigged game. An illusion dressed in the solemn robes of logic.
And like all illusions, it vanishes the moment you stop believing in it.

Let's deconstruct the metanarratives in play. First, we are told uncritically that life has value. Moreover, this value is generally positive. But all of this is a human construct. Value is an economic concept that can be tested in a marketplace. Is there a marketplace for humans? There have been slave marketplaces, but I'm pretty sure that's not what this aims for. There are wage and salary proxies. Again, I don't think this is what they are targeting.
This worth is metaphysical. But allow me to cut to the chase. This concept of worth has religious roots: the value of the soul, and all souls are precious, sacred, actually. One might argue that the body is expendable, but let's not go there. If we ignore the soul nonsense and dispense with the notion that humans have any inherent value not merely conjured, we are left with an empty set, all null values.
But let's go further. Given anti-natalist philosophy, conscious life not only lacks positive value but is inherently negative, at least ex ante. This reverses the maths — or flips the inequality sign — to render one greater than three. It's better to have only one suffering than three.
Ultimately, this is only a dilemma if one accepts the premises, and the only reason to do so is out of indoctrinated habit.
Postscript: Notes from the Abyss
David Benatar, in Better Never to Have Been, argues with pitiless logic that coming into existence is always a harm — that birth is a curse disguised as celebration. He offers no anaesthetic. Existence is pain; non-existence, the balm.
Peter Wessel Zapffe, the Norwegian prophet of philosophical despair, likened consciousness to a tragic evolutionary overreach — a cosmic misfire that left humanity acutely aware of its own absurdity, scrambling to muffle it with distraction, denial, and delusion. For him, the solution was elegant in its simplicity: do not reproduce. Shut the trapdoor before more souls tumble in.
And then there is Cioran, who did not so much argue as exhale. "It is not worth the bother of killing yourself, since you always kill yourself too late." He understood what the rest of us politely ignore — that life is a fever dream from which only death delivers.
So if the question is whether one life is worth more than three, we must first ask whether any of them were worth having in the first place.
The answer, for the brave few staring into the black, may be a shrug — or silence.
Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK — certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I found issue with the very notion that a relative, if not subjective, concept could be recovered.
The OP asks if The Hunger Games are a classic. I’d argue that they are a categorical classic, like Harry Potter, within the category of YA fiction.
Audio: NotebookLM podcast discussing this topic.
The Hunger Games doesn't depict the death of morality — it's a masterclass in how to twist it into a circus act.
Video: YouTube video that spawned this topic.
Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering â the grotesque contortion of ethical instincts into instruments of domination and spectacle.
Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.
This is not moral entropy; it's moral mutation.
Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.
This isn't the death of morality. It's a hostile takeover.
The Moral PR Machine
If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it's child-on-child murder as prime-time entertainment.
In Panem, the moral compass has not vanished. It's been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.
The citizens of the Capitol aren't monsters — they're consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that's what everyone else is doing — and, crucially, because they've been taught it's necessary. Necessary evils are the most seductive kind.
Katniss: Not a Saint, But a Saboteur
Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn't preach. She doesn't have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.
She isn't here to restore some lost golden age of decency. She's here to tear down the current script and refuse to read her lines.
Her defiance is dangerous not because it's moral in some abstract, universal sense — but because it disrupts the Capitol's moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.
So, Has Morality Declined?
Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem's horror is not a fall from grace, but a recalibration of what counts as "grace" in the first place.
And that's the real horror, isn't it? Not that morality has collapsed — but that it still exists, and it likes what it sees.
Conclusion: The Real Hunger
The Hunger Games is not about a society starved of morality — it's about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.
If you feel sickened by what you see in Panem, it's not because morality has vanished. It's because it is alive, well, and enjoying the show.
There is a kind of political necromancy afoot in modern discourse—a dreary chant murmured by pundits, CEOs, and power-drunk bureaucrats alike: “It's just human nature.” As if this incantation explains, excuses, and absolves all manner of violent absurdities. As if, by invoking the mystic forces of evolution or primal instinct, one can justify the grotesque state of things. Income inequality? Human nature. War? Human nature. Corporate psychopathy? Oh, sweetie, it's just how we're wired.
What a convenient mythology.
Audio: NotebookLM podcast on this topic.
If “human nature” is inherently brutish and selfish, then resistance is not only futile, it is unnatural. The doctrine of dominance gets sanctified, the lust to rule painted as destiny rather than deviance. Meanwhile, the quiet, unglamorous yearning of most people—to live undisturbed, to coöperate rather than conquer—is dismissed as naïve, childish, and unrealistic. How curious that the preferences of the vast majority are always sacrificed at the altar of some aggressive minority's ambitions.
Let us dispense with this dogma. The desire to dominate is not a feature of human nature writ large; it is a glitch exploited by systems that reward pathological ambition. Most of us would rather not be ruled, and certainly not managed by glorified algorithms in meat suits. The real human inclination, buried beneath centuries of conquest and control, is to live in peace, tend to our gardens, and perhaps be left the hell alone.
And yet, we are not. Because there exists a virulent cohort—call them oligarchs, executives, generals, kings—whose raison d'être is the acquisition and consolidation of power. Not content to build a life, they must build empires. Not content to share, they must extract. They regard the rest of us as livestock: occasionally troublesome, but ultimately manageable.
To pacify us, they offer the Social Contract™—a sort of ideological bribe that says, “Give us your freedom, and we promise not to let the wolves in.” But what if the wolves are already inside the gates, wearing suits and passing legislation? What if the protection racket is the threat itself?
So no, it is not “human nature” that is the problem. Cancer is natural, too, but we don't celebrate its tenacity. We treat it, research it, and fight like hell to survive it. Likewise, we must treat pathological power-lust not as an inevitability to be managed but as a disease to be diagnosed and dismantled.
The real scandal isn't that humans sometimes fail to coöperate. It's that we're constantly told we're incapable of it by those whose power depends on keeping it that way.
Let the ruling classes peddle their myths. The rest of us might just choose to write new ones.
The Illusion of Clarity in a World of Cognitive Fog
Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like "inductive" and "deductive" as if they're interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.
Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.
Deduction: The Tyrant of Certainty
Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.
Example:
All humans are mortal.
Socrates is human.
Therefore, Socrates is mortal.
Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredientsâit just cooks with whatever it’s given. Garbage in, garbage out.
Strength: Valid conclusions from valid premises. Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically sound.
Induction: The Gambler’s Gospel
Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.
Example:
The sun rose today.
It rose yesterday.
It has risen every day I’ve been alive.
Therefore, the sun will rise tomorrow.
Touching, isn’t it? Unfortunately, induction doesn’t prove anythingâit suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.
Strength: Empirically rich and adaptive. Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.
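The black-swan failure mode is just as easy to run. Again, a toy of my own devising, not the author's: a generaliser that is confident right up until the sample betrays it.

```python
# Toy illustration: induction generalises from a sample and is only as
# good as that sample. One counterexample detonates the universal claim.

def inductive_rule(observations: list[str]) -> str:
    """Generalise 'all swans are X' from the colours observed so far."""
    colours = set(observations)
    if len(colours) == 1:
        return f"All swans are {colours.pop()}"
    return "No universal rule survives the evidence"

european_sample = ["white"] * 1000
print(inductive_rule(european_sample))              # All swans are white

# Australia coughs one up.
print(inductive_rule(european_sample + ["black"]))  # No universal rule survives the evidence
```

A thousand confirmations built the rule; a single observation demolished it, which is Hume's complaint in miniature.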
Abduction: Sherlock Holmes’ Drug of Choice
Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.
Example:
The lawn is wet.
It probably rained.
It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.
Strength: Useful in messy, real-world contexts. Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
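The subjectivity of "best" is easy to make concrete. In this sketch (mine, not the post's), the plausibility numbers are simply invented priors, which is precisely the problem the weakness describes.

```python
# Toy illustration: abduction as "pick the most plausible explanation".
# The plausibility scores below are numbers I made up -- i.e. prejudices
# dressed as probabilities, which is the weakness in action.

def abduce(evidence: str, hypotheses: dict[str, float]) -> str:
    """Return the hypothesis with the highest assigned plausibility.

    The 'evidence' parameter is decorative here, much as it often is
    in practice: the priors do all the work.
    """
    return max(hypotheses, key=hypotheses.get)

wet_lawn_explanations = {
    "it rained": 0.7,            # simplest, most familiar
    "sprinkler ran": 0.2,
    "divine intervention": 0.0001,
}
print(abduce("the lawn is wet", wet_lawn_explanations))  # it rained
```

Change the priors and the "best explanation" changes with them; the inference is only as honest as the numbers you feed it.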
The Modern Reasoning Crisis: Why We’re All Probably Wrong
Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.
Consider how we navigate information today:
We encounter a headline that confirms our worldview
We accept it without scrutiny (deductive failure)
We see similar headlines repeatedly (inductive trap)
We conclude our worldview is objectively correct (abductive collapse)
And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.
Weakness by Analogy: The Reasoning CafĂŠ
Imagine a cafĂŠ.
Deduction is the customer who checks the menu and confidently orders “Soup of the Day,” because the chalkboard says “Today’s Soup is Tomato,” and she trusts chalkboards.
Induction is the one who has had tomato soup every Wednesday for months and assumes it'll be tomato today again—until it isn't, and now he's wearing bisque.
Abduction sees the waiter carrying bowls of red liquid to every table and infers it's probably tomato soup, orders it, and gets… gazpacho. Ice-cold disappointment.
All three are trying to reason. Only one might get lunch.
The Meta-Problem: Reasoning About Reasoning
The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.
This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.
Conclusion: Reason Is Not a Guarantee, It’s a Wager
None of these modalities offer omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s skepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.
Yet we must reason. We must argue. We must inferâdespite the metaphysical vertigo.
The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.
Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility. Who recognize that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.
So reason on, fellow thinkers. Just don’t be too smug about it.
They say no one escapes the Spectacle. Guy Debord made sure of that. His vision was airtight, his diagnosis terminal: we are all spectators now, alienated from our labour, our time, our own damn lives. It was a metaphysical mugging—existence held hostage by images, by commodities dressed in drag. The future was a feedback loop, and we were all doomed to applaud.
Audio: NotebookLM podcast on this topic. Apologies in advance for the narrators’ mangling of the pronunciation of ‘Guy Debord’.
But what if the loop could be hacked? What if the infinitely halved distances of motionless critique—Zeno's Paradox by way of Marx—could finally be crossed?
Enter: Yanis Varoufakis. Economist, ex-finance minister, techno-cassandra with a motorbike and a vendetta. Where Debord filmed the catastrophe in black-and-white, Varoufakis showed up with the source code.
Debord's Limbo
Debord saw it all coming. The substitution of reality with its photogenic simulacrum. The slow death of agency beneath the floodlights of consumption. But like Zeno's paradox, he could only gesture toward the end without ever reaching it. Each critique halved the distance to liberation but never arrived. The Spectacle remained intact, omnipresent, and self-replicating—like an ontological screensaver.
He gave us no path forward, only a beautiful, ruinous analysis. A Parisian shrug of doom.
Varoufakis' Shortcut
But then comes Varoufakis, breaking through the digital labyrinth not by philosophising the Spectacle, but by naming its successor: Technofeudalism.
See, Debord was chasing a moving target—a capitalism that morphed from industrial to financial to semiotic faster than his prose could crystallise. But Varoufakis caught it mid-mutation. He pinned it to the slab and sliced it open. What spilled out wasn't capital anymore—it was rent. Platform rent. Algorithmic tolls. Behavioural taxes disguised as convenience. This isn't the market gone mad—it's the market dissolved, replaced by code-based fiefdoms.
The paradox is resolved not by reaching utopia, but by realising we've already crossed the line—we just weren't told. The market isn't dying; it's already dead, and we're still paying funeral costs in monthly subscriptions and attention metrics.
From Spectacle to Subjugation
Debord wanted to unmask the performance. Varoufakis realised the theatre had been demolished and replaced with a server farm.
You don't watch the Spectacle anymore. It watches you. It optimises you. It learns your keystrokes, your pulse rate, your browsing history. Welcome to feudal recursion, where Amazon is your landlord, Google your priest, and Meta your confessor.
Solving Zeno the Varoufakis Way
So how does one cross the infinite regress of alienation? Simple. You call it what it is. You reclassify the terrain.
“This is not capitalism,” Varoufakis says, in the tone of a man pulling a mask off a Scooby-Doo villain. “It's technofeudalism. Capital didn't win. It went feudal. Again.”
By doing so, he bypasses the academic ballet that has critics forever inching closer to the truth without touching it. He calls the system new, not to sell books, but to make strategy possible. Because naming a beast is the first step in slaying it.
In Conclusion: Debord Dreamed, Varoufakis Drives
Debord haunts the museum. Varoufakis raids the server room. Both are essential. But only one gives us a new map.
The Spectacle hypnotised us. Technofeudalism enslaves us. And if there's a way out, it won't be through slogans spray-painted on Parisian walls. It will be built in code, deployed across decentralised networks, and carried forward by those who remember what it meant to be not watched.
Let Debord whisper. Let Varoufakis roar. And let the rest of us sharpen our blades.
Post-COVID, we're told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.
Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?
Audio: NotebookLM podcast discussing this topic.
Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let's not be so credulous. This isn't merely a crisis of trust—it's a crisis of theatre.
“The Science” as Ventriloquism
Let's begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It's not a holy writ passed down by lab-coated Levites. It's a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.
In Nature Human Behaviour paper (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What they fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?
Trust didn't just erode. It was actively incinerated in a bonfire of institutional vanity.
A Crisis of Influence, Not Integrity
The second Nature commentary (2025) wrings its hands over "why trust in science is declining," as if the populace has suddenly turned flat-Earth overnight. But the real story isn't a decline in trust per se; it's a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes, exhausted citizens armed with Wi-Fi and anxiety, have joined the fray.
Science hasn't lost truth; it's lost control. And frankly, perhaps it shouldn't have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple, especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.
Technocracy's Fatal Hubris
What we saw wasn't science failing; it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing had happened. Orwell would have a field day.
This wasn't a failure of the scientific method. It was a failure of scientific messaging: an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.
Toward a Post-Scientistic Public Sphere
So where does that leave us? Perhaps we need to kill the idol of "The Science" to resurrect a more mature relationship with scientific discourse: one that tolerates ambiguity, embraces dissent, and admits when the data isn't in. Science, done properly, is the art of saying "we don't know… yet."
The pandemic didn't erode trust in science. It exposed how fragile our institutional credibility scaffolding really is, and how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.
The answer isn't more science communication; it's less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.
Conclusion
Trust in science isn't dead. But trust in those who claim to speak for science? That's another matter. Perhaps it's time to separate the two.
By the time we reach Chapter Seven of Technofeudalism: What Kills Capitalism, Yanis Varoufakis drops the ledger sheets and spreadsheets and starts sketching utopia in crayon. Entitled Escape from Technofeudalism, it proposes (brace yourself) workplace democracy. It's aspirational, yes. Compelling? Not particularly. Especially if, like me, you've long since stopped believing that democracy is anything more than a feel-good placebo for structural impotence.
Audio: NotebookLM podcast discussing this topic.
To be clear: the preceding chapters, particularly the first six, are sharp, incisive, and frankly blistering in their indictment of today's economic disfiguration. But Chapter Seven? It's less an escape plan, more a group therapy session masquerading as an operational model.
So let's take his proposal for Democratised Companies apart, one charming layer at a time.
"Imagine a corporation in which every employee has a single share that they receive when hired…"
Splendid. One person, one vote. Adorable.
"All decisions – hiring, promotion, research, product development, pricing, strategy – are taken collectively…"
Because there's nothing more efficient than a hiring committee composed of thirty engineers, two janitors, a receptionist, and Steve from Accounts, whose main contribution is passive-aggressive sighing.
"…with each employee exercising their vote via the company's intranet…"
Marvellous. We've now digitised the tyranny of the majority and can timestamp every idiotic decision for posterity.
"Equal ownership does not, however, mean equal pay."
A relief. Until it doesn't.
"Pay is determined by a democratic process that divides the company's post-tax revenues into four slices…"
Here, dear reader, is where the cake collapses. Why, precisely, should a randomly assembled group of employees, with wildly varying financial literacy, be entrusted to divide post-tax revenue like it's a birthday cake at a toddler's party?
And how often are these slices recalibrated? Each fiscal year? Every time someone is hired or fired? Do we amend votes quarterly or wait until the economic ship has already struck an iceberg?
Varoufakis does suggest preference voting to tackle allocation disputes:
"Any proposal to increase one slice must be accompanied by a proposal to reduce expenditure on one or more of the other slices…"
Fine. In theory, algorithmic voting procedures sound neat. But they presume voters are rational, informed, and cooperative. If you've ever seen a corporate Slack thread devolve into emoji warfare, you'll know that this is fiction on par with unicorns and meritocracy.
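To be fair, the zero-sum rule Varoufakis quotes is at least mechanically checkable, even if the voters aren't. Here is a minimal sketch of that constraint in Python; the slice names and the validation logic are my own illustrative assumptions, not anything the book specifies.

```python
# Toy model of the "four slices" budget vote. The zero-sum rule (any increase
# must be offset by equal decreases) paraphrases the quoted proposal; the
# slice names and validation code below are illustrative assumptions.

SLICES = ["fixed_costs", "investment", "basic_pay", "bonuses"]  # hypothetical names

def apply_proposal(current, proposal):
    """Apply a reallocation proposal to the slice fractions.

    Increases must be exactly offset by decreases, and no slice
    may go negative.
    """
    if abs(sum(proposal.values())) > 1e-9:
        raise ValueError("increases must be offset by equal decreases")
    updated = {s: current[s] + proposal.get(s, 0.0) for s in SLICES}
    if any(v < 0 for v in updated.values()):
        raise ValueError("a slice cannot go negative")
    return updated

shares = {"fixed_costs": 0.40, "investment": 0.20, "basic_pay": 0.30, "bonuses": 0.10}
shares = apply_proposal(shares, {"investment": 0.05, "bonuses": -0.05})
```

Note what the sketch cannot model: who writes the proposals, how often the vote runs, and whether anyone reads the agenda before clicking.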
"The basic pay slice is then divided equally among all staff – from persons recently employed as secretaries or cleaners to the firm's star designers or engineers."
Ah yes, the "equality" bit. Equal pay, unequal contribution. This isn't egalitarianism; it's enforced mediocrity. It might work in a monastery. Less so in a competitive tech firm where innovation requires both vision and differentiated incentive.
Now, on to bonuses, which are democratically determined by:
"…employees each given one hundred digital tokens to distribute among their colleagues…"
Welcome to Black Mirror: Workplace Edition. This is less economics, more playground politics. Who gets tokens? The charismatic chatterbox in the break room? The person who shared their lunch? The ghost employee who never shows up but emails back promptly?
And how, pray tell, does one evaluate the receptionist's contribution relative to the lead engineer's or the janitor's? This isn't peer review; it's populism with a smiley face.
We've all seen "Teacher of the Year" competitions turn into contests of who had the cutest class poster or best cupcakes. Now imagine your livelihood depending on it.
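For completeness, the token arithmetic itself is trivial; the book gives only the token count, so the pro-rata payout and the no-self-allocation rule in this sketch are my assumptions, added purely for illustration.

```python
# Toy reading of the "one hundred tokens" bonus mechanism. Only the token
# count comes from the quoted text; the pro-rata payout and the rule that
# you cannot reward yourself are illustrative assumptions.

def bonus_split(allocations, pot):
    """allocations maps each giver to {recipient: tokens}; returns payouts."""
    received = {}
    for giver, grants in allocations.items():
        if sum(grants.values()) != 100:
            raise ValueError(f"{giver} must allocate exactly 100 tokens")
        if giver in grants:
            raise ValueError(f"{giver} cannot reward themselves")
        for recipient, tokens in grants.items():
            received[recipient] = received.get(recipient, 0) + tokens
    total = sum(received.values())
    return {person: pot * tokens / total for person, tokens in received.items()}

allocations = {
    "ana":  {"bob": 60, "cleo": 40},
    "bob":  {"ana": 70, "cleo": 30},
    "cleo": {"ana": 50, "bob": 50},
}
payouts = bonus_split(allocations, pot=30_000.0)
```

The code is the easy part. Everything the text objects to, charisma, lunch-sharing, prompt emailing, lives inside the `allocations` dict, where no validator can reach it.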
In summary, democracy in the workplace may sound noble, but in practice it's the bureaucratic equivalent of herding caffeinated cats. It doesn't even work in small groups, let alone an organisation of hundreds. Democracy, when applied to every function of an enterprise, is not liberation; it's dilution. It's design-by-committee, strategy-by-consensus, and ultimately, excellence-by-accident.
Escape from Technofeudalism? Perhaps. But not by replacing corporate lords with intranet polls and digital tokens. That's not an exit strategy; it's a cosplay of collectivism.