Yuval Noah Harari, always ready with a digestible morsel for the TED-addled masses, recently declared that “democracy runs on trust, dictatorship on terror.” It’s a line with the crispness of a fortune cookie and about as much analytical depth. Designed for applause, not interrogation, it’s the sort of soundbite that flatters liberal sensibilities while sanding off the inconvenient edges of history.
Audio: NotebookLM podcast on this topic.
Let’s be honest: this dichotomy is not merely simplistic – it’s a rhetorical sedative. It reassures those who still believe political systems are like kitchen appliances: plug-and-play models with clear instructions and honest warranties. But for anyone who’s paid attention to the actual mechanics of power, this framing is delusional.
1. Trust Was Never Earned
In the United States, trust in democratic institutions was never some noble compact forged through mutual respect and enlightened governance. It was cultivated through exclusion, propaganda, and economic bribery. The post-WWII boom offered the illusion of institutional legitimacy – but only if you were white, male, middle-class, and preferably asleep.
Black Americans, Indigenous peoples, immigrants, women – none were granted the luxury of naïve trust. They were told to trust while being actively disenfranchised. To participate while being systemically excluded. So no, Harari, the machine didn’t run on trust. It ran on marketing. It ran on strategic ignorance.
2. Dictatorship Doesn’t Require Terror
Equally cartoonish is the notion that dictatorships subsist purely on terror. Many of them run quite comfortably on bureaucracy, passive conformity, and the grim seduction of order. Authoritarians know how to massage the same trust reflexes as democracies – only more bluntly. People don’t just obey out of fear. They obey out of habit. Out of resignation. Out of a grim kind of faith that someone – anyone – is in charge.
Dictatorships don’t extinguish trust. They re-route it. Away from institutions and toward strongmen. Toward myths of national greatness. Toward performative stability. It’s not that terror is absent—it’s just not the whole machine. The real engine is misplaced trust.
3. Collapse Is Bipartisan
The present moment isn’t about the erosion of a once-trustworthy system. It’s the slow-motion implosion of a confidence game on all sides. The old liberal institutions are collapsing under the weight of their hypocrisies. But the loudest critics – tech messiahs, culture warriors, authoritarian nostalgists – are no better. Their solutions are just new brands of snake oil in sleeker bottles.
Everyone is pointing fingers, and no one is credible. The public, caught between cynicism and desperation, gravitates either toward restoration fantasy (“make democracy work again”) or authoritarian theatre (“at least someone’s doing something”). Both are dead ends.
4. The Only Way Forward: Structural Reimagination
The only viable path isn’t restoration or regression. It’s reinvention. Systems that demand unconditional trust – like religions and stock markets – are bound to fail, because they rely on sustained illusions. Instead, we need systems built on earned, revocable, and continually tested trust – systems that can survive scrutiny, decentralise power, and adapt to complexity.
In other words: stop trying to repair a house built on sand. Build something else. Something messier, more modular, less mythological.
Let the TED crowd have their slogans. We’ve got work to do.
Why the cherished myth of human autonomy dissolves under the weight of our own biology
We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.
But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?
Audio: NotebookLM podcast on this topic.
I. The Myth of Autonomy: A Beautiful Delusion
Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?
Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:
The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.
The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.
Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.
Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.
Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.
You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.
II. The DLPFC: Puppet Master of Moral Choice
The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.
And yet this supposed seat of autonomy can be manipulated with disturbing ease.
When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.
Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.
If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?
III. Hormonal Puppetmasters: The Will in Your Bloodstream
Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.
Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”
Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.
These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.
IV. The Accident of Will: Uncomfortable Conclusions
If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?
Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.
What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.
This lens forces unsettling questions:
How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?
A Compatibilist Rebuttal? Not So Fast.
Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.
If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.
What’s Left?
Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.
The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.
For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.
Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I took issue with the very notion that a relative, if not subjective, concept could be recovered.
The OP asks if The Hunger Games are a classic. I’d argue that they are a categorical classic, like Harry Potter, within the category of YA fiction.
Audio: NotebookLM podcast discussing this topic.
The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.
Video: YouTube video that spawned this topic.
Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.
Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.
This is not moral entropy; it’s moral mutation.
Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.
This isn’t the death of morality. It’s a hostile takeover.
The Moral PR Machine
If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.
In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.
The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.
Katniss: Not a Saint, But a Saboteur
Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.
She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.
Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.
So, Has Morality Declined?
Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.
And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.
Conclusion: The Real Hunger
The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.
If you feel sickened by what you see in Panem, it’s not because morality has vanished.
There is a kind of political necromancy afoot in modern discourse—a dreary chant murmured by pundits, CEOs, and power-drunk bureaucrats alike: “It’s just human nature.” As if this incantation explains, excuses, and absolves all manner of violent absurdities. As if, by invoking the mystic forces of evolution or primal instinct, one can justify the grotesque state of things. Income inequality? Human nature. War? Human nature. Corporate psychopathy? Oh, sweetie, it’s just how we’re wired.
What a convenient mythology.
Audio: NotebookLM podcast on this topic.
If “human nature” is inherently brutish and selfish, then resistance is not only futile, it is unnatural. The doctrine of dominance gets sanctified, the lust to rule painted as destiny rather than deviance. Meanwhile, the quiet, unglamorous yearning of most people—to live undisturbed, to coöperate rather than conquer—is dismissed as naïve, childish, and unrealistic. How curious that the preferences of the vast majority are always sacrificed at the altar of some aggressive minority’s ambitions.
Let us dispense with this dogma. The desire to dominate is not a feature of human nature writ large; it is a glitch exploited by systems that reward pathological ambition. Most of us would rather not be ruled, and certainly not managed by glorified algorithms in meat suits. The real human inclination, buried beneath centuries of conquest and control, is to live in peace, tend to our gardens, and perhaps be left the hell alone.
And yet, we are not. Because there exists a virulent cohort—call them oligarchs, executives, generals, kings—whose raison d’être is the acquisition and consolidation of power. Not content to build a life, they must build empires. Not content to share, they must extract. They regard the rest of us as livestock: occasionally troublesome, but ultimately manageable.
To pacify us, they offer the Social Contract™—a sort of ideological bribe that says, “Give us your freedom, and we promise not to let the wolves in.” But what if the wolves are already inside the gates, wearing suits and passing legislation? What if the protection racket is the threat itself?
So no, it is not “human nature” that is the problem. Cancer is natural, too, but we don’t celebrate its tenacity. We treat it, research it, and fight like hell to survive it. Likewise, we must treat pathological power-lust not as an inevitability to be managed but as a disease to be diagnosed and dismantled.
The real scandal isn’t that humans sometimes fail to coöperate. It’s that we’re constantly told we’re incapable of it by those whose power depends on keeping it that way.
Let the ruling classes peddle their myths. The rest of us might just choose to write new ones.
They say no one escapes the Spectacle. Guy Debord made sure of that. His vision was airtight, his diagnosis terminal: we are all spectators now, alienated from our labour, our time, our own damn lives. It was a metaphysical mugging—existence held hostage by images, by commodities dressed in drag. The future was a feedback loop, and we were all doomed to applaud.
Audio: NotebookLM podcast on this topic. Apologies in advance for the narrators’ mangling of the pronunciation of ‘Guy Debord’.
But what if the loop could be hacked? What if the infinitely halved distances of motionless critique—Zeno’s Paradox by way of Marx—could finally be crossed?
Enter: Yanis Varoufakis. Economist, ex-finance minister, techno-cassandra with a motorbike and a vendetta. Where Debord filmed the catastrophe in black-and-white, Varoufakis showed up with the source code.
Debord’s Limbo
Debord saw it all coming. The substitution of reality with its photogenic simulacrum. The slow death of agency beneath the floodlights of consumption. But like Zeno’s paradox, he could only gesture toward the end without ever reaching it. Each critique halved the distance to liberation but never arrived. The Spectacle remained intact, omnipresent, and self-replicating—like an ontological screensaver.
He gave us no path forward, only a beautiful, ruinous analysis. A Parisian shrug of doom.
Varoufakis’ Shortcut
But then comes Varoufakis, breaking through the digital labyrinth not by philosophising the Spectacle, but by naming its successor: Technofeudalism.
See, Debord was chasing a moving target—a capitalism that morphed from industrial to financial to semiotic faster than his prose could crystallise. But Varoufakis caught it mid-mutation. He pinned it to the slab and sliced it open. What spilled out wasn’t capital anymore—it was rent. Platform rent. Algorithmic tolls. Behavioural taxes disguised as convenience. This isn’t the market gone mad—it’s the market dissolved, replaced by code-based fiefdoms.
The paradox is resolved not by reaching utopia, but by realising we’ve already crossed the line—we just weren’t told. The market isn’t dying; it’s already dead, and we’re still paying funeral costs in monthly subscriptions and attention metrics.
From Spectacle to Subjugation
Debord wanted to unmask the performance. Varoufakis realised the theatre had been demolished and replaced with a server farm.
You don’t watch the Spectacle anymore. It watches you. It optimises you. It learns your keystrokes, your pulse rate, your browsing history. Welcome to feudal recursion, where Amazon is your landlord, Google your priest, and Meta your confessor.
Solving Zeno the Varoufakis Way
So how does one cross the infinite regress of alienation? Simple. You call it what it is. You reclassify the terrain.
“This is not capitalism,” Varoufakis says, in the tone of a man pulling a mask off a Scooby-Doo villain. “It’s technofeudalism. Capital didn’t win. It went feudal. Again.”
By doing so, he bypasses the academic ballet that has critics forever inching closer to the truth without touching it. He calls the system new, not to sell books, but to make strategy possible. Because naming a beast is the first step in slaying it.
In Conclusion: Debord Dreamed, Varoufakis Drives
Debord haunts the museum. Varoufakis raids the server room. Both are essential. But only one gives us a new map.
The Spectacle hypnotised us. Technofeudalism enslaves us. And if there’s a way out, it won’t be through slogans spray-painted on Parisian walls. It will be built in code, deployed across decentralised networks, and carried forward by those who remember what it meant to be not watched.
Let Debord whisper. Let Varoufakis roar. And let the rest of us sharpen our blades.
Post-COVID, we’re told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.
Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?
Audio: NotebookLM podcast discussing this topic.
Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let’s not be so credulous. This isn’t merely a crisis of trust—it’s a crisis of theatre.
“The Science” as Ventriloquism
Let’s begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It’s not a holy writ passed down by lab-coated Levites. It’s a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.
A Nature Human Behaviour paper (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What the authors fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?
Trust didn’t just erode. It was actively incinerated in a bonfire of institutional vanity.
A Crisis of Influence, Not Integrity
The second Nature commentary (2025) wrings its hands over “why trust in science is declining,” as if the populace has suddenly turned flat-Earth overnight. But the real story isn’t a decline in trust per se; it’s a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes—exhausted citizens armed with Wi-Fi and anxiety—have joined the fray.
Science hasn’t lost truth—it’s lost control. And frankly, perhaps it shouldn’t have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple—especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.
Technocracy’s Fatal Hubris
What we saw wasn’t science failing—it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing happened. Orwell would have a field day.
This wasn’t a failure of scientific method. It was a failure of scientific messaging—an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.
Toward a Post-Scientistic Public Sphere
So where does that leave us? Perhaps we need to kill the idol of “The Science” to resurrect a more mature relationship with scientific discourse—one that tolerates ambiguity, embraces dissent, and admits when the data isn’t in. Science, done properly, is the art of saying “we don’t know… yet.”
The pandemic didn’t erode trust in science. It exposed how fragile our institutional credibility scaffolding really is—how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.
The answer isn’t more science communication—it’s less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.
Conclusion
Trust in science isn’t dead. But trust in those who claim to speak for science? That’s another matter. Perhaps it’s time to separate the two.
By the time we reach Chapter Seven of Technofeudalism: What Kills Capitalism, Yanis Varoufakis drops the ledger sheets and spreadsheets and starts sketching utopia in crayon. Entitled Escape from Technofeudalism, it proposes—brace yourself—a workplace democracy. It’s aspirational, yes. Compelling? Not particularly. Especially if, like me, you’ve long since stopped believing that democracy is anything more than a feel-good placebo for structural impotence.
Audio: NotebookLM podcast discussing this topic.
To be clear: the preceding chapters, particularly the first six, are sharp, incisive, and frankly, blistering in their indictment of today’s economic disfiguration. But Chapter Seven? It’s less an escape plan, more a group therapy session masquerading as an operational model.
So let’s take his proposal for Democratised Companies apart, one charming layer at a time.
“Imagine a corporation in which every employee has a single share that they receive when hired…”
Splendid. One person, one vote. Adorable.
“All decisions – hiring, promotion, research, product development, pricing, strategy – are taken collectively…”
Because there’s nothing more efficient than a hiring committee comprised of thirty engineers, two janitors, a receptionist, and Steve from Accounts, whose main contribution is passive-aggressive sighing.
“…with each employee exercising their vote via the company’s intranet…”
Marvellous. We’ve now digitised the tyranny of the majority and can timestamp every idiotic decision for posterity.
“Equal ownership does not, however, mean equal pay.”
A relief. Until it doesn’t.
“Pay is determined by a democratic process that divides the company’s post-tax revenues into four slices…”
Here, dear reader, is where the cake collapses. Why, precisely, should a randomly assembled group of employees—with wildly varying financial literacy—be entrusted to divide post-tax revenue like it’s a birthday cake at a toddler’s party?
And how often are these slices recalibrated? Each fiscal year? Every time someone is hired or fired? Do we amend votes quarterly or wait until the economic ship has already struck an iceberg?
Varoufakis does suggest preference voting to tackle allocation disputes:
“Any proposal to increase one slice must be accompanied by a proposal to reduce expenditure on one or more of the other slices…”
Fine. In theory, algorithmic voting procedures sound neat. But they presume voters are rational, informed, and cooperative. If you’ve ever seen a corporate Slack thread devolve into emoji warfare, you’ll know that this is fiction on par with unicorns and meritocracy.
“The basic pay slice is then divided equally among all staff – from persons recently employed as secretaries or cleaners to the firm’s star designers or engineers.”
Ah yes, the ‘equality’ bit. Equal pay, unequal contribution. This isn’t egalitarianism—it’s enforced mediocrity. It might work in a monastery. Less so in a competitive tech firm where innovation requires both vision and differentiated incentive.
Now, on to bonuses, which are democratically determined by:
“…employees each given one hundred digital tokens to distribute among their colleagues…”
Welcome to Black Mirror: Workplace Edition. This is less economics, more playground politics. Who gets tokens? The charismatic chatterbox in the break room? The person who shared their lunch? The ghost employee who never shows up but emails back promptly?
And how, pray tell, does one evaluate the receptionist’s contribution relative to the lead engineer’s or the janitor’s? This isn’t peer review—it’s populism with a smiley face.
We’ve all seen “Teacher of the Year” competitions turn into contests of who had the cutest class poster or best cupcakes. Now imagine your livelihood depending on it.
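To be fair to Varoufakis, the token scheme is at least easy to specify, which is rather the problem: the arithmetic is trivial and the politics is everything. Here is a minimal sketch in Python of the pro-rata payout his description implies. The employee names, the pool size, and the rules that givers may not allocate tokens to themselves or spend more than 100 are my illustrative assumptions, not details from the book:

```python
def distribute_bonuses(allocations, bonus_pool):
    """Split a bonus pool pro rata by tokens received.

    allocations: {giver: {recipient: tokens}}, each giver holding 100 tokens.
    Returns {recipient: payout}.
    """
    received = {}
    for giver, grants in allocations.items():
        # Assumed rules: no over-spending, no voting for yourself.
        assert sum(grants.values()) <= 100, f"{giver} over-allocated"
        assert giver not in grants, f"{giver} cannot self-allocate"
        for recipient, tokens in grants.items():
            received[recipient] = received.get(recipient, 0) + tokens
    total = sum(received.values())
    return {who: bonus_pool * t / total for who, t in received.items()}


payouts = distribute_bonuses(
    {"ana": {"bo": 60, "cy": 40},
     "bo": {"ana": 100},
     "cy": {"ana": 50, "bo": 50}},
    bonus_pool=10_000,
)
# ana receives 150/300 of the pool, bo 110/300, cy 40/300
```

Note what the sketch makes obvious: the mechanism rewards whoever attracts tokens, not whoever creates value, which is precisely the cupcake-contest objection above.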
In summary, democracy in the workplace may sound noble, but in practice, it’s the bureaucratic equivalent of herding caffeinated cats. It doesn’t even work in small groups, let alone an organisation of hundreds. Democracy—when applied to every function of an enterprise—is not liberation; it’s dilution. It’s design-by-committee, strategy-by-consensus, and ultimately, excellence-by-accident.
Escape from Technofeudalism? Perhaps. But not by replacing corporate lords with intranet polls and digital tokens. That’s not an exit strategy—it’s a cosplay of collectivism.
History doesn’t repeat itself, but, my God, it certainly rhymes — badly, and in the case of America’s self-immolation vis-à-vis China, completely off-key.
Yanis Varoufakis’ brutal dissection in Technofeudalism reads like a coronial report on the West’s terminal idiocy: We’re not watching the rise of a “new China threat” — we’re watching the dying spasms of a clownish empire losing to its own creation: cloud capital.
Audio: NotebookLM podcast on this topic. NB: The announcers presented my commentary on Varoufakis as my own ideas, when I am simply summarising and editorialising.
A Recap for the Attention-Deficit West:
Once upon a time (i.e., post-WWII), America ran a magnificent scam: sell the world things — aeroplanes, refrigerators, good old-fashioned stuff — in exchange for gold. When America became a deficit country (buying far more than it sold), it pivoted brilliantly: “No more gold, peasants. Here, have an IOU instead.” Thus was born the Dollar Empire: a global system where America got to run enormous deficits, foreigners got paper promises, and everyone smiled through gritted teeth.
Fast-forward: Japan, Korea, China — they got in line. They built things, exported things, grew rich — and recycled all their lovely profits straight into American property, debt, and Wall Street snake oil. Win-win! (Except for the workers on both sides, who were flogged like medieval peasants, but who’s counting?)
The Minotaur Has a Stroke
Then came 2008: America’s financial system committed hara-kiri on live television. China stepped up to save global capitalism (yes, really), jacking up investment to absurd levels, buying up Western assets, and quietly building something far more dangerous than steelworks and solar panels: cloud finance.
While the West was still dry-humping neoliberal fantasies about “free markets,” China fused Big Tech and Big Brother into a seamless, sprawling surveillance-finance-entertainment-behavioural-modification apparatus. Think Facebook, Amazon, Citibank, your GP, your car insurance, and your government — all rolled into one app. Welcome to WeChat World™ — population: everyone.
The New Cold War: Idiots vs Strategists
Enter Trump. And Biden. And the bipartisan realisation, delivered with all the subtlety of a pub brawl, that China’s Big Tech wasn’t just mimicking Silicon Valley — it was obliterating it. TikTok wasn’t just teenagers dancing. It was dollar extraction without the need for US trade deficits or dollar supremacy.
Cue blind panic. Ban Huawei! Ban TikTok! Ban chips! Ban thought! Meanwhile, Beijing smiled, nodded, and built its own chips, its own cloud, its own digital currency. When the US froze Russian central bank assets in 2022, it unwittingly told every finance minister from Delhi to Dakar: “Your money isn’t safe with us.” Oops.
The Chinese digital yuan — once a quaint science project — suddenly looked like the lifeboat on a burning ship. Guess which way the rats are swimming?
Europe: Toasted, Buttered, and Eaten
As for Europe? Bless them. Still fantasising about “strategic autonomy” while chained to America’s collapsing empire like a loyal spaniel. Europe lacks cloud capital, lacks industrial capacity, and now — post-Ukraine, post-energy crisis — lacks even the pretence of relevance. Germany, France, the Netherlands: mere franchisees of American technofeudal overlords.
Brussels’ vision for the future? “Please sir, may we remain a respectable vassal state?”
The Global South: Choose Your Feudal Lord
The so-called “developing world” faces an even grimmer menu:
Pledge allegiance to Washington’s dying dollar-based cloud fief?
Or become serfs under Beijing’s emerging digital rentier aristocracy? Either way, the crops are taxed, the wells are privatised, and the commons are torched.
Development? Don’t make me laugh. The South has been invited to another game of “Heads they win, tails you starve.”
Technofeudalism: A Lovable New Hell
Meanwhile, back in the heartlands of Empire, cloudalists — Google, Amazon, Tencent, Alibaba — are fencing off reality itself. You will own nothing, subscribe to everything, and feed their algorithms while praying for a dopamine hit. Democracy? A charming relic, like powdered wigs and carrier pigeons.
In a final, cosmically ironic twist, it turns out that the only force keeping China’s cloudalists in check is… the Chinese Communist Party itself. Yes, dear liberals: the last faint flicker of “people power” resides under authoritarian rule, while the “free world” rolls over like a half-seduced Victorian maiden.
I’ve just finished Chapter 5 of Technofeudalism by Greek economist Yanis Varoufakis, and I can’t recommend it enough. Having retired as a professional economist, I’d paused reading economic fare in favour of philosophy and fiction. Recently, I picked up Hobbes’ Leviathan and Graeber’s Bullshit Jobs, but this one called to me. I recall when it was released. I read some summaries and reviews. I heard some interviews. I thought I understood the gist. I did. But it goes deeper. Much deeper.
I had considered Technofeudalism, or Feudalism 2.0, more a political statement than a sociopolitical one. Now, I know better. Rather than review the book, I want to focus on a specific aspect that occurred to me.
In a nutshell, Varoufakis asserts that with Capitalism, we moved from a world of property-based rents to one of profits (and rents). We’ve now moved past this into a new world based on platform-based rents (and profits and property rents). Rent extraction yields more power than profits, again reordering power structures. Therefore, I think we might want to handle (read: tax) rents separately from profits.
Audio: NotebookLM podcast discussing this topic.
A Radical Proposal for Modern Taxation
Introduction: The Old Dream Reawakened
Economists have long dreamt of a world in which rent — the unearned income derived from control of scarce assets — could be cleanly distinguished from profit, the reward for productive risk-taking. Ricardo dreamt of it. Henry George built a movement upon it. Even today, figures like Thomas Piketty hint at its necessity. Yet rent and profit have grown entangled like ancient ivy around the crumbling edifice of modern capitalism.
Today, under what some call “technofeudalism,” the separation of rent from productive profit has become not merely an academic exercise but a matter of existential urgency. With rents now extracted not only from land but from data, networks, and regulatory capture, taxation itself risks becoming obsolete if it fails to adapt.
Thus, let us lay out a theoretical and applied map for what could — and arguably must — be done.
I. The Theoretical Framework: Defining Our Terms
First, we must operationally define:
Profit: income generated from productive risk-taking — investment, innovation, labour.
Rent: income generated from ownership or control of scarce, non-replicable assets — land, intellectual property, platforms, regulatory privilege.
Key Principle: Rent is unearned. Profit is earned.
This distinction matters because rent is an economic extraction from society’s collective value creation, whereas profit rewards activities that enlarge that pie.
II. Mapping EBITA: Where Rent Hides
EBITA (Earnings Before Interest, Taxes, and Amortisation) is the preferred metric of modern corporate reporting. Within it, rents hide behind several masks:
Property rental income
Intellectual property licensing fees
Monopoly markups
Platform access fees
Network effect premiums
Regulatory arbitrage profits
Parsing rent from EBITA would thus require methodical decomposition.
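To make the decomposition concrete, here is a toy sketch: tag each EBITA income stream as “rent” or “profit” and report the rentier share. The line items and their labels are illustrative assumptions on my part, not real accounting data — in practice, an auditor would have to make exactly these judgment calls.

```python
# Toy decomposition of EBITA into rentier and productive income.
# Stream names, labels, and figures are illustrative assumptions.
EBITA_STREAMS = {
    "core product sales margin": ("profit", 40.0),
    "property rental income":    ("rent",   12.0),
    "IP licensing fees":         ("rent",    8.0),
    "platform access fees":      ("rent",   15.0),
    "services revenue margin":   ("profit", 25.0),
}

def rentier_share(streams):
    """Return (rent total, profit total, rent as a share of EBITA)."""
    rent = sum(v for kind, v in streams.values() if kind == "rent")
    profit = sum(v for kind, v in streams.values() if kind == "profit")
    return rent, profit, rent / (rent + profit)

rent, profit, share = rentier_share(EBITA_STREAMS)
print(f"Rent: {rent}, Profit: {profit}, Rentier share: {share:.0%}")
```

The hard part, of course, is not the arithmetic but the labelling — which is where the approaches in the next section come in.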
III. Theoretical Approaches to Decomposing EBITA
Cost-Plus Benchmarking
Estimate what a “normal” competitive firm would earn.
Treat any surplus as rent.
Rate-of-Return Analysis
Compare corporate returns against industry-normal rates adjusted for risk.
Excess returns imply rent extraction.
Monopolistic Pricing Models
Apply measures like the Lerner Index to estimate pricing power.
Deduce the rentier share.
Asset Valuation Decomposition
Identify earnings derived strictly from asset control rather than active operation.
Economic Value Added (EVA) Adjustments
Assign a competitive cost of capital and strip out the residual super-profits as rents.
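Two of these approaches are simple enough to sketch in a few lines: the rate-of-return / EVA split (earnings above a competitive return on capital are imputed as rent) and the Lerner Index as a measure of pricing power. All figures below are made-up illustrations, not estimates for any real firm.

```python
def excess_return_rent(ebita, invested_capital, normal_rate):
    """Rate-of-return / EVA-style split: earnings above a competitive
    return on invested capital are treated as rent; the rest as profit."""
    normal_profit = invested_capital * normal_rate
    rent = max(0.0, ebita - normal_profit)
    return rent, ebita - rent

def lerner_index(price, marginal_cost):
    """Lerner Index L = (P - MC) / P: zero under perfect competition,
    approaching one as monopoly pricing power grows."""
    return (price - marginal_cost) / price

# Illustrative numbers: EBITA of 150 on 1000 of capital at an 8% normal rate.
rent, profit = excess_return_rent(ebita=150.0, invested_capital=1000.0, normal_rate=0.08)
print(rent, profit)             # 70 of the 150 is imputed rent
print(lerner_index(10.0, 6.0))  # a 40% markup over marginal cost
```

The contestable input in each case is the benchmark — the “normal” rate of return or the estimate of marginal cost — which is precisely where the political fight over any such regime would be waged.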
IV. Toward Applied Solutions: Imposing Sanity on Chaos
In theory, then, we could pursue several applied strategies:
Mandated Rent-Adjusted Reporting
Require corporations to file a “Rent-Adjusted EBITA” metric.
Auditors would have to categorise income streams as “productive” or “rentier.”
Differential Taxation
Tax normal profits at a competitive corporate rate.
Tax rents at punitive rates (e.g., 70-90%), since taxing rents does not distort incentives.
Sector-Specific Rent Taxes
Levy special taxes on land, platforms, patents, and monopoly franchises.
Create dynamic rent-extraction indices updated annually.
Platform Rent Charges
Impose data rent taxes on digital platforms extracting value from user activity.
Public Registry of Rents
Create a global registry classifying rents by sector, firm, and mechanism.
Provide public transparency to rent-seeking activities.
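The differential-taxation idea above reduces to simple arithmetic once the rent/profit split exists. A minimal sketch, with a 25% “competitive” profit rate and an 80% rent rate as placeholders for the rates discussed above:

```python
def differential_tax(profit, rent, profit_rate=0.25, rent_rate=0.80):
    """Total tax under a two-rate schedule, plus the effective overall rate.
    The 25% and 80% defaults are illustrative placeholders."""
    tax = profit * profit_rate + rent * rent_rate
    effective = tax / (profit + rent)
    return tax, effective

# A firm with 65 of profit and 35 of rent out of 100 of EBITA:
tax, eff = differential_tax(profit=65.0, rent=35.0)
print(f"Tax: {tax:.2f}, effective rate: {eff:.0%}")
```

Note the incentive structure this creates: the more of its EBITA a firm can legitimately show as productive profit rather than rent, the lower its effective rate — which is the point.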
V. The Political Reality: Clouds on the Horizon
Needless to say, the aristocracy of the digital age will not go gentle into this good night. Rentiers — whether in Silicon Valley, the City of London, or Wall Street — are deeply entwined with the political machinery that might otherwise regulate them.
Yet the costs of inaction are higher. If rent extraction continues to eclipse productive activity, the very legitimacy of markets — and democracy — will erode into cynicism, stagnation, and oligarchic decay.
Conclusion: The Choice Before Us
Separating rent from profit is not merely a technocratic tweak. It is a radical act — one that could reorient economic activity away from parasitic extraction and back toward genuine value creation.
In a world where algorithms are castles, platforms are fiefdoms, and data is the new serfdom, reclaiming the ancient dream of taxing rent is no longer optional. It is, quite simply, the price of our collective survival.
I’ve recently decided to take a sabbatical from what passes for economic literature these days — out of a sense of self-preservation, mainly — but before I hermetically sealed myself away, I made a quick detour through Jorge Luis Borges’ The Library of Babel (PDF). Naturally, I emerged none the wiser, blinking like some poor subterranean creature dragged into the daylight, only to tumble headlong into David Graeber’s Bullshit Jobs.
This particular tome had been languishing in my inventory since its release, exuding a faint but persistent odour of deferred obligation. Now, about a third of the way in, I can report that Graeber’s thesis — that the modern world is awash with soul-annihilatingly pointless work — does resonate. I find myself nodding along like one of those cheap plastic dashboard dogs. Yet, for all its righteous fury, it’s more filler than killer. Directionally correct? Probably. Substantively airtight? Not quite. It’s a bit like admiring a tent that’s pitched reasonably straight but has conspicuous holes large enough to drive a fleet of Uber Eats cyclists through.
An amusing aside: the Spanish edition is titled Trabajos de mierda (“shitty jobs”), a phrase Graeber spends an entire excruciating section of the book explaining is not the same thing. Meanwhile, the French, in their traditional Gallic shrug, simply kept the English title. (One suspects they couldn’t be arsed.)
Chapter One attempts to explain the delicate taxonomy: bullshit jobs are fundamentally unnecessary — spawned by some black magic of bureaucracy, ego, and capitalist entropy — whilst shit jobs are grim, thankless necessities that someone must do, but no one wishes to acknowledge. Tragically, some wretches get the worst of both worlds, occupying jobs that are both shit and bullshit — a sort of vocational purgatory for the damned.
Then, in Chapter Two, Graeber gleefully dissects bullshit jobs into five grotesque varieties:
Flunkies, whose role is to make someone else feel important.
Goons, who exist solely to fight other goons.
Duct Tapers, who heroically patch problems that ought not to exist in the first place.
Box Tickers, who generate paperwork to satisfy some Kafkaesque demand that nobody actually reads.
Taskmasters, who either invent unnecessary work for others or spend their days supervising people who don’t need supervision.
Naturally, real-world roles often straddle several categories. Lucky them: multi-classed in the RPG of Existential Futility.
Graeber’s parade of professional despair is, admittedly, darkly entertaining. One senses he had a great deal of fun cataloguing these grotesques — like a medieval monk illustrating demons in the margins of a holy text — even as the entire edifice wobbles under the weight of its own repetition. Yes, David, we get it: the modern economy is a Potemkin village of invented necessity. Carry on.
If the first chapters are anything to go by, the rest of the book promises more righteous indignation, more anecdotes from anonymous sad-sacks labouring in existential oubliettes, and — if one is lucky — perhaps a glimmer of prescription hidden somewhere amidst the diagnosis. Though, I’m not holding my breath. This feels less like an intervention and more like a well-articulated primal scream.
Still, even in its baggier moments, Bullshit Jobs offers the grim pleasure of recognition. If you’ve ever sat through a meeting where the PowerPoint had more intellectual integrity than the speaker or spent days crafting reports destined for the corporate oubliette marked “For Review” (translation: Never to Be Seen Again), you will feel seen — in a distinctly accusatory, you-signed-up-for-this sort of way.
In short: it’s good to read Graeber if only to have one’s vague sense of societal derangement vindicated in print. Like having a charmingly irate friend in the pub lean over their pint and mutter, “It’s not just you. It’s the whole bloody system.”
I’m not sure I’ll stick with this title either. I think I’ve caught the brunt of the message, and it feels like a diversion. I’ve also got Yanis Varoufakis’ Technofeudalism: What Killed Capitalism on the shelf. Perhaps I’ll spin this one up instead.