A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.
Continuity Is Not Identity
You are not who you were — you are what you’ve become
“A river doesn’t remember yesterday’s curve. But we point to it and say: ‘Look, it’s still the same.’”
Audio: NotebookLM podcast on this topic.
The compatibilist move here is subtler — less grandiose than autonomy, more domesticated than sovereignty. It says: Even if you didn’t choose your traits, your past, your preferences — you’re still you. That self has endured. And moral responsibility flows from that continuity.
But this, too, is sleight of hand.
Because continuity is a pattern, not a person. And a pattern that evolves without origin is not accountable. It’s just happening.
A River Is Not a Moral Agent
Yes, you are still “you” — but only in the loosest, most cosmetic sense. The fact that your behaviour follows a recognisable pattern does not imply authorship. It merely confirms that systems tend toward stability, or path dependence, or neural canalisation.
You can be stable. You can even be consistent. But that doesn’t mean you’re choosing anything.
Continuity is not control. It’s inertia dressed up as identity.
Predictability Is Not Ownership
We mistake persistence for personhood. If someone acts one way for long enough, we assume that behaviour is theirs — that it reflects their values, their self. But all it really reflects is probability.
“You’re still you,” we say. But which you?
The one shaped by sleep deprivation?
The one under hormonal flux?
The one shaped by language, trauma, and cultural myopia?
Every iteration of “you” is a snapshot — a chemical event disguised as character.
You’re Not Rebuilding — You’re Accreting
The recursive defence — “I can change who I am” — also crumbles here. Because you don’t change yourself from nowhere. You change because something changed you. And that change, too, emerges from your condition.
Growth, reflection, habit formation — these aren’t proofs of freedom. They’re signs that adaptive systems accumulate structure.
You are not shaping clay. You are sediment, layered by time.
Character Is Compulsion in Costume
We love stories about people who “showed their true colours.” But this is narrative bias — we flatten a life’s complexity into a myth of revelation.
Yet even our finest moments — courage, restraint, sacrifice — may be nothing more than compulsions coded as character. You didn’t choose to be brave. You just were.
The brave person says: “I had no choice.” The coward says the same.
Who gets the medal is irrelevant to the question of freedom.
TL;DR: Continuity Doesn’t Mean You Own It
The self is a pattern of events, not a stable agent.
Continuity is not agency — it’s habit.
Predictability doesn’t prove ownership.
Even your finest moments might be involuntary.
And if you’re not choosing your changes, you’re just being changed.
So, no — you are not who you were. You are what you’ve become. And what you’ve become was never yours to shape freely.
📅 Coming Tomorrow:
Manipulability as Disproof
If your will can be altered without your knowledge, was it ever truly yours?
Why the cherished myth of human autonomy dissolves under the weight of our own biology
We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.
But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?
Audio: NotebookLM podcast on this topic.
I. The Myth of Autonomy: A Beautiful Delusion
Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?
Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:
The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.
The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.
Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.
Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.
Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.
You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.
II. The DLPFC: Puppet Master of Moral Choice
The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.
And yet this supposed seat of autonomy can be manipulated with disturbing ease.
When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.
Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.
If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?
III. Hormonal Puppetmasters: The Will in Your Bloodstream
Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.
Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”
Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.
These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.
IV. The Accident of Will: Uncomfortable Conclusions
If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?
Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.
What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.
This lens forces unsettling questions:
How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?
A Compatibilist Rebuttal? Not So Fast.
Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.
If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.
What’s Left?
Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.
The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.
For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.
Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I found issue with the very notion that a relative, if not subjective, concept could be recovered.
The OP asks if The Hunger Games are a classic. I’d argue that they are a categorical classic, like Harry Potter, within the category of YA fiction.
Audio: NotebookLM podcast discussing this topic.
The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.
Video: YouTube video that spawned this topic.
Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.
Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.
This is not moral entropy; it’s moral mutation.
Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.
This isn’t the death of morality. It’s a hostile takeover.
The Moral PR Machine
If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.
In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.
The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.
Katniss: Not a Saint, But a Saboteur
Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.
She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.
Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.
So, Has Morality Declined?
Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.
And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.
Conclusion: The Real Hunger
The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.
If you feel sickened by what you see in Panem, it’s not because morality has vanished. It’s because it hasn’t.
There is a kind of political necromancy afoot in modern discourse—a dreary chant murmured by pundits, CEOs, and power-drunk bureaucrats alike: “It’s just human nature.” As if this incantation explains, excuses, and absolves all manner of violent absurdities. As if, by invoking the mystic forces of evolution or primal instinct, one can justify the grotesque state of things. Income inequality? Human nature. War? Human nature. Corporate psychopathy? Oh, sweetie, it’s just how we’re wired.
What a convenient mythology.
Audio: NotebookLM podcast on this topic.
If “human nature” is inherently brutish and selfish, then resistance is not only futile, it is unnatural. The doctrine of dominance gets sanctified, the lust to rule painted as destiny rather than deviance. Meanwhile, the quiet, unglamorous yearning of most people—to live undisturbed, to coöperate rather than conquer—is dismissed as naïve, childish, and unrealistic. How curious that the preferences of the vast majority are always sacrificed at the altar of some aggressive minority’s ambitions.
Let us dispense with this dogma. The desire to dominate is not a feature of human nature writ large; it is a glitch exploited by systems that reward pathological ambition. Most of us would rather not be ruled, and certainly not managed by glorified algorithms in meat suits. The real human inclination, buried beneath centuries of conquest and control, is to live in peace, tend to our gardens, and perhaps be left the hell alone.
And yet, we are not. Because there exists a virulent cohort—call them oligarchs, executives, generals, kings—whose raison d’être is the acquisition and consolidation of power. Not content to build a life, they must build empires. Not content to share, they must extract. They regard the rest of us as livestock: occasionally troublesome, but ultimately manageable.
To pacify us, they offer the Social Contract™—a sort of ideological bribe that says, “Give us your freedom, and we promise not to let the wolves in.” But what if the wolves are already inside the gates, wearing suits and passing legislation? What if the protection racket is the threat itself?
So no, it is not “human nature” that is the problem. Cancer is natural, too, but we don’t celebrate its tenacity. We treat it, research it, and fight like hell to survive it. Likewise, we must treat pathological power-lust not as an inevitability to be managed but as a disease to be diagnosed and dismantled.
The real scandal isn’t that humans sometimes fail to coöperate. It’s that we’re constantly told we’re incapable of it by those whose power depends on keeping it that way.
Let the ruling classes peddle their myths. The rest of us might just choose to write new ones.
By the time we reach Chapter Seven of Technofeudalism: What Kills Capitalism, Yanis Varoufakis drops the ledger sheets and spreadsheets and starts sketching utopia in crayon. Entitled Escape from Technofeudalism, it proposes—brace yourself—a workplace democracy. It’s aspirational, yes. Compelling? Not particularly. Especially if, like me, you’ve long since stopped believing that democracy is anything more than a feel-good placebo for structural impotence.
Audio: NotebookLM podcast discussing this topic.
To be clear: the preceding chapters, particularly the first six, are sharp, incisive, and frankly, blistering in their indictment of today’s economic disfiguration. But Chapter Seven? It’s less an escape plan, more a group therapy session masquerading as an operational model.
So let’s take his proposal for Democratised Companies apart, one charming layer at a time.
“Imagine a corporation in which every employee has a single share that they receive when hired…”
Splendid. One person, one vote. Adorable.
“All decisions – hiring, promotion, research, product development, pricing, strategy – are taken collectively…”
Because there’s nothing more efficient than a hiring committee composed of thirty engineers, two janitors, a receptionist, and Steve from Accounts, whose main contribution is passive-aggressive sighing.
“…with each employee exercising their vote via the company’s intranet…”
Marvellous. We’ve now digitised the tyranny of the majority and can timestamp every idiotic decision for posterity.
“Equal ownership does not, however, mean equal pay.”
A relief. Until it doesn’t.
“Pay is determined by a democratic process that divides the company’s post-tax revenues into four slices…”
Here, dear reader, is where the cake collapses. Why, precisely, should a randomly assembled group of employees—with wildly varying financial literacy—be entrusted to divide post-tax revenue like it’s a birthday cake at a toddler’s party?
And how often are these slices recalibrated? Each fiscal year? Every time someone is hired or fired? Do we amend votes quarterly or wait until the economic ship has already struck an iceberg?
Varoufakis does suggest preference voting to tackle allocation disputes:
“Any proposal to increase one slice must be accompanied by a proposal to reduce expenditure on one or more of the other slices…”
Fine. In theory, algorithmic voting procedures sound neat. But they presume voters are rational, informed, and cooperative. If you’ve ever seen a corporate Slack thread devolve into emoji warfare, you’ll know that this is fiction on par with unicorns and meritocracy.
“The basic pay slice is then divided equally among all staff – from persons recently employed as secretaries or cleaners to the firm’s star designers or engineers.”
Ah yes, the ‘equality’ bit. Equal pay, unequal contribution. This isn’t egalitarianism—it’s enforced mediocrity. It might work in a monastery. Less so in a competitive tech firm where innovation requires both vision and differentiated incentive.
Now, on to bonuses, which are democratically determined by:
“…employees each given one hundred digital tokens to distribute among their colleagues…”
Welcome to Black Mirror: Workplace Edition. This is less economics, more playground politics. Who gets tokens? The charismatic chatterbox in the break room? The person who shared their lunch? The ghost employee who never shows up but emails back promptly?
And how, pray tell, does one evaluate the receptionist’s contribution relative to the lead engineer’s or the janitor’s? This isn’t peer review—it’s populism with a smiley face.
We’ve all seen “Teacher of the Year” competitions turn into contests of who had the cutest class poster or best cupcakes. Now imagine your livelihood depending on it.
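For concreteness, the mechanism being lampooned can be sketched in a few lines of Python. The "basic pay" and "bonus" slices and the hundred-token voting come from the quoted passages; the other slice names, the percentages, and the payout rule (bonuses pro rata to tokens received) are my own illustrative assumptions, not the book's specification.

```python
# Sketch of the Democratised Company payout mechanism described above.
# Slice names beyond "basic pay" and "bonus", and all figures, are
# illustrative assumptions.

def split_revenue(post_tax_revenue, slice_shares):
    """Divide post-tax revenue into slices by (voted) percentage shares."""
    assert abs(sum(slice_shares.values()) - 1.0) < 1e-9
    return {name: post_tax_revenue * share for name, share in slice_shares.items()}

def basic_pay(basic_slice, staff):
    """The basic pay slice is divided equally among all staff."""
    per_head = basic_slice / len(staff)
    return {person: per_head for person in staff}

def bonuses(bonus_slice, token_votes):
    """Each employee distributes 100 tokens among colleagues; here the
    bonus slice is assumed to be paid out pro rata to tokens received."""
    received = {}
    for voter, allocation in token_votes.items():
        assert sum(allocation.values()) == 100   # exactly 100 tokens each
        assert voter not in allocation           # no self-voting
        for colleague, tokens in allocation.items():
            received[colleague] = received.get(colleague, 0) + tokens
    total = sum(received.values())
    return {p: bonus_slice * t / total for p, t in received.items()}

staff = ["ana", "bo", "cy"]
slices = split_revenue(900_000, {"basic pay": 0.5, "bonus": 0.2,
                                 "investment": 0.2, "reserve": 0.1})
pay = basic_pay(slices["basic pay"], staff)   # 150,000 each
bon = bonuses(slices["bonus"], {
    "ana": {"bo": 60, "cy": 40},
    "bo":  {"ana": 70, "cy": 30},
    "cy":  {"ana": 50, "bo": 50},
})
```

Even this toy version exposes the critique: every number above (the slice shares, the token allocations) is itself the output of a vote, so the arithmetic is only as sound as the electorate feeding it.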
In summary, democracy in the workplace may sound noble, but in practice, it’s the bureaucratic equivalent of herding caffeinated cats. It doesn’t even work in small groups, let alone an organisation of hundreds. Democracy—when applied to every function of an enterprise—is not liberation; it’s dilution. It’s design-by-committee, strategy-by-consensus, and ultimately, excellence-by-accident.
Escape from Technofeudalism? Perhaps. But not by replacing corporate lords with intranet polls and digital tokens. That’s not an exit strategy—it’s a cosplay of collectivism.
I’ve just finished Chapter 5 of Technofeudalism by Greek economist Yanis Varoufakis, and I can’t recommend it enough. Having retired as a professional economist, I’d paused reading economic fare in favour of philosophy and fiction. Recently, I picked up Hobbes’ Leviathan and Graeber’s Bullshit Jobs, but this one called to me. I recall when it was released. I read some summaries and reviews. I heard some interviews. I thought I understood the gist. I did. But it goes deeper. Much deeper.
I considered Technofeudalism or Feudalism 2.0 as more of a political statement than a sociopolitical one. Now, I know better. Rather than review the book, I want to focus on a specific aspect that occurred to me.
In a nutshell, Varoufakis asserts that with Capitalism, we moved from a world of property-based rents to one of profits (and rents). We’ve now moved past this into a new world based on platform-based rents (and profits and property rents). Rent extraction yields more power than profits, again reordering power structures. Therefore, I think we might want to handle (read: tax) rents separately from profits.
Audio: NotebookLM podcast discussing this topic.
A Radical Proposal for Modern Taxation
Introduction: The Old Dream Reawakened
Economists have long dreamt of a world in which rent — the unearned income derived from control of scarce assets — could be cleanly distinguished from profit, the reward for productive risk-taking. Ricardo dreamt of it. Henry George built a movement upon it. Even today, figures like Thomas Piketty hint at its necessity. Yet rent and profit have grown entangled like ancient ivy around the crumbling edifice of modern capitalism.
Today, under what some call “technofeudalism,” the separation of rent from productive profit has become not merely an academic exercise but a matter of existential urgency. With rents now extracted not only from land but from data, networks, and regulatory capture, taxation itself risks becoming obsolete if it fails to adapt.
Thus, let us lay out a theoretical and applied map for what could — and arguably must — be done.
I. The Theoretical Framework: Defining Our Terms
First, we must operationally define:
Profit: income generated from productive risk-taking — investment, innovation, labour.
Rent: income generated from ownership or control of scarce, non-replicable assets — land, intellectual property, platforms, regulatory privilege.
Key Principle: Rent is unearned. Profit is earned.
This distinction matters because rent is an economic extraction from society’s collective value creation, whereas profit rewards activities that enlarge that pie.
II. Mapping EBITA: Where Rent Hides
EBITA (Earnings Before Interest, Taxes, and Amortisation) is the preferred metric of modern corporate reporting. Within it, rents hide behind several masks:
Property rental income
Intellectual property licensing fees
Monopoly markups
Platform access fees
Network effect premiums
Regulatory arbitrage profits
Parsing rent from EBITA would thus require methodical decomposition.
III. Theoretical Approaches to Decomposing EBITA
Cost-Plus Benchmarking
Estimate what a “normal” competitive firm would earn.
Treat any surplus as rent.
Rate-of-Return Analysis
Compare corporate returns against industry-normal rates adjusted for risk.
Excess returns imply rent extraction.
Monopolistic Pricing Models
Apply measures like the Lerner Index to estimate pricing power.
Deduce the rentier share.
Asset Valuation Decomposition
Identify earnings derived strictly from asset control rather than active operation.
Economic Value Added (EVA) Adjustments
Assign a competitive cost of capital and strip out the residual super-profits as rents.
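Two of these approaches are simple enough to sketch directly: rate-of-return analysis (earnings above a risk-adjusted "normal" return are treated as rent) and a Lerner-index markup estimate, where L = (P − MC) / P measures pricing power. All figures below are illustrative assumptions, not empirical estimates.

```python
# Sketch of two decomposition approaches from the list above:
# rate-of-return analysis and a Lerner-index markup estimate.
# All inputs are illustrative assumptions.

def excess_return_rent(ebita, invested_capital, normal_rate):
    """Rate-of-return analysis: earnings above a risk-adjusted
    'normal' competitive return are classified as rent."""
    normal_profit = invested_capital * normal_rate
    return max(0.0, ebita - normal_profit)

def lerner_index(price, marginal_cost):
    """Lerner index L = (P - MC) / P: 0 under perfect competition,
    approaching 1 with pure monopoly pricing power."""
    return (price - marginal_cost) / price

def lerner_rent_share(revenue, price, marginal_cost):
    """Treat the markup share of revenue as the rentier component."""
    return revenue * lerner_index(price, marginal_cost)

# Illustrative firm: EBITA of 40m on 200m invested capital,
# against an assumed 8% risk-adjusted normal return.
rent = excess_return_rent(40e6, 200e6, 0.08)        # 40m - 16m = 24m rent
markup = lerner_index(price=10.0, marginal_cost=6.0)  # 0.4
```

Note that the two methods need not agree: that disagreement is precisely why the text calls for methodical decomposition rather than a single formula.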
IV. Toward Applied Solutions: Imposing Sanity on Chaos
In theory, then, we could pursue several applied strategies:
Mandated Rent-Adjusted Reporting
Require corporations to file a “Rent-Adjusted EBITA” metric.
Auditors would have to categorise income streams as “productive” or “rentier.”
Differential Taxation
Tax normal profits at a competitive corporate rate.
Tax rents at punitive rates (e.g., 70-90%), since taxing rents does not distort incentives.
Sector-Specific Rent Taxes
Levy special taxes on land, platforms, patents, and monopoly franchises.
Create dynamic rent-extraction indices updated annually.
Platform Rent Charges
Impose data rent taxes on digital platforms extracting value from user activity.
Public Registry of Rents
Create a global registry classifying rents by sector, firm, and mechanism.
Provide public transparency to rent-seeking activities.
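The differential-taxation strategy above reduces to simple arithmetic once the rent/profit split is in hand. A minimal sketch, assuming an illustrative 25% corporate rate and an 80% rent rate (within the 70–90% band suggested above):

```python
# Sketch of differential taxation: once EBITA is decomposed, tax the
# productive-profit share at a normal corporate rate and the rent share
# at a punitive rate. The rates here are illustrative assumptions.

def differential_tax(profit, rent, profit_rate=0.25, rent_rate=0.80):
    """Apply separate rates to the earned and unearned components."""
    return profit * profit_rate + rent * rent_rate

# Firm with 16m of competitive profit and 24m of identified rent:
tax = differential_tax(16e6, 24e6)   # 4m + 19.2m = 23.2m
flat = (16e6 + 24e6) * 0.25          # 10m under a flat 25% regime
```

The gap between the two figures is the point: under a flat regime the rentier and the producer face identical rates, while the differential scheme leaves the return on productive risk-taking untouched and claws back most of the unearned surplus.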
V. The Political Reality: Clouds on the Horizon
Needless to say, the aristocracy of the digital age will not go gentle into this good night. Rentiers — whether in Silicon Valley, the City of London, or Wall Street — are deeply entwined with the political machinery that might otherwise regulate them.
Yet the costs of inaction are higher. If rent extraction continues to eclipse productive activity, the very legitimacy of markets — and democracy — will erode into cynicism, stagnation, and oligarchic decay.
Conclusion: The Choice Before Us
Separating rent from profit is not merely a technocratic tweak. It is a radical act — one that could reorient economic activity away from parasitic extraction and back toward genuine value creation.
In a world where algorithms are castles, platforms are fiefdoms, and data is the new serfdom, reclaiming the ancient dream of taxing rent is no longer optional. It is, quite simply, the price of our collective survival.
Although the rating was not bad – 3.8 as of this writing – the reviews told a different story.
The first half of the book was worthwhile as it was a thought provoking description of how early societies may have developed social systems to encourage group cooperation and discourage harmful behaviors. The second half however was a rambling, preachy, and judgemental dissertation on postmodernism using the all too familiar tropes of social justice dialog. Overall it was poorly written though some of that may be due to translation. There is too little useful material here to justify its purchase.
Firstly, this version is translated from a German edition. Some readers feel that structure and communication value were lost in translation. In any case, the author is accused of being verbose and circumlocutory.
Amazon describes this book as “a sweeping history,” but the only thing sweeping about it was what I wanted to do with it under the nearest rug.
Secondly, it may be somewhat derivative of Nietzsche’s work on the same topic.
The book is rambling and incoherent, the author makes wild assertions about things that happened before recorded history, leaps from one train of thought to another, poo poos the ideas of religion but frequently quotes Nietzsche as the infallible source of knowledge…
Tonyo (reviewer)
In any case, the topic interests me, but I don’t see myself reading it any time soon.
We live in an age of two-dimensional minds trying to navigate a three-dimensional world—and doing it with all the grace of a toddler wielding a chainsaw. For over a generation, the US and UK have been polarised, Balkanised, and lobotomised by the Great Red vs. Blue Punch & Judy Show. Left, right. Us, them. Hero, villain. There is no nuance, no gradient, no middle ground. Just a glorious reduction of civilisation’s complexities into primary-coloured football teams for emotionally underdeveloped adults.
This is not politics. This is pantomime.
Audio: NotebookLM podcast on this topic.
And the real tragedy? The world isn’t even two-dimensional. It’s not even three. Try thinking of it as a cube—six sides at least, all pressing in at once, depending on your angle. Culture, history, class, geography, education, trauma, temperament, aesthetic preference—each a face of the cube. But tell that to the modern partisan and they’ll squint at you like you’ve just tried to explain jazz to a toaster.
No, to them, the world is flat. A line. A tug-of-war between two equally blinkered tribes dragging the rest of us into the pit. Pick a side or shut up, they shriek. If you’re not with us, you’re against us. If you don’t chant the correct slogans or signal the proper virtues, you’re obviously a heretic, a bigot, or—worst of all—centrist scum. They don’t want conversation; they want confirmation. Preferably in 280 characters or less.
Try introducing complexity and you’ll be accused of bothsidesism, moral cowardice, or—God forbid—thinking. It’s like throwing a Rubik’s cube into a toddler fight club.
This binary reductionism doesn’t stop at politics. Even gender—possibly the most nuanced and intimate aspect of human identity—has been flattened into a tug-of-war between biological essentialists and gender abolitionists, both sides wielding hashtags like holy relics. The irony? These same culture warriors still manage to marvel at rainbows, utterly unaware that their own worldview only permits two colours. How do they even process a traffic light?
The cult of the binary isn’t just intellectually bankrupt—it’s a threat to civilisation. We didn’t crawl out of the primordial ooze, develop language, invent calculus, and split the atom just so Karen and Kev from Facebook could reduce geopolitics to an episode of EastEnders. The world is messy. People are contradictory. Context matters. But nuance doesn’t trend.
We’re governed by algorithms, policed by outrage, and divided by design. The machinery of mass culture rewards the loudest, angriest, most wilfully ignorant voices, and we feed the beast like dopamine-addled pigeons pecking a lever. The cube has been flattened into a cartoon. And yet we wonder why everything feels broken.
So here’s a radical idea: what if we stopped flattening the world into a battlefield and started mapping it like a landscape? What if we admitted that not every problem has two sides—some have two hundred? What if we taught critical thinking instead of tribal loyalty? What if we made complexity sexy again?
But I digress. That might require imagination. And we’ve outsourced that to TikTok influencers and AI chatbots.
Meanwhile, the cube spins. And the rest of us try to hold on.
Reading Leviathan has me thinking. Nothing new, mind you—just reinvigorated. Hobbes, bless his scowling soul, is the consummate pessimist. People, in his view, are untrustworthy sods, ready to stab you in the back at the first flicker of opportunity. He doesn’t believe in community. He believes in containment.
Audio: NotebookLM discussion about this topic.
And to be fair, he’s not entirely wrong. He captures a certain cohort with uncanny accuracy. You know the type. Type-A™ personalities: the Donald Trumps, Elon Musks, Adolf Hitlers, Shahs of Iran, and that guy in marketing who always schedules meetings for 8am. The ones who salivate at the mere whiff of power, who’d sell their grandmothers for a press release and call it vision.
This, in short, is why we can’t have nice things.
Now, I’ll concede that most people want more than they have. Economics depends on this assumption like religion depends on guilt. But not everyone is driven by an insatiable lust for money, dominance, or legacy. That, my friends, is not ambition. It is pathology—a malignant, metastasising hunger that infects the likes of Trump, Musk, Bezos, Sunak, and their ilk. The hunger to rule, not just participate.
The trouble is, the majority of the world’s population are idiots—not technically, but metaphorically. Soft-headed. Overstimulated. Easily distracted by flags, influencers, and “free shipping.” And there are flavours of idiots. Musk is a lucky idiot. Trump is a useful idiot. Most are a hair’s breadth from being cannon fodder.
And then we wonder why everything’s broken.
The world could be configured differently. It could consist of autonomous collectives, each minding its own business, each respecting the other’s boundaries like courteous houseplants. But this equilibrium is shattered—always shattered—by the predatory few. The outliers. The sharks in suits. The ones who mistake governance for domination and diplomacy for personal branding.
So we build mechanisms to defend ourselves—laws, institutions, surveillance, standing armies—but these mechanisms inevitably attract the same types we were trying to ward off. Power-hungry cretins in different hats. The protectors, it turns out, are rarely benevolent dictators. They are predacious politicos, wearing virtue like a costume, mouthing justice while tightening the screws.
And here lies the eternal problem. Not technology. Not ideology. Not even ignorance.
But the recurring infestation of pathological ambition in a species otherwise just trying to get on with its day.
I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like if he had filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had to pass through a legal review before publication, sapped of vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*
Audio: NotebookLM podcast conversation on this topic.
In any case, it got me thinking of free will and, more to the point, of will itself.
A Brief History of Humanity’s Favourite Metaphysical Scapegoat
By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.
So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?
Plato: Soul, Reason, and That Poor Horse
In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).
No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.
Aristotle: Purpose Without Pathos
Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.
Augustine: Sin, Suffering, and That Eternal No
Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.
From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.
Cheers.
Medievals: God’s Will or Yours, Pick One
The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.
From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.
Hobbes: Appetite with Delusions of Grandeur
Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.
Kant: The Will Gets a Makeover
Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the “good will”, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.
It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.
Schopenhauer: Cosmic Will, Cosmic Joke
And then the mood turns. Schopenhauer, world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.
There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.
Nietzsche: Transvaluation and the Will to Shout Loudest
Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.
Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.
Today: Free Will and the Neuroscientific Hangover
And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.
We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?
Conclusion: Where There’s a Will, There’s a Mess
From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.
Despite centuries of philosophical handwringing, what it has never been is settled.
So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.
Welcome to the labyrinth. Bring snacks.
* The solitary, poor, nasty, brutish, and short quote is forthcoming. Filthy animals is a nod to Home Alone.