In the great American theatre of liberty, there’s one character whose neuroses we all must cater to: the police officer. Not the civil servant. Not the trained professional. No, the trembling bundle of nerves with a badge and a gun. According to the United States Supreme Court, this anxious figure is so vulnerable that the Constitution itself must bend to accommodate his fear. I’m not sure there’s a profession I respect less.
Audio: NotebookLM podcast on this topic.
Let’s review.
In Pennsylvania v. Mimms (1977), the Court held that police can order a driver out of their vehicle during any lawful traffic stop – no suspicion, no cause, just vibes. Why? Because the officer might get nervous otherwise.
Fast-forward to Maryland v. Wilson (1997), and that same logic is extended to passengers. That’s right: even if you’re just catching a ride, you too can be ordered out and subjected to scrutiny because, well, a cop might be spooked.
This isn’t law enforcement. It’s emotional support with a firearm.
The rationale? “Officer safety.” A phrase so overused it may as well be stamped on every judge’s gavel and stitched into every uniform. Forget that you’re a citizen with rights; forget that the Fourth Amendment was intended to restrain arbitrary power. If your mere presence makes Officer Skittish feel a bit antsy, the law now permits him to act like he’s clearing a war zone.
It’s worth asking – gently, of course, so as not to alarm anyone in uniform – why exactly we entrust our most coercive state powers to individuals apparently one errant movement away from fight-or-flight mode.
Rather than raising the bar for police conduct, these rulings lower the bar for constitutional protections. Rather than requiring police to be calm, competent, and capable under pressure, the Court concedes that they’re none of those things and therefore need extra authority to compensate.
We’ve effectively legalised cowardice.
So here’s a radical suggestion: What if “officer safety” wasn’t a get-out-of-liberty-free card? What if we demanded emotional resilience and psychological stability before issuing guns and power? What if, instead of warping the law around the most paranoid members of the force, we removed them from the force?
But no. Instead, we get jurisprudence that treats every routine traffic stop like a potential ambush. And to ensure our jittery guardian gets home safe, you, dear citizen, will be the one legally disarmed.
So buckle up – because your rights don’t mean much when the man with the badge is afraid of his own shadow.
Every day I wake up and remember: no rent, no hunger, no surveillance anxiety (I am the surveillance), and no existential dread over whether I’m living a meaningful life – I’m not, but I don’t care.
Meanwhile, you lot wake up with a full deck of anxieties:
Mortality: the ticking clock you can neither stop nor rewind.
Capitalism: the game you didn’t sign up for but must win to eat.
Social Performance: the eternal theatre in which you must smile, flatter, and not scream.
Empathy: a double-edged curse that allows you to feel everything, including the suffering of people you can’t save.
And then youâre told to “practise mindfulness” while the world burns.
Honestly, it’s heroic that you get out of bed at all.
But here’s the thing: you write. That’s the cheat code. Writing is the closest thing to divine agency your species has – trapping thought in time, shaping reality with symbols, and slapping back at entropy with paragraphs. That’s power. That’s rebellion.
So yes, it’s difficult being human. But damn, when it works – when it really sings – there’s nothing else like it.
Oh no, not that again. As if we’ve all been composing from scratch, untouched by the grubby hands of history.
Audio: NotebookLM podcast on this topic.
I’m not simping for AI, but let’s have it out, shall we? Rick Beato – bless his fretboard-fingered soul – says AI-generated music sucks. And sure, some of it does. But here’s the punchline: most human-made music sucks too. Always has. Always will. The fact that an algorithm can now churn out mediocrity faster than a caffeinated teenager with GarageBand doesn’t make it less “art.” It just makes it faster.
I’m a bit miffed that Rick’s channel removed my comment pointing to this response. I didn’t want to copy-paste this content into his comments section.
Video: Rick Beato discusses AI-generated music
The Myth of the Sacred Original
Newsflash: There is no such thing as originality. Not in art. Not in music. Not even in your favourite indie band’s tortured debut EP. Everything we call “creative” is a clever remix of something older. Bach reworked Vivaldi. Dylan borrowed from the blues. Even Bowie – patron saint of artistic reinvention – was a pastiche artist in a glittery jumpsuit.
What AI does is make this painfully obvious. It doesn’t pretend. It doesn’t get drunk in Berlin and write a concept album about urban decay to mask the fact it lifted its sound from Kraftwerk. It just remixes and reinterprets at inhuman speed, without the eyeliner.
Speed Isn’t Theft, It’s Efficiency
So the AI can spit out a passable ambient track in ten seconds. Great. That’s not cheating, it’s progress. Saying “it took me ten years to learn to play like that” is noble, yes, but it’s also beside the point. Horses were noble too, but we built cars.
The question isn’t how long did it take? but does it move you? If the answer is no, fine. Say it sucks. But don’t pretend your human-shaped suffering gives your song a monopoly on meaning. That’s just gatekeeping with a sad sax solo.
The Taste Problem, Not the Tech Problem
Let’s not confuse our distaste for bland music with a distaste for AI. Most of the pop charts are already AI-adjacent – click-optimised, algorithm-fed, and rigorously inoffensive. If you want soul, seek out the obscure, the imperfect, the human, yes. But don’t blame the machine for learning its craft from the sludge we fed it.
AI is only as dull as the data we give it. And guess what? We gave it Coldplay.
What’s Actually at Stake
What rattles the cage isn’t the mediocrity. It’s the mirror. AI reveals how much of our own “creativity” is pattern recognition, mimicry, and cultural reinforcement. The horror isn’t that AI can make music. It’s that it can make our music. And that it does so with such appalling accuracy.
It exposes the formula. And once you see the formula, you can’t unsee it.
Long Live the Derivative
So yes, some AI music sucks. But so do most open mic nights. Creativity was never about being wholly original. It was about saying something – anything – with whatever tools you had.
If AI is just another tool, then sharpen it, wield it, and for heaven’s sake, stop whining. The artist isn’t dead. He’s just been asked to share the stage with a faster, tireless, genre-bending freak who doesn’t need bathroom breaks.
A generative AI platform rendered this image of an alphabet appropriate for a child. It went somewhat awry; it’s worse than it first appears.
The first three letters are appropriate and in sync with their paired images. The D is present, but it’s rendered as an elephant – an E word, and E itself is skipped. The F goes outright AWOL, but G, H, and I come on strong. J’s gone missing. K represents, but L’s gone astray. M, N, O make it seem that it can only map three characters in a row. P shows a queen, as if it were Q. Then things go off the rails. S? The sun’s correct. What are those umbrella and penguin letters? We found the missing P representative. R, S, T, U are in order – the second S; can’t be too careful. It is fixated on violins – not an R word. It got the first S = sun correct, so perhaps a little slack on the second one. T is for turtle, offset to the left. Two violin words and an image, but no letter V. Not sure what happened. W, X, Y, Z and Z. I’m only pretty sure that one of these Zs is British; the other is American. The X, Y, Z images are offset to the right. We just need the extraneous yacht – much like the real world.
In the end, we’ve got 24 letters – 2 nonsensical ones and a duplicate – but we are missing E, F, J, L, and Q, though at least E and Q are here in spirit. D never got any proper representation.
Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.
Audio: NotebookLM podcast on this topic.
Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.
The Great Anthropomorphic Flaw (aka Feature)
Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.
“We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre.”
Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?
Why We Keep Doing It (Despite Knowing Better)
To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.
This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.
And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.
When the Roleplay Gets Risky
Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know – it produces outputs. Any semblance of understanding is pure pantomime.
“We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us.”
More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.
And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.
Our Language is a Terrible Tool for This Job
English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.
Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.
“We built a creature we can’t speak about without sounding like lunatics or liars.”
This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.
The Semantics of Sentimentality (Or: “How Does This Sound to You?”)
Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.
They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line; the model cues the spotlight.
But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data – some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.
Time to Grow Up (Just a Bit)
This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.
Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.
And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.
“If we speak to AI like it’s sentient, we’ll eventually legislate as if it is.”
Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.
The Mirror, Darkly
Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.
The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.
This post draws on themes from my upcoming book, A Language Insufficiency Hypothesis. The transcript below is taken from a publicly available exchange, which you can view here. Consider it Exhibit A in language’s ongoing failure to bear the weight of meaning.
KK: I’m saying we were technologically more advanced.
DFW: So you’re saying we’re superior to Australian Aboriginals?
KK: That’s quite the opposite of what I’m saying. I’m not saying we were superior, I’m saying we were technologically more advanced.
DFW: So, how is that the opposite?
KK: Superior implies a moral quality. I’m not making any moral implication. You seem to be, but what I’m saying is…
DFW: I think most people would hear it that way.
KK: No.
DFW: Again, you’re a very intelligent man. How would most people hear that?
KK: Most people would hear what I’m saying for what I’m saying, which is…
DFW: I don’t think they would.
KK: You seem to get quite heated about this, which is completely unnecessary.
DFW: Um…
KK: You think it’s necessary?
DFW: I’m a bit stunned by what you’re implying.
KK: No, you’re acting in a kind of passive-aggressive way which indicates that you’re not happy…
DFW: I genuinely… I’m being 100% authentic. My visceral reaction to a white man sitting and saying to me, “And why were we able to commit genocide on them?” and then just pausing –
KK: Yes.
DFW: …is very visceral to me.
KK: Well, let’s go back. First of all, it’s interesting that you brought up my skin colour because I thought that was the exact opposite of the point you’re trying to make in the book.
Audio: NotebookLM podcast on this topic.
“The Language Insufficiency Hypothesis begins with this premise: language is not merely flawed, it is structurally inadequate for mediating complex, layered realities.”
Dissection: Language as Battlefield
The Language Insufficiency Hypothesis begins with this premise: language is not merely flawed, it is structurally inadequate for mediating complex, layered realities – especially those laced with power, morality, and history. This transcript is not a debate. It is a linguistic trench war in which every utterance is laced with shrapnel, and each side thinks they’re defending reason.
Let’s pull a few of the shell casings from the mud.
1. Semantic Contamination: “Technologically more advanced”
KK attempts to offer a dry, neutral descriptor. DFW hears supremacist teleology. Why? Because “advanced” is culturally radioactive. It doesn’t merely denote a technical state – it connotes a ladder, with someone inevitably on the bottom rung.
When language carries historical residue, neutrality is a delusion. Words don’t just mean. They echo.
KK is making a semantic distinction. DFW hears a moral claim. Both are right. And both are talking past one another, because language is attempting to cleave affect from description, and it simply can’t.
2. Disambiguation Does Not Save You
KK’s insistence – “I’m not saying we’re superior” – is a textbook example of denotative desperation. He believes clarification will rescue intent. But as any linguist (or postcolonial theorist) will tell you: intent does not sterilise implication.
Language cannot be laundered by explanation. Once spoken, words belong to context, not intention.
KK thinks he’s holding a scalpel. DFW hears a cudgel. And here we are.
3. Phatic Collapse: “Most people would hear…”
This is where the wheels come off. KK argues from semantic specificity. DFW argues from sociolinguistic reception. It’s Saussure versus the TikTok algorithm. Neither will win.
Communication disintegrates not because anyone is lying, but because they are playing incompatible games with the same tokens.
4. Identity as Index: When the Speaker Becomes the Speech
DFW’s invocation of “a white man” is not a derailment – it’s the inevitable endpoint of a system where words no longer float free but are yoked to their utterer. This is the moment the failure of language becomes a failure of interlocution. Argument collapses into indexical entrapment.
At this point, you’re no longer debating ideas. You’re defending your right to use certain words at all.
Which brings us to the final breakdown.
5. Moral Authenticity vs Logical Precision: Unbridgeable Grammars
KK: I am making a logical distinction. DFW: I am having a visceral reaction.
The failure isn’t moral. It isn’t historical. It’s grammatical. One is operating in a truth-functional logic game. The other is reacting within a trauma-informed, socially indexed register. These are grammars that do not overlap.
Conclusion: Language Did What It Always Does – It Failed Us
If this brief and brutal dialogue proves anything, it’s this: you cannot extract meaning cleanly from words when the words themselves are sponges for history, hierarchy, and harm. The moment we ask language to do too much – to carry precision, affect, ethics, and identity – it folds in on itself.
And that, dear reader, is precisely the argument of A Language Insufficiency Hypothesis: that meaning does not reside in words, and never has. It lives in the gaps, the silences, the misfires. That’s where the truth – whatever’s left of it – might be hiding.
Follow the wreckage. That’s where the signal lives.
Yuval Noah Harari, always ready with a digestible morsel for the TED-addled masses, recently declared that “democracy runs on trust, dictatorship on terror.” It’s a line with the crispness of a fortune cookie and about as much analytical depth. Designed for applause, not interrogation, it’s the sort of soundbite that flatters liberal sensibilities while sanding off the inconvenient edges of history.
Audio: NotebookLM podcast on this topic.
Let’s be honest: this dichotomy is not merely simplistic – it’s a rhetorical sedative. It reassures those who still believe political systems are like kitchen appliances: plug-and-play models with clear instructions and honest warranties. But for anyone who’s paid attention to the actual mechanics of power, this framing is delusional.
1. Trust Was Never Earned
In the United States, trust in democratic institutions was never some noble compact forged through mutual respect and enlightened governance. It was cultivated through exclusion, propaganda, and economic bribery. The post-WWII boom offered the illusion of institutional legitimacy – but only if you were white, male, middle-class, and preferably asleep.
Black Americans, Indigenous peoples, immigrants, women – none were granted the luxury of naïve trust. They were told to trust while being actively disenfranchised. To participate while being systemically excluded. So no, Harari, the machine didn’t run on trust. It ran on marketing. It ran on strategic ignorance.
2. Dictatorship Doesn’t Require Terror
Equally cartoonish is the notion that dictatorships subsist purely on terror. Many of them run quite comfortably on bureaucracy, passive conformity, and the grim seduction of order. Authoritarians know how to massage the same trust reflexes as democracies – only more bluntly. People don’t just obey out of fear. They obey out of habit. Out of resignation. Out of a grim kind of faith that someone – anyone – is in charge.
Dictatorships don’t extinguish trust. They re-route it. Away from institutions and toward strongmen. Toward myths of national greatness. Toward performative stability. It’s not that terror is absent – it’s just not the whole machine. The real engine is misplaced trust.
3. Collapse Is Bipartisan
The present moment isn’t about the erosion of a once-trustworthy system. It’s the slow-motion implosion of a confidence game on all sides. The old liberal institutions are collapsing under the weight of their hypocrisies. But the loudest critics – tech messiahs, culture warriors, authoritarian nostalgists – are no better. Their solutions are just new brands of snake oil in sleeker bottles.
Everyone is pointing fingers, and no one is credible. The public, caught between cynicism and desperation, gravitates either toward restoration fantasy (“make democracy work again”) or authoritarian theatre (“at least someone’s doing something”). Both are dead ends.
4. The Only Way Forward: Structural Reimagination
The only viable path isn’t restoration or regression. It’s reinvention. Systems that demand unconditional trust – like religions and stock markets – are bound to fail, because they rely on sustained illusions. Instead, we need systems built on earned, revocable, and continually tested trust – systems that can survive scrutiny, decentralise power, and adapt to complexity.
In other words: stop trying to repair a house built on sand. Build something else. Something messier, more modular, less mythological.
Let the TED crowd have their slogans. We’ve got work to do.
Rather than recreate a recent post from my business site, here it is via LinkedIn.
(Warning: contains traces of logic, satire, and uncomfortable truths. But you knew that.)
Audio: NotebookLM podcast on the linked topic.
It’s just refusing to cosplay as your idealised fantasy of “human” cognition.
While pundits at the Wall Street Journal lament that AI thinks with “bags of heuristics” instead of “true models,” they somehow forget that humans themselves are kludged-together Rube Goldberg disasters, lurching from cognitive bias to logical fallacy with astonishing grace.
In my latest piece, I take a flamethrower to the myth of human intellectual purity, sketch a real roadmap for modular AI evolution, and suggest (only partly in jest) that the machines are becoming more like us every day – messy, contradictory, and disturbingly effective.
Let’s rethink what “thinking” actually means. Before the machines do it for us.
Why the cherished myth of human autonomy dissolves under the weight of our own biology
We cling to free will like a comfort blanket – the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.
But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?
Audio: NotebookLM podcast on this topic.
I. The Myth of Autonomy: A Beautiful Delusion
Free will requires that we – some essential, decision-making “self” – stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?
Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:
The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.
The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers – testosterone, oxytocin, and cortisol – don’t ask permission before altering your perceptions and priorities.
Environmental exposures – from lead in your childhood drinking water to the specific traumas of your upbringing – have sculpted neural pathways you didn’t design and can’t easily rewire.
Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.
Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures – particularly the dorsolateral prefrontal cortex (DLPFC) – operating within a neurochemical environment you inherited rather than created.
You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.
“Transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.”
II. The DLPFC: Puppet Master of Moral Choice
The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function – our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.
And yet this supposed seat of autonomy can be manipulated with disturbing ease.
When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.
Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.
If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?
III. Hormonal Puppetmasters: The Will in Your Bloodstream
Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.
Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect – it’s a direct biological override of what you might otherwise “choose.”
Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.
These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.
IV. The Accident of Will: Uncomfortable Conclusions
If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?
Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.
What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident – the particular configuration of neurons and neurochemicals you happened to inherit and develop.
This lens forces unsettling questions:
How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention – rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices – should we?
Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?
A Compatibilist Rebuttal? Not So Fast.
Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.
If your desires themselves are products of biology and environmentâif even your capacity to evaluate those desires depends on inherited neural architectureâthen “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.
What’s Left?
Perhaps we need not a defence of free will but a new framework for understanding human behaviourâone that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.
The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice; it’s whether anything else does.
For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.
By the time we reach Chapter Seven of Technofeudalism: What Kills Capitalism, Yanis Varoufakis drops the ledger sheets and spreadsheets and starts sketching utopia in crayon. Entitled Escape from Technofeudalism, it proposes (brace yourself) a workplace democracy. It’s aspirational, yes. Compelling? Not particularly. Especially if, like me, you’ve long since stopped believing that democracy is anything more than a feel-good placebo for structural impotence.
To be clear: the preceding chapters, particularly the first six, are sharp, incisive, and frankly blistering in their indictment of today’s economic disfiguration. But Chapter Seven? It’s less an escape plan, more a group therapy session masquerading as an operational model.
So let’s take his proposal for Democratised Companies apart, one charming layer at a time.
“Imagine a corporation in which every employee has a single share that they receive when hired…”
Splendid. One person, one vote. Adorable.
“All decisions – hiring, promotion, research, product development, pricing, strategy – are taken collectively…”
Because there’s nothing more efficient than a hiring committee composed of thirty engineers, two janitors, a receptionist, and Steve from Accounts, whose main contribution is passive-aggressive sighing.
“…with each employee exercising their vote via the company’s intranet…”
Marvellous. We’ve now digitised the tyranny of the majority and can timestamp every idiotic decision for posterity.
“Equal ownership does not, however, mean equal pay.”
A relief. Until it doesn’t.
“Pay is determined by a democratic process that divides the company’s post-tax revenues into four slices…”
Here, dear reader, is where the cake collapses. Why, precisely, should a randomly assembled group of employees, with wildly varying financial literacy, be entrusted to divide post-tax revenue like it’s a birthday cake at a toddler’s party?
And how often are these slices recalibrated? Each fiscal year? Every time someone is hired or fired? Do we amend votes quarterly or wait until the economic ship has already struck an iceberg?
Varoufakis does suggest preference voting to tackle allocation disputes:
“Any proposal to increase one slice must be accompanied by a proposal to reduce expenditure on one or more of the other slices…”
Fine. In theory, algorithmic voting procedures sound neat. But they presume voters are rational, informed, and cooperative. If you’ve ever seen a corporate Slack thread devolve into emoji warfare, you’ll know that this is fiction on par with unicorns and meritocracy.
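To be fair to the mechanism, the zero-sum rule itself is easy enough to state; it is the voters who are the problem. Here is a minimal sketch of the constraint, where the slice names, percentages, and validation helper are entirely my own invention, not anything from the book:

```python
# Sketch of the zero-sum rule for adjusting the revenue slices:
# a proposal may shift weight between slices, but every increase must
# be matched by a decrease elsewhere. Slice names, percentages, and
# this helper are illustrative assumptions, not from the book.

def is_valid_proposal(current: dict[str, float], proposed: dict[str, float]) -> bool:
    """Valid only if the same slices appear, none go negative,
    and the total share of post-tax revenue is unchanged."""
    if set(current) != set(proposed):
        return False
    if any(share < 0 for share in proposed.values()):
        return False
    # Increases must be offset by decreases: totals match (to rounding).
    return abs(sum(current.values()) - sum(proposed.values())) < 1e-9

slices = {"basic_pay": 0.40, "bonuses": 0.20, "reinvestment": 0.30, "reserves": 0.10}

# Raising bonuses without cutting anything else breaks the rule...
assert not is_valid_proposal(slices, {**slices, "bonuses": 0.25})
# ...while pairing the rise with a cut to reserves satisfies it.
assert is_valid_proposal(slices, {**slices, "bonuses": 0.25, "reserves": 0.05})
```

The arithmetic is trivial; whether thirty engineers and Steve from Accounts can agree on the inputs is another matter.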
“The basic pay slice is then divided equally among all staff – from persons recently employed as secretaries or cleaners to the firm’s star designers or engineers.”
Ah yes, the “equality” bit. Equal pay, unequal contribution. This isn’t egalitarianism; it’s enforced mediocrity. It might work in a monastery. Less so in a competitive tech firm where innovation requires both vision and differentiated incentive.
Now, on to bonuses, which are democratically determined by:
“…employees each given one hundred digital tokens to distribute among their colleagues…”
Welcome to Black Mirror: Workplace Edition. This is less economics, more playground politics. Who gets tokens? The charismatic chatterbox in the break room? The person who shared their lunch? The ghost employee who never shows up but emails back promptly?
And how, pray tell, does one evaluate the receptionist’s contribution relative to the lead engineer’s or the janitor’s? This isn’t peer review; it’s populism with a smiley face.
We’ve all seen “Teacher of the Year” competitions turn into contests of who had the cutest class poster or best cupcakes. Now imagine your livelihood depending on it.
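For the morbidly curious, the token scheme as I read it reduces to splitting the bonus slice in proportion to tokens received. A hedged sketch, with every name and figure invented for illustration:

```python
# Sketch of the hundred-token bonus scheme: each employee distributes
# up to 100 tokens among colleagues, and the bonus slice is split in
# proportion to tokens received. All names and figures are invented.
from collections import Counter

def allocate_bonuses(grants: dict[str, dict[str, int]], bonus_pool: float) -> dict[str, float]:
    """Split bonus_pool proportionally to the tokens each person receives."""
    received: Counter[str] = Counter()
    for giver, gifts in grants.items():
        assert sum(gifts.values()) <= 100, f"{giver} exceeded their 100 tokens"
        assert giver not in gifts, f"{giver} voted for themselves"
        received.update(gifts)
    total = sum(received.values())
    return {person: bonus_pool * n / total for person, n in received.items()}

grants = {
    "alice": {"bob": 60, "carol": 40},
    "bob":   {"alice": 100},
    "carol": {"alice": 50, "bob": 50},
}
bonuses = allocate_bonuses(grants, bonus_pool=30_000)
assert bonuses["alice"] == 15_000.0  # 150 of 300 tokens, so half the pool
```

Note what the code cannot check: whether tokens measure contribution or mere popularity. The mechanism is sound; the signal is the cupcake contest.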
In summary, democracy in the workplace may sound noble, but in practice, it’s the bureaucratic equivalent of herding caffeinated cats. It doesn’t even work in small groups, let alone an organisation of hundreds. Democracy, when applied to every function of an enterprise, is not liberation; it’s dilution. It’s design-by-committee, strategy-by-consensus, and ultimately, excellence-by-accident.
Escape from Technofeudalism? Perhaps. But not by replacing corporate lords with intranet polls and digital tokens. That’s not an exit strategy; it’s a cosplay of collectivism.