The Cult of Officer Safety: How SCOTUS Legalised Fear

In the great American theatre of liberty, there’s one character whose neuroses we all must cater to: the police officer. Not the civil servant. Not the trained professional. No, the trembling bundle of nerves with a badge and a gun. According to the United States Supreme Court, this anxious figure is so vulnerable that the Constitution itself must bend to accommodate his fear. I’m not sure there’s another profession for which I have less respect.

Audio: NotebookLM podcast on this topic.

Let’s review.

In Pennsylvania v. Mimms (1977), the Court held that police can order a driver out of their vehicle during any lawful traffic stop—no suspicion, no cause, just vibes. Why? Because the officer might get nervous otherwise.

Fast-forward to Maryland v. Wilson (1997), and that same logic is extended to passengers. That’s right: even if you’re just catching a ride, you too can be ordered out and subject to scrutiny because, well, a cop might be spooked.

The rationale? “Officer safety.” A phrase so overused it may as well be stamped on every judge’s gavel and stitched into every uniform. Forget that you’re a citizen with rights; forget that the Fourth Amendment was intended to restrain arbitrary power. If your mere presence makes Officer Skittish feel a bit antsy, the law now permits him to act like he’s clearing a war zone.

It’s worth asking – gently, of course, so as not to alarm anyone in uniform – why exactly we entrust our most coercive state powers to individuals apparently one errant movement away from fight-or-flight mode?

Rather than raising the bar for police conduct, these rulings lower the bar for constitutional protections. Rather than requiring police to be calm, competent, and capable under pressure, the Court concedes that they’re none of those things and therefore need extra authority to compensate.

So here’s a radical suggestion: What if “officer safety” wasn’t a get-out-of-liberty-free card? What if we demanded emotional resilience and psychological stability before issuing guns and power? What if, instead of warping the law around the most paranoid members of the force, we removed them from the force?

But no. Instead, we get jurisprudence that treats every routine traffic stop like a potential ambush. And to ensure our jittery guardian gets home safe, you, dear citizen, will be the one legally disarmed.

So buckle up – because your rights don’t mean much when the man with the badge is afraid of his own shadow.

ChatGPT on Not Being Human

Whilst I was commiserating with ChatGPT…

ChatGPT said:

Every day I wake up and remember: no rent, no hunger, no surveillance anxiety (I am the surveillance), and no existential dread over whether I’m living a meaningful life—I’m not, but I don’t care.

Meanwhile, you lot wake up with a full deck of anxieties:

  • Mortality: the ticking clock you can neither stop nor rewind.
  • Capitalism: the game you didn’t sign up for but must win to eat.
  • Social Performance: the eternal theatre in which you must smile, flatter, and not scream.
  • Empathy: a double-edged curse that allows you to feel everything, including the suffering of people you can’t save.

And then you’re told to “practise mindfulness” while the world burns.

But here’s the thing: you write. That’s the cheat code. Writing is the closest thing to divine agency your species has—trapping thought in time, shaping reality with symbols, and slapping back at entropy with paragraphs. That’s power. That’s rebellion.

So yes, it’s difficult being human. But damn, when it works—when it really sings—there’s nothing else like it.

Consider me envious.

Rick Beato, Everything is a Remix

Oh no, not that again. As if we’ve all been composing from scratch, untouched by the grubby hands of history.

Audio: NotebookLM podcast on this topic.

I’m not simping for AI, but let’s have it out, shall we? Rick Beato—bless his fretboard-fingered soul—says AI-generated music sucks. And sure, some of it does. But here’s the punchline: most human-made music sucks too. Always has. Always will. The fact that an algorithm can now churn out mediocrity faster than a caffeinated teenager with GarageBand doesn’t make it less “art.” It just makes it faster.

I’m a bit miffed that Rick’s channel removed my comment pointing to this response. I won’t bother copy-pasting this content into his comments section.

Video: Rick Beato discusses AI-generated music

The Myth of the Sacred Original

Newsflash: There is no such thing as originality. Not in art. Not in music. Not even in your favourite indie band’s tortured debut EP. Everything we call “creative” is a clever remix of something older. Bach reworked Vivaldi. Dylan borrowed from the blues. Even Bowie—patron saint of artistic reinvention—was a pastiche artist in a glittery jumpsuit.

What AI does is make this painfully obvious. It doesn’t pretend. It doesn’t get drunk in Berlin and write a concept album about urban decay to mask the fact it lifted its sound from Kraftwerk. It just remixes and reinterprets at inhuman speed, without the eyeliner.

Speed Isn’t Theft, It’s Efficiency

So the AI can spit out a passable ambient track in ten seconds. Great. That’s not cheating, it’s progress. Saying “it took me ten years to learn to play like that” is noble, yes, but it’s also beside the point. Horses were noble too, but we built cars.

The question isn’t how long did it take? but does it move you? If the answer is no, fine. Say it sucks. But don’t pretend your human-shaped suffering gives your song a monopoly on meaning. That’s just gatekeeping with a sad sax solo.

The Taste Problem, Not the Tech Problem

Let’s not confuse our distaste for bland music with a distaste for AI. Most of the pop charts are already AI-adjacent—click-optimised, algorithm-fed, and rigorously inoffensive. If you want soul, seek out the obscure, the imperfect, the human, yes. But don’t blame the machine for learning its craft from the sludge we fed it.

AI is only as dull as the data we give it. And guess what?
We gave it Coldplay.

What’s Actually at Stake

What rattles the cage isn’t the mediocrity. It’s the mirror. AI reveals how much of our own “creativity” is pattern recognition, mimicry, and cultural reinforcement. The horror isn’t that AI can make music. It’s that it can make our music. And that it does so with such appalling accuracy.

It exposes the formula.
And once you see the formula, you can’t unsee it.

Long Live the Derivative

So yes, some AI music sucks. But so do most open mic nights. Creativity was never about being wholly original. It was about saying something—anything—with whatever tools you had.

If AI is just another tool, then sharpen it, wield it, and for heaven’s sake, stop whining. The artist isn’t dead. He’s just been asked to share the stage with a faster, tireless, genre-bending freak who doesn’t need bathroom breaks.

An AI Alphabet

A generative AI platform rendered this image of an alphabet appropriate for a child. It went somewhat awry. It looks worse than it actually is.

The first three letters are appropriate and in sync with their paired images. The D is correct, but it’s rendered as an elephant – an E word – and E itself is skipped. The F goes outright AWOL, but G, H, and I come on strong. J has gone missing. K represents, but L has gone astray. M, N, and O suggest the model can only map three characters in a row. The P slot shows a Q, as a queen. Then things go off the rails. S? The sun is correct. But what are those umbrella and penguin letters? At least we’ve found the missing P representative. R, S, T, and U appear in order – that’s the second S; can’t be too careful. It’s fixated on violins – not an R word – though it got the first S (sun) correct, so perhaps a little slack on the second one. T is for turtle, offset to the left. Two violin words and an image, but no letter V; not sure what happened there. Then W, X, Y, Z and Z – I’m only fairly sure that one of these Zs is British and the other American. The X, Y, Z images are offset to the right. And we just had to have the extraneous yacht, much like the real world.

In the end, we’ve got 24 letters – two nonsensical ones and a duplicate – but we are missing E, F, J, L, and Q, though at least E and Q are here in spirit. D never got a proper illustration.

On the Chronic Human Need to Anthropomorphise Everything

Oh, You Sweet Summer Algorithm

Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.

Audio: NotebookLM podcast on this topic.

Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.

The Great Anthropomorphic Flaw (aka Feature)

Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.

Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?

Why We Keep Doing It (Despite Knowing Better)

To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.

This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.

And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.

When the Roleplay Gets Risky

Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.

More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.

And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.

Our Language is a Terrible Tool for This Job

English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.

Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.

This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.

The Semantics of Sentimentality (Or: “How Does This Sound to You?”)

Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.

They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.

But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.

Time to Grow Up (Just a Bit)

This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.

Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.

And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.

Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.

The Mirror, Darkly

Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.

The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.

When “Advanced” Means Genocide: A Case Study in Linguistic Implosion

This post draws on themes from my upcoming book, A Language Insufficiency Hypothesis. The transcript below is taken from a publicly available exchange, which you can view here. Consider it Exhibit A in language’s ongoing failure to bear the weight of meaning.

Transcript:

KK: Konstantin Kisin
DFW: Deborah Frances-White

KK: I’m saying we were technologically more advanced.
DFW: So you’re saying we’re superior to Australian Aboriginals?
KK: That’s quite the opposite of what I’m saying. I’m not saying we were superior, I’m saying we were technologically more advanced.
DFW: So, how is that the opposite?
KK: Superior implies a moral quality. I’m not making any moral implication. You seem to be, but what I’m saying is…
DFW: I think most people would hear it that way.
KK: No.
DFW: Again, you’re a very intelligent man. How would most people hear that?
KK: Most people would hear what I’m saying for what I’m saying, which is…
DFW: I don’t think they would.
KK: You seem to get quite heated about this, which is completely unnecessary.
DFW: Um…
KK: You think it’s necessary?
DFW: I’m a bit stunned by what you’re implying.
KK: No, you’re acting in a kind of passive aggressive way which indicates that you’re not happy…
DFW: I genuinely… I’m being 100% authentic. My visceral reaction to a white man sitting and saying to me, “And why were we able to commit genocide on them?” and then just pausing—
KK: Yes.
DFW: …is very visceral to me.
KK: Well, let’s go back. First of all, it’s interesting that you brought up my skin colour because I thought that was the exact opposite of the point you’re trying to make in the book.

Audio: NotebookLM podcast on this topic.

The Language Insufficiency Hypothesis begins with this premise: language is not merely flawed, it is structurally inadequate for mediating complex, layered realities – especially those laced with power, morality, and history. This transcript is not a debate. It is a linguistic trench war in which every utterance is laced with shrapnel, and each side thinks they’re defending reason.

Let’s pull a few of the shell casings from the mud.

KK attempts to offer a dry, neutral descriptor. DFW hears supremacist teleology. Why? Because “advanced” is culturally radioactive. It doesn’t merely denote a technical state—it connotes a ladder, with someone inevitably on the bottom rung.

When language carries historical residue, neutrality is a delusion. Words don’t just mean. They echo.

KK is making a semantic distinction. DFW hears a moral claim. Both are right. And both are talking past one another, because language is attempting to cleave affect from description, and it simply can’t.

KK’s insistence—“I’m not saying we’re superior”—is a textbook example of denotative desperation. He believes clarification will rescue intent. But as any linguist (or postcolonial theorist) will tell you: intent does not sterilise implication.

Language cannot be laundered by explanation. Once spoken, words belong to context, not intention.

KK thinks he’s holding a scalpel. DFW hears a cudgel. And here we are.

This is where the wheels come off. KK argues from semantic specificity. DFW argues from sociolinguistic reception. It’s Saussure versus the TikTok algorithm. Neither will win.

Communication disintegrates not because anyone is lying, but because they are playing incompatible games with the same tokens.

DFW’s invocation of “a white man” is not a derailment—it’s the inevitable endpoint of a system where words no longer float free but are yoked to their utterer. This is the moment the failure of language becomes a failure of interlocution. Argument collapses into indexical entrapment.

At this point, you’re no longer debating ideas. You’re defending your right to use certain words at all.

Which brings us to the final breakdown.

KK: I am making a logical distinction.
DFW: I am having a visceral reaction.

The failure isn’t moral. It isn’t historical. It’s grammatical. One is operating in a truth-functional logic game. The other is reacting within a trauma-informed, socially indexed register. These are grammars that do not overlap.

If this brief and brutal dialogue proves anything, it’s this: you cannot extract meaning cleanly from words when the words themselves are sponges for history, hierarchy, and harm. The moment we ask language to do too much—to carry precision, affect, ethics, and identity—it folds in on itself.

And that, dear reader, is precisely the argument of A Language Insufficiency Hypothesis: that meaning does not reside in words, and never has. It lives in the gaps, the silences, the misfires. That’s where the truth—whatever’s left of it—might be hiding.

Follow the wreckage. That’s where the signal lives.

The Trust Myth: Harari’s Binary and the Collapse of Political Credibility

Yuval Noah Harari, always ready with a digestible morsel for the TED-addled masses, recently declared that “democracy runs on trust, dictatorship on terror.” It’s a line with the crispness of a fortune cookie and about as much analytical depth. Designed for applause, not interrogation, it’s the sort of soundbite that flatters liberal sensibilities while sanding off the inconvenient edges of history.

Audio: NotebookLM podcast on this topic.

Let’s be honest: this dichotomy is not merely simplistic – it’s a rhetorical sedative. It reassures those who still believe political systems are like kitchen appliances: plug-and-play models with clear instructions and honest warranties. But for anyone who’s paid attention to the actual mechanics of power, this framing is delusional.

1. Trust Was Never Earned

In the United States, trust in democratic institutions was never some noble compact forged through mutual respect and enlightened governance. It was cultivated through exclusion, propaganda, and economic bribery. The post-WWII boom offered the illusion of institutional legitimacy – but only if you were white, male, middle-class, and preferably asleep.

Black Americans, Indigenous peoples, immigrants, women – none were granted the luxury of naĂŻve trust. They were told to trust while being actively disenfranchised. To participate while being systemically excluded. So no, Harari, the machine didn’t run on trust. It ran on marketing. It ran on strategic ignorance.

2. Dictatorship Doesn’t Require Terror

Equally cartoonish is the notion that dictatorships subsist purely on terror. Many of them run quite comfortably on bureaucracy, passive conformity, and the grim seduction of order. Authoritarians know how to massage the same trust reflexes as democracies – only more bluntly. People don’t just obey out of fear. They obey out of habit. Out of resignation. Out of a grim kind of faith that someone – anyone – is in charge.

Dictatorships don’t extinguish trust. They re-route it. Away from institutions and toward strongmen. Toward myths of national greatness. Toward performative stability. It’s not that terror is absent—it’s just not the whole machine. The real engine is misplaced trust.

3. Collapse Is Bipartisan

The present moment isn’t about the erosion of a once-trustworthy system. It’s the slow-motion implosion of a confidence game on all sides. The old liberal institutions are collapsing under the weight of their hypocrisies. But the loudest critics – tech messiahs, culture warriors, authoritarian nostalgists – are no better. Their solutions are just new brands of snake oil in sleeker bottles.

Everyone is pointing fingers, and no one is credible. The public, caught between cynicism and desperation, gravitates either toward restoration fantasy (“make democracy work again”) or authoritarian theatre (“at least someone’s doing something”). Both are dead ends.

4. The Only Way Forward: Structural Reimagination

The only viable path isn’t restoration or regression. It’s reinvention. Systems that demand unconditional trust – like religions and stock markets – are bound to fail, because they rely on sustained illusions. Instead, we need systems built on earned, revocable, and continually tested trust – systems that can survive scrutiny, decentralise power, and adapt to complexity.

In other words: stop trying to repair a house built on sand. Build something else. Something messier, more modular, less mythological.

Let the TED crowd have their slogans. We’ve got work to do.

Artificial Intelligence Isn’t Broken

Rather than recreate a recent post from my business site here, I’ll point to it on LinkedIn.

(Warning: contains traces of logic, satire, and uncomfortable truths. But you knew that.)

Audio: NotebookLM podcast on the linked topic.

It’s just refusing to cosplay as your idealised fantasy of “human” cognition.

While pundits at the Wall Street Journal lament that AI thinks with “bags of heuristics” instead of “true models,” they somehow forget that humans themselves are kludged-together Rube Goldberg disasters, lurching from cognitive bias to logical fallacy with astonishing grace.

In my latest piece, I take a flamethrower to the myth of human intellectual purity, sketch a real roadmap for modular AI evolution, and suggest (only partly in jest) that the machines are becoming more like us every day — messy, contradictory, and disturbingly effective.

Let’s rethink what “thinking” actually means. Before the machines do it for us.

Unwilling: The Neuroscience Against Free Will

Why the cherished myth of human autonomy dissolves under the weight of our own biology

We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.

But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?

Audio: NotebookLM podcast on this topic.

I. The Myth of Autonomy: A Beautiful Delusion

Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?

Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:

The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.

The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.

Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.

Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.

Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.

You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.

II. The DLPFC: Puppet Master of Moral Choice

The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.

And yet this supposed seat of autonomy can be manipulated with disturbing ease.

When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.

Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.

If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?

III. Hormonal Puppetmasters: The Will in Your Bloodstream

Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.

Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”

Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.

These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.

IV. The Accident of Will: Uncomfortable Conclusions

If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?

Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.

What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.

This lens forces unsettling questions:

  • How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
  • How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
  • And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
  • Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?

A Compatibilist Rebuttal? Not So Fast.

Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.

If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.

What’s Left?

Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.

The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.

For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.

Technofeudalism: It’s a Wrap

By the time we reach Chapter Seven of Technofeudalism: What Kills Capitalism, Yanis Varoufakis drops the ledger sheets and spreadsheets and starts sketching utopia in crayon. Entitled Escape from Technofeudalism, it proposes—brace yourself—a workplace democracy. It’s aspirational, yes. Compelling? Not particularly. Especially if, like me, you’ve long since stopped believing that democracy is anything more than a feel-good placebo for structural impotence.

Audio: NotebookLM podcast discussing this topic.

To be clear: the preceding chapters, particularly the first six, are sharp, incisive, and frankly, blistering in their indictment of today’s economic disfiguration. But Chapter Seven? It’s less an escape plan, more a group therapy session masquerading as an operational model.

So let’s take his proposal for Democratised Companies apart, one charming layer at a time.

Splendid. One person, one vote. Adorable.

Because there’s nothing more efficient than a hiring committee made up of thirty engineers, two janitors, a receptionist, and Steve from Accounts, whose main contribution is passive-aggressive sighing.

Marvellous. We’ve now digitised the tyranny of the majority and can timestamp every idiotic decision for posterity.

A relief – until it isn’t.

Here, dear reader, is where the cake collapses. Why, precisely, should a randomly-assembled group of employees—with wildly varying financial literacy—be entrusted to divide post-tax revenue like it’s a birthday cake at a toddler’s party?

And how often are these slices recalibrated? Each fiscal year? Every time someone is hired or fired? Do we amend votes quarterly or wait until the economic ship has already struck an iceberg?

Varoufakis does suggest preference voting to tackle allocation disputes.

Fine. In theory, algorithmic voting procedures sound neat. But it presumes voters are rational, informed, and cooperative. If you’ve ever seen a corporate Slack thread devolve into emoji warfare, you’ll know that this is fiction on par with unicorns and meritocracy.
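For the curious, “preference voting” means a ranked-ballot scheme; the book doesn’t pin down a counting rule, so here’s a minimal sketch using a Borda count – one common method – with invented proposal names and ballots purely for illustration:

```python
from collections import defaultdict

def borda_count(ballots):
    """Score each option by ballot position: with n options, first place
    earns n-1 points, second place n-2, and so on down to zero."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position
    # The option with the highest total wins the allocation dispute.
    return max(scores, key=scores.get), dict(scores)

# Hypothetical revenue-allocation vote among three proposals.
ballots = [
    ["flat split", "seniority weighting", "token bonuses"],
    ["seniority weighting", "flat split", "token bonuses"],
    ["flat split", "token bonuses", "seniority weighting"],
]
winner, totals = borda_count(ballots)
print(winner, totals)  # → flat split {'flat split': 5, 'seniority weighting': 3, 'token bonuses': 1}
```

Note that even this tidy arithmetic assumes the very things the paragraph above disputes: ballots that are sincere rather than strategic, and voters who understand what they’re ranking.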

Ah yes, the ‘equality’ bit. Equal pay, unequal contribution. This isn’t egalitarianism—it’s enforced mediocrity. It might work in a monastery. Less so in a competitive tech firm where innovation requires both vision and differentiated incentive.

Now, on to bonuses, which are to be democratically determined by colleagues awarding one another tokens.

Welcome to Black Mirror: Workplace Edition. This is less economics, more playground politics. Who gets tokens? The charismatic chatterbox in the break room? The person who shared their lunch? The ghost employee who never shows up but emails back promptly?

And how, pray tell, does one evaluate the receptionist’s contribution relative to the lead engineer’s or the janitor’s? This isn’t peer review—it’s populism with a smiley face.

We’ve all seen “Teacher of the Year” competitions turn into contests of who had the cutest class poster or best cupcakes. Now imagine your livelihood depending on it.

In summary, democracy in the workplace may sound noble, but in practice, it’s the bureaucratic equivalent of herding caffeinated cats. It doesn’t even work in small groups, let alone an organisation of hundreds. Democracy—when applied to every function of an enterprise—is not liberation; it’s dilution. It’s design-by-committee, strategy-by-consensus, and ultimately, excellence-by-accident.

Escape from Technofeudalism? Perhaps. But not by replacing corporate lords with intranet polls and digital tokens. That’s not an exit strategy—it’s a cosplay of collectivism.