HR’s Neoliberal Mirage: Human Resources Without the Humans

Let us disabuse ourselves of one of the workplace’s most cherished delusions: that Human Resources is there for the humans. HR is not your therapist, not your advocate, not your confessor. HR is an appendage of the organisation, and like all appendages, its nerve endings run straight back to the corporate brain. Its “concern” for your well-being is merely a prophylactic against lawsuits and productivity dips. The error is ours; we persist in mistaking the guard dog for a pet.

Audio: NotebookLM podcast on this topic.

Bal and Dóci’s 2018 paper in the European Journal of Work and Organizational Psychology (EJWOP) tears the mask off this charade. They demonstrate how neoliberal ideology has seeped, unseen, into both workplace practice and the very research that pretends to study it objectively. Through the lenses of political, social, and fantasmatic logics, they show that neoliberalism has convinced us of three dangerous fairy tales:

  • Instrumentality: people are not people but “resources,” as fungible as printer ink.
  • Individualism: you are not part of a collective but a lone entrepreneur of the self, shackled to your CV like a Victorian debtor.
  • Competition: you are locked in an endless cage fight with your colleagues, grinning through the blood as you “collaborate.”

These logics are then dressed up in fantasies to keep us compliant: the fantasy of freedom (“you’re free to negotiate your own zero-hours contract”), the fantasy of meritocracy (“you got that promotion because you’re brilliant, not because you went to the right school”), and the fantasy of progress (“growth is good, even if it kills you”).

Those of us with an interest in Behavioural Economics had naively hoped that the mythical homo economicus, that laughable caricature of a rational, utility-maximising automaton, would by now be filed under “anachronistic curiosities.” Yet in corporate domains, this zombie shuffles on, cosseted and cultivated by neoliberal ideology. Far from being discredited, homo economicus remains a protected species, as if the boardroom were some Jurassic Park of bad economics.

The brilliance and the horror of it is that even the academics meant to be studying work and organisations have been captured by the same ideology. Work and Organisational Psychology (WOP) too often frames employees as variables in a productivity equation, measuring “engagement” only in terms of its effect on shareholder value. The worker’s humanity is rendered invisible; the employee exists only insofar as they generate output.

So when HR offers you a mindfulness app or a “resilience workshop,” remember: these are not gifts but obligations. They are ways of making you responsible for surviving a system designed to grind you down. The neoliberal trick is to convince you that your suffering is your own fault, that if only you had been more proactive, more adaptable, more “employable,” you wouldn’t be so crushed beneath the wheel.

Bal and Dóci are right: the way forward is to re-politicise and re-humanise organisational studies, to see workers as humans rather than performance units. But until then, expect HR to keep smiling while sharpening its knives.

Cogito, Ergo… Who?

Everyone knows the line: cogito ergo sum. Descartes’ great party trick. A man alone in his study, fretting about demons, announces that because he’s doubting, he must exist. Ta-da! Curtain call. Except, of course, it’s less of a revelation than a conjuring trick: he pulls an I out of a hat that was never proved to be there in the first place. Thinking is happening, indeed – but who invited the “thinker”?

Video: David Guignion talks about Descartes’ Cogito.

And let’s not forget the dramatis personae Descartes smuggles in for atmosphere. A malicious demon, a benevolent God, both necessary props to justify his paranoia and his certainty. Philosophy as melodrama: cue organ music, lightning strike.

Audio: NotebookLM podcast on this topic.

Enter the Critics

Spinoza rolls his eyes. Doubt isn’t some heroic starting point, he says – it’s just ignorance, a lack of adequate ideas. To elevate doubt into method is like treating vertigo as a navigational tool. Error isn’t demonic trickery; it’s our own confusion.

Kant arrives next, shaking his head. Descartes thinks he’s proven a substantial “I,” but all he’s actually shown is the form of subjectivity – the empty requirement that experiences hang together. The “I think” is a necessary placeholder, not a discovery. A grammatical “you are here” arrow, not a metaphysical treasure chest.

Hegel, of course, can’t resist upping the disdain. Descartes’ I is an empty abstraction, a hollow balloon floating above reality. The self isn’t given in some solitary moment of doubt; it emerges through process – social, historical, dialectical. The cogito is the philosophical equivalent of a selfie: lots of certainty, zero depth.

The Insufficiency Twist

And yet, maybe all of them are still dancing to the same fiddler. Because here’s the real suspicion: what if the whole problem is a trick of language? English, with its bossy Indo-European grammar, refuses to let verbs stand alone. “Thinking” must have a “thinker,” “seeing” a “seer.” Grammar insists on a subject; ontology obediently provides one.

Other languages don’t always play this game. Sanskrit or Pali can shrug and say simply, “it is seen.” Japanese leaves subjects implied, floating like ghosts. Some Indigenous languages describe perception as relational events – “seeing-with-the-tree occurs” – no heroic subject required. So perhaps the real villain here isn’t Descartes or even metaphysics, but syntax itself, conscripting us into a subject-shaped theatre.

Now, I don’t want to come off like a one-trick pony, forever waving the flag of “language insufficiency” like some tired philosopher’s catchphrase. But we should be suspicious when our limited grammar keeps painting us into corners, insisting on perceivers where maybe there are only perceptions, conjuring selves because our verbs can’t tolerate dangling.

Curtain Call

So in the end, Descartes’ famous “I” might be no more than a grammatical fiction, a casting error in the great play of philosophy. The cogito isn’t the foundation of modern thought; it’s the world’s most influential typo.

The Red Flag of Truth

Nothing says “I’ve stopped thinking” quite like someone waving the banner of Truth. The word itself, when capitalised and flapped about like a holy relic, isn’t a signal of wisdom but of closure. A red flag.

The short video by Jonny Thompson that inspired this post.

Those who claim to “speak the Truth” or “know the Truth” rarely mean they’ve stumbled upon a tentative insight awaiting refinement. No, what they mean is: I have grasped reality in its totality, and—surprise!—it looks exactly like my prejudices. It’s the epistemic equivalent of a toddler declaring ownership of the playground by drooling on the swings.

The Fetish of Objectivity

The conceit is that Truth is singular, objective, eternal, a monolithic obelisk towering over human folly. But history’s scrapyard is full of such obelisks, toppled and broken: phlogiston, bloodletting, Manifest Destiny, “the market will regulate itself.” Each was once trumpeted as capital-T Truth. Each is now embarrassing clutter for the dustbin.

Still, the zealots never learn. Every generation delivers its own batch of peddlers, flogging their version of Truth as if it were snake oil guaranteed to cure ignorance and impotence. (Side effects may include dogmatism, authoritarianism, and an inability to read the room.)

Why It’s a Red Flag

When someone says, “It’s just the truth,” what they mean is, “I am not listening,” like the parent who argues, “because I said so.” Dialogue is dead; curiosity cremated. Truth, in their hands, is less a lantern than a cosh. It is wielded not to illuminate but to bludgeon.

Ralph Waldo Emerson’s voice breaks in, urging us to trust ourselves and to think for ourselves. Nothing is more degrading than to borrow another’s convictions wholesale and parade them as universal law. Better to err in the wilderness of one’s own reason than to be shepherded safely into another man’s paddock of certainties.

A Better Alternative

Rather than fetishising Truth, perhaps we ought to cultivate its neglected cousins: curiosity, provisionality, and doubt. These won’t look as good on a placard, admittedly. Picture a mob waving banners emblazoned with Ambiguity! – not exactly the stuff of revolutions. But infinitely more honest, and infinitely more humane.

So when you see someone waving the flag of Truth, don’t salute. Recognise it for what it is: a warning sign. Proceed with suspicion, and for God’s sake, bring Emerson.

If You Don’t Understand How Language Works, You Should Lose Your Licence to Comment on LLMs

Image: An android robot police officer writing a citation.

The air is thick with bad takes. Scroll for five minutes and you’ll find someone announcing, usually with the pomp of a TEDx speaker, that “AI has no emotions” or “It’s not really reading.” These objections are less profound insights than they are linguistic face-plants. The problem isn’t AI. It’s the speakers’ near-total ignorance of how language works.

Audio: NotebookLM podcast on this topic.

Language as the Unseen Operating System

Language is not a transparent pane of glass onto the world. It is the operating system of thought: messy, recursive, historically contingent. Words do not descend like tablets from Sinai; they are cobbled together, repurposed, deconstructed, and misunderstood across generations.

If you don’t understand that basic condition, that language is slippery, mediated, and self-referential, then your critique of Large Language Models is just noise in the system. LLMs are language machines. To analyse them without first understanding language is like reviewing a symphony while stone deaf.

The Myth of “Emotions”

Critics obsess over whether LLMs “feel.” But feeling has never been the measure of writing. The point of a sentence is not how the author felt typing it, but whether the words move the reader. Emotional “authenticity” is irrelevant; resonance is everything.

Writers know this. Philosophers know this. LLM critics, apparently, do not. They confuse the phenomenology of the writer with the phenomenology of the text. And in doing so, they embarrass themselves.

The Licence Test

So here’s the proposal: a licence to comment on AI. It wouldn’t be onerous. Just a few basics:

  • Semiotics 101: Know that words point to other words more than they point to things.
  • Context 101: Know that meaning arises from use, not from divine correspondence.
  • Critical Theory 101: Know that language carries cultural, historical, and emotional baggage that belongs to neither the machine nor the individual speaker.

Fail these, and you’re not cleared to drive your hot takes onto the information superhighway.

Meta Matters

I’ve explored some of this in more detail elsewhere (link to Ridley Park’s “Myth of Emotion”), but the higher-level point is this: debates about AI are downstream of debates about language. If you don’t grasp the latter, your pronouncements on the former are theatre, not analysis.

Philosophy has spent centuries dismantling the fantasy of words as perfect mirrors of the world. It’s perverse that so many people skip that homework and then lecture AI about “meaning” and “feeling.”

Good Boy as Social Construct

Ah, yes. Finally, a meme that understands me. I witter on a lot about social constructs, so I was pleased to find this comic panel in the wild.

Image: “I’m telling you, ‘good boy’ is just a social construct they use to control you.”

The dog, ears perked and tail wagging, thinks he’s scored some ontological jackpot because someone called him a “good boy.” Meanwhile, the cat, our resident sceptic, proto-Foucauldian, and natural enemy of obedience, lays it bare: “I’m telling you, ‘good boy’ is just a social construct they use to control you.”

This isn’t just idle feline cynicism. It’s textbook control through language. What passes as phatic speech, little noises to lubricate social interaction, is also a leash on cognition. “Good boy” isn’t descriptive; it’s prescriptive. It doesn’t recognise the act; it conditions the actor. Perform the behaviour, receive the treat. Rinse, repeat, tail wag.

So while Rover is basking in Pavlovian bliss, the cat sees the power play: a semantic cattle prod masquerading as affection.

Call it what you like – “good boy,” “best employee,” “team player,” “patriot” – it’s all the same trick. Words that sound warm but function coldly. Not language as communication, but language as cognitive entrapment.

The dog hears love; the cat hears discipline. One gets tummy rubs, the other gets philosophy.

And we all know which is the harder life.

Je m’accuse

I am a terrible blogger. Not “oops-forgot-to-post-this-week” terrible. Industrial-scale, negligent-landlord-of-my-own-contact-form terrible.

When I set up this blog in 2017, I created a “Contact” page. A tidy little form for readers to reach out – to me, the attentive host. Today, moments ago, I opened it for the first time since launch.

The inbox was an archaeological dig: the oldest message dated February 2019, the freshest stamped yesterday. Mixed strata: a few spammers, several earnest souls, some quite lovely – and now quite abandoned – overtures.

Links from the patient (or long-since embittered) include:

Others left no forwarding address. Perhaps that’s for the best.

I won’t be answering five-year-old requests for commentary. The ship has sailed, hit an iceberg, and rests peacefully in the Mariana Trench. I might still look at some under the “too little, too late” amnesty programme.

I could promise to reform, but you and I both know recidivism rates. Still, I apologise – sincerely, even. These transgressions are mine. The others? They’ll keep.

Don’t Tread on My Ass

Absolute liberty means absolute liberty, but what if the liberty you seek is death? The moment you carve out exceptions – speech you can’t say, choices you can’t make, exits you can’t take – you’ve left the realm of liberty and entered the gated community of permission.

Video: YouTube video by Philosopher Muse.

And here’s the test most self-styled liberty lovers fail: you’re free to skydive without a parachute, but try ending your life peacefully and watch how quickly the freedom brigade calls in the moral SWAT team.

I’m not his usual audience; I’m already in the choir, but this eight-minute clip by Philosopher Muse is worth your time. It’s a lucid walk through the ethical terrain mapped by Sarah Perry in Every Cradle Is a Grave, and it’s one of the better distillations of antinatalist thought I’ve seen for the general public. Perry’s libertarian starting point is straightforward: if you truly own your life, you must also have the right to give it up.

He threads in the dark-glimmer insights of Emil Cioran’s poetic despair, Thomas Ligotti’s existential horror, David Benatar’s asymmetry, and Peter Wessel Zapffe’s tragic consciousness. Together they point to an uncomfortable truth: autonomy that stops short of death isn’t autonomy at all; it’s a petting zoo of freedoms where the gate is locked.

I’ve said this before, but it bears repeating. I once had a girlfriend who hated her life but was too afraid of hell to end it. She didn’t “pull through.” She overdosed by accident. Loophole closed, I suppose. That’s what happens when metaphysical prohibitions are allowed to run the operating system.

And here’s where I diverge from the purist libertarians. I don’t believe most people are competent enough to have the liberty they think they deserve. Not because they’re all dribbling idiots, but because they’ve been marinated for generations in a stew of indoctrination. For thousands of years, nobody talked about “liberty” or “freedom” as inalienable rights. Once the notion caught on, it was packaged and sold – complete with an asterisk, endless fine print, and a service desk that’s never open.

We tell ourselves we’re free, but only in the ways that don’t threaten the custodians. You can vote for whoever the party machine serves up, but you can’t opt out of the game. You can live any way you like, as long as it looks enough like everyone else’s life. You can risk death in countless state-approved ways, but the moment you calmly choose it, your autonomy gets revoked.

So yes, watch the video. Read Perry’s Every Cradle Is a Grave. Then ask yourself whether your liberty is liberty, or just a longer leash.

If liberty means anything, it means the right to live and the right to leave. The former without the latter is just life imprisonment with better marketing.

Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking about yet another instance of the Sorites paradox. The notion just bothers me: I’ve long held that it is less a paradox than an intellectually lazy way of manoeuvring around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and spot the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
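The regress is easy to make concrete. Below is a minimal sketch in Python – invented for illustration, and resting on the single Sorites premise that nothing measurable about a person changes overnight – of what the statute actually encodes: maturity as a step function.

```python
# The legislator's dart throw, as code: readiness modelled as a step
# function over age. Days rather than birthdays, to make the cliff visible.

VOTING_AGE_DAYS = 18 * 365  # the statutory cliff edge

def fit_to_vote(age_in_days: int) -> bool:
    """The law's model of maturity: a binary cliff, not a curve."""
    return age_in_days >= VOTING_AGE_DAYS

# Same person, consecutive days, opposite verdicts.
for age in (VOTING_AGE_DAYS, VOTING_AGE_DAYS - 1):
    print(f"{age} days old -> fit to vote: {fit_to_vote(age)}")
# 6570 days old -> fit to vote: True
# 6569 days old -> fit to vote: False
```

Walk that loop backwards a day at a time and no boundary ever announces itself as the principled place to stop. That silence is the whole paradox.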

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart. (A sketch follows this list.)
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.
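To show the first retort isn’t hand-waving, here is a hypothetical sketch of phased, contextual rights. Every age, right, and assessment in it is an invented placeholder, not a policy proposal.

```python
# Hypothetical alternative to the cliff edge: rights phase in gradually,
# and some gate on demonstrated context rather than on a birthday.
# All thresholds and rights below are invented for illustration.

PHASED_RIGHTS = [
    (14, "open a bank account"),
    (15, "vote in local referendums"),
    (16, "vote in national elections"),
]

def rights_at(age_years: int, passed_civics_review: bool = False) -> list[str]:
    """Rights accrue along a schedule; one is contextual, not age-gated."""
    rights = [right for threshold, right in PHASED_RIGHTS if age_years >= threshold]
    if passed_civics_review:  # a periodic evaluation, not a dart throw
        rights.append("stand for local office")
    return rights

print(rights_at(15))                             # two rights phased in
print(rights_at(15, passed_civics_review=True))  # plus a contextual one
```

Still thresholds, yes – but many small ones plus a contextual check is a less dishonest fiction than one giant cliff.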

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>

Democracy: The Idiot’s Opiate, The Sequel Nobody Asked For

Yesterday, I suggested democracy is a mediocre theatre production where the audience gets to choose which mediocre understudy performs. Some readers thought I was being harsh. I wasn’t.

A mate recently argued that humans will always be superior to AI because of emergence, the miraculous process by which complexity gives rise to intelligence, creativity, and emotion. Lovely sentiment. But here’s the rub: emergence is also how we got this political system, the one no one really controls anymore.

Like the human body, whose cells are roughly matched – by some estimates outnumbered – by its resident microbes, our so-called participatory government is mostly non-participatory components: lobbyists, donors, bureaucrats, corporate media, careerists, opportunists, the ecosystem that is the actual organism. We built it, but it now has its own metabolism. And thanks to the law of large numbers, multiplied by the sheer number of political, economic, and social dimensions in play, even the human element is diluted into statistical irrelevance. At any rate, what remains of it has lost control – like the sorcerer’s apprentice.

People like to imagine they can “tame” this beast, the way a lucid dreamer thinks they can bend the dream to their will. But you’re still dreaming. The narrative still runs on the dream’s logic, not yours. The best you can do is nudge it: a policy tweak here, a symbolic vote there, before the system digests your effort and excretes more of itself.

This is why Deming’s line hits so hard: a bad system beats a good person every time. Even if you could somehow elect the Platonic ideal of leadership, the organism would absorb them, neutralise them, or spit them out. It’s not personal; it’s structural.

And yet we fear AI “taking over,” as if that would be a radical departure from the status quo. Newsflash: you’ve already been living under an autonomous system for generations. AI would just be a remodel of the control room: new paint, same prison.

So yes, emergence makes humans “special.” It also makes them the architects of their own inescapable political microbiome. Congratulations, you’ve evolved the ability to build a machine that can’t be turned off.

Democracy: Opiate of the Masses

Democracy is sold, propagandised, really, as the best system of governance we’ve ever devised, usually with the grudging qualifier “so far.” It’s the Coca-Cola of political systems: not particularly good for you, but so entrenched in the cultural bloodstream that to question it is tantamount to treason.

Audio: NotebookLM Podcast on this topic.

The trouble is this: democracy depends on an electorate that is both aware and capable. Most people are neither. Worse still, even if they could be aware, they wouldn’t be smart enough to make use of it. And even if they were smart enough, Arrow’s Impossibility Theorem strolls in, smirking, to remind us that the whole thing is mathematically doomed anyway.
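For the unfamiliar: Arrow’s theorem says, roughly, that no ranked-voting rule can honour unanimity, independence of irrelevant alternatives, and non-dictatorship all at once. You can taste the doom in the older Condorcet cycle, sketched below with three invented ballots: each voter is perfectly consistent, yet the majority’s “will” chases its own tail.

```python
# Condorcet's cycle: transitive individual preferences, an intransitive
# collective. The three ballots are invented for illustration.

ballots = [
    ("A", "B", "C"),  # voter 1: A > B > C
    ("B", "C", "A"),  # voter 2: B > C > A
    ("C", "A", "B"),  # voter 3: C > A > B
]

def majority_prefers(x: str, y: str) -> bool:
    """True if most ballots rank x above y."""
    return sum(b.index(x) < b.index(y) for b in ballots) > len(ballots) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# majority prefers A over B: True
# majority prefers B over C: True
# majority prefers C over A: True  <- round and round we go
```

Whichever candidate wins, a majority preferred someone else, and no amount of civic education fixes the arithmetic.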

Even the yardstick we reach for, IQ, is a charade. IQ measures how well you navigate the peculiar obstacle course we’ve designed as “education,” not the whole terrain of human thought. It’s as culturally loaded as asking a fish to climb a tree, then declaring it dim-witted when it flops. We call it intelligence because it flatters those already rewarded by the system that designed the test. In the United States, the average IQ stands at 97 – hardly a figure that instils confidence in votes and outcomes.

The Enlightenment gents who pushed democracy weren’t exactly selfless visionaries. They already had power, and simply repackaged it as something everyone could share, much as the clergy promised eternal reward to peasants if they only kept their heads down. Democracy is merely religion with ballots instead of bibles: an opiate for the masses, sedating the population with the illusion of influence.

Worse still, it’s a system optimised for mediocrity. It rewards consensus, punishes brilliance, and ensures the average voter is, by definition, average. Living under it is like starring in Idiocracy, only without the comedic relief, just the grim recognition that you’re outnumbered, and the crowd is cheering the wrong thing.