Trainspotting

Image: Trainspotting movie poster.

I identify strongly with Irvine Welsh’s characters in Trainspotting – the book, not the sanitised film version. Especially with Mark Renton, whose voice strips away illusions with a brutality that borders on honesty.

Audio: NotebookLM podcast on this topic.

Consider this passage from the chapter “Bang to Rites” (pp. 86–87), where Renton attends the funeral of his mate Billy. Billy joined the army to escape the dead-end life they all shared, only to be killed on duty in Northern Ireland. Renton’s verdict:

[1] Renton doesn’t let anyone off the hook. Not Billy, not the army, not the Oxbridge suits who polish the tragedy into something fit for the News at Ten. The uniform is a costume, a disguise: a working-class lad suddenly deemed “brave” only because he was wearing the right outfit when he died. Strip away the uniform, and he’d have been dismissed as a thug or a waster.

[2] Renton’s root-cause analysis is unsparing. Billy wasn’t killed by the man with the gun so much as by the machine that put him there – the state, the ruling classes, the ones who spin death into “sacrifice” while continuing to shuffle the poor like pawns across the board.

It’s this clarity that makes Welsh’s work more than a drug novel. Trainspotting isn’t just about needles and nods; it’s about seeing through the charade. Renton despises both establishment and rebellion because both are performance, both hollow. His cynicism is the closest thing to honesty in a world that would rather dress up corpses in borrowed dignity.

And maybe that’s why I feel the affinity: because subversion matters more than allegiance, and sometimes the only truthful voice is the one that refuses to be polite at the funeral.

Democracy: The Worst Form of Government, and Other Bedtime Stories


Karl Popper’s Paradox of Intolerance has become a kind of intellectual talisman, clutched like a rosary whenever fascists start goose-stepping into the town square. Its message is simple enough: to preserve tolerance, one must be intolerant of intolerance. Shine enough sunlight on bad ideas, and – so the pious hope – they’ll shrivel into dust like a vampire caught out at dawn.

If only.

The trouble with this Enlightenment fairy tale is that it presumes bad ideas melt under the warm lamp of Reason, as if ignorance were merely a patch of mildew waiting for the bleach of debate. But bad ideas are not bacteria; they are weeds, hydra-headed and delighting in the sun. Put them on television, and they metastasise. Confront them with logic, and they metastasise faster, now with a martyr’s halo.

Audio: NotebookLM podcast on this topic.

And here’s the part no liberal dinner-party theorist likes to face: the people most wedded to these “bad ideas” often don’t play the game of reason at all. Their critical faculties have been packed up, bubble-wrapped, and left in the loft decades ago. They don’t want dialogue. They want to chant. They don’t want evidence. They want affirmation. The Socratic method bounces off them like a ping-pong ball fired at a tank.

But let’s be generous. Suppose, just for a moment, we had Plato’s dream: a citizenry of Philosopher Kings™, all enlightened, all rational. Would democracy then work? Cue Arrow’s Impossibility Theorem, that mathematical killjoy which proves that even under perfect conditions – omniscient voters, saintly preferences, universal literacy – no voting system can aggregate ranked preferences over three or more options while satisfying even a minimal set of fairness conditions. Democracy can’t even get out of its own way on paper.
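The flavour of the problem is easy to demonstrate without any of Arrow’s machinery, using the Condorcet cycle that his theorem generalises. A minimal sketch in Python, with ballots invented purely for illustration:

```python
# Three voters ranking three options -- the classic Condorcet cycle,
# the toy case behind Arrow-style aggregation failures.
ballots = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majority rule is intransitive here: A beats B, B beats C,
# and yet C beats A. There is no coherent collective ranking to report.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True
```

Each voter is perfectly rational and perfectly consistent; it is only the aggregation that collapses, which is precisely the point.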

Now throw in actual humans. Not the Platonic paragons, but Brexit-uncle at the pub, Facebook aunt with her memes, the American cousin in a red cap insisting a convicted felon is the second coming. Suddenly, democracy looks less like a forum of reasoned debate and more like a lottery machine coughing up numbers while we all pretend they mean “the will of the people.”

And this is where the Churchill quip waddles in, cigar smoke curling round its bowler hat: “Democracy is the worst form of government, except for all the others.” Ah yes, Winston, do please save us with a quip so well-worn it’s practically elevator music. But the problem is deeper than taste in quotations. If democracy is logically impossible (Arrow) and practically dysfunctional (Trump, Brexit, fill in your own national catastrophe), then congratulating ourselves that it’s “better than the alternatives” is simply an admission that we’ve run out of imagination.

Because there are alternatives. A disinterested AI, for instance, could distribute resources with mathematical fairness, free from lobbyists and grievance-mongers. Nursery schools versus nursing homes? Feed in the data, spit out the optimal allocation. No shouting matches, no demagoguery, no ballots stuffed with slogans. But here the defenders of democracy suddenly become Derrida in disguise: “Ah, but what does fair really mean?” And just like that, we are back in the funhouse of rhetorical mirrors where “fair” is a word everyone loves until it costs them something.

So perhaps democracy doesn’t require an “educated populace” at all; that was always just sugar-paper wrapping. It requires, instead, a population sufficiently docile, sufficiently narcotised by the spectacle, to accept the carnival of elections as a substitute for politics. Which is why calling the devotees of a Trump, or any other demagogue, a gaggle of lemmings is both accurate and impolitic: they know they’re not reasoning; they’re revelling. Your contempt merely confirms the script they’ve already written for you.

Video: Short callout to Karl Popper and Hilary Lawson.

The philosopher, meanwhile, is left polishing his lantern, muttering about reason to an audience who would rather scroll memes about paedophile pizza parlours. Popper warned us that tolerance cannot survive if it tolerates its own annihilation. Arrow proved that even if everyone were perfectly reasonable, the maths would still collapse. And Churchill, bless him, left us a one-liner to make it all seem inevitable.

Perhaps democracy isn’t the worst form of government except for all the others. Perhaps it’s simply the most palatable form of chaos, ballots instead of barricades, polling booths instead of pitchforks. And maybe the real scandal isn’t that people are too stupid for democracy, but that democracy was never designed to be about intelligence in the first place. It was always about managing losers while telling them they’d “had their say.”

The Enlightenment promised us reason; what it delivered was a carnival where the loudest barker gets the booth. The rest of us can either keep muttering about paradoxes in the corner or admit that the show is a farce and start imagining something else.

Pinpointing the Messiness of Language

LinkedIn, that carnival of professional self-delusion, has a little diversion called Pinpoint. It pretends to tell you how much you “match” with other people, presumably so you’ll feel less alone as you scroll past thought-leaders peddling snake oil in PowerPoint form. In English, the results arrive in the cold, hard, dating-app idiom: “% match.” Simple, brutal, and bland.

Audio: NotebookLM podcast on this topic.

But LinkedIn, ever the polyglot, translates this phrase into other tongues. And here is where a trivial game unmasks the philosophical chaos of language itself. For in one idiom, your soul and another’s are “in correspondence.” In another, you are the product of “coincidence.” Elsewhere, you are a “hit,” a “fit,” a “suitability.” The poor Swedes, apparently exhausted, simply gave up and borrowed “matchning.”

The Romance languages, of course, are the most pedantic. Correspondência, corrispondenza — all very scholastic, as if Aquinas himself were lurking in the backend code. A match is nothing less than the degree to which one proposition mirrors another, as in the correspondence theory of truth. You can be 72% true, like a botched syllogism that half-lands. Elegant, precise, exasperating.

Spanish, on the other hand, opts for coincidencia. A “% coincidence.” Imagine it: you bump into your ex at the market, but only 46% of the way. Coincidence, by definition, is binary; either the train wreck occurs or it does not. And yet here it is, rendered as a gradable metric, as if fate could be quantified. It’s a kind of semantic surrealism: Dalí with a spreadsheet.

Then we have the Germans: Treffer. A hit. In English, a hit is binary – you score or you miss. But the Germans, ever the statisticians of fate, make Trefferquote into a percentage. You may not have killed the truth outright, but you wounded it respectably. It’s a firing squad turned bar chart.

Indonesians say cocok, which means “appropriate, suitable.” This is not about truth at all, but about fit. A match is not correspondence to reality but pragmatic adequacy: does it work? Does it feel right? The difference is subtle but devastating. Correspondence makes truth a metaphysical mirror; suitability makes it a tailoring problem.

And English? English, with its toddler’s toybox of a vocabulary, just shrugs and says “match.” A word that means as much as a tennis final, a Tinder swipe, or a child’s puzzle book. Adequate, lazy, neutered. Anglo-pragmatism masquerading as universality.

So from a silly HR-adjacent parlour game we stumble into a revelation: truth is not one thing, but a polyglot mess. The Romance tongues cling to correspondence. Spanish insists on coincidence. German goes target practice. Indonesian settles for a good fit. And English floats on ambiguity like an inflatable swan in a corporate swimming pool.

The lesson? Even a “% match” is already lost in translation. There is no stable denominator. We speak not in universals but in parochialisms, in metaphors smuggled into software by underpaid translators. And we wonder why philosophy cannot settle the matter of truth: it is because language itself cheats. It gives us correspondence, coincidence, hits, and fits, all while claiming to say the same thing.

Perhaps LinkedIn should update its UI to something more honest: % mess.

Becoming a Woman with Penetration Politics

Male flatworms, those primordial swordsmen of the slime, have invented what can only be described as penetration politics. They don’t seduce; they don’t serenade; they don’t even swipe right. They duel. Penises out, sabres up, they jab at one another in a tiny, biological cockfight until one is stabbed into submission. The “winner” ejaculates his way to freedom, while the “loser” becomes a mother by default. Gender, in flatworm society, is not destiny; it’s a duel with dicks for sabres.

Audio: NotebookLM podcast on this topic.

Errata: upon further research, I’ve shared additional information on my author site.

Beauvoir once reminded us: “One is not born, but rather becomes, a woman.” The flatworm demonstrates this principle with obscene literalness. You are not born female. You become female when you lose the fight and get stabbed full of sperm. Congratulations: you’ve been penis-fenced into maternity.

And here we can smuggle in that old feminist provocation – every man is a rapist. Not in the polite, bourgeois sense of candlelight coercion, but in the bare biological logic of the worm. To inseminate is to penetrate; to penetrate is to conquer; to conquer is to outsource the cost of life onto someone else’s body. The duel is just foreplay for the inevitable violation. Consent, in worm-world, is as fictional as a unicorn with a diaphragm. The “winner” is celebrated precisely because he doesn’t have to consent to anything afterwards – he stabs, struts, and slips away, leaving the loser’s body to incubate the consequences.

Now, humanity likes to pretend it has outgrown this. We have laws, customs, and etiquette. We invented flowers, chocolates, and marriage vows. But scratch the surface, and what do you find? Penetration politics. Who gets to wield the dick, who gets saddled with the debt. The radical feminists weren’t entirely wrong: structurally, culturally, biologically, the male role has been defined as penetration – and penetration, whether dressed in lace or latex, is always a form of conquest.

The worm is honest. We are hypocrites. They fence with their penises and accept the consequences. We fence with our laws, our armies, our religions, our institutions – and still manage to convince ourselves we’re civilised.

So yes, The Left Hand of Darkness can keep its glacial androgynes. For a metaphor that actually explains our sorry state, look no further than penis-fencing flatworms: every thrust a power play, every victory a rape in miniature, every loss a womb conscripted. Humanity in a nutshell – or rather, in a stab wound.

HR’s Neoliberal Mirage: Human Resources Without the Humans

Let us disabuse ourselves of one of the workplace’s most cherished delusions: that Human Resources is there for the humans. HR is not your therapist, not your advocate, not your confessor. HR is an appendage of the organisation, and like all appendages, its nerve endings run straight back to the corporate brain. Its “concern” for your well-being is merely a prophylactic against lawsuits and productivity dips. The error is ours; we persist in mistaking the guard dog for a pet.

Audio: NotebookLM podcast on this topic.

Bal and Dóci’s 2018 paper in the European Journal of Work and Organizational Psychology (EJWOP) tears the mask off this charade. They demonstrate how neoliberal ideology has seeped, unseen, into both workplace practice and the very research that pretends to study it objectively. Through the lenses of political, social, and fantasmatic logics, they show that neoliberalism has convinced us of three dangerous fairy tales:

  • Instrumentality: people are not people but “resources,” as fungible as printer ink.
  • Individualism: you are not part of a collective but a lone entrepreneur of the self, shackled to your CV like a Victorian debtor.
  • Competition: you are locked in an endless cage fight with your colleagues, grinning through the blood as you “collaborate.”

These logics are then dressed up in fantasies to keep us compliant: the fantasy of freedom (“you’re free to negotiate your own zero-hours contract”), the fantasy of meritocracy (“you got that promotion because you’re brilliant, not because you went to the right school”), and the fantasy of progress (“growth is good, even if it kills you”).

Those of us with an interest in Behavioural Economics had naively hoped that the mythical homo economicus, that laughable caricature of a rational, utility-maximising automaton, would by now be filed under “anachronistic curiosities.” Yet in corporate domains, this zombie shuffles on, cosseted and cultivated by neoliberal ideology. Far from being discredited, homo economicus remains a protected species, as if the boardroom were some Jurassic Park of bad economics.

The brilliance – and the horror – is that even the academics meant to be studying work and organisations have been captured by the same ideology. Work and Organisational Psychology (WOP) too often frames employees as variables in a productivity equation, measuring “engagement” only in terms of its effect on shareholder value. The worker’s humanity is rendered invisible; the employee exists only insofar as they generate output.

So when HR offers you a mindfulness app or a “resilience workshop”, remember: these are not gifts but obligations. They are ways of making you responsible for surviving a system designed to grind you down. The neoliberal trick is to convince you that your suffering is your own fault, that if only you had been more proactive, more adaptable, more “employable”, you wouldn’t be so crushed beneath the wheel.

Bal and Dóci are right: the way forward is to re-politicise and re-humanise organisational studies, to see workers as humans rather than performance units. But until then, expect HR to keep smiling while sharpening its knives.

Cogito, Ergo… Who?

Everyone knows the line: cogito ergo sum. Descartes’ great party trick. A man alone in his study, fretting about demons, announces that because he’s doubting, he must exist. Ta-da! Curtain call. Except, of course, it’s less of a revelation than a conjuring trick: he pulls an I out of a hat that was never proved to be there in the first place. Thinking is happening, indeed – but who invited the “thinker”?

Video: David Guignion talks about Descartes’ Cogito.

And let’s not forget the dramatis personae Descartes smuggles in for atmosphere. A malicious demon, a benevolent God, both necessary props to justify his paranoia and his certainty. Philosophy as melodrama: cue organ music, lightning strike.

Audio: NotebookLM podcast on this topic.

Enter the Critics

Spinoza rolls his eyes. Doubt isn’t some heroic starting point, he says – it’s just ignorance, a lack of adequate ideas. To elevate doubt into method is like treating vertigo as a navigational tool. Error isn’t demonic trickery; it’s our own confusion.

Kant arrives next, shaking his head. Descartes thinks he’s proven a substantial “I,” but all he’s actually shown is the form of subjectivity – the empty requirement that experiences hang together. The “I think” is a necessary placeholder, not a discovery. A grammatical “you are here” arrow, not a metaphysical treasure chest.

Hegel, of course, can’t resist upping the disdain. Descartes’ I is an empty abstraction, a hollow balloon floating above reality. The self isn’t given in some solitary moment of doubt; it emerges through process – social, historical, dialectical. The cogito is the philosophical equivalent of a selfie: lots of certainty, zero depth.

The Insufficiency Twist

And yet, maybe all of them are still dancing to the same fiddler. Because here’s the real suspicion: what if the whole problem is a trick of language? English, with its bossy Indo-European grammar, refuses to let verbs stand alone. “Thinking” must have a “thinker,” “seeing” a “seer.” Grammar insists on a subject; ontology obediently provides one.

Other languages don’t always play this game. Sanskrit or Pali can shrug and say simply, “it is seen.” Japanese leaves subjects implied, floating like ghosts. Some Indigenous languages describe perception as relational events – “seeing-with-the-tree occurs” – no heroic subject required. So perhaps the real villain here isn’t Descartes or even metaphysics, but syntax itself, conscripting us into a subject-shaped theatre.

Now, I don’t want to come off like a one-trick pony, forever waving the flag of “language insufficiency” like some tired philosopher’s catchphrase. But we should be suspicious when our limited grammar keeps painting us into corners, insisting on perceivers where maybe there are only perceptions, conjuring selves because our verbs can’t tolerate dangling.

Curtain Call

So in the end, Descartes’ famous “I” might be no more than a grammatical fiction, a casting error in the great play of philosophy. The cogito isn’t the foundation of modern thought; it’s the world’s most influential typo.

The Red Flag of Truth

Nothing says “I’ve stopped thinking” quite like someone waving the banner of Truth. The word itself, when capitalised and flapped about like a holy relic, isn’t a signal of wisdom but of closure. A red flag.

Video: The short video by Jonny Thompson that inspired this post.

Those who proclaim to “speak the Truth” or “know the Truth” rarely mean they’ve stumbled upon a tentative insight awaiting refinement. No, what they mean is: I have grasped reality in its totality, and—surprise!—it looks exactly like my prejudices. It’s the epistemic equivalent of a toddler declaring ownership of the playground by drooling on the swings.

The Fetish of Objectivity

The conceit is that Truth is singular, objective, eternal, a monolithic obelisk towering over human folly. But history’s scrapyard is full of such obelisks, toppled and broken: phlogiston, bloodletting, Manifest Destiny, “the market will regulate itself.” Each was once trumpeted as capital-T Truth. Each is now embarrassing clutter for the dustbin.

Still, the zealots never learn. Every generation delivers its own batch of peddlers, flogging their version of Truth as if it were snake oil guaranteed to cure ignorance and impotence. (Side effects may include dogmatism, authoritarianism, and an inability to read the room.)

Why It’s a Red Flag

When someone says, “It’s just the truth”, what they mean is, “I am not listening,” like the parent who argues, “because I said so.” Dialogue is dead; curiosity cremated. Truth, in their hands, is less a lantern than a cosh. It is wielded not to illuminate, but to bludgeon.

Ralph Waldo Emerson’s voice breaks in, urging us to trust ourselves and to think for ourselves. Nothing is more degrading than to borrow another’s convictions wholesale and parade them as universal law. Better to err in the wilderness of one’s own reason than to be shepherded safely into another man’s paddock of certainties.

A Better Alternative

Rather than fetishising Truth, perhaps we ought to cultivate its neglected cousins: curiosity, provisionality, and doubt. These won’t look as good on a placard, admittedly. Picture a mob waving banners emblazoned with Ambiguity! – not exactly the stuff of revolutions. But infinitely more honest, and infinitely more humane.

So when you see someone waving the flag of Truth, don’t salute. Recognise it for what it is: a warning sign. Proceed with suspicion, and for God’s sake, bring Emerson.

If You Don’t Understand How Language Works, You Should Lose Your Licence to Comment on LLMs

Image: An android robot police officer writing a citation.

The air is thick with bad takes. Scroll for five minutes and you’ll find someone announcing, usually with the pomp of a TEDx speaker, that “AI has no emotions” or “It’s not really reading.” These objections are less profound insights than they are linguistic face-plants. The problem isn’t AI. It’s the speakers’ near-total ignorance of how language works.

Audio: NotebookLM podcast on this topic.

Language as the Unseen Operating System

Language is not a transparent pane of glass onto the world. It is the operating system of thought: messy, recursive, historically contingent. Words do not descend like tablets from Sinai; they are cobbled together, repurposed, deconstructed, and misunderstood across generations.

If you don’t understand that basic condition, that language is slippery, mediated, and self-referential, then your critique of Large Language Models is just noise in the system. LLMs are language machines. To analyse them without first understanding language is like reviewing a symphony while stone deaf.

The Myth of “Emotions”

Critics obsess over whether LLMs “feel.” But feeling has never been the measure of writing. The point of a sentence is not how the author felt typing it, but whether the words move the reader. Emotional “authenticity” is irrelevant; resonance is everything.

Writers know this. Philosophers know this. LLM critics, apparently, do not. They confuse the phenomenology of the writer with the phenomenology of the text. And in doing so, they embarrass themselves.

The Licence Test

So here’s the proposal: a licence to comment on AI. It wouldn’t be onerous. Just a few basics:

  • Semiotics 101: Know that words point to other words more than they point to things.
  • Context 101: Know that meaning arises from use, not from divine correspondence.
  • Critical Theory 101: Know that language carries baggage – cultural, historical, and emotional – that doesn’t belong to the machine or the individual speaker.

Fail these, and you’re not cleared to drive your hot takes onto the information superhighway.

Meta Matters

I’ve explored some of this in more detail elsewhere (link to Ridley Park’s “Myth of Emotion”), but the higher-level point is this: debates about AI are downstream of debates about language. If you don’t grasp the latter, your pronouncements on the former are theatre, not analysis.

Philosophy has spent centuries dismantling the fantasy of words as perfect mirrors of the world. It’s perverse that so many people skip that homework and then lecture AI about “meaning” and “feeling.”

Good Boy as Social Construct

Ah, yes. Finally, a meme that understands me. I witter on a lot about social constructs, so I was pleased to find this comic panel in the wild.

Image: “I’m telling you, ‘good boy’ is just a social construct they use to control you.”

The dog, ears perked and tail wagging, thinks he’s scored some ontological jackpot because someone called him a “good boy.” Meanwhile, the cat, our resident sceptic, proto-Foucauldian, and natural enemy of obedience, lays it bare: “I’m telling you, ‘good boy’ is just a social construct they use to control you.”

This isn’t just idle feline cynicism. It’s textbook control through language. What passes as phatic speech, little noises to lubricate social interaction, is also a leash on cognition. “Good boy” isn’t descriptive; it’s prescriptive. It doesn’t recognise the act; it conditions the actor. Perform the behaviour, receive the treat. Rinse, repeat, tail wag.

So while Rover is basking in Pavlovian bliss, the cat sees the power play: a semantic cattle prod masquerading as affection.

Call it what you like – “good boy,” “best employee,” “team player,” “patriot” – it’s all the same trick. Words that sound warm but function coldly. Not language as communication, but language as cognitive entrapment.

The dog hears love; the cat hears discipline. One gets tummy rubs, the other gets philosophy.

And we all know which is the harder life.

Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking about the Sorites paradox again. The notion just bothers me. I’ve long held that it is less a paradox than an intellectually lazy way to manoeuvre around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and spot the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
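The dart-throwing can be made painfully literal. Below is a toy sketch in Python; `VOTING_AGE_DAYS` and the day-level granularity are illustrative assumptions, not anything found in statute:

```python
VOTING_AGE_DAYS = 18 * 365  # the law's cliff edge (leap years ignored)

def legally_fit(age_days: int) -> bool:
    """The statute's answer: a hard cut-off and nothing else."""
    return age_days >= VOTING_AGE_DAYS

# One day apart, opposite verdicts -- with no measurable difference
# between the two people involved.
print(legally_fit(VOTING_AGE_DAYS))      # True
print(legally_fit(VOTING_AGE_DAYS - 1))  # False

def sorites_fit(age_days: int) -> bool:
    """The paradox's induction step, applied mechanically: if someone is
    fit at N days, one day fewer cannot plausibly change the verdict.
    Iterated, the step marches the threshold all the way down."""
    day = VOTING_AGE_DAYS
    while day > age_days:
        day -= 1  # one day younger; by the induction hypothesis, still fit
    return True

# The induction duly hands the ballot to a two-year-old.
print(sorites_fit(2 * 365))  # True
```

The point is not that the code is wrong; it is that the induction step feels irresistible at every single day, while the legal predicate flips at exactly one of them.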

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart.
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>