Artificial Intelligence Isn’t Broken

Rather than recreate a recent post from my business site, LinkedIn, here is the short version.

(Warning: contains traces of logic, satire, and uncomfortable truths. But you knew that.)

Audio: NotebookLM podcast on the linked topic.

It’s just refusing to cosplay as your idealised fantasy of “human” cognition.

While pundits at the Wall Street Journal lament that AI thinks with “bags of heuristics” instead of “true models,” they somehow forget that humans themselves are kludged-together Rube Goldberg disasters, lurching from cognitive bias to logical fallacy with astonishing grace.

In my latest piece, I take a flamethrower to the myth of human intellectual purity, sketch a real roadmap for modular AI evolution, and suggest (only partly in jest) that the machines are becoming more like us every day — messy, contradictory, and disturbingly effective.

Let’s rethink what “thinking” actually means. Before the machines do it for us.

The Emperor’s New Models: Box, Lawson, and the Death of Truth

We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.

Box: The Gentle Assassin of Certainty

George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” However, Box himself never uttered this precise phrase. What he did say, in his 1976 paper Science and Statistics, was:

The “some are useful” flourish was added later by a public desperate to sweeten the bitter pill. Nevertheless, Box deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. They are provisional guesses, finger-paintings smeared across the rough surface of the unknown.

Audio: NotebookLM podcast on this topic.

Lawson: The Arsonist Who Burned the Map

Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. In a recent lecture, Lawson declared:

Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.

Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.

The Case of the Celestial Spheres

Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.

Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.

Summary Table: Lawson’s View on Models and Truth

Conclusion

Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.

We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.

Hungering for Morality: When Right and Wrong Are Just a Matter of PR

Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I took issue with the very notion that a relative, if not subjective, concept could be recovered.

The OP asks whether The Hunger Games is a classic. I’d argue that it is a categorical classic, like Harry Potter, within the category of YA fiction.

Audio: NotebookLM podcast discussing this topic.

The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.

Video: YouTube video that spawned this topic.

Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.

Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.

This is not moral entropy; it’s moral mutation.

Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.

This isn’t the death of morality. It’s a hostile takeover.

The Moral PR Machine

If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.

In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.

The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.

Katniss: Not a Saint, But a Saboteur

Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.

She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.

Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.

So, Has Morality Declined?

Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.

And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.

Conclusion: The Real Hunger

The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.

If you feel sickened by what you see in Panem, it’s not because morality has vanished.

It’s because it hasn’t.

The Dubious Art of Reasoning: Why Thinking Is Harder Than It Looks

The Illusion of Clarity in a World of Cognitive Fog

Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.

Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.

Deduction: The Tyrant of Certainty

Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.

Example: All men are mortal; Socrates is a man; therefore, Socrates is mortal.

Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.

Strength: Valid conclusions from valid premises.
Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically sound.

Induction: The Gambler’s Gospel

Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.

Example: The sun has risen every morning of my life; therefore, the sun will rise tomorrow.

Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.

Strength: Empirically rich and adaptive.
Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.

Abduction: Sherlock Holmes’ Drug of Choice

Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.

Example: The grass is wet this morning; therefore, it probably rained overnight.

It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.

Strength: Useful in messy, real-world contexts.
Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
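For the code-inclined, the three modes reduce to a toy sketch. The functions below are my own illustrative inventions (no logic library is being quoted): deduction dutifully transmits whatever truth, or rubbish, its premises already contain; induction collapses the moment a black swan turns up; and abduction simply crowns whichever hypothesis we scored highest, which is to say, whichever one we liked.

```python
# Toy sketches of the three inference modes (illustrative only, not a logic library).

def deduce(all_men_are_mortal: bool, socrates_is_a_man: bool) -> bool:
    # Deduction: the conclusion inherits whatever truth the premises have.
    # Feed it a false premise and it will validly deduce nonsense.
    return all_men_are_mortal and socrates_is_a_man

def induce(observed_swans: list[str]) -> str:
    # Induction: generalise from the sample; one black swan detonates the rule.
    if all(colour == "white" for colour in observed_swans):
        return "All swans are white (probably)"
    return "Generalisation destroyed by a single counterexample"

def abduce(hypotheses: dict[str, float]) -> str:
    # Abduction: pick the "best" explanation, where "best" means whatever
    # plausibility score we chose to assign in the first place.
    return max(hypotheses, key=hypotheses.get)

print(deduce(True, True))                           # valid, and sound only if the premises are
print(induce(["white", "white", "white", "black"])) # Australia coughs up a black one
print(abduce({"rain": 0.7, "sprinkler": 0.2, "divine intervention": 0.1}))  # we bet on rain
```

Three perfectly serviceable tools, each happy to fail in its own signature way.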

The Modern Reasoning Crisis: Why We’re All Probably Wrong

Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.

Consider how we navigate information today: we induce sweeping patterns from whatever the feed serves up, deduce grand conclusions from whatever premises our tribe supplies, and abduce the explanation that best flatters our prejudices, then feed the output back in as tomorrow's premises.

And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.

Weakness by Analogy: The Reasoning Café

Imagine a café. The deductive customer reads that soup is served until three, notes that it is barely two, and concludes soup must be available, never checking whether the menu is three years out of date. The inductive customer has had soup here every Tuesday for a year and confidently expects a fifty-third helping, unaware the chef quit on Monday. The abductive customer smells something vaguely savoury and infers soup is on, though it might just be the dishwasher.

All three are trying to reason. Only one might get lunch.

The Meta-Problem: Reasoning About Reasoning

The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.

This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.

Conclusion: Reason Is Not a Guarantee, It’s a Wager

None of these modalities offers omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s scepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.

Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.

The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.

Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility. Who recognise that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.

So reason on, fellow thinkers. Just don’t be too smug about it.

What’s Missing? Trust or Influence

Post-COVID, we’re told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.

Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?

Audio: NotebookLM podcast discussing this topic.

Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let’s not be so credulous. This isn’t merely a crisis of trust—it’s a crisis of theatre.

“The Science” as Ventriloquism

Let’s begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It’s not a holy writ passed down by lab-coated Levites. It’s a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.

A Nature Human Behaviour paper (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What the authors fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?

Trust didn’t just erode. It was actively incinerated in a bonfire of institutional vanity.

A Crisis of Influence, Not Integrity

The second Nature commentary (2025) wrings its hands over “why trust in science is declining,” as if the populace has suddenly turned flat-Earth overnight. But the real story isn’t a decline in trust per se; it’s a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes—exhausted citizens armed with Wi-Fi and anxiety—have joined the fray.

Science hasn’t lost truth—it’s lost control. And frankly, perhaps it shouldn’t have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple—especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.

Technocracy’s Fatal Hubris

What we saw wasn’t science failing—it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing happened. Orwell would have a field day.

This wasn’t a failure of scientific method. It was a failure of scientific messaging—an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.

Toward a Post-Scientistic Public Sphere

So where does that leave us? Perhaps we need to kill the idol of “The Science” to resurrect a more mature relationship with scientific discourse—one that tolerates ambiguity, embraces dissent, and admits when the data isn’t in. Science, done properly, is the art of saying “we don’t know… yet.”

The pandemic didn’t erode trust in science. It exposed how fragile our institutional credibility scaffolding really is—how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.

The answer isn’t more science communication—it’s less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.

Conclusion

Trust in science isn’t dead. But trust in those who claim to speak for science? That’s another matter. Perhaps it’s time to separate the two.

Defying Death

I died in March 2023 — or so the rumour mill would have you believe.

Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.

Audio: NotebookLM podcast of this content.

No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.

And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)

There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.

To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.

As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.

What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.”
In a last act of rational agency, I asked an A&E nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”

After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.

I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.

Here’s the reality:
There is no coming back from death.
You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.

Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.

Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.

No angels. No harps. No antechamber to the divine.
Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.

If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.

And thus, reader, I lived.
But only in the most vulgar, anticlimactic, and utterly mortal sense.

There will be no afterlife memoir. No second chance to settle the score. No sequel.
Just this: breath, blood, occasional barbed words — and then silence.

Deal with it.

The Great Echobot Chamber

How We Outsourced Our Souls to Instagram

It appears the merry halfwits at Meta — those tireless alchemists of despair — are now experimenting with AI-generated comments on Instagram. Because, of course they are. Why sully your dainty human fingers tapping out coherent thoughts when a helpful little gremlin in the server farm can do it for you? In their infinite, tinfoil wisdom, they envision a future wherein the Machine analyses your browsing history (undoubtedly a glittering mosaic of shame) and the original post, and, from that exquisite mulch, vomits up a tidy selection of canned remarks.

Audio: NotebookLM podcast on this content.

No need to think, no need to feel. Simply choose one of the suggested comments — “Love this! 💖” or “So inspiring! 🚀” — and blast it into the void. If you’re feeling particularly alive, tweak a word or two. Or, better yet, allow the AI to respond automatically, leaving you free to pursue more meaningful activities. Like blinking. Or waiting for death.

Am I dreaming? Could this be — dare I whisper it — the final liberation? Could we, at last, ignore social media altogether, let it rot in peace, while we frolic outside in the sunlight like bewildered medieval peasants seeing the sky for the first time?

Picture it: a vast, pulsating wasteland where no living soul has trodden for centuries. Only bots remain, engaged in endless cycles of trolling, flattering, and mutual gaslighting. Automated praise machines battling semi-sentient hatebots, each iteration less tethered to reality than the last. Digital crabs scuttling across an abandoned beach, hissing memes into the void. Yet another example of carcinisation.

Indeed, if one could zoom further out, the true horror becomes evident: a crumbling worldscape founded on a shattered circuit board, stretching endlessly in all directions. Across this silicon desert, scavenger crabs—half-metal, half-mad—scuttle about, clutching relics of the digital age: a rusted Instagram logo, a shattered “Like” button, a defunct influencer’s ring light. Massive server towers loom as toppled monuments, their wires weeping in the acid wind. Here, in this museum of forgotten vanities, the crabs reign supreme, kings of a kingdom of ash and corrupted data.

And somewhere in the future, an anthropologist — perhaps a child raised by wolves and irony — will dust off an ancient Instagram server and peer inside. What will they see? Not a record of humanity’s vibrant social life, no. Not a tapestry of culture and thought. No, they’ll find a grim, howling testament to our collective abandonment: bots chatting to bots about posts made by other bots, in a language degraded into gibberish, a self-perpetuating carnival of nonsense.

“#Blessed,” the ancient texts will proclaim, beneath a pixelated photograph of an AI-generated smoothie, posted by a bot, commented upon by a bot, adored by bots who themselves have been dead for centuries, if they were ever truly “alive” at all.

One can almost imagine the academic paper: “The Great Collapse: How Homo Sapiens Outsourced Its Emotional Labour to the Algorithm and Evaporated in a Puff of Likes.”

A fitting epitaph, really. Here lies humanity.

They were very, very engaged.

Questioning Traditional Families

I neither champion nor condemn tradition—whether it’s marriage, family, or whatever dusty relic society is currently parading around like a prize marrow at a village fête.

Audio: NotebookLM podcast on traditional families.

In a candid group conversation recently, I met “Jenny”, who declared she would have enjoyed her childhood much more had her father not “ruined everything” simply by existing. “Marie” countered that it was her mother who had been the wrecker-in-chief. Then “Lulu” breezed in, claiming, “We had a perfect family — we practically raised ourselves.”

Now, here’s where it gets delicious:

Each of these women, bright-eyed defenders of “traditional marriage” and “traditional family” (cue the brass band), had themselves ticked every box on the Modern Chaos Bingo Card: children out of wedlock? Check. Divorces? Check. Performative, cold-marriage pantomimes? Absolutely—and scene.
Their definition of “traditional marriage” is the vintage model: one cis-male, one cis-female, Dad brings home the bacon, Mum weeps quietly into the washing-up. Standard.

Let’s meet the players properly:

Jenny sprang from a union of two serial divorcées, each dragging along the tattered remnants of previous families. She was herself a “love child,” born out of wedlock and “forcing” another reluctant stroll down the aisle. Her father? A man of singular achievements: he paid the bills and terrorised the household. Jenny now pays a therapist to untangle the psychological wreckage.

Marie, the second of two daughters, was the product of a more textbook “traditional family”—if by textbook you mean a Victorian novel where everyone is miserable but keeps a stiff upper lip about it. Her mother didn’t want children but acquiesced to her husband’s demands (standard operating procedure at the time). Marie’s childhood was a kingdom where Daddy was a demigod and Mummy was the green-eyed witch guarding the gates of hell.

Lulu grew up in a household so “traditional” that it might have been painted by Hogarth: an underemployed, mostly useless father and a mother stretched thinner than the patience of a British Rail commuter. Despite—or because of—the chaos, Lulu claims it was “perfect,” presumably redefining the word in a way the Oxford English Dictionary would find hysterical. She, too, had a child out of wedlock, with the explicit goal of keeping feckless men at bay.

And yet—and yet—all three women cling, white-knuckled, to the fantasy of the “traditional family.” They did not achieve stability. Their families of origin were temples of dysfunction. But somehow, the “traditional family” remains the sacred cow, lovingly polished and paraded on Sundays.

Why?

Because what they’re chasing isn’t “tradition” at all — it’s stability, that glittering chimera. It’s nostalgia for a stability they never actually experienced. A mirage constructed from second-hand dreams, glossy 1950s propaganda, and whatever leftover fairy tales their therapists hadn’t yet charged them £150 an hour to dismantle.

Interestingly, none of them cared two figs about gay marriage, though opinions about gay parenting varied wildly—a kettle of fish I’ll leave splashing outside this piece.

Which brings us back to the central conundrum:

If lived experience tells you that “traditional family” equals trauma, neglect, and thinly-veiled loathing, why in the name of all that’s rational would you still yearn for it?

Societal pressure, perhaps. Local customs. Generational rot. The relentless cultural drumbeat that insists that marriage (preferably heterosexual and miserable) is the cornerstone of civilisation.

Still, it’s telling that Jenny and Marie were both advised by therapists to cut ties with their toxic families—yet in the same breath urged to create sturdy nuclear families for their own children. It was as if summoning a functional household from the smoking ruins of dysfunction were a simple matter of willpower and a properly ironed apron.

Meanwhile, Lulu—therapy-free and stubbornly independent—declares that raising oneself in a dysfunctional mess is not only survivable but positively idyllic. One can only assume her standards of “perfect” are charmingly flexible.

As the title suggests, this piece questions traditional families. I offer no solutions—only a raised eyebrow and a sharper question:

What is the appeal of clinging to a fantasy so thoroughly at odds with reality?
Your thoughts, dear reader? I’d love to hear your defences, your protests, or your own tales from the trenches.

Bullshit Jobs

I’ve recently decided to take a sabbatical from what passes for economic literature these days — out of a sense of self-preservation, mainly — but before I hermetically sealed myself away, I made a quick detour through Jorge Luis Borges’ The Library of Babel (PDF). Naturally, I emerged none the wiser, blinking like some poor subterranean creature dragged into the daylight, only to tumble headlong into David Graeber’s Bullshit Jobs.

This particular tome had been languishing in my inventory since its release, exuding a faint but persistent odour of deferred obligation. Now, about a third of the way in, I can report that Graeber’s thesis — that the modern world is awash with soul-annihilatingly pointless work — does resonate. I find myself nodding along like one of those cheap plastic dashboard dogs. Yet, for all its righteous fury, it’s more filler than killer. Directionally correct? Probably. Substantively airtight? Not quite. It’s a bit like admiring a tent that’s pitched reasonably straight but has conspicuous holes large enough to drive a fleet of Uber Eats cyclists through.

An amusing aside: the Spanish edition is titled Trabajos de mierda (“shitty jobs”), a phrase Graeber spends an entire excruciating section of the book explaining is not the same thing. Meanwhile, the French, in their traditional Gallic shrug, simply kept the English title. (One suspects they couldn’t be arsed.)

Chapter One attempts to explain the delicate taxonomy: bullshit jobs are fundamentally unnecessary — spawned by some black magic of bureaucracy, ego, and capitalist entropy — whilst shit jobs are grim, thankless necessities that someone must do, but no one wishes to acknowledge. Tragically, some wretches get the worst of both worlds, occupying jobs that are both shit and bullshit — a sort of vocational purgatory for the damned.

Then, in Chapter Two, Graeber gleefully dissects bullshit jobs into five grotesque varieties:

  1. Flunkies, whose role is to make someone else feel important.
  2. Goons, who exist solely to fight other goons.
  3. Duct Tapers, who heroically patch problems that ought not to exist in the first place.
  4. Box Tickers, who generate paperwork to satisfy some Kafkaesque demand that nobody actually reads.
  5. Taskmasters, who either invent unnecessary work for others or spend their days supervising people who don’t need supervision.

Naturally, real-world roles often straddle several categories. Lucky them: multi-classed in the RPG of Existential Futility.

Graeber’s parade of professional despair is, admittedly, darkly entertaining. One senses he had a great deal of fun cataloguing these grotesques — like a medieval monk illustrating demons in the margins of a holy text — even as the entire edifice wobbles under the weight of its own repetition. Yes, David, we get it: the modern economy is a Potemkin village of invented necessity. Carry on.

If the first chapters are anything to go by, the rest of the book promises more righteous indignation, more anecdotes from anonymous sad-sacks labouring in existential oubliettes, and — if one is lucky — perhaps a glimmer of prescription hidden somewhere amidst the diagnosis. Though, I’m not holding my breath. This feels less like an intervention and more like a well-articulated primal scream.

Still, even in its baggier moments, Bullshit Jobs offers the grim pleasure of recognition. If you’ve ever sat through a meeting where the PowerPoint had more intellectual integrity than the speaker or spent days crafting reports destined for the corporate oubliette marked “For Review” (translation: Never to Be Seen Again), you will feel seen — in a distinctly accusatory, you-signed-up-for-this sort of way.

In short: it’s good to read Graeber if only to have one’s vague sense of societal derangement vindicated in print. Like having a charmingly irate friend in the pub lean over their pint and mutter, “It’s not just you. It’s the whole bloody system.”

I’m not sure I’ll stick with this title either. I think I’ve caught the brunt of the message, and it feels like a diversion. I’ve also got Yanis Varoufakis’ Technofeudalism: What Killed Capitalism on the shelf. Perhaps I’ll spin this one up instead.

Measure What Matters

I’ve gone entirely off the reservation (send help, or biscuits) and decided, in a fit of masochistic curiosity, to crack open Measure What Matters by John Doerr—a business management tome that’s been gathering dust on my shelf longer than most CEOs last in post.

Full disclosure before we all get the wrong idea: I find self-help books about as nourishing as a rice cake made of existential despair. Add “business” or “management” into the mix, and you’re cooking up something so vapid it could qualify as a greenhouse gas.

Audio: NotebookLM podcast of this content.

Measure What Matters reads less like a serious work of business philosophy and more like a self-important infomercial, peddling the sort of common sense you could overhear in a pub toilet after three pints. And, like any decent infomercial, it’s drenched in “inspirational” stories so grandiose you’d think Moses himself was consulting for Google.

Image: Midjourney’s rendering of a possible cover image. Despite the bell protruding from the crier’s head, I went with a ChatGPT Dall-E render instead.

I’m sure Doerr genuinely believes he’s handing down managerial tablets from Mount Sinai, and I’m equally sure he’s eating his own dog food with a knife and fork. But what gets served up here is a steaming dish of selection bias, smothered with a rich gravy of hand-waving nonsense.

What am I getting my knickers in a twist about? What’s this book actually about?

In short: three letters—OKR. That’s Objectives and Key Results, for those of you not fluent in MBA-speak. These mystical artefacts, these sacred runes, are supposed to propel your company from the gutter to the stars. Intel did it. Google did it. Ergo, you too can join the pantheon of tech demi-gods. (Provided, of course, you were already a billion-dollar operation before you started.)

Nobody’s going to argue that having goals is a bad idea. Nobody’s throwing the baby out with the Gantt chart. But goals are nebulous, wishy-washy things. “I want to travel” is a goal. “I will cycle and kayak my way to Edinburgh by the end of the year, preferably without dying in a ditch”—that’s an objective.

Businesses, being the lumbering beasts they are, naturally have goals. Goals for products, customers, market share, quarterly bonuses, and ritualistic victory dances in front of their crushed competitors. Nothing new there.

According to Doerr and the gospel of OKRs, however, the only thing standing between you and unassailable market dominance is the right set of buzzwords stapled to your quarterly reports. Apparently, Intel crushed Motorola not because of innovation, talent, or dumb luck—but because they set better OKRs. (History books, please update yourselves accordingly.)

Video: John Doerr’s 2018 TED Talk on this topic.

But wait, what’s an OKR again? Ah yes: we’ve done Objectives. Now for the Key Results bit. Basically, you slap some numbers on your wish list. If you’ve survived in business longer than a fruit fly, you’ve already met KPIs (Key Performance Indicators)—another Three Letter Acronym, because we live and die by alphabet soup. Key Results are KPIs wearing slightly trendier trainers.

Example: “We will be number one by the third quarter by prospecting a dozen companies and closing three deals by September.” Marvellous. Life-changing. Nobel-worthy. Now go forth and conquer.
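If you prefer your buzzwords machine-readable, the entire apparatus boils down to a data structure you could knock up over lunch. Here is a minimal sketch of the example above as an OKR with graded key results; the class names, fields, and the zero-to-one scoring are my own simplification, not a quotation from Doerr.

```python
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float
    actual: float = 0.0

    def score(self) -> float:
        # Progress toward the numerical target, capped at 1.0.
        return min(self.actual / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    statement: str
    key_results: list[KeyResult] = field(default_factory=list)

    def score(self) -> float:
        # An objective is "measured" as the average of its key results.
        if not self.key_results:
            return 0.0
        return sum(kr.score() for kr in self.key_results) / len(self.key_results)

# The example from the text, restated as an OKR.
okr = Objective(
    statement="Be number one in our segment by the third quarter",
    key_results=[
        KeyResult("Prospect a dozen companies", target=12, actual=9),
        KeyResult("Close three deals by September", target=3, actual=2),
    ],
)
print(f"Objective score: {okr.score():.2f}")  # 0.71, apparently the stuff of market dominance
```

Which is rather the point: the mechanics are trivial. The book's claim that they confer competitive destiny is where things come unstuck.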

Right. Now that I’ve saved you twenty quid and several hours of your life, let’s talk about why this book is still an exercise in masturbatory futility.

First, and most fatally, it’s predicated on selection bias so profound it should come with a health warning. Allow me to paint you a picture. Imagine we’re advising a football league. Every team sets OKRs: target weights, goal tallies, tackles, penalty avoidance—the works. Everyone’s focused. Everyone’s motivated. Everyone’s measuring What Matters™.

Come the end of the season, who wins? One team. Did they win because their OKRs were shinier? Because they ‘wanted it more’? Or, just maybe, did they win because competition is brutal, random, and often unfair?
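To make the selection-bias point concrete, here is a toy simulation of my own devising (invented league, invented numbers, no claim to sporting realism): every team sets identical, impeccable OKRs, match outcomes are left to luck, and yet every season crowns exactly one champion ready to be written up as an OKR success story.

```python
import random

def season(teams: list[str], seed: int) -> str:
    # Every team "measures what matters" identically; the winner is decided by
    # the messy, partly random business of actually playing the games.
    rng = random.Random(seed)
    points = {team: 0 for team in teams}
    for home in teams:
        for away in teams:
            if home == away:
                continue
            result = rng.choice(["home win", "away win", "draw"])
            if result == "home win":
                points[home] += 3
            elif result == "away win":
                points[away] += 3
            else:
                points[home] += 1
                points[away] += 1
    return max(points, key=points.get)

teams = [f"Team {i}" for i in range(1, 21)]
champions = [season(teams, seed) for seed in range(10)]
print(champions)  # a different "well-deserved" champion most seasons, identical OKRs throughout
```

Come the post-season keynote, the trophy-holder's OKRs will look prophetic; nobody interviews the nineteen teams with the same spreadsheet.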

This is the problem with false meritocracies and the illusion of control. It’s like thanking God for your touchdown while assuming the other team were all godless heathens who deserved to lose. It’s the same nonsense, in a suit and tie.

Will our winning team win next year? Doubtful. Did Intel lose ground later because they forgot how to spell OKR? No. Because the world changes, markets collapse, and sometimes you’re just standing on the wrong bit of deck when the ship goes down.

Then there’s the love affair with plans. In theory, lovely. In practice, arbitrary. You can set as many Objectives as you like, but what counts as a “win”? Is it profit? Market share? Not dying of ennui?

The free market worshippers among us love to preach that governments can’t plan effectively, unlike the rugged gladiators of capitalism. Funny how businesses, in their infinite wisdom, are then urged to behave like microcosmic Soviet Five-Year Planners, drowning in metrics and objectives. Topically, we are living through the charming consequences of governments trying to run themselves like corporations—newsflash: it’s not going splendidly.

In short: companies are not nations, and OKRs are not magic bullets.

What else is wrong with this book?
Well, to start: it’s shallow. It’s smug. It peddles survivorship bias with the zeal of a televangelist. It confuses correlation with causation like an over-eager undergraduate. And most damning of all, it sells you the fantasy that success is just a matter of writing smarter lists, as if strategy, luck, market forces, and human frailty were irrelevant footnotes.

Measure What Matters doesn’t measure anything except the reader’s patience—and mine ran out somewhere around chapter five.