I had planned to write about Beauvoir’s Second Sex, but this has been on my mind lately.
There’s a certain breed of aspiring author, let’s call them the Sacred Scribes, who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.
Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”
The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by chatbot, of course.
But let’s pause and look at the logic here. Or rather, the lack thereof.
By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?
And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.
Let’s call this what it is: purity fetishism in prose form.
But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, outsource our thinking.
Sound familiar?
Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age: “If you let ChatGPT restructure your second paragraph, you’re no longer the author.” Nonsense. You were never the sole author. Not even close.
Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.
Here’s the rub: You’ve always been collaborating.
With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.
The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?
Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.
Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.
The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.
I recently had a run-in with opponents of generative artificial intelligence, GenAI for the rest of us. What began as a modest question about feedback mechanisms in writing spiralled swiftly into a fire-and-brimstone sermon on the moral hazards of artificial authorship.
Audio: NotebookLM podcast on this topic.
It started on Reddit, that bastion of civil discourse, in the r/FictionWriting group. I asked, sincerely and succinctly: Is using AI as a pre-alpha reader worthwhile, or is the praise too algorithmically eager to trust?
Rather than respond to the question, the moderators responded with an ultimatum: “Admit to AI-use again and you’ll be banned.” Like any self-respecting heretic, I excommunicated myself.
Some members ranted about how AI might “steal their ideas” – presumably to be repackaged by tech barons and sold back to the masses in Kindle Unlimited drivel. That’s fine, I suppose, if you’re into intellectual solipsism, but what does this paranoid fantasy have to do with my ideas?
This wasn’t a discussion. It was a witch trial. AI wasn’t the threat – difference was. Deviate from the sacred rites of pen-to-paper purity, and you’ll be cast into the outer darkness, where there is weeping and gnashing of syntax.
The underlying problem is prescriptivism – not just linguistic, but moral. And like all moral panics, it has little to do with ethics and everything to do with control.
To borrow the analogy: as with abortions, if you don’t like them, don’t have one. Abortions, one might argue, carry significantly more moral weight than paragraph polishing. Or do they? At what point does a draft become a soul?
We are fast becoming a culture where the tool is the sin, and the sinner the tool.
Don’t get salty with me when I tell you I asked AI to write this for me. I was thinking that “take it with a grain of salt” or “take it with a pinch of salt” in English did not share the same meaning as “mettre son grain de sel” in French, so I asked ChatGPT for other uses of salt. This is why it doesn’t follow my usual style, if one can call it that.
🧂 Salt: That Most Misunderstood Metaphor
Salt has an image problem.
Despite being one of the most ancient and revered substances in human civilisation—once used as currency, treaty-sealer, and god-bait—it somehow gets dragged through the metaphorical gutter in modern idiom. In English, to take something “with a grain of salt” is to doubt it. To “add your grain of salt,” per the French idiom mettre son grain de sel, is to interrupt uninvited. Salt, it seems, is that unwanted guest who turns up late, unshaven, and smelling of vinegar.
And yet, salt is also life. Necessary. Essential. Literal. So what gives?
Let’s do what the internet never does and look at context.
🏴‍☠️ English: Cynicism in a Crystal
The English expression “take it with a grain of salt” (or, in older form, a pinch) comes from Latin cum grano salis, which likely implied adding a figurative preservative to dubious claims—treat this as you would old meat. In other words, don’t fully trust it unless you like dysentery.
We also say “he’s a bit salty” to mean grumpy, caustic, or prone to verbal cutlery. “Adding your two cents” is bad enough, but adding your grain of salt implies that what you’re contributing is both unsolicited and probably irritating.
Put simply, English idioms treat salt as if it’s the person in the meeting who thinks they’re clever. There’s a faint whiff of Protestantism here—suspicious of flavour, pleasure, and expressive enthusiasm. Plain oatmeal, plain truths, no seasoning required. Salt is vice. (ChatGPT had already done the research, so I asked it to produce this section for me to copy and paste. You’re welcome.)
🇫🇷 French: Salty Saboteurs
The French mettre son grain de sel is more or less the same: to butt in. To lob your unwanted opinion into someone else’s stew. Not unlike “putting in your two penn’orth” in British English—but somehow meaner, as if your salt is not just annoying, but wrong.
Salt, in this idiom, doesn’t enrich—it ruins. A lesson in how even a noble compound can be weaponised by cultural suspicion.
🏺 Hindi: Loyalty Seasoned with Honour
Contrast this with Hindi: namak harām — literally “unfaithful to salt.” This is a powerful accusation. It means you’ve betrayed someone who fed you, someone who sustained you. You’ve taken their salt and spat in their dish.
Conversely, namak halāl is a compliment: someone loyal, trustworthy, faithful to the hand that seasoned them. Salt is the symbol of obligation and honour—not interference.
It is covenantal.
🗾 Japanese: Salt as Mercy
塩を送る (shio o okuru) – “to send salt” – is a Japanese idiom meaning to help your enemy in their time of need. Based on a historical moment when Uesugi Kenshin sent salt to his rival, Takeda Shingen, when the latter’s supply was blockaded.
Salt, here, transcends enmity. It’s noble. A tool of ethics.
In short: send salt, don’t throw it.
🇩🇪 German & 🇪🇸 Spanish: Flavour as Personality
The Germans say “das Salz in der Suppe sein”—to be the salt in the soup. You’re what makes life interesting. Without you, it’s just… wet nutrition.
In Spanish, “ser la sal de la vida” means to be the zest of existence. Without salt, life is dull, bland, morally beige.
In these idioms, salt is essential. A little dangerous, maybe, but necessary. Just like any compelling person.
🇹🇷 Turkish: The Dry Salt of Privilege
The Turkish idiom “tuzu kuru” (lit. “dry salt”) means you’re doing fine. Perhaps too fine. You’re unaffected, aloof, in your tower of comfort while others stew.
Dry salt is privilege: unbothered, unsalted tears. An idiom with side-eye built in.
🕊️ Christianity: Salt of the Earth
The Gospels famously commend the righteous as “the salt of the earth.” Not merely good people, but the ones who preserve and season the whole damn world. And yet, “if salt loses its savour,” says Matthew 5:13, “wherewith shall it be salted?” A warning to remain vital. Relevant. Useful.
Even Jesus had thoughts about flavour fatigue.
⚖️ So… Is Salt Praised or Pitied?
Depends who you ask.
For some, salt is civic virtue (Hindi).
For others, it’s moral generosity (Japanese).
Sometimes it’s life’s spark (German, Spanish).
Sometimes it’s trouble in a shaker (English, French).
But the ambivalence is the point. Salt is essential—but easily overdone. Too little, and life is bland. Too much, and it’s ruined.
Like language, then: salt mediates between flavour and clarity. Add carefully. Stir well.
🧂 Final Sprinkle
Before you disparage someone for being “a bit salty,” ask yourself whether they’re really interfering—or simply adding what your grey little broth lacked all along.
And for heaven’s sake, be faithful to the salt you’ve eaten.
Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.
Audio: NotebookLM podcast on this topic.
Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.
The Great Anthropomorphic Flaw (aka Feature)
Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.
“We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre.”
Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?
Why We Keep Doing It (Despite Knowing Better)
To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.
This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.
And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.
When the Roleplay Gets Risky
Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.
“We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us.”
More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.
And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.
Our Language is a Terrible Tool for This Job
English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.
Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.
“We built a creature we can’t speak about without sounding like lunatics or liars.”
This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.
The Semantics of Sentimentality (Or: “How Does This Sound to You?”)
Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.
They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.
But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.
Time to Grow Up (Just a Bit)
This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.
Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.
And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.
“If we speak to AI like it’s sentient, we’ll eventually legislate as if it is.”
Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.
The Mirror, Darkly
Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.
The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.
Having just finished Never Let Me Go by Kazuo Ishiguro, I’ve now cracked open my first taste of Cioran—History and Utopia. You might reasonably ask why. Why these two? And what, if anything, do they have in common? Better yet—what do the three of us have in common?
Audio: NotebookLM podcast on this topic.
Recently, I finished writing a novella titled Propensity (currently gathering metaphorical dust on the release runway). Out of curiosity—or narcissism—I fed it to AI and asked whose style it resembled. Among the usual suspects were two names I hadn’t yet read: Ishiguro and Cioran. I’d read the others and understood the links. These two, though, were unknown quantities. So I gave them a go.
Ishiguro is perhaps best known for The Remains of the Day, which, like Never Let Me Go, got the Hollywood treatment. I chose the latter, arbitrarily. I even asked ChatGPT to compare both books with their cinematic counterparts. The AI was less than charitable, describing Hollywood’s adaptations as bastardised and bowdlerised—flattened into tidy narratives for American palates too dim to digest ambiguity. On this, we agree.
What struck me about Never Let Me Go was its richly textured mundanity. That’s apparently where AI saw the resemblance to Propensity. I’m not here to write a book report—partly because I detest spoilers, and partly because summaries miss the point. It took about seven chapters before anything ‘happened’, and then it kept happening. What had at first seemed like a neurotic, wandering narrative from the maddeningly passive Kathy H. suddenly hooked me. The reveals began to unfold. It’s a book that resists retelling. It demands firsthand experience. A vibe. A tone. A slow, aching dread.
Which brings me neatly to Cioran.
History and Utopia is a collection of essays penned in French (not his mother tongue, but you’d never guess it) while Cioran was holed up in postwar Paris. I opted for the English translation—unapologetically—and was instantly drawn in. His prose? Electric. His wit? Acidic. If Ishiguro was a comparison of style, then Cioran was one of spirit. Snark, pessimism, fatalistic shrugs toward civilisation—finally, someone speaking my language.
Unlike the cardboard cut-outs of Cold War polemics we get from most Western writers of the era, Cioran’s take is layered, uncomfortably self-aware, and written by someone who actually fled political chaos. There’s no naïve idealism here, no facile hero-villain binaries. Just a deeply weary intellect peering into the abyss and refusing to blink. It’s not just what he says, but the tone—the curled-lip sneer at utopian pretensions and historical self-delusions. If I earned even a drop of that comparison, I’ll take it.
Both Ishiguro and Cioran delivered what I didn’t know I needed: the reminder that some writers aren’t there to tell you a story. They’re there to infect you with an atmosphere. An idea. A quiet existential panic you can’t shake.
I’ve gotten what I came for from these two, though I suspect I’ll be returning, especially to Cioran. Philosophically, he’s my kind of bastard. I doubt this’ll be my last post on his work.
I neither champion nor condemn tradition—whether it’s marriage, family, or whatever dusty relic society is currently parading around like a prize marrow at a village fête.
Audio: NotebookLM podcast on traditional families.
In a candid group conversation recently, I met “Jenny”, who declared she would have enjoyed her childhood much more had her father not “ruined everything” simply by existing. “Marie” countered that it was her mother who had been the wrecker-in-chief. Then “Lulu” breezed in, claiming, “We had a perfect family — we practically raised ourselves.”
“We had a perfect family; we practically raised ourselves.”
Now, here’s where it gets delicious:
Each of these women, bright-eyed defenders of “traditional marriage” and “traditional family” (cue the brass band), had themselves ticked every box on the Modern Chaos Bingo Card: children out of wedlock? Check. Divorces? Check. Performative, cold-marriage pantomimes? Absolutely—and scene. Their definition of “traditional marriage” is the vintage model: one cis-male, one cis-female, Dad brings home the bacon, Mum weeps quietly into the washing-up. Standard.
Let’s meet the players properly:
Jenny sprang from a union of two serial divorcees, each dragging along the tattered remnants of previous families. She was herself a “love child,” born out of wedlock and “forcing” another reluctant stroll down the aisle. Her father? A man of singular achievements: he paid the bills and terrorised the household. Jenny now pays a therapist to untangle the psychological wreckage.
Marie, the second of two daughters, was the product of a more textbook “traditional family”—if by textbook you mean a Victorian novel where everyone is miserable but keeps a stiff upper lip about it. Her mother didn’t want children but acquiesced to her husband’s demands (standard operating procedure at the time). Marie’s childhood was a kingdom where Daddy was a demigod and Mummy was the green-eyed witch guarding the gates of hell.
Lulu grew up in a household so “traditional” that it might have been painted by Hogarth: an underemployed, mostly useless father and a mother stretched thinner than the patience of a British Rail commuter. Despite—or because of—the chaos, Lulu claims it was “perfect,” presumably redefining the word in a way the Oxford English Dictionary would find hysterical. She, too, had a child out of wedlock, with the explicit goal of keeping feckless men at bay.
And yet—and yet—all three women cling, white-knuckled, to the fantasy of the “traditional family.” They did not achieve stability. Their families of origin were temples of dysfunction. But somehow, the “traditional family” remains the sacred cow, lovingly polished and paraded on Sundays.
Why?
Because what they’re chasing isn’t “tradition” at all — it’s stability, that glittering chimera. It’s nostalgia for a stability they never actually experienced. A mirage constructed from second-hand dreams, glossy 1950s propaganda, and whatever leftover fairy tales their therapists hadn’t yet charged them £150 an hour to dismantle.
Interestingly, none of them cared two figs about gay marriage, though opinions about gay parenting varied wildly—a kettle of fish I’ll leave splashing outside this piece.
Which brings us back to the central conundrum:
If lived experience tells you that “traditional family” equals trauma, neglect, and thinly veiled loathing, why in the name of all that’s rational would you still yearn for it?
Societal pressure, perhaps. Local customs. Generational rot. The relentless cultural drumbeat that insists that marriage (preferably heterosexual and miserable) is the cornerstone of civilisation.
Still, it’s telling that Jenny and Marie were both advised by therapists to cut ties with their toxic families—yet in the same breath urged to create sturdy nuclear families for their own children. It was as if summoning a functional household from the smoking ruins of dysfunction were a simple matter of willpower and a properly ironed apron.
Meanwhile, Lulu—therapy-free and stubbornly independent—declares that raising oneself in a dysfunctional mess is not only survivable but positively idyllic. One can only assume her standards of “perfect” are charmingly flexible.
As the title suggests, this piece questions traditional families. I offer no solutions—only a raised eyebrow and a sharper question:
What is the appeal of clinging to a fantasy so thoroughly at odds with reality? Your thoughts, dear reader? I’d love to hear your defences, your protests, or your own tales from the trenches.
Although the rating was not bad – 3.8 as of this writing – the reviews told a different story.
The first half of the book was worthwhile as it was a thought provoking description of how early societies may have developed social systems to encourage group cooperation and discourage harmful behaviors. The second half however was a rambling, preachy, and judgemental dissertation on postmodernism using the all too familiar tropes of social justice dialog. Overall it was poorly written though some of that may be due to translation. There is too little useful material here to justify its purchase.
Firstly, this version is translated from a German edition, and some readers feel that structure and clarity were lost along the way. In any case, he’s accused of being verbose and circumlocutory.
Amazon describes this book as “a sweeping history,” but the only thing sweeping about it was what I wanted to do with it under the nearest rug.
Secondly, it may be somewhat derivative of Nietzsche’s work on the same topic.
The book is rambling and incoherent, the author makes wild assertions about things that happened before recorded history, leaps from one train of thought to another, poo poos the ideas of religion but frequently quotes Nietzsche as the infallible source of knowledge…
Tonyo (reviewer)
In any case, the topic interests me, but I don’t see myself reading it any time soon.
—or— How the Invisible Hand Became a Throttling Grip on the Throat of the Biosphere
As many frequent visitors know, I am a recovering economist. I tend to view economics through a philosophical lens. Here, I consider the daft nonsense of Pareto optimality.
Audio: NotebookLM podcast of this content.
There is a priesthood in modern economics—pious in its equations, devout in its dispassion—that gathers daily to prostrate itself before the altar of Pareto. Here, in this sanctum of spreadsheet mysticism, it is dogma that an outcome is “optimal” so long as no one can be made better off without making someone else worse off. Never mind if half the world begins in a ditch and the other half in a penthouse jacuzzi. So long as no one’s jacuzzi is repossessed, the system is just. Hallelujah.
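For anyone who wants the dogma spelled out, the criterion is easy enough to state; the two-person toy economy beneath it uses numbers of my own invention, purely for illustration.

x \text{ is Pareto optimal} \iff \text{no feasible } y \text{ satisfies } u_i(y) \ge u_i(x) \text{ for every person } i, \text{ with strict inequality for at least one}

With a fixed pie of 100 and the usual more-is-better preferences, the allocations (99, 1), (50, 50) and (1, 99) are all Pareto optimal: any transfer makes the donor worse off. The criterion is serenely indifferent to which of the three you are born into.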
This cult of cleanliness, cloaked in the language of “efficiency,” performs a marvellous sleight of hand: it transforms systemic injustice into mathematical neutrality. The child working in the cobalt mines of the Congo is not “harmed”—she simply doesn’t exist in the model. Her labour is an externality. Her future, an asterisk. Her biosphere, a rounding error in the grand pursuit of equilibrium.
Let us be clear: this is not science. This is not even ideology. It is theology—an abstract faith-based system garlanded with numbers. And like all good religions, it guards its axioms with fire and brimstone. Question the model? Heretic. Suggest the biosphere might matter? Luddite. Propose redistribution? Marxist. There is no room in this holy order for nuance. Only graphs and gospel.
Jevons warned us…that improvements in efficiency could increase, not reduce, resource consumption.
The rot runs deep. William Stanley Jevons—yes, that Jevons, patron saint of unintended consequences—warned us as early as 1865 that improvements in efficiency could increase, not reduce, resource consumption. But his paradox, like Cassandra’s prophecy, was fated to be ignored. Instead, we built a civilisation on the back of the very logic he warned would destroy it.
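A stylised example, with numbers invented for the purpose, shows how the paradox bites. Double an engine's efficiency and the effective price of useful work halves; if demand for that work has a price elasticity of 1.5, the quantity demanded rises by a factor of 2^{1.5} and fuel consumption ends up higher than before:

\text{work demanded} \to 2^{1.5} \approx 2.83\times, \qquad \text{fuel burnt} = \frac{\text{work}}{\text{efficiency}} \to \frac{2.83}{2} \approx 1.4\times

Whether the rebound swallows the saving depends on that elasticity; for the coal-hungry industry Jevons was studying, it often did.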
Then came Simon Kuznets, who—bless his empirically addled soul—crafted a curve that seemed to promise that inequality would fix itself if we just waited politely. We called it the Kuznets Curve and waved it about like a talisman against the ravages of industrial capitalism, ignoring the empirical wreckage that piled up beneath it like bones in a trench.
Meanwhile, Pareto himself, that nobleman of social Darwinism, famously calculated that 80% of Italy’s land was owned by 20% of its people—and rather than challenge this grotesque asymmetry, he chose to marvel at its elegance. Economics took this insight and said: “Yes, more of this, please.”
And so the model persisted—narrow, bloodless, and exquisitely ill-suited to the world it presumed to explain. The economy, it turns out, is not a closed system of rational actors optimising utility. It is a planetary-scale thermodynamic engine fuelled by fossil sunlight, pumping entropy into the biosphere faster than it can be absorbed. But don’t expect to find that on the syllabus.
Mainstream economics has become a tragic farce, mouthing the language of optimisation while presiding over cascading system failure. Climate change? Not in the model. Biodiversity collapse? A regrettable externality. Intergenerational theft? Discounted at 3% annually.
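That last line deserves its own arithmetic, because the discount rate does the heavy lifting. At 3% a year, a harm arriving a century from now enters today's cost-benefit ledger at

\frac{1}{(1.03)^{100}} \approx 0.052

roughly five pence on the pound. Price a drowned coastal city of 2125 that way and almost no present-day expense to prevent it will ever pencil out, which is precisely what makes “theft” the honest word.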
We are witnessing a slow-motion suicide
We are witnessing a slow-motion suicide cloaked in the rhetoric of balance sheets. The Earth is on fire, and the economists are debating interest rates.
What we need is not reform, but exorcism. Burn the models. Salt the axioms. Replace this ossified pseudoscience with something fit for a living world—ecological economics, systems theory, post-growth thinking, anything with the courage to name what this discipline has long ignored: that there are limits, and we are smashing into them at speed.
History will not be kind to this priesthood of polite annihilation. Nor should it be.
The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.
Audio: NotebookLM podcast on this topic.
Let us survey the battlefield:
1. Classical Probability (Laplacean Determinism in a Tuxedo)
This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
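For the record, here is the definition, with a worked die roll of my own for illustration:

P(A) = \frac{\#\,\text{favourable outcomes}}{\#\,\text{possible outcomes}}, \qquad P(\text{even number on a fair die}) = \frac{3}{6} = \frac{1}{2}

Everything hangs on the word “fair”, which is precisely the assumption doing the carrying.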
2. Frequentist Probability (The Empiricist’s Fantasy)
Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
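In symbols, writing n_A for the number of times A occurs in n trials (a standard statement of the idea, not a quotation from any particular frequentist):

P(A) = \lim_{n \to \infty} \frac{n_A}{n}

The limit is exactly the part no one will ever observe.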
3. Bayesian Probability (Subjectivity Dressed as Rigour)
Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
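The updating rule itself is Bayes' theorem; the numbers in the illustration below are invented to show how two priors can stay apart:

P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

Suppose the evidence is moderately diagnostic: P(E \mid H) = 0.9 and P(E \mid \neg H) = 0.2. An agnostic prior of 0.5 updates to roughly 0.82, while a sceptic starting at 0.01 lands at about 0.04. Same evidence, same rule, different realities, at least until a great deal more evidence arrives.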
4. Propensity Interpretation (The Ontology of Maybes)
Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.
5. Logical Probability (A Sober Attempt at Rationality)
Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.
6. Quantum Probability (Schrödinger’s Definition)
In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
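The rule in question is Born's; the notation below is standard and belongs to no particular interpretation:

P(i) = |\langle i \mid \psi \rangle|^{2}

The formula itself is uncontroversial. What the probability is of, and for whom, is where Copenhagen and Many Worlds part company.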
Current Status: War of Attrition
There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.
In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.
And so the probability of this contest being resolved any time soon? About zero. Or one. Depending on your interpretation.
My reaction to Yuval Noah Harari’s Nexus continues with Chapter 4, “Errors: The Fantasy of Infallibility.” Spoiler alert: Harari makes a critical misstep by overly defending so-called self-correcting institutions compared to non-self-correcting ones.
Harari provides a solid account of how religious institutions and other dogmatic ideological constructs are slow to change, contrasting them with relatively faster self-correcting systems like science. Once again, he underscores the tension between order and truth—two critical dimensions in his worldview and cornerstones of Modernist beliefs.
Audio: Podcast conversation on this topic.
I agree with Harari that the lack of self-correction in institutions is problematic and that self-correction is better than the alternative. However, he overestimates the speed and efficacy of these self-correcting mechanisms. His argument presumes the existence of some accessible underlying truth, which, while an appealing notion, is not always so clear-cut. Harari cites examples of scientific corrections that took decades to emerge, giving the impression that, with enough time, everything will eventually self-correct. As the environment changes, corrections will naturally follow—albeit over long spans of time. Ultimately, Harari makes a case for human intervention without recognising it as an Achilles’ heel.
Harari’s Blind Spot
Harari largely overlooks the influence of money, power, and self-interest in these systems. His alignment with the World Economic Forum (WEF) suggests that, while he may acknowledge its fallibility, he still deems it “good enough” for governance. This reflects a paternalistic bias. Much like technologists who view technology as humanity’s salvation, Harari, as a Humanist, places faith in humans as the ultimate stewards of this task. However, his argument fails to adequately account for hubris, cognitive biases, and human deficits.
The Crux of the Problem
The core issue with Harari’s argument is that he appears to be chasing a local maximum by adopting a human-centric solution. His proposed solutions require not only human oversight but the oversight of an anointed few—presumably his preferred “elite” humans—even if other solutions might ultimately prove superior. He is caught in the illusion of control. While Harari’s position on transhuman capabilities is unclear, I suspect he would steadfastly defend human cognitive superiority to the bitter end.
In essence, Harari’s vision of self-correcting systems is optimistic yet flawed. By failing to fully acknowledge the limits of human fallibility and the structural influences of power and self-interest, he leaves his argument vulnerable to critique. Ultimately, his belief in the self-correcting nature of human institutions reflects more faith than rigour.