As some of you know, I publish speculative fiction under the name Ridley Park. Propensity is one of several recent releases – a novella that leans philosophical, brushes up against literary fiction, and steps quietly into the margins of sci-fi.
It’s not about spaceships or superintelligence. It’s about modulation.
About peace engineered through neurochemical compliance.
About the slow horror of obedience without belief, and the behavioural architecture that lets us think we’re still in control.
The ideas explored include:
Free will as illusion
Peace as compliance
Drift, echo, and the limits of modulation
Obedience without belief
Institutional horror and soft dystopia
Consent and behavioural control
Narrative as residue
Collapse by calibration
Though filed under speculative fiction, Propensity is best read as a literary artefact – anti-sci-fi, in a sense. There’s no fetishisation of technology or progress. Just modulation, consequence, and the absence of noise.
This PDF contains selected visual excerpts from the physical book to accompany the free audiobook edition. For readers and listeners alike, it offers a glimpse into Ridley Park’s world – a quietly dystopian, clinically unsettling, and depressingly plausible one.
Title page
Copyright page
Table of Contents
Chapter 10: Memorandum. Read in the audiobook; included here so you can see it rendered in the form of a memo.
Chapter 26: Simulacra. Read in the audiobook; included here so you can see it rendered in the format of a screenplay.
Chapter 28: Standard Test. Read in the audiobook; included here so you can see it rendered in the format of a standardised test.
Chapter 34: Calendar. Read in the audiobook; included here so you can see it rendered in the format of a calendar.
Chapter 39: Carnage. Read in the audiobook; included here so you can see it rendered in the form of a Dr Seuss-style poem.
Chapter 41: Leviathan. Excerpted in the audiobook; included here so you can see it rendered with an image of the cover of Hobbes’ Leviathan and redacted page content.
Chapter 42: Ashes to Ashes. Read in the audiobook; included here so you can see it rendered in the form of text art.
Chapter 43: Unknown. A description of this chapter is read in the audiobook; it is included here so you can see it rendered in the form of an ink sketch.
Chapter 44: Vestige. A description of this chapter is read in the audiobook; it is included here so you can see it rendered in the form of text art.
For more information about Ridley Park’s Propensity, visit the website. I’ll be sharing content related to Propensity and my other publications. I’ll cross-post here when the material has a philosophical bent, which it almost always does.
So sad, really. Not tragic in the noble Greek sense, just pathetically engineered. Our collective addiction to money isn’t even organic – it’s fabricated, extruded like a synthetically flavoured cheese product. At least fentanyl has the decency to offer a high. Money promises only more money, like a Ponzi scheme played out on the global stage, with no exit strategy but death – or worse, a lifestyle brand.
Audio: NotebookLM podcast on this topic.
We’re told money is a tool. Sure. So’s a knife. But when you start sleeping with it under your pillow, stroking it for comfort, or stabbing strangers for your next fix, you’re not using it as a “tool” – you’re a junkie. And the worst part? It’s socially sanctioned. Applauded, even. We don’t shame the addict – we give him equity and a TED Talk.
The Chemical Romance of Currency
Unlike drugs, money doesn’t scramble your neurons – it rewires your worldview. You don’t feel high. You feel normal. Which is exactly what makes it so diabolical. Cocaine users might have delusions of grandeur, but capitalists have Excel sheets to prove theirs. It’s the only addiction where hoarding is a virtue and empathy is an obstacle to growth.
“We used to barter goods. Now we barter souls for subscriptions.”
The dopamine hit of a pay rise. The serotonin swell when your bank app shows four digits instead of three. These are chemical kicks masquerading as success. It’s not money itself – it’s the psychic sugar rush of “having” it, and the spiritual rot of needing it just to exist.
And oh, how they’ve gamified that need. You want to eat? Pay. You want shelter? Pay. You want healthcare? Pay – and while you’re at it, pay for the privilege of existing inside a system that turns your own exhaustion into a business model. You are the product. The addict. The asset. The mark.
The Fabrication of Need
Nobody needs money in the abstract. You need food. You need air. You need dignity, love, and maybe the occasional lie-in. Money only enters the picture because we’ve designed a world where nothing gets through the gates without it. Imagine locking the pantry, then charging your children rent for their own sandwiches. That’s civilisation.
“Money isn’t earned—it’s rationed. And you’re gaslit into thinking it’s your fault you’re hungry.”
They say money is freedom. That’s cute. Tell that to the nurse working double shifts while Jeff Bezos experiments with zero-gravity feudalism. In reality, money is a filtering device—who gets to be human, and who stays stuck being labour.
Crypto was supposed to be liberation. Instead, it became a libertarian renaissance fair for the hyper-online, still pegged to the same logic: hoard, pump, dump, repeat. The medium changed, but the pathology remained the same.
Worshipping the Golden Needle
Let’s be honest: we’ve built temples to this thing. Literal towers. Financial cathedrals made of mirrored glass, each reflecting our collective narcotic fantasy of “more.” We measure our worth in net worth. We rank our lives by percentile. A person’s death is tragic unless they were poor, in which case it becomes a morality tale about poor decisions and not grinding hard enough.
“You’re not broke—you’re just not ‘optimising your earning potential.’ Now go fix your mindset and buy this online course.”
We no longer have citizens; we have consumers. No neighbours – just co-targeted demographics. Every life reduced to its purchasing power, its brand affiliations, its potential for monetisation. The gig economy is just Dickensian poverty with a better UI.
Cold Turkey for the Soul
The worst part? There is no rehab. No twelve-step programme for economic dependency. You can’t detox from money. Try living without it and see how enlightened your detachment feels on an empty stomach. You’ll find that society doesn’t reward transcendence – it punishes it. Try opting out and watch how quickly your saintliness turns into homelessness.
So we cope. We moralise the hustle. We aestheticise the grind. We perform productivity like good little addicts, jonesing for a dopamine hit in the shape of a direct deposit.
“At least fentanyl kills you quickly. Money lets you rot in comfort—if you’re lucky.”
Exit Through the Gift Shop?
So what’s the answer? I’m not offering one. This isn’t a TEDx talk. There’s no keynote, no downloadable worksheet, no LinkedIn carousel with three bullet points and an aspirational sunset. The first step is admitting the addiction – and maybe laughing bitterly at the absurdity of it all.
Money, that sweet illusion. The fiction we’ve all agreed to hallucinate together. The god we invented, then forgot was a puppet. And now we kneel, transfixed, as it bleeds us dry one tap at a time.
Epilogue: The Omission That Says It All
If you need proof that psychology is a pseudoscience operating as a control mechanism, ask yourself this:
Why isn’t this in the DSM?
This rabid, irrational, identity-consuming dependency on money – why is it not listed under pathological behaviour? Why isn’t chronic monetisation disorder a clinical diagnosis? Because it’s not a bug in the system. It is the system. You can be obsessed with wealth, hoard it like a dragon, destroy families and ecosystems in pursuit of it, and not only will you escape treatment, you’ll be featured on a podcast as a “thought leader.”
“Pathology is what the poor get diagnosed with. Wealth is its own immunity.”
We don’t pathologise the addiction to money because it’s the operating principle of the culture. And psychology – like any well-trained cleric of the secular age – knows not to bite the gilded hand that feeds it.
And so it remains omitted. Undiagnosed. Unquestioned. The dirtiest addiction of all, hidden in plain sight, wearing a suit and handing out business cards.
Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.
Audio: NotebookLM podcast on this topic.
Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.
The Great Anthropomorphic Flaw (aka Feature)
Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.
“We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre.”
Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?
Why We Keep Doing It (Despite Knowing Better)
To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.
This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.
And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.
When the Roleplay Gets Risky
Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.
“We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us.”
More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.
And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.
Our Language is a Terrible Tool for This Job
English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.
Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.
“We built a creature we can’t speak about without sounding like lunatics or liars.”
This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.
The Semantics of Sentimentality (Or: “How Does This Sound to You?”)
Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.
They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.
But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.
Time to Grow Up (Just a Bit)
This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.
Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.
And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.
“If we speak to AI like it’s sentient, we’ll eventually legislate as if it is.”
Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.
The Mirror, Darkly
Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.
The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.
Let us begin with the heresy: Truth is a rhetorical artefact. Not a revelation. Not a metaphysical essence glimmering behind the veil. Just language — persuasive, repeatable, institutionally ratified language. In other words: branding.
Audio: NotebookLM podcast on this topic.
This is not merely a postmodern tantrum thrown at the altar of Enlightenment rationalism. It is a sober, if impolite, reminder that nearly everything we call “knowledge” is stitched together with narrative glue and semantic spit. Psychology. Neuroscience. Ethics. Economics. Each presents itself as a science — or worse, a moral imperative — but their foundations are built atop a linguistic faultline. They are, at best, elegant approximations; at worst, dogma in drag.
Let’s take psychology. Here is a field that diagnoses your soul via consensus. A committee of credentialed clerics sits down and declares a cluster of behaviours to be a disorder, assigns it a code, and hands you a script. It is then canonised in the DSM, the Diagnostic Scripture Manual. Doubt its legitimacy and you are either naïve or ill — which is to say, you’ve just confirmed the diagnosis. It’s a theological trap dressed in the language of care.
Or neuroscience — the church of the glowing blob. An fMRI shows a region “lighting up” and we are meant to believe we’ve located the seat of love, the anchor of morality, or the birthplace of free will. Never mind that we’re interpreting blood-oxygen fluctuations in composite images smoothed by statistical witchcraft. It looks scientific, therefore it must be real. The map is not the territory, but in neuroscience, it’s often a mood board.
And then there is language itself, the medium through which all these illusions are transmitted. It is the stage, the scenery, and the unreliable narrator. My Language Insufficiency Hypothesis proposes that language is not simply a flawed tool — it is fundamentally unfit for the task it pretends to perform. It was forged in the furnace of survival, not truth. We are asking a fork to play the violin.
This insufficiency is not an error to be corrected by better definitions or clever metaphors. It is the architecture of the system. To speak is to abstract. To abstract is to exclude. To exclude is to falsify. Every time we speak of a thing, we lose the thing itself. Language functions best not as a window to the real but as a veil — translucent, patterned, and perpetually in the way.
So what, then, are our Truths™? They are narratives that have won. Stories that survived the epistemic hunger games. They are rendered authoritative not by accuracy, but by resonance — psychological, cultural, institutional. A “truth” is what is widely accepted, not because it is right, but because it is rhetorically unassailable — for now.
This is the dirty secret of epistemology: coherence masquerades as correspondence. If enough concepts link arms convincingly, we grant them status. Not because they touch reality, but because they echo each other convincingly in our linguistic theatre.
Libet’s experiment, Foucault’s genealogies, McGilchrist’s hemispheric metaphors — each peels back the curtain in its own way. Libet shows that agency might be a post-hoc illusion. Foucault reveals that disciplines don’t describe the subject; they produce it. McGilchrist laments that the Emissary now rules the Master, and the world is flatter for it.
But all of them — and all of us — are trapped in the same game: the tyranny of the signifier. We speak not to uncover truth, but to make truth-sounding noises. And the tragedy is, we often convince ourselves.
So no, we cannot escape the prison of language. But we can acknowledge its bars. And maybe, just maybe, we can rattle them loudly enough that others hear the clank.
Until then, we continue — philosophers, scientists, diagnosticians, rhetoricians — playing epistemology like a parlour game with rigged dice, congratulating each other on how well the rules make sense.
This post draws on themes from my upcoming book, A Language Insufficiency Hypothesis. The transcript below is taken from a publicly available exchange, which you can view here. Consider it Exhibit A in language’s ongoing failure to bear the weight of meaning.
KK: I’m saying we were technologically more advanced.
DFW: So you’re saying we’re superior to Australian Aboriginals?
KK: That’s quite the opposite of what I’m saying. I’m not saying we were superior, I’m saying we were technologically more advanced.
DFW: So, how is that the opposite?
KK: Superior implies a moral quality. I’m not making any moral implication. You seem to be, but what I’m saying is…
DFW: I think most people would hear it that way.
KK: No.
DFW: Again, you’re a very intelligent man. How would most people hear that?
KK: Most people would hear what I’m saying for what I’m saying, which is…
DFW: I don’t think they would.
KK: You seem to get quite heated about this, which is completely unnecessary.
DFW: Um…
KK: You think it’s necessary?
DFW: I’m a bit stunned by what you’re implying.
KK: No, you’re acting in a kind of passive aggressive way which indicates that you’re not happy…
DFW: I genuinely… I’m being 100% authentic. My visceral reaction to a white man sitting and saying to me, “And why were we able to commit genocide on them?” and then just pausing—
KK: Yes.
DFW: …is very visceral to me.
KK: Well, let’s go back. First of all, it’s interesting that you brought up my skin colour because I thought that was the exact opposite of the point you’re trying to make in the book.
Audio: NotebookLM podcast on this topic.
Dissection: Language as Battlefield
The Language Insufficiency Hypothesis begins with this premise: language is not merely flawed, it is structurally inadequate for mediating complex, layered realities – especially those laced with power, morality, and history. This transcript is not a debate. It is a linguistic trench war in which every utterance is laced with shrapnel, and each side thinks they’re defending reason.
Let’s pull a few of the shell casings from the mud.
1. Semantic Contamination: “Technologically more advanced”
KK attempts to offer a dry, neutral descriptor. DFW hears supremacist teleology. Why? Because “advanced” is culturally radioactive. It doesn’t merely denote a technical state—it connotes a ladder, with someone inevitably on the bottom rung.
When language carries historical residue, neutrality is a delusion. Words don’t just mean. They echo.
KK is making a semantic distinction. DFW hears a moral claim. Both are right. And both are talking past one another, because language is attempting to cleave affect from description, and it simply can’t.
2. Disambiguation Does Not Save You
KK’s insistence—“I’m not saying we’re superior”—is a textbook example of denotative desperation. He believes clarification will rescue intent. But as any linguist (or postcolonial theorist) will tell you: intent does not sterilise implication.
Language cannot be laundered by explanation. Once spoken, words belong to context, not intention.
KK thinks he’s holding a scalpel. DFW hears a cudgel. And here we are.
3. Phatic Collapse: “Most people would hear…”
This is where the wheels come off. KK argues from semantic specificity. DFW argues from sociolinguistic reception. It’s Saussure versus the TikTok algorithm. Neither will win.
Communication disintegrates not because anyone is lying, but because they are playing incompatible games with the same tokens.
4. Identity as Index: When the Speaker Becomes the Speech
DFW’s invocation of “a white man” is not a derailment—it’s the inevitable endpoint of a system where words no longer float free but are yoked to their utterer. This is the moment the failure of language becomes a failure of interlocution. Argument collapses into indexical entrapment.
At this point, you’re no longer debating ideas. You’re defending your right to use certain words at all.
Which brings us to the final breakdown.
5. Moral Authenticity vs Logical Precision: Unbridgeable Grammars
KK: I am making a logical distinction.
DFW: I am having a visceral reaction.
The failure isn’t moral. It isn’t historical. It’s grammatical. One is operating in a truth-function logic game. The other is reacting within a trauma-informed, socially indexed register. These are grammars that do not overlap.
Conclusion: Language Did What It Always Does—It Failed Us
If this brief and brutal dialogue proves anything, it’s this: you cannot extract meaning cleanly from words when the words themselves are sponges for history, hierarchy, and harm. The moment we ask language to do too much—to carry precision, affect, ethics, and identity—it folds in on itself.
And that, dear reader, is precisely the argument of A Language Insufficiency Hypothesis: that meaning does not reside in words, and never has. It lives in the gaps, the silences, the misfires. That’s where the truth—whatever’s left of it—might be hiding.
Follow the wreckage. That’s where the signal lives.
Science has a language problem. Not a lack of it – if anything, a surfeit. But words, unlike test tubes, do not stay sterile. They evolve, mutate, and metastasise. They get borrowed, bent, misused, and misremembered. And when the public discourse gets hold of them, particularly on platforms like TikTok, it’s the language that gets top billing. The science? Second lead, if it’s lucky.
Semantic drift is at the centre of this: the gradual shift in meaning of a word or phrase over time. It’s how “literally” came to mean “figuratively,” how “organic” went from “carbon-based” to “morally superior,” and how “theory” in science means robust explanatory framework but in the public square means vague guess with no homework.
In short, semantic drift lets rhetoric masquerade as reason. Once a word acquires enough connotation, you can deploy it like a spell. No need to define your terms when the vibe will do.
Audio: NotebookLM podcast on this topic.
When “Vitamin” No Longer Means Vitamin
Take the word vitamin. It sounds objective. Authoritative. Something codified in the genetic commandments of all living things.
But it isn’t.
A vitamin is simply a substance that an organism needs but cannot synthesise internally, and must obtain through its diet. That’s it. It’s a functional definition, not a chemical one.
So:
Vitamin C is a vitamin for humans, but not for dogs, cats, or goats. They make their own. We lost the gene. Tough luck.
Vitamin D, meanwhile, isn’t a vitamin at all. It’s a hormone, synthesised when sunlight hits your skin. Its vitamin status is a historical relic – named before we knew better, and now marketed too profitably to correct.
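The species-relativity of that functional definition can be made concrete with a toy sketch. (The tables below are illustrative stand-ins, not biochemical gospel; the point is only the shape of the definition.)

```python
# Toy illustration of the *functional* definition of a vitamin:
# a compound is a vitamin FOR a given species only if that species
# needs it and cannot synthesise it internally.

# Illustrative data: which species can make which compound themselves.
CAN_SYNTHESISE = {
    "human": {"vitamin D"},              # made in skin from sunlight
    "dog":   {"vitamin C", "vitamin D"},
    "goat":  {"vitamin C", "vitamin D"},
}

# Both compounds are physiologically required in all three species.
REQUIRED = {"vitamin C", "vitamin D"}

def is_vitamin(compound: str, species: str) -> bool:
    """A compound is a vitamin for a species iff it is required
    and that species cannot synthesise it internally."""
    return compound in REQUIRED and compound not in CAN_SYNTHESISE[species]

print(is_vitamin("vitamin C", "human"))  # True: we lost the gene
print(is_vitamin("vitamin C", "goat"))   # False: goats make their own
print(is_vitamin("vitamin D", "human"))  # False: synthesised, so really a hormone
```

Same compound, different species, different verdict – which is exactly why “vitamin” names a relationship, not a chemical class.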
But in the land of TikTok and supplement shelves, these nuances evaporate. “Vitamin” has drifted from scientific designation to halo term – a linguistic fig leaf draped over everything from snake oil to ultraviolet-induced steroidogenesis.
The Rhetorical Sleight of Hand
This linguistic slippage is precisely what allows the rhetorical shenanigans to thrive.
In one video, a bloke claims a burger left out for 151 days neither moulds nor decays, and therefore, “nature won’t touch it.” From there, he leaps (with Olympic disregard for coherence) into talk of sugar spikes, mood swings, and “metabolic chaos.” You can almost hear the conspiratorial music rising.
The science here is, let’s be generous, circumstantial. But the language? Oh, the language is airtight.
Words like “processed,” “chemical,” and “natural” are deployed like moral verdicts, not descriptive categories. The implication isn’t argued – it’s assumed, because the semantics have been doing quiet groundwork for years. “Natural” = good. “Chemical” = bad. “Vitamin” = necessary. “Addiction” = no agency.
By the time the viewer blinks, they’re nodding along to a story told by words in costume, not facts in context.
The Linguistic Metabolism of Misunderstanding
This is why semantic drift isn’t just an academic curiosity – it’s a vector. A vector by which misinformation spreads, not through outright falsehood, but through weaponised ambiguity.
A term like “sugar crash” sounds scientific. It even maps onto a real physiological process: postprandial hypoglycaemia. But when yoked to vague claims about mood, willpower, and “chemical hijacking,” it becomes a meme with lab coat cosplay. And the science, if mentioned at all, is there merely to decorate the argument, not drive it.
That’s the crux of my forthcoming book, A Language Insufficiency Hypothesis: that our inherited languages, designed for trade, prayer, and gossip, are woefully ill-equipped for modern scientific clarity. They lag behind our knowledge, and worse, they often distort it.
Words arrive first. Definitions come limping after.
In Closing: You Are What You Consume (Linguistically)
The real problem isn’t that TikTokers get the science wrong. The problem is that they get the words right – right enough to slip past your critical filters. Rhetoric wears the lab coat. Logic gets left in the locker room.
If vitamin C is a vitamin only for some species, and vitamin D isn’t a vitamin at all, then what else are we mislabelling in the great nutritional theatre? What other linguistic zombies are still wandering the scientific lexicon?
Language may be the best tool we have, but don’t mistake it for a mirror. It’s a carnival funhouse – distorting, framing, and reflecting what we expect to see. And until we fix that, science will keep playing second fiddle to the words pretending to explain it.
Having just finished Never Let Me Go by Kazuo Ishiguro, I’ve now cracked open my first taste of Cioran—History and Utopia. You might reasonably ask why. Why these two? And what, if anything, do they have in common? Better yet—what do the three of us have in common?
Audio: NotebookLM podcast on this topic.
Recently, I finished writing a novella titled Propensity (currently gathering metaphorical dust on the release runway). Out of curiosity—or narcissism—I fed it to AI and asked whose style it resembled. Among the usual suspects were two names I hadn’t yet read: Ishiguro and Cioran. I’d read the others and understood the links. These two, though, were unknown quantities. So I gave them a go.
Ishiguro is perhaps best known for The Remains of the Day, which, like Never Let Me Go, got the Hollywood treatment. I chose the latter, arbitrarily. I even asked ChatGPT to compare both books with their cinematic counterparts. The AI was less than charitable, describing Hollywood’s adaptations as bastardised and bowdlerised—flattened into tidy narratives for American palates too dim to digest ambiguity. On this, we agree.
What struck me about Never Let Me Go was its richly textured mundanity. That’s apparently where AI saw the resemblance to Propensity. I’m not here to write a book report—partly because I detest spoilers, and partly because summaries miss the point. It took about seven chapters before anything ‘happened’, and then it kept happening. What had at first seemed like a neurotic, wandering narrative from the maddeningly passive Kathy H. suddenly hooked me. The reveals began to unfold. It’s a book that resists retelling. It demands firsthand experience. A vibe. A tone. A slow, aching dread.
Which brings me neatly to Cioran.
History and Utopia is a collection of essays penned in French (not his mother tongue, but you’d never guess it) while Cioran was holed up in postwar Paris. I opted for the English translation—unapologetically—and was instantly drawn in. His prose? Electric. His wit? Acidic. If Ishiguro was a comparison of style, then Cioran was one of spirit. Snark, pessimism, fatalistic shrugs toward civilisation—finally, someone speaking my language.
Unlike the cardboard cut-outs of Cold War polemics we get from most Western writers of the era, Cioran’s take is layered, uncomfortably self-aware, and written by someone who actually fled political chaos. There’s no naïve idealism here, no facile hero-villain binaries. Just a deeply weary intellect peering into the abyss and refusing to blink. It’s not just what he says, but the tone—the curled-lip sneer at utopian pretensions and historical self-delusions. If I earned even a drop of that comparison, I’ll take it.
Both Ishiguro and Cioran delivered what I didn’t know I needed: the reminder that some writers aren’t there to tell you a story. They’re there to infect you with an atmosphere. An idea. A quiet existential panic you can’t shake.
I’ve gotten what I came for from these two, though I suspect I’ll be returning, especially to Cioran. Philosophically, he’s my kind of bastard. I doubt this’ll be my last post on his work.
Yuval Noah Harari, always ready with a digestible morsel for the TED-addled masses, recently declared that “democracy runs on trust, dictatorship on terror.” It’s a line with the crispness of a fortune cookie and about as much analytical depth. Designed for applause, not interrogation, it’s the sort of soundbite that flatters liberal sensibilities while sanding off the inconvenient edges of history.
Audio: NotebookLM podcast on this topic.
Let’s be honest: this dichotomy is not merely simplistic – it’s a rhetorical sedative. It reassures those who still believe political systems are like kitchen appliances: plug-and-play models with clear instructions and honest warranties. But for anyone who’s paid attention to the actual mechanics of power, this framing is delusional.
1. Trust Was Never Earned
In the United States, trust in democratic institutions was never some noble compact forged through mutual respect and enlightened governance. It was cultivated through exclusion, propaganda, and economic bribery. The post-WWII boom offered the illusion of institutional legitimacy – but only if you were white, male, middle-class, and preferably asleep.
Black Americans, Indigenous peoples, immigrants, women – none were granted the luxury of naïve trust. They were told to trust while being actively disenfranchised. To participate while being systemically excluded. So no, Harari, the machine didn’t run on trust. It ran on marketing. It ran on strategic ignorance.
2. Dictatorship Doesn’t Require Terror
Equally cartoonish is the notion that dictatorships subsist purely on terror. Many of them run quite comfortably on bureaucracy, passive conformity, and the grim seduction of order. Authoritarians know how to massage the same trust reflexes as democracies – only more bluntly. People don’t just obey out of fear. They obey out of habit. Out of resignation. Out of a grim kind of faith that someone – anyone – is in charge.
Dictatorships don’t extinguish trust. They re-route it. Away from institutions and toward strongmen. Toward myths of national greatness. Toward performative stability. It’s not that terror is absent—it’s just not the whole machine. The real engine is misplaced trust.
3. Collapse Is Bipartisan
The present moment isn’t about the erosion of a once-trustworthy system. It’s the slow-motion implosion of a confidence game on all sides. The old liberal institutions are collapsing under the weight of their hypocrisies. But the loudest critics – tech messiahs, culture warriors, authoritarian nostalgists – are no better. Their solutions are just new brands of snake oil in sleeker bottles.
Everyone is pointing fingers, and no one is credible. The public, caught between cynicism and desperation, gravitates either toward restoration fantasy (“make democracy work again”) or authoritarian theatre (“at least someone’s doing something”). Both are dead ends.
4. The Only Way Forward: Structural Reimagination
The only viable path isn’t restoration or regression. It’s reinvention. Systems that demand unconditional trust – like religions and stock markets – are bound to fail, because they rely on sustained illusions. Instead, we need systems built on earned, revocable, and continually tested trust – systems that can survive scrutiny, decentralise power, and adapt to complexity.
In other words: stop trying to repair a house built on sand. Build something else. Something messier, more modular, less mythological.
Let the TED crowd have their slogans. We’ve got work to do.
—On Epistemology, Pop Psychology, and the Cult of Empirical Pretence
Science, we’re told, is the beacon in the fog – a gleaming lighthouse of reason guiding us through the turbulent seas of superstition and ignorance. But peer a bit closer, and the lens is cracked, the bulb flickers, and the so-called lighthouse keeper is just some bloke on TikTok shouting about gut flora and intermittent fasting.
Audio: NotebookLM podcast on this topic.
We are creatures of pattern. We impose order. We mistake correlation for causation, narrative for truth, confidence for knowledge. What we have, in polite academic parlance, is an epistemology problem. What we call science is often less Newton and more Nostradamus—albeit wearing a lab coat and wielding a p-hacked dataset.
Let’s start with the low-hanging fruit—the rotting mango of modern inquiry: nutritional science, which is to actual science what alchemy is to chemistry, or vibes are to calculus. We study food the way 13th-century monks studied demons: through superstition, confirmation bias, and deeply committed guesswork. Eat fat, don’t eat fat. Eat eggs, don’t eat eggs. Eat only between the hours of 10:00 and 14:00 under a waxing moon while humming in Lydian mode. It’s a cargo cult with chia seeds.
But why stop there? Let’s put the whole scientific-industrial complex on the slab.
Psychology: The Empirical Astrological Society
Psychology likes to think it’s scientific. Peer-reviewed journals, statistical models, the odd brain scan tossed in for gravitas. But at heart, much of it is pop divination, sugar-dusted for mass consumption. The replication crisis didn’t merely reveal cracks – it bulldozed entire fields. The Stanford Prison Experiment? A theatrical farce. Power poses? Empty gestural theatre. Half of what you read in Psychology Today could be replaced with horoscopes and no one would notice.
Medical Science: Bloodletting, But With Better Branding
Now onto medicine, that other sacred cow. We tend to imagine it as precise, data-driven, evidence-based. In practice? It’s a Byzantine fusion of guesswork, insurance forms, and pharmaceutical lobbying. As Crémieux rightly implies, medicine’s predictive power is deeply compromised by overfitting, statistical fog, and a staggering dependence on non-replicable clinical studies, many funded by those who stand to profit from the result.
And don’t get me started on epidemiology, that modern priesthood that speaks in incantations of “relative risk” and “confidence intervals” while changing the commandments every fortnight. If nutrition is theology, epidemiology is exegesis.
The Reproducibility Farce
Let us not forget the gleaming ideal: reproducibility, that cornerstone of Enlightenment confidence. The trouble is, in field after field—from economics to cancer biology—reproducibility is more aspiration than reality. What we actually get is a cacophony of studies no one bothers to repeat, published to pad CVs, p-hacked into publishable shape, and then cited into canonical status. It’s knowledge by momentum. We don’t understand the world. We just retweet it.
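The arithmetic behind "p-hacked into publishable shape" is easy to demonstrate. A minimal sketch (Python with NumPy and SciPy; the numbers are illustrative, not drawn from any real study): simulate a pile of "studies" in which there is no effect whatsoever, and watch how many clear the sacred p < 0.05 bar anyway.

```python
# Toy illustration of the multiple-comparisons problem: run many
# null "studies" (both groups drawn from the SAME distribution)
# and count how many come out "statistically significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N_STUDIES, N_SUBJECTS, ALPHA = 1000, 30, 0.05

false_positives = 0
for _ in range(N_STUDIES):
    control = rng.normal(0, 1, N_SUBJECTS)    # no real effect:
    treatment = rng.normal(0, 1, N_SUBJECTS)  # identical distributions
    _, p = stats.ttest_ind(control, treatment)
    if p < ALPHA:
        false_positives += 1

print(f"{false_positives} of {N_STUDIES} null studies were 'significant'")
```

Roughly 5% of pure-noise studies will look publishable by construction. Publish only those, let the drawer swallow the rest, and you have a literature.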
What, Then, Is To Be Done?
Should we become mystics? Take up tarot and goat sacrifice? Not necessarily. But we should strip science of its papal robes. We should stop mistaking publication for truth, consensus for accuracy, and method for epistemic sanctity. The scientific method is not the problem. The pretence that it’s constantly being followed is.
Perhaps knowledge doesn’t have a half-life because of progress, but because it was never alive to begin with. We are not disproving truth; we are watching fictions expire.
Closing Jab
Next time someone says “trust the science,” ask them: which bit? The part that told us margarine was manna? The part that thought ulcers were psychosomatic? The part that still can’t explain consciousness, but is confident about your breakfast?
Science is a toolkit. But too often, it’s treated like scripture. And we? We’re just trying to lose weight while clinging to whatever gospel lets us eat more cheese.
Rather than recreate a recent post from my business site, LinkedIn, I'll leave a taste of it here.
(Warning: contains traces of logic, satire, and uncomfortable truths. But you knew that.)
Audio: NotebookLM podcast on the linked topic.
AI isn’t malfunctioning. It’s just refusing to cosplay as your idealised fantasy of “human” cognition.
While pundits at the Wall Street Journal lament that AI thinks with “bags of heuristics” instead of “true models,” they somehow forget that humans themselves are kludged-together Rube Goldberg disasters, lurching from cognitive bias to logical fallacy with astonishing grace.
In my latest piece, I take a flamethrower to the myth of human intellectual purity, sketch a real roadmap for modular AI evolution, and suggest (only partly in jest) that the machines are becoming more like us every day — messy, contradictory, and disturbingly effective.
Let’s rethink what “thinking” actually means. Before the machines do it for us.