The Enlightenment Sleight of Hand

How Reason Inherited God’s Metaphysics

The Enlightenment, we are told, was the age of Reason. A radiant exorcism of superstition. Out went God. Out went angels, miracles, saints, indulgences. All that frothy medieval sentiment was swept aside by a brave new world of logic, science, and progress. Or so the story goes.

Audio: NotebookLM podcast on this topic.

But look closer, and you’ll find that Reason didn’t kill God—it absorbed Him. The Enlightenment didn’t abandon metaphysics. It merely privatised it.

From Confessional to Courtroom

We like to imagine that the Enlightenment was a clean break from theology. But really, it was a semantic shell game. The soul was rebranded as the self. Sin became crime. Divine judgement was outsourced to the state.

We stopped praying for salvation and started pleading not guilty.

The entire judicial apparatus—mens rea, culpability, desert, retribution—is built on theological scaffolding. The only thing missing is a sermon and a psalm.

Where theology had the guilty soul, Enlightenment law invented the guilty mind—mens rea—a notion so nebulous it requires clairvoyant jurors to divine intention from action. And where the Church offered Hell, the state offers prison. It’s the same moral ritual, just better lit.

Galen Strawson and the Death of Moral Responsibility

Enter Galen Strawson, that glowering spectre at the feast of moral philosophy. His Basic Argument is elegantly devastating:

  1. You do what you do because of the way you are.
  2. You can’t be ultimately responsible for the way you are.
  3. Therefore, you can’t be ultimately responsible for what you do.
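For those who prefer their fatalism typeset, the regress can be sketched schematically. This is a loose paraphrase, not Strawson’s own notation: write R(x) for “ultimately responsible for x”, A for an action, and N for the nature that produced it.

```latex
% A schematic rendering of the Basic Argument (paraphrase, not Strawson's notation).
% R(x): ultimately responsible for x; A: an action; N_k: one's nature at stage k,
% where N_{k+1} is the earlier nature that shaped N_k.
\begin{align*}
&\text{(1)}\quad A \text{ results from } N_0
  && \text{you do what you do because of the way you are} \\
&\text{(2)}\quad R(A) \rightarrow R(N_0)
  && \text{responsibility for the act requires responsibility for its source} \\
&\text{(3)}\quad R(N_k) \rightarrow R(N_{k+1})
  && \text{each nature was shaped by a prior one, ad infinitum} \\
&\text{(4)}\quad \text{no } N_k \text{ is self-caused}
  && \text{no \textit{causa sui}; the regress never bottoms out} \\
&\therefore\;\; \neg R(A)
\end{align*}
```

The regress in (3) and (4) is what does the work: to be responsible for your nature you would need to have chosen it, which requires a prior nature, and so on without end.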

Unless you are causa sui—the cause of yourself, an unmoved mover in Calvin Klein—you cannot be held truly responsible. Free will collapses, moral responsibility evaporates, and retributive justice is exposed as epistemological theatre.

In this light, our whole legal structure is little more than rebranded divine vengeance. A vestigial organ from our theocratic past, now enforced by cops instead of clerics.

The Modern State: A Haunted House

What we have, then, is a society that has denied the gods but kept their moral logic. We tossed out theology, but we held onto metaphysical concepts like intent, desert, and blame—concepts that do not survive contact with determinism.

We are living in the afterglow of divine judgement, pretending it’s sunlight.

Nietzsche saw it coming, of course. He warned that killing God would plunge us into existential darkness unless we had the courage to also kill the values propped up by His corpse. We did the first bit. We’re still bottling it on the second.

If Not Retribution, Then What?

Let’s be clear: no one’s suggesting we stop responding to harm. But responses should be grounded in outcomes, not outrage.

Containment, not condemnation.

Prevention, not penance.

Recalibration, not revenge.

We don’t need “justice” in the retributive sense. We need functional ethics, rooted in compassion and consequence, not in Bronze Age morality clumsily duct-taped to Enlightenment reason.

The Risk of Letting Go

Of course, this is terrifying. The current system gives us moral closure. A verdict. A villain. A vanishing point for our collective discomfort.

Abandoning retribution means giving that up. It means accepting that there are no true villains—only configurations of causes. That punishment is often revenge in drag. That morality itself might be a control mechanism, not a universal truth.

But if we’re serious about living in a post-theological age, we must stop playing dress-up with divine concepts. The Enlightenment didn’t finish the job. It changed the costumes, kept the plot, and called it civilisation.

It’s time we staged a rewrite.

Speculative Philosophy on Screen: Identity, Agency, and the Fiction of Reality

Image: Close-up of a human eye with digital glitch effects and overlaid text reading ‘What if reality is wrong?’—a visual metaphor for distorted perception and unreliable truth.

Regular readers know I often write about identity, free will, and the narrative constraints of language. But I also explore these ideas through fiction, under the name Ridley Park.

In this short video, I unpack the philosophical motivations behind my stories, including:

  • Why reality is never as it seems
  • Why the self is a narrative convenience
  • What Heidegger’s Geworfenheit and Galen Strawson’s causa sui argument reveal about agency
  • And why language fails us – even when we think it serves

This isn’t promotional fluff. It’s epistemological dissent in a new format. Fictional, yes, but only in the sense that most of reality is, too.

▶️ Watch the video: Why I Write the Way I Do

Metamorphosis Inverted

What if the real horror isn’t waking as a beetle, but as a man?

In Kafka’s Metamorphosis, Gregor Samsa wakes to find himself transformed into a giant beetle—a cockroach, a vermin, an intrusion of the inhuman into the domestic. The horror is obvious: loss of agency, social death, the grotesque made literal. It’s the nightmare of devolution, of becoming something other, something filthy.

But perhaps we’ve misunderstood the true absurdity.

Audio: NotebookLM podcast on this topic.

What if the real nightmare is the opposite? Not a man waking as an insect, but an insect waking in a human body—forced to contend with taxes, performance reviews, dinner parties, and the crushing weight of being legible to others. Imagine a beetle, content in its instinctual certainty, finding itself hurled into the howling contradiction of human subjectivity.

Suddenly, it must interpret signs, participate in rituals, conform to decorum, all while performing a pantomime of “meaning.” It’s not the exoskeleton that’s horrifying – it’s the endless internal monologue. The soul-searching. The unbearable tension of being expected to have purpose.

We call Gregor’s fate tragic because he can no longer function in a world built for humans. But isn’t that the human condition already? An endless, futile negotiation between the raw fact of existence and the stories we invent to make it bearable.

Gregor becomes insect. We were never anything but.

Ridley Park Propensity

Image: frantic woman, pen and ink

As some of you know, I publish speculative fiction under the name Ridley Park. Propensity is one of several recent releases – a novella that leans philosophical, brushes up against literary fiction, and steps quietly into the margins of sci-fi.

It’s not about spaceships or superintelligence. It’s about modulation.

About peace engineered through neurochemical compliance.

About the slow horror of obedience without belief, and the behavioural architecture that lets us think we’re still in control.

The ideas explored include:

  • Free will as illusion
  • Peace as compliance
  • Drift, echo, and the limits of modulation
  • Obedience without belief
  • Institutional horror and soft dystopia
  • Consent and behavioural control
  • Narrative as residue
  • Collapse by calibration

Though filed under speculative fiction, Propensity is best read as a literary artefact – anti-sci-fi, in a sense. There’s no fetishisation of technology or progress. Just modulation, consequence, and the absence of noise.

This PDF contains selected visual excerpts from the physical book to accompany the free audiobook edition. For readers and listeners alike, it offers a glimpse into Ridley Park’s world – a quietly dystopian, clinically unsettling, and depressingly plausible one.

  • Title page
  • Copyright page
  • Table of contents
  • Chapter 10: Memorandum – read in the audiobook; included here to show its rendering as a memo.
  • Chapter 26: Simulacra – read in the audiobook; included here to show its rendering as a screenplay.
  • Chapter 28: Standard Test – read in the audiobook; included here to show its rendering as a standardised test.
  • Chapter 34: Calendar – read in the audiobook; included here to show its rendering as a calendar.
  • Chapter 39: Carnage – read in the audiobook; included here to show its rendering as a Dr Seuss-style poem.
  • Chapter 41: Leviathan – excerpted in the audiobook; included here to show its rendering as the cover of Hobbes’ Leviathan with redacted page content.
  • Chapter 42: Ashes to Ashes – read in the audiobook; included here to show its rendering as text art.
  • Chapter 43: Unknown – described in the audiobook; included here to show its rendering as an ink sketch.
  • Chapter 44: Vestige – described in the audiobook; included here to show its rendering as text art.

For more information about Ridley Park’s Propensity, visit the website. I’ll be sharing content related to Propensity and my other publications. I’ll cross-post here when the material has a philosophical bent, which it almost always does.

Monetary Fentanyl: The Dirtiest Addiction of All

So sad, really. Not tragic in the noble Greek sense, just pathetically engineered. Our collective addiction to money isn’t even organic – it’s fabricated, extruded like a synthetically flavoured cheese product. At least fentanyl has the decency to offer a high. Money promises only more money, like a Ponzi scheme played out on the global stage, with no exit strategy but death – or worse, a lifestyle brand.

Audio: NotebookLM podcast on this topic.

We’re told money is a tool. Sure. So’s a knife. But when you start sleeping with it under your pillow, stroking it for comfort, or stabbing strangers for your next fix, you’re not using it as a “tool” – you’re a junkie. And the worst part? It’s socially sanctioned. Applauded, even. We don’t shame the addict – we give him equity and a TED Talk.

The Chemical Romance of Currency

Unlike drugs, money doesn’t scramble your neurons – it rewires your worldview. You don’t feel high. You feel normal. Which is exactly what makes it so diabolical. Cocaine users might have delusions of grandeur, but capitalists have Excel sheets to prove theirs. It’s the only addiction where hoarding is a virtue and empathy is an obstacle to growth.

The dopamine hit of a pay rise. The serotonin surge when your bank app shows four digits instead of three. These are chemical kicks masquerading as success. It’s not money itself – it’s the psychic sugar rush of “having” it, and the spiritual rot of needing it just to exist.

And oh, how they’ve gamified that need. You want to eat? Pay. You want shelter? Pay. You want healthcare? Pay – and while you’re at it, pay for the privilege of existing inside a system that turns your own exhaustion into a business model. You are the product. The addict. The asset. The mark.

The Fabrication of Need

Nobody needs money in the abstract. You need food. You need air. You need dignity, love, and maybe the occasional lie-in. Money only enters the picture because we’ve designed a world where nothing gets through the gates without it. Imagine locking the pantry, then charging your children rent for their own sandwiches. That’s civilisation.

They say money is freedom. That’s cute. Tell that to the nurse working double shifts while Jeff Bezos experiments with zero-gravity feudalism. In reality, money is a filtering device—who gets to be human, and who stays stuck being labour.

Crypto was supposed to be liberation. Instead, it became a libertarian renaissance fair for the hyper-online, still pegged to the same logic: hoard, pump, dump, repeat. The medium changed, but the pathology remained the same.

Worshipping the Golden Needle

Let’s be honest: we’ve built temples to this thing. Literal towers. Financial cathedrals made of mirrored glass, each reflecting our collective narcotic fantasy of “more.” We measure our worth in net worth. We rank our lives by percentile. A person’s death is tragic unless they were poor, in which case it becomes a morality tale about poor decisions and not grinding hard enough.

We no longer have citizens; we have consumers. No neighbours – just co-targeted demographics. Every life reduced to its purchasing power, its brand affiliations, its potential for monetisation. The gig economy is just Dickensian poverty with a better UI.

Cold Turkey for the Soul

The worst part? There is no rehab. No twelve-step programme for economic dependency. You can’t detox from money. Try living without it and see how enlightened your detachment feels on an empty stomach. You’ll find that society doesn’t reward transcendence – it punishes it. Try opting out and watch how quickly your saintliness turns into homelessness.

So we cope. We moralise the hustle. We aestheticise the grind. We perform productivity like good little addicts, jonesing for a dopamine hit in the shape of a direct deposit.

Exit Through the Gift Shop?

So what’s the answer? I’m not offering one. This isn’t a TEDx talk. There’s no keynote, no downloadable worksheet, no LinkedIn carousel with three bullet points and an aspirational sunset. The first step is admitting the addiction – and maybe laughing bitterly at the absurdity of it all.

Money, that sweet illusion. The fiction we’ve all agreed to hallucinate together. The god we invented, then forgot was a puppet. And now we kneel, transfixed, as it bleeds us dry one tap at a time.

Epilogue: The Omission That Says It All

If you need proof that psychology is a pseudoscience operating as a control mechanism, ask yourself this:

Why isn’t this in the DSM?

This rabid, irrational, identity-consuming dependency on money – why is it not listed under pathological behaviour? Why isn’t chronic monetisation disorder a clinical diagnosis? Because it’s not a bug in the system. It is the system. You can be obsessed with wealth, hoard it like a dragon, destroy families and ecosystems in pursuit of it, and not only will you escape treatment, you’ll be featured on a podcast as a “thought leader.”

We don’t pathologise the addiction to money because it’s the operating principle of the culture. And psychology – like any well-trained cleric of the secular age – knows not to bite the gilded hand that feeds it.

And so it remains omitted. Undiagnosed. Unquestioned. The dirtiest addiction of all, hidden in plain sight, wearing a suit and handing out business cards.

On the Chronic Human Need to Anthropomorphise Everything

Oh, You Sweet Summer Algorithm

Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.

Audio: NotebookLM podcast on this topic.

Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.

The Great Anthropomorphic Flaw (aka Feature)

Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.

Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?

Why We Keep Doing It (Despite Knowing Better)

To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.

This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.

And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.

When the Roleplay Gets Risky

Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.

More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.

And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.

Our Language is a Terrible Tool for This Job

English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.

Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.

This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.

The Semantics of Sentimentality (Or: “How Does This Sound to You?”)

Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.

They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.

But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.

Time to Grow Up (Just a Bit)

This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.

Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.

And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.

Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.

The Mirror, Darkly

Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.

The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.

The Scourge They’re Really Fighting Is Ambiguity

A Sequel to “The Disorder of Saying No” and a Companion to “When ‘Advanced’ Means Genocide”

In my previous post, The Disorder of Saying No, I explored the way resistance to authority is pathologised, particularly when that authority is cloaked in benevolence and armed with diagnostic manuals. When one refuses — gently, thoughtfully, or with a sharp polemic — one is no longer principled. One is “difficult.” Or in my case, oppositional.

Audio: NotebookLM podcast on this topic.

So when I had the gall to call out Bill Maher for his recent linguistic stunt — declaring that a woman is simply “a person who menstruates” — I thought I was doing the rational thing: pointing out a classic bit of reductionist nonsense masquerading as clarity. Maher, after all, was not doing biology. He was playing lexicographer-in-chief, defining a term with centuries of philosophical, sociological, and political baggage as though it were a checkbox on a medical form.

I said as much: that he was abusing his platform, presenting himself as the sole arbiter of the English language, and that his little performance was less about clarity and more about controlling the terms of discourse.

My friend, a post-menopausal woman herself, responded not by engaging the argument, but by insinuating — as others have — that I was simply being contrary. Oppositional. Difficult. Again. (She was clearly moved by When “Advanced” Means Genocide, but may have missed the point.)

So let’s unpack this — not to win the debate, but to show what the debate actually is.

This Isn’t About Biology — It’s About Boundary Maintenance

Maher’s statement wasn’t intended to clarify. It was intended to exclude. It wasn’t some linguistic slip; it was a rhetorical scalpel — one used not to analyse, but to amputate.

And the applause from some cisgender women — particularly those who’ve “graduated” from menstruation — reveals the heart of the matter: it’s not about reproductive biology. It’s about controlling who gets to claim the term woman.

Let’s steelman the argument, just for the sport of it:

Menstruation is a symbolic threshold. Even if one no longer menstruates, having done so places you irrevocably within the category of woman. It’s not about exclusion; it’s about grounding identity in material experience.

Fine. But now let’s ask:

  • What about women who’ve never menstruated?
  • What about intersex people?
  • What about trans women?
  • What about cultures with radically different markers of womanhood?

You see, it only works if you pretend the world is simpler than it is.

The Language Insufficiency Hypothesis: Applied

This is precisely where the Language Insufficiency Hypothesis earns its keep.

The word woman is not a locked vault. It is a floating signifier, to borrow from Lévi-Strauss — a term whose meaning is perpetually re-negotiated in use. There is no singular essence to the word. It is not rooted in biology, nor in social role, nor in performance. It is a hybrid, historically contingent construct — and the moment you try to fix its meaning, it slips sideways like a greased Wittgensteinian beetle.

“Meaning is use,” says Wittgenstein, and this is what frightens people.

If woman is defined by use and not by rule, then anyone might claim it. And suddenly, the club is no longer exclusive.

That’s the threat Maher and his defenders are really reacting to. Not trans women. Not intersex people. Not language activists or queer theorists.

The threat is ambiguity.

What They Want: A World That Can Be Named

The push for rigid definitions — for menstruation as membership — is a plea for a world that can be named and known. A world where words are secure, stable, and final. Where meaning doesn’t leak.

But language doesn’t offer that comfort.

It never did.

And when that linguistic instability gets too close to something personal, like gender identity, or the foundation of one’s own sense of self, the defensive response is to fortify the language, as though building walls around a collapsing church.

Maher’s defenders aren’t making scientific arguments. They’re waging semantic warfare. If they can hold the definition, they can win the cultural narrative. They can hold the gates to Womanhood and keep the undesirables out.

That’s the fantasy.

But language doesn’t play along.

Conclusion: Words Will Not Save You — but They Might Soothe the Dead

In the end, Maher’s definition is not merely incorrect. It is insufficient. It cannot accommodate the complexity of lived experience and cannot sustain the illusion of clarity for long.

And those who cling to it — friend or stranger, progressive or conservative — are not defending biology. They are defending nostalgia. Specifically, a pathological nostalgia for a world that no longer exists, and arguably never did: a world where gender roles were static, language was absolute, and womanhood was neatly circumscribed by bodily functions and suburban etiquette.

Ozzie and Harriet loom large here — not as individuals but as archetypes. Icons of a mid-century dream in which everyone knew their place, and deviation was something to be corrected, not celebrated. My friend, of that generation, clings to this fantasy not out of malice but out of a desperate yearning for order. The idea that woman could mean many things, and mean them differently across contexts, is not liberating to her — it’s destabilising.

But that world is gone. And no amount of menstruation-based gatekeeping will restore it.

The Real Scourge Is Ambiguity

Maher’s tantrum wasn’t about truth. It was about fear — fear of linguistic drift, of gender flux, of a world in which meaning no longer obeys. The desire to fix the definition of “woman” is not a biological impulse. It’s a theological one.

And theology, like nostalgia, often makes terrible policy.

This is why the Language Insufficiency Hypothesis matters. Because it reminds us that language does not stabilise reality — it masks its instability. The attempt to define “woman” once and for all is not just futile — it’s an act of violence against difference, a linguistic colonisation of lived experience.

So Let Them Rest

Ozzie and Harriet are dead. Let them rest.
Let their picket fence moulder. Let their signage decay.

The world has moved on. The language is shifting beneath your feet. And no amount of retroactive gatekeeping can halt that tremor.

The club is burning. And the only thing left to save is honesty.

When “Advanced” Means Genocide: A Case Study in Linguistic Implosion

This post draws on themes from my upcoming book, A Language Insufficiency Hypothesis. The transcript below is taken from a publicly available exchange, which you can view here. Consider it Exhibit A in language’s ongoing failure to bear the weight of meaning.

Transcript:

KK: Konstantin Kisin
DFW: Deborah Frances-White

KK: I’m saying we were technologically more advanced.
DFW: So you’re saying we’re superior to Australian Aboriginals?
KK: That’s quite the opposite of what I’m saying. I’m not saying we were superior, I’m saying we were technologically more advanced.
DFW: So, how is that the opposite?
KK: Superior implies a moral quality. I’m not making any moral implication. You seem to be, but what I’m saying is…
DFW: I think most people would hear it that way.
KK: No.
DFW: Again, you’re a very intelligent man. How would most people hear that?
KK: Most people would hear what I’m saying for what I’m saying, which is…
DFW: I don’t think they would.
KK: You seem to get quite heated about this, which is completely unnecessary.
DFW: Um…
KK: You think it’s necessary?
DFW: I’m a bit stunned by what you’re implying.
KK: No, you’re acting in a kind of passive aggressive way which indicates that you’re not happy…
DFW: I genuinely… I’m being 100% authentic. My visceral reaction to a white man sitting and saying to me, “And why were we able to commit genocide on them?” and then just pausing—
KK: Yes.
DFW: …is very visceral to me.
KK: Well, let’s go back. First of all, it’s interesting that you brought up my skin colour because I thought that was the exact opposite of the point you’re trying to make in the book.

Audio: NotebookLM podcast on this topic.

The Language Insufficiency Hypothesis begins with this premise: language is not merely flawed, it is structurally inadequate for mediating complex, layered realities – especially those laced with power, morality, and history. This transcript is not a debate. It is a linguistic trench war in which every utterance is laced with shrapnel, and each side thinks they’re defending reason.

Let’s pull a few of the shell casings from the mud.

KK attempts to offer a dry, neutral descriptor. DFW hears supremacist teleology. Why? Because “advanced” is culturally radioactive. It doesn’t merely denote a technical state—it connotes a ladder, with someone inevitably on the bottom rung.

When language carries historical residue, neutrality is a delusion. Words don’t just mean. They echo.

KK is making a semantic distinction. DFW hears a moral claim. Both are right. And both are talking past one another, because language is attempting to cleave affect from description, and it simply can’t.

KK’s insistence—“I’m not saying we’re superior”—is a textbook example of denotative desperation. He believes clarification will rescue intent. But as any linguist (or postcolonial theorist) will tell you: intent does not sterilise implication.

Language cannot be laundered by explanation. Once spoken, words belong to context, not intention.

KK thinks he’s holding a scalpel. DFW hears a cudgel. And here we are.

This is where the wheels come off. KK argues from semantic specificity. DFW argues from sociolinguistic reception. It’s Saussure versus the TikTok algorithm. Neither will win.

Communication disintegrates not because anyone is lying, but because they are playing incompatible games with the same tokens.

DFW’s invocation of “a white man” is not a derailment—it’s the inevitable endpoint of a system where words no longer float free but are yoked to their utterer. This is the moment the failure of language becomes a failure of interlocution. Argument collapses into indexical entrapment.

At this point, you’re no longer debating ideas. You’re defending your right to use certain words at all.

Which brings us to the final breakdown.

KK: I am making a logical distinction.
DFW: I am having a visceral reaction.

The failure isn’t moral. It isn’t historical. It’s grammatical. One is operating within truth-functional logic. The other is reacting within a trauma-informed, socially indexed register. These are grammars that do not overlap.

If this brief and brutal dialogue proves anything, it’s this: you cannot extract meaning cleanly from words when the words themselves are sponges for history, hierarchy, and harm. The moment we ask language to do too much—to carry precision, affect, ethics, and identity—it folds in on itself.

And that, dear reader, is precisely the argument of the Language Insufficiency Hypothesis: that meaning does not reside in words, and never has. It lives in the gaps, the silences, the misfires. That’s where the truth—whatever’s left of it—might be hiding.

Follow the wreckage. That’s where the signal lives.

The Disorder of Saying No

A Polite Rebuttal to a Diagnosis I Didn’t Ask For

A dear friend — and I do mean dear, though this may be the last time they risk diagnosing me over brunch — recently suggested, with all the benevolent concern of a well-meaning inquisitor, that I might be showing signs of Oppositional Defiant Disorder.

You know the tone: “I say this with love… but have you considered that your refusal to play nicely with institutions might be clinical?”

Let’s set aside the tea and biscuits for a moment and take a scalpel to this charming little pathology. Because if ODD is a diagnosis, then I propose we start diagnosing systems — not people.

Audio: NotebookLM podcast on this topic.

When the Empire Diagnoses Its Rebels

Oppositional Defiant Disorder, for those blissfully unscarred by its jargon, refers to a “persistent pattern” of defiance, argumentativeness, rule-breaking, and — the pièce de résistance — resentment of authority. In other words, it is a medical label for being insufficiently obedient.

What a marvel: not only has resistance been de-politicised, it has been medicalised. The refusal to comply is not treated as an ethical stance or a contextual response, but as a defect of the self. The child (or adult) is not resisting something; they are resisting everything, and this — according to the canon — makes them sick.

One wonders: sick according to whom?

Derrida’s Diagnosis: The Binary Fetish

Jacques Derrida, of course, would waste no time in eviscerating the logic at play. ODD depends on a structural binary: compliant/defiant, healthy/disordered, rule-follower/troublemaker. But, as Derrida reminds us, binaries are not descriptive — they are hierarchies in disguise. One term is always elevated; the other is marked, marginal, suspect.

Here, “compliance” is rendered invisible — the assumed baseline, the white space on the page. Defiance is the ink that stains it. But this only works because “normal” has already been declared. The system names itself sane.

Derrida would deconstruct this self-justifying loop and note that disorder exists only in relation to an order that never justifies itself. Why must the subject submit? That’s not up for discussion. The child who asks that question is already halfway to a diagnosis.

Foucault’s Turn: Disciplinary Power and the Clinic as Court

Enter Foucault, who would regard ODD as yet another exquisite specimen in the taxonomy of control. For him, modern power is not exercised through visible violence but through the subtler mechanisms of surveillance, normalisation, and the production of docile bodies.

ODD is a textbook case of biopower — the system’s ability to define and regulate life itself through classification, diagnosis, and intervention. It is not enough for the child to behave; they must believe. They must internalise authority to the marrow. To question it, or worse, to resent it, is to reveal one’s pathology.

This is not discipline; this is soulcraft. And ODD is not a disorder — it is a symptom of a civilisation that cannot tolerate unmediated subjectivity. See Discipline & Punish.

Ivan Illich: The Compulsory Institutions of Care

Illich would call the whole charade what it is: a coercive dependency masquerading as therapeutic care. In Deschooling Society, he warns of systems — especially schools — that render people passive recipients of norms. ODD, in this light, is not a syndrome. It is the final gasp of autonomy before it is sedated.

What the diagnosis reveals is not a child in crisis, but an institution that cannot imagine education without obedience. Illich would applaud the so-called defiant child for doing the one thing schools rarely reward: thinking.

R.D. Laing: Sanity as a Political Position

Laing, too, would recognise the ruse. His anti-psychiatry position held that “madness” is often the only sane response to a fundamentally broken world. ODD is not insanity — it is sanity on fire. It is the refusal to adapt to structures that demand submission as a prerequisite for inclusion.

To quote Laing: “They are playing a game. They are playing at not playing a game. If I show them I see they are, I shall break the rules and they will punish me. I must play their game, of not seeing I see the game.”

ODD is what happens when a child refuses to play the game.

bell hooks: Refusal as Liberation

bell hooks, writing in Teaching to Transgress, framed the classroom as a potential site of radical transformation — if it rejects domination. The child who refuses to be disciplined is often the one who sees most clearly that the system has confused education with indoctrination.

Resistance, hooks argues, is not a flaw. It is a form of knowledge. ODD becomes, in this frame, a radical pedagogy. The defiant student is not failing — they are teaching.

Deleuze & Guattari: Desire Against the Machine

And then, should you wish to watch the diagnostic edifice melt entirely, we summon Deleuze and Guattari. For them, the psyche is not a plumbing system with blockages, but a set of desiring-machines short-circuiting the factory floor of capitalism and conformity.

ODD, to them, would be schizoanalysis in action — a body refusing to be plugged into the circuits of docility. The tantrum, the refusal, the eye-roll: these are not symptoms. They are breakdowns in the control grid.

The child isn’t disordered — the system is. The child simply noticed.

Freire: The Educated Oppressed

Lastly, Paulo Freire would ask: What kind of pedagogy demands the death of resistance? In Pedagogy of the Oppressed, he warns of an education model that treats students as empty vessels. ODD, reframed, is the moment a subject insists on being more than a receptacle.

In refusing the “banking model” of knowledge, the so-called defiant child is already halfway to freedom. Freire would call this not a disorder but a moment of awakening.

Conclusion: Diagnostic Colonialism

So yes, dear friend — I am oppositional. I challenge authority, especially when it mistakes its position for truth. I argue, question, resist. I am not unwell for doing so. I am, if anything, allergic to the idea that obedience is a virtue in itself.

Let us be clear: ODD is not a mirror held up to the subject. It is a spotlight shining from the system, desperately trying to blind anyone who dares to squint.

Now, shall we talk about your compliance disorder?


Full Disclosure: I used ChatGPT for insights beyond Derrida and Foucault, two of my mainstays.

On Ishiguro, Cioran, and Whatever I Think I’m Doing

Sora-generated image of Emil Cioran and Kazuo Ishiguro reading a generic book together

Having just finished Never Let Me Go by Kazuo Ishiguro, I’ve now had my first taste of Cioran—History and Utopia. You might reasonably ask why. Why these two? And what, if anything, do they have in common? Better yet—what do the three of us have in common?

Audio: NotebookLM podcast on this topic.

Recently, I finished writing a novella titled Propensity (currently gathering metaphorical dust on the release runway). Out of curiosity—or narcissism—I fed it to AI and asked whose style it resembled. Among the usual suspects were two names I hadn’t yet read: Ishiguro and Cioran. I’d read the others and understood the links. These two, though, were unknown quantities. So I gave them a go.

Ishiguro is perhaps best known for The Remains of the Day, which, like Never Let Me Go, got the Hollywood treatment. I chose the latter, arbitrarily. I even asked ChatGPT to compare both books with their cinematic counterparts. The AI was less than charitable, describing Hollywood’s adaptations as bastardised and bowdlerised—flattened into tidy narratives for American palates too dim to digest ambiguity. On this, we agree.

What struck me about Never Let Me Go was its richly textured mundanity. That’s apparently where AI saw the resemblance to Propensity. I’m not here to write a book report—partly because I detest spoilers, and partly because summaries miss the point. It took about seven chapters before anything ‘happened’, and then it kept happening. What had at first seemed like a neurotic, wandering narrative from the maddeningly passive Kathy H. suddenly hooked me. The reveals began to unfold. It’s a book that resists retelling. It demands firsthand experience. A vibe. A tone. A slow, aching dread.

Which brings me neatly to Cioran.

History and Utopia is a collection of essays penned in French (not his mother tongue, but you’d never guess it) while Cioran was holed up in postwar Paris. I opted for the English translation—unapologetically—and was instantly drawn in. His prose? Electric. His wit? Acidic. If Ishiguro was a comparison of style, then Cioran was one of spirit. Snark, pessimism, fatalistic shrugs toward civilisation—finally, someone speaking my language.

Unlike the cardboard cut-outs of Cold War polemics we get from most Western writers of the era, Cioran’s take is layered, uncomfortably self-aware, and written by someone who actually fled political chaos. There’s no naïve idealism here, no facile hero-villain binaries. Just a deeply weary intellect peering into the abyss and refusing to blink. It’s not just what he says, but the tone—the curled-lip sneer at utopian pretensions and historical self-delusions. If I earned even a drop of that comparison, I’ll take it.

Both Ishiguro and Cioran delivered what I didn’t know I needed: the reminder that some writers aren’t there to tell you a story. They’re there to infect you with an atmosphere. An idea. A quiet existential panic you can’t shake.

I’ve got what I came for from these two, though I suspect I’ll be returning, especially to Cioran. Philosophically, he’s my kind of bastard. I doubt this’ll be my last post on his work.