The Enlightenment: A Postmortem

Or: How the Brightest Ideas in Europe Got Us into This Bloody Mess

Disclaimer: This post is entirely ChatGPT-4o output, drawn from a conversation on the failure and anachronism of Enlightenment promises. I’m trying to finish editing my next novel, so I can’t justify taking much more time to share what are ultimately my own thoughts as expounded upon by generative AI. I may comment personally in future. Until then, this is what I have to share.

AI haters, leave now or abandon all hope, ye who enter.


The Enlightenment promised us emancipation from superstition, authority, and ignorance. What we got instead was bureaucracy, colonialism, and TED Talks. We replaced divine right with data dashboards and called it progress. And like any good inheritance, the will was contested, and most of us ended up with bugger-all.

Below, I take each Enlightenment virtue, pair it with its contemporary vice, and offer a detractor who saw through the Enlightenment’s powder-wigged charade. Because if we’re going down with this ship, we might as well point out the dry rot in the hull.


1. Rationalism

The Ideal: Reason shall lead us out of darkness.
The Reality: Reason led us straight into the gas chambers—with bureaucratic precision.

Detractor: Max Horkheimer & Theodor Adorno

“Enlightenment is totalitarian.”
Dialectic of Enlightenment (1944)

Horkheimer and Adorno saw what reason looks like when it slips off its leash. Instrumental rationality, they warned, doesn’t ask why—it only asks how efficiently. The result? A world where extermination is scheduled, costs are optimised, and ethics are politely filed under “subjective.”


2. Empiricism

The Ideal: Observation and experience will uncover truth.
The Reality: If it can’t be measured, it can’t be real. (Love? Not statistically significant.)

Detractor: Michel Foucault

“Truth isn’t outside power… truth is a thing of this world.”
“Truth and Power,” in Power/Knowledge (1980)

Foucault dismantled the whole edifice. Knowledge isn’t neutral; it’s an instrument of power. Empiricism becomes just another way of disciplining the body—measuring skulls, classifying deviants, and diagnosing women with “hysteria” for having opinions.


3. Individualism

The Ideal: The sovereign subject, free and self-determining.
The Reality: The atomised consumer, trapped in a feedback loop of self-optimisation.

Detractor: Jean Baudrillard

“The individual is no longer an autonomous subject but a terminal of multiple networks.”
Simulacra and Simulation (1981)

You wanted autonomy? You got algorithms. Baudrillard reminds us that the modern “individual” is a brand in search of market validation. You are free to be whoever you want, provided it fits within platform guidelines and doesn’t disrupt ad revenue.


4. Secularism

The Ideal: Liberation from superstition.
The Reality: We swapped saints for STEMlords and called it even.

Detractor: Charles Taylor

“We are now living in a spiritual wasteland.”
A Secular Age (2007)

Taylor—perhaps the most polite Canadian apocalypse-whisperer—reminds us that secularism didn’t replace religion with reason; it replaced mystery with malaise. We’re no longer awed, just “motivated.” Everything is explainable, and yet somehow nothing means anything.


5. Progress

The Ideal: History is a forward march toward utopia.
The Reality: History is a meat grinder in a lab coat.

Detractor: Walter Benjamin

“The storm irresistibly propels him into the future to which his back is turned.”
Theses on the Philosophy of History (1940)

Benjamin’s “angel of history” watches helplessly as the wreckage piles up—colonialism, genocide, climate collapse—all in the name of progress. Every step forward has a cost, but we keep marching, noses in the spreadsheet, ignoring the bodies behind us.


6. Universalism

The Ideal: One humanity, under Reason.
The Reality: Enlightenment values, brought to you by cannon fire and Christian missionaries.

Detractor: Gayatri Chakravorty Spivak

“White men are saving brown women from brown men.”
Can the Subaltern Speak? (1988)

Universalism was always a bit… French, wasn’t it? Spivak unmasks it as imperialism in drag—exporting “rights” and “freedom” to people who never asked for them, while ignoring the structural violence built into Enlightenment societies themselves.


7. Tolerance

The Ideal: Let a thousand opinions bloom.
The Reality: Tolerance, but only for those who don’t threaten the status quo.

Detractor: Karl Popper

“Unlimited tolerance must lead to the disappearance of tolerance.”
The Open Society and Its Enemies (1945)

Popper, bless him, thought tolerance needed a firewall. But in practice, “tolerance” has become a smug liberal virtue signalling its own superiority while deplatforming anyone who makes the dinner party uncomfortable. We tolerate all views—except the unseemly ones.


8. Scientific Method

The Ideal: Observe, hypothesise, repeat. Truth shall emerge.
The Reality: Publish or perish. Fund or flounder.

Detractor: Paul Feyerabend

“Science is not one thing, it is many things.”
Against Method (1975)

Feyerabend called the whole thing a farce. There is no single “method,” just a bureaucratic orthodoxy masquerading as objectivity. Today, science bends to industry, cherry-picks for grants, and buries null results in the backyard. Peer review? More like peer pressure.


9. Anti-Authoritarianism

The Ideal: Smash the throne! Burn the mitre!
The Reality: Bow to the data analytics team.

Detractor: Herbert Marcuse

“Free election of masters does not abolish the masters or the slaves.”
One-Dimensional Man (1964)

Marcuse skewered the liberal illusion of choice. We may vote, but we do so within a system that already wrote the script. Authority didn’t vanish; it just became procedural, faceless, algorithmic. Bureaucracy is the new monarchy—only with more forms.


10. Education and Encyclopaedism

The Ideal: All knowledge, accessible to all minds.
The Reality: Behind a paywall. Written in impenetrable prose. Moderated by white men with tenure.

Detractor: Ivan Illich

“School is the advertising agency which makes you believe that you need the society as it is.”
Deschooling Society (1971)

Illich pulls the curtain: education isn’t emancipatory; it’s indoctrinatory. The modern university produces not thinkers but credentialed employees. Encyclopaedias are replaced by Wikipedia, curated by anonymous pedants and revision wars. Truth is editable.


Postscript: Picking through the Rubble

So—has the Enlightenment failed?

Not exactly. It succeeded too literally. It was taken at its word. Its principles, once radical, were rendered banal. It’s not that reason, progress, or rights are inherently doomed—it’s that they were never as pure as advertised. They were always products of their time: male, white, bourgeois, and utterly convinced of their own benevolence.

If there’s a path forward, it’s not to restore Enlightenment values, but to interrogate them—mercilessly, with irony and eyes open.

After all, the problem was never darkness. It was the people with torches who thought they’d found the only path.

From Thesaurus to Thoughtcrime: The Slippery Slope of Authorial Purity

I had planned to write about Beauvoir’s Second Sex, but this has been on my mind lately.

There’s a certain breed of aspiring author (let’s call them the Sacred Scribes) who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.

Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”

The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by chatbot, of course.

But let’s pause and look at the logic here. Or rather, the lack thereof.

By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?

And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.

Let’s call this what it is: purity fetishism in prose form.

But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, outsource our thinking.

Sound familiar?

Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age:
“If you let ChatGPT restructure your second paragraph, you’re no longer the author.”
Nonsense. You were never the sole author. Not even close.

Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.

Here’s the rub: You’ve always been collaborating.

With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.

The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?

Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.

Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.

The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.

Understanding Generative AI

OK, I admit this is an expansive claim, but I write about the limitations of generative artificial intelligence relative to writers. I wrote this after encountering several Reddit responses from writers who totally misunderstand how AI works. They won’t read this, but you might want to.

Click to visit the Ridley Park Blog for this article and podcast
Video: Cybernetic robot assisting a female writer (or stealing her work)

The Heuristic Self: On Persona, Identity, and Character

“Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth.”
— Oscar Wilde

Identity is an illusion—but a necessary one. It’s a shortcut. A heuristic, evolved not for truth but for coherence. We reduce ourselves and others to fixed traits to preserve continuity—psychological, social, narrative.

Audio: NotebookLM podcast on this topic. (Direct)

Audio: NotebookLM podcast on this topic. (Spotify)

In the latest post on RidleyPark.blog, we meet Sarah—a woman who survives by splintering. She has three names, three selves, three economies of interaction. Each persona—Sarah, Stacey, and Pink—fulfils a role. Each protects her in a system that punishes complexity.

Identity Is Compression

Cognitive science suggests that we don’t possess a self—we perform one. Our so-called identity is assembled post-hoc from memory, context, and social cues. It’s recursive. It’s inferred.

We are not indivisible atoms of identity. We are bundled routines, personae adapted to setting and audience.

From Performance to Survival

In Needle’s Edge, Sarah doesn’t use aliases to deceive. She uses them to survive contradictions:

  • Stacey is desirable, stable, and profitable—so long as she appears clean and composed.
  • Pink is a consumer, invisible, stripped of glamour but allowed access to the block.
  • Sarah is the residue, the name used by those who once knew her—or still believe they do.

Each persona comes with scripts, limitations, and permissions. Sarah isn’t being dishonest. She’s practising domain-specific identity. This is no different from how professionals code-switch at work, or how people self-edit on social media.

The Literary Echo

In character development, we often demand “depth,” by which we mean contradiction. We want to see a character laugh and break. Love and lie. But Sarah shows us that contradiction isn’t depth—it’s baseline reality. Any singular identity would be a narrative failure.

Characters like Sarah expose the poverty of reduction. They resist archetype. They remind us that fiction succeeds when it reflects the multiple, the shifting, the incompatible—which is to say, the real.

What Else Might We Say?

  • That authenticity is a myth: “Just be yourself” presumes you know which self to be.
  • That moral judgment often stems from a failure to see multiple selves in others.
  • That trauma survivors often fracture not because they’re broken, but because fracturing is adaptive.
  • That in a capitalist framework, the ability to fragment and role-play becomes a survival advantage.
  • That fiction is one of the few spaces where we can explore multiple selves without collapse.

The Missing Link

For a concrete, narrative reflection of these ideas, this post on RidleyPark.blog explores how one woman carries three selves to survive three worlds—and what it costs her.

Jordan Peterson: Derivative, Disingenuous, and (Hopefully) Done

I don’t like most of Jordan Peterson’s positions. There – I’ve said it. The man, once ubiquitous, seems to have faded into the woodwork, though no doubt his disciples still cling to his every word as if he were a modern-day oracle. But recently, I caught a clip of him online, and it dredged up the same bad taste, like stumbling upon an old, forgotten sandwich at the back of the fridge.

Audio: NotebookLM podcast on this topic

Let’s be clear. My distaste for Peterson isn’t rooted in petty animosity. It’s because his material is, in my view, derivative and wrong. And by wrong, I mean I disagree with him – a subtle distinction, but an important one. There’s nothing inherently shameful about being derivative. We all are, to some extent. No thinker sprouts fully formed from the head of Zeus. The issue is when you’re derivative and act as if you’ve just split the atom of human insight.

Peterson tips his hat to Nietzsche – fair enough – but buries his far greater debt to Jung under layers of self-mythologising. He parades his ideas before audiences, many of whom lack the background to spot the patchwork, and gaslights them into believing they’re witnessing originality. They’re not. They’re witnessing a remixed greatest-hits album, passed off as a debut.

Image: Gratuitous, mean-spirited meme.

Now, I get it. My ideas, too, are derivative. Sometimes it’s coincidence – great minds and all that – but when I trace the thread back to its source, I acknowledge it. Nietzsche? Subjectivity of morality. Foucault? Power dynamics. Wittgenstein? The insufficiency of language. I owe debts to many more: Galen Strawson, Richard Rorty, Raymond Geuss – the list goes on, and I’d gladly share my ledger. But Peterson? The man behaves as though he invented introspection.

And when I say I disagree, let’s not confuse that with some claim to divine epistemic certainty. I don’t mean he’s objectively wrong (whatever that means in the grand circus of philosophy). I mean, I disagree. If I didn’t, well, we wouldn’t be having this conversation, would we? That’s the tragicomedy of epistemology: so many positions, so little consensus.

But here’s where my patience truly snaps: Peterson’s prescriptivism. His eagerness to spew what I see as bad ideology dressed up as universal truth. Take his stance on moral objectivism—possibly his most egregious sin. He peddles this as if morality were some Platonic form, gleaming and immutable, rather than what it is: a human construct, riddled with contingency and contradiction.

And let’s not even get started on his historical and philosophical cherry-picking. His commentary on postmodern thought alone is a masterclass in either wilful misreading or, more likely, not reading at all. Straw men abound. Bogeymen are conjured, propped up, and ritually slaughtered to rapturous applause. It’s intellectually lazy and, frankly, beneath someone of his ostensible stature.

I can only hope we’ve seen the last of this man in the public sphere. And if not? Well, may he at least reform his ways—though I shan’t be holding my breath.

Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688 (and later included in the second edition of Locke’s An Essay Concerning Human Understanding, 1694), the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independent of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

The Purpose of Purpose

I’m a nihilist. Possibly always have been. But let’s get one thing straight: nihilism is not despair. That’s a slander cooked up by the Meaning Merchants – the sentimentalists and functionalists who can’t get through breakfast without hallucinating some grand purpose to butter their toast. They fear the void, so they fill it. With God. With country. With yoga.

Audio: NotebookLM podcast on this topic.

Humans are obsessed with function. Seeing it. Creating it. Projecting it onto everything, like graffiti on the cosmos. Everything must mean something. Even nonsense gets rebranded as metaphor. Why do men have nipples? Why does a fork exist if you’re just going to eat soup? Doesn’t matter – it must do something. When we can’t find this function, we invent it.

But function isn’t discovered – it’s manufactured. A collaboration between our pattern-seeking brains and our desperate need for relevance, where function becomes fiction, where language and anthropomorphism go to copulate. A neat little fiction. An ontological fantasy. We ask, “What is the function of the human in this grand ballet of entropy and expansion?” Answer: there isn’t one. None. Nada. Cosmic indifference doesn’t write job descriptions.

And yet we prance around in lab coats and uniforms – doctors, arsonists, firemen, philosophers – playing roles in a drama no one is watching. We build professions and identities the way children host tea parties for dolls. Elaborate rituals of pretend, choreographed displays of purpose. Satisfying? Sometimes. Meaningful? Don’t kid yourself.

We’ve constructed these meaning-machines – society, culture, progress – not because they’re real, but because they help us forget that they’re not. It’s theatre. Absurdist, and often bad. But it gives us something to do between birth and decomposition.

Sisyphus had his rock. We have careers.

But let’s not confuse labour for meaning, or imagination for truth. The boulder never reaches the top, and that’s not failure. That’s the show.

So roll the stone. Build the company. Write the blog. Pour tea for Barbie. Just don’t lie to yourself about what it all means.

Because it doesn’t mean anything.

The Indexing Abyss: A Cautionary Tale in Eight Chapters

There, I said it.

I’m almost finished with A Language Insufficiency Hypothesis, the book I’ve been labouring over for what feels like the gestation period of a particularly reluctant elephant. To be clear: the manuscript is done. Written. Edited. Blessed. But there remains one final circle of publishing hell—the index.

Now, if you’re wondering how motivated I am to return to indexing, consider this: I’m writing this blog post instead. If that doesn’t scream avoidance with an airhorn, nothing will.

Audio: NotebookLM podcast on this topic.

I began indexing over a month ago. I made it through two chapters of eight, then promptly wandered off to write a couple of novellas. As you do. One started as a short story—famous last words—and evolved into a novella. The muse struck again. Another “short story” appeared, and like an unattended sourdough starter, it fermented into a 15,000-word novelette. Apparently, I write short stories the way Americans pour wine: unintentionally generous.

With several unpublished manuscripts loitering on my hard drive like unemployed theatre majors, I figured it was time to release one into the wild. So I did. I published the novelette to Kindle, and just today, the paperback proof landed in my postbox like a smug little trophy.

And then, because I’m an unrepentant completionist (or a masochist—jury’s out), I thought: why not release the novella too? I’ve been told novellas and novelettes are unpopular due to “perceived value.” Apparently, people would rather buy a pound of gristle than 200 grams of sirloin. And yet, in the same breath, they claim no one has time for long books anymore. Perhaps these are different tribes of illiterates. I suppose we’ll find out.

Let’s talk logistics. Writing a book is only the beginning—and frankly, it’s the easy part. Fingers to keyboard, ideas to page. Done. I use Word, like most tragically conventional authors. Planning? Minimal. These were short stories, remember? That was the plan.

Next comes layout. Enter Adobe InDesign—because once you’ve seen what Word does to complex layouts, you never go back. Export to PDF, pray to the typographic gods, and move on.

Then there’s the cover. I lean on Illustrator and Photoshop. Photoshop is familiar, like a worn-in shoe; Illustrator is the smug cousin who turns up late but saves the day with scalable vectors. This time, I used Illustrator for the cover—lesson learnt from past pixelation traumas. Hardback to paperback conversion? A breeze when your artwork isn’t made of crayon scribbles and hope.

Covers, in case you’ve never assembled one, are ridiculous. Front. Back. Spine. Optional dust jacket if you’re feeling fancy (I wasn’t). You need titles, subtitles, your name in a legible font, and let’s not forget the barcode, which you will place correctly on the first attempt exactly never.

Unlike my first novel, where I enlisted someone with a proper design eye to handle the cover text, this time I went full minimalist. Think Scandinavian furniture catalogue meets existential despair. Classy.

Once the cover and interior are done, it’s time to wrestle with the publishing platforms. Everything is automated these days—provided you follow their arcane formatting commandments, avoid forbidden fonts, and offer up your soul. Submitting each book takes about an hour, not including the time lost choosing a price that balances “undervalued labour” and “won’t scare away cheapskates.”

Want a Kindle version? That’s another workflow entirely, full of tortured formatting, broken line breaks, and wondering why your chapter headings are now in Wingdings. Audiobooks? That’s a whole other circus, with its own animals and ringmasters. Honestly, it’s no wonder authors hire publishers. Or develop drinking problems.

But I’m stubborn. Which brings us full circle.

I’ve now got two books heading for daylight, a few more waiting in the wings, and one bloody non-fiction beast that won’t see release until I finish the damn index. No pseudonym this time. No hiding. Just me, owning my sins and hoping the final product lands somewhere between “insightful” and “mercifully short.”

So yes, life may well be a journey. But indexing is the bit where the satnav breaks, the road floods, and the boot falls off the car. Give me the destination any day. The journey can fuck right off.

Sustenance: A Book About Aliens, Language, and Everything You’re Getting Wrong

Violet aliens on a farm

So, I wrote a book and published it under Ridley Park, the pseudonym I use for fiction.

It has aliens. But don’t get excited—they’re not here to save us, probe us, or blow up the White House. They’re not even here for us.

Which is, frankly, the point.

Audio: NotebookLM podcast on this topic.

The book’s called Sustenance, and while it’s technically speculative fiction, it’s more about us than them. Or rather, it’s about how we can’t stop making everything about us—even when it shouldn’t be. Especially when it shouldn’t be.

Let’s talk themes. And yes, we’re using that word like academics do: as a smokescreen for saying uncomfortable things abstractly.

Language: The Original Scam

Language is the ultimate colonial tool. We call it communication, but it’s mostly projection. You speak. You hope. You assume. You superimpose meaning on other people like a cling film of your own ego.

Sustenance leans into this—not by showing a breakdown of communication, but by showing what happens when communication was never mutual in the first place. When the very idea of “meaning” has no purchase. It’s not about mishearing—it’s about misbeing.

Culture: A Meme You Were Born Into

Culture is the software you didn’t choose to install, and probably can’t uninstall. Most people treat it like a universal law—until they meet someone running a different OS. Cue confusion, arrogance, or violence.

The book explores what happens when cultural norms aren’t shared, and worse, aren’t even legible. Imagine trying to enforce property rights on beings who don’t understand “ownership.” It’s like trying to baptise a toaster.

Sex/Gender: You Keep Using Those Words…

One of the quiet joys of writing non-human characters is discarding human assumptions about sex and gender—and watching readers squirm.

What if sex wasn’t about power, pleasure, or identity? What if it was just a biological procedure, like cell division or pruning roses? Would you still be interested? Would you still moralise about it?

We love to believe our sex/gender constructs are inevitable. They’re not. They’re habits—often bad ones.

Consent: Your Framework Is Showing

Consent, as we use it, assumes mutual understanding, shared stakes, and equivalent agency. Remove any one of those and what’s left?

Sustenance doesn’t try to solve this—it just shows what happens when those assumptions fall apart. Spoiler: it’s not pretty, but it is honest.

Projection: The Mirror That Lies

Humans are deeply committed to anthropocentrism. If it walks like us, or flinches like us, it must be us. This is why we get so disoriented when faced with the truly alien: it won’t dance to our tune, and we’re left staring at ourselves in the funhouse mirror.

This isn’t a book about aliens.

It’s a book about the ways we refuse to see what’s not us.

Memory: The Autobiography of Your Justifications

Memory is not a record. It's a defence attorney with a narrative licence. We rewrite the past to make ourselves look consistent, or innocent, or right.

In Sustenance, memory acts less as a tether to truth and more as a sculpting tool—a way to carve guilt into something manageable. Something you can live with. Until you can’t.

In Summary: It’s Not About Them. It’s About You.

If that sounds bleak, good. It’s meant to.

But it’s also a warning: don’t get too comfortable in your own categories. They’re only universal until you meet someone who doesn’t share them.

Like I said, it’s not really about the aliens.

It’s about us.


If you enjoy fiction that’s more unsettling than escapist, more question than answer, you might be interested in Sustenance. It’s live on Kindle now for the cost of a regrettable coffee:

📘 Sustenance on Amazon US
Also available in the UK, DE, FR, ES, IT, NL, JP, BR, CA, MX, AU, and IN—because alienation is a universal language.

On Ishiguro, Cioran, and Whatever I Think I’m Doing

Sora-generated image of Emil Cioran and Kazuo Ishiguro reading a generic book together

Having just finished Never Let Me Go by Kazuo Ishiguro, I’ve now cracked open my first taste of Cioran—History and Utopia. You might reasonably ask why. Why these two? And what, if anything, do they have in common? Better yet—what do the three of us have in common?

Audio: NotebookLM podcast on this topic.

Recently, I finished writing a novella titled Propensity (currently gathering metaphorical dust on the release runway). Out of curiosity, or narcissism, I fed it to AI and asked whose style it resembled. Among the usual suspects were two names I hadn't yet read: Ishiguro and Cioran. The others I'd read, and I understood the links. These two, though, were unknown quantities. So I gave them a go.

Ishiguro is perhaps best known for The Remains of the Day, which, like Never Let Me Go, got the Hollywood treatment. I chose the latter, arbitrarily. I even asked ChatGPT to compare both books with their cinematic counterparts. The AI was less than charitable, describing Hollywood’s adaptations as bastardised and bowdlerised—flattened into tidy narratives for American palates too dim to digest ambiguity. On this, we agree.

What struck me about Never Let Me Go was its richly textured mundanity. That’s apparently where AI saw the resemblance to Propensity. I’m not here to write a book report—partly because I detest spoilers, and partly because summaries miss the point. It took about seven chapters before anything ‘happened’, and then it kept happening. What had at first seemed like a neurotic, wandering narrative from the maddeningly passive Kathy H. suddenly hooked me. The reveals began to unfold. It’s a book that resists retelling. It demands firsthand experience. A vibe. A tone. A slow, aching dread.

Which brings me neatly to Cioran.

History and Utopia is a collection of essays penned in French (not his mother tongue, but you’d never guess it) while Cioran was holed up in postwar Paris. I opted for the English translation—unapologetically—and was instantly drawn in. His prose? Electric. His wit? Acidic. If Ishiguro was a comparison of style, then Cioran was one of spirit. Snark, pessimism, fatalistic shrugs toward civilisation—finally, someone speaking my language.

Unlike the cardboard cut-outs of Cold War polemics we get from most Western writers of the era, Cioran’s take is layered, uncomfortably self-aware, and written by someone who actually fled political chaos. There’s no naïve idealism here, no facile hero-villain binaries. Just a deeply weary intellect peering into the abyss and refusing to blink. It’s not just what he says, but the tone—the curled-lip sneer at utopian pretensions and historical self-delusions. If I earned even a drop of that comparison, I’ll take it.

Both Ishiguro and Cioran delivered what I didn’t know I needed: the reminder that some writers aren’t there to tell you a story. They’re there to infect you with an atmosphere. An idea. A quiet existential panic you can’t shake.

I've got what I came for from these two, though I suspect I'll be returning, especially to Cioran. Philosophically, he's my kind of bastard. I doubt this'll be my last post on his work.