On Agency, Suicide, and the Moving Train

I’ve been working through the opening chapters of Octavia Butler’s Dawn. At one point, the alien Jdahya tells Lilith, “We watched you commit mass suicide.”*

The line unsettles not because of the apocalypse itself, but because of what it presumes: that “humanity” acted as one, as if billions of disparate lives could be collapsed into a single decision. A few pulled triggers, a few applauded, some resisted despite the odds, and most simply endured. From the alien vantage, nuance vanishes. A species is judged by its outcome, not by the uneven distribution of responsibility that produced it.

This is hardly foreign to us. Nationalism thrives on the same flattening. We won the war. We lost the match. A handful act; the many claim the glory or swallow the shame by association. Sartre takes it further with his “no excuses” dictum: even to do nothing is to choose. Howard Zinn’s “You can’t remain neutral on a moving train” makes the same move, cloaked in the borrowed authority of physics. Yet relativity undermines it: in the train’s frame you are still; in the ground’s frame you are moving. Whether neutrality is possible depends entirely on your frame of reference.

What all these formulations share is a kind of metaphysical inflation. “Agency” is treated as a universal essence, something evenly spread across the human condition. But in practice, it is anything but. Most people are not shaping history; they are being dragged along by it.

One might sketch the orientations toward the collective “apple cart” like this:

  • Tippers with a vision: the revolutionaries, ideologues, or would-be prophets who claim to know how the cart should be overturned.
  • Sycophants: clinging to the side, riding the momentum of others’ power, hoping for crumbs.
  • Egoists: indifferent to the cart’s fate, focused on personal comfort, advantage, or escape.
  • Stabilisers: most people, clinging to the cart as it wobbles, preferring continuity to upheaval.
  • Survivors: those who endure, waiting out storms, not out of “agency” but necessity.

The Stabilisers and Survivors blur into the same crowd, the former still half-convinced their vote between arsenic and cyanide matters, the latter no longer believing the story at all. They resemble Seligman’s shocked dogs, conditioned to sit through pain because movement feels futile.

And so “humanity” never truly acts as one. Agency is uneven, fragile, and often absent. Yet whether in Sartre’s philosophy, Zinn’s slogans, or Jdahya’s extraterrestrial indictment, the temptation is always to collapse plurality into a single will: you chose this, all of you. It is neat, rhetorically satisfying, and yet wrong.

Perhaps Butler’s aliens, clinical in their judgment, are simply holding up a mirror to the fictions we already tell about ourselves.


As an aside, this version of the book cover is risible. Not to devolve into identity politics, but Lilith is a dark-skinned woman, not a pale ginger. I can only assume someone decided the target science-fiction readership prefers white, sapphic-adjacent characters.

I won’t even comment further on the faux-3D title treatment, a relic of 1980s marketing.


*Spoiler alert: since this statement about mass suicide is a Chapter 2 event, I’m not inclined to consider it a spoiler. False alarm.

Sustenance Novella free on Kindle

On 7–8 September 2025, the Kindle version of my Ridley Park novella Sustenance will be available free to everyone on Amazon. (It’s always free if you’re a Kindle Unlimited member, but these two days open it up to all readers.)

👉 https://www.amazon.com/dp/B0F9PTK9N2

So what is Sustenance?

It’s a novella that begins with the dust and grit of rural Iowa – soybean fields, rusted trucks, a small town where everyone knows your name (and your secrets). At first glance, it reads like plainspoken realism, narrated by a local mechanic who insists he’s just a “regular guy.” But then the ground literally shifts. A crash. Figures glimpsed by firelight in the woods. Naked, violet-skinned beings who don’t laugh, don’t sleep, don’t even breathe.

What follows is not your usual alien-invasion story. It’s quieter, stranger, and more unsettling. The encounters with the visitors aren’t about lasers or spaceships – they’re about language, culture, and the limits of human understanding. What happens when concepts like property, law, or even woman and man don’t translate? What does it mean when intimacy itself becomes a site of misunderstanding?

Sustenance is for readers who:

  • Gravitate toward literary fiction with a speculative edge rather than straight genre beats
  • Appreciate the mix of the banal and the uncanny – the smell of corn dust giving way to the shock of alien otherness
  • Are interested in themes of language, power, misunderstanding, and human self-deception
  • Enjoy writers like Jeff VanderMeer, Margaret Atwood, Octavia Butler, or Denis Johnson – voices that blur realism, philosophy, and estrangement

This isn’t a story that offers tidy answers. It lingers, provokes, and resists easy moral closure. Think of it less as a sci-fi romp and more as a philosophical fable wrapped in small-town dust and cicada-song.

This version of the book is available in these Kindle storefronts:
United States, United Kingdom, Germany, France, Spain, Italy, Netherlands, Japan, Brazil, Canada, Mexico, Australia, and India

For more details, visit the Sustenance page.

📚 Grab your free Kindle copy on 7–8 September 2025

I made this Kindle version available for free to get some reviews. This promotion is all or nothing, so take advantage of the opportunity. If you want to leave a review, please do.

The Enlightenment: A Postmortem

Or: How the Brightest Ideas in Europe Got Us into This Bloody Mess

Disclaimer: This output is entirely ChatGPT 4o from a conversation on the failure and anachronism of Enlightenment promises. I’m trying to finish editing my next novel, so I can’t justify taking much more time to share what are ultimately my thoughts as expounded upon by generative AI. I may comment personally in future. Until then, this is what I have to share.

AI haters: leave now, or abandon all hope, ye who remain.


The Enlightenment promised us emancipation from superstition, authority, and ignorance. What we got instead was bureaucracy, colonialism, and TED Talks. We replaced divine right with data dashboards and called it progress. And like any good inheritance, the will was contested, and most of us ended up with bugger-all.

Below, I take each Enlightenment virtue, pair it with its contemporary vice, and offer a detractor who saw through the Enlightenment’s powder-wigged charade. Because if we’re going down with this ship, we might as well point out the dry rot in the hull.


1. Rationalism

The Ideal: Reason shall lead us out of darkness.
The Reality: Reason led us straight into the gas chambers—with bureaucratic precision.

Detractor: Max Horkheimer & Theodor Adorno

“Enlightenment is totalitarian.”
Dialectic of Enlightenment (1944)

Horkheimer and Adorno saw what reason looks like when it slips off its leash. Instrumental rationality, they warned, doesn’t ask why—it only asks how efficiently. The result? A world where extermination is scheduled, costs are optimised, and ethics are politely filed under “subjective.”


2. Empiricism

The Ideal: Observation and experience will uncover truth.
The Reality: If it can’t be measured, it can’t be real. (Love? Not statistically significant.)

Detractor: Michel Foucault

“Truth isn’t outside power… truth is a thing of this world.”
Power/Knowledge (1977)

Foucault dismantled the whole edifice. Knowledge isn’t neutral; it’s an instrument of power. Empiricism becomes just another way of disciplining the body—measuring skulls, classifying deviants, and diagnosing women with “hysteria” for having opinions.


3. Individualism

The Ideal: The sovereign subject, free and self-determining.
The Reality: The atomised consumer, trapped in a feedback loop of self-optimisation.

Detractor: Jean Baudrillard

“The individual is no longer an autonomous subject but a terminal of multiple networks.”
Simulacra and Simulation (1981)

You wanted autonomy? You got algorithms. Baudrillard reminds us that the modern “individual” is a brand in search of market validation. You are free to be whoever you want, provided it fits within platform guidelines and doesn’t disrupt ad revenue.


4. Secularism

The Ideal: Liberation from superstition.
The Reality: We swapped saints for STEMlords and called it even.

Detractor: Charles Taylor

“We are now living in a spiritual wasteland.”
A Secular Age (2007)

Taylor—perhaps the most polite Canadian apocalypse-whisperer—reminds us that secularism didn’t replace religion with reason; it replaced mystery with malaise. We’re no longer awed, just “motivated.” Everything is explainable, and yet somehow nothing means anything.


5. Progress

The Ideal: History is a forward march toward utopia.
The Reality: History is a meat grinder in a lab coat.

Detractor: Walter Benjamin

“The storm irresistibly propels him into the future to which his back is turned.”
Theses on the Philosophy of History (1940)

Benjamin’s “angel of history” watches helplessly as the wreckage piles up—colonialism, genocide, climate collapse—all in the name of progress. Every step forward has a cost, but we keep marching, noses in the spreadsheet, ignoring the bodies behind us.


6. Universalism

The Ideal: One humanity, under Reason.
The Reality: Enlightenment values, brought to you by cannon fire and Christian missionaries.

Detractor: Gayatri Chakravorty Spivak

“White men are saving brown women from brown men.”
Can the Subaltern Speak? (1988)

Universalism was always a bit… French, wasn’t it? Spivak unmasks it as imperialism in drag: exporting “rights” and “freedom” to people who never asked for them, while ignoring the structural violence built into the Enlightenment’s own societies.


7. Tolerance

The Ideal: Let a thousand opinions bloom.
The Reality: Tolerance, but only for those who don’t threaten the status quo.

Detractor: Karl Popper

“Unlimited tolerance must lead to the disappearance of tolerance.”
The Open Society and Its Enemies (1945)

Popper, bless him, thought tolerance needed a firewall. But in practice, “tolerance” has become a smug liberal virtue signalling its own superiority while deplatforming anyone who makes the dinner party uncomfortable. We tolerate all views—except the unseemly ones.


8. Scientific Method

The Ideal: Observe, hypothesise, repeat. Truth shall emerge.
The Reality: Publish or perish. Fund or flounder.

Detractor: Paul Feyerabend

“Science is not one thing, it is many things.”
Against Method (1975)

Feyerabend called the whole thing a farce. There is no single “method,” just a bureaucratic orthodoxy masquerading as objectivity. Today, science bends to industry, cherry-picks for grants, and buries null results in the backyard. Peer review? More like peer pressure.


9. Anti-Authoritarianism

The Ideal: Smash the throne! Burn the mitre!
The Reality: Bow to the data analytics team.

Detractor: Herbert Marcuse

“Free election of masters does not abolish the masters or the slaves.”
One-Dimensional Man (1964)

Marcuse skewered the liberal illusion of choice. We may vote, but we do so within a system that already wrote the script. Authority didn’t vanish; it just became procedural, faceless, algorithmic. Bureaucracy is the new monarchy—only with more forms.


10. Education and Encyclopaedism

The Ideal: All knowledge, accessible to all minds.
The Reality: Behind a paywall. Written in impenetrable prose. Moderated by white men with tenure.

Detractor: Ivan Illich

“School is the advertising agency which makes you believe that you need the society as it is.”
Deschooling Society (1971)

Illich pulls the curtain: education isn’t emancipatory; it’s indoctrinatory. The modern university produces not thinkers but credentialed employees. Encyclopaedias are replaced by Wikipedia, curated by anonymous pedants and revision wars. Truth is editable.


Postscript: Picking through the Rubble

So—has the Enlightenment failed?

Not exactly. It succeeded too literally. It was taken at its word. Its principles, once radical, were rendered banal. It’s not that reason, progress, or rights are inherently doomed—it’s that they were never as pure as advertised. They were always products of their time: male, white, bourgeois, and utterly convinced of their own benevolence.

If there’s a path forward, it’s not to restore Enlightenment values, but to interrogate them—mercilessly, with irony and eyes open.

After all, the problem was never darkness. It was the people with torches who thought they’d found the only path.

From Thesaurus to Thoughtcrime: The Slippery Slope of Authorial Purity

I had planned to write about Beauvoir’s Second Sex, but this has been on my mind lately.

There’s a certain breed of aspiring author – let’s call them the Sacred Scribes – who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.

Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”

The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by chatbot, of course.

But let’s pause and look at the logic here. Or rather, the lack thereof.

By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?

And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.

Let’s call this what it is: purity fetishism in prose form.

But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, would outsource our thinking.

Sound familiar?

Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age:
“If you let ChatGPT restructure your second paragraph, you’re no longer the author.”
Nonsense. You were never the sole author. Not even close.

Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.

Here’s the rub: You’ve always been collaborating.

With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.

The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?

Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.

Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.

The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.

Understanding Generative AI

OK, I admit this is an expansive claim, but I write about the limitations of generative artificial intelligence relative to writers. I wrote this after encountering several Reddit responses from writers who thoroughly misunderstand how AI works. They won’t read this, but you might want to.

Click to visit the Ridley Park Blog for this article and podcast
Video: Cybernetic robot assisting a female writer (or stealing her work)

The Heuristic Self: On Persona, Identity, and Character

“Man is least himself when he talks in his own person. Give him a mask, and he will tell you the truth.”
— Oscar Wilde

Identity is an illusion—but a necessary one. It’s a shortcut. A heuristic, evolved not for truth but for coherence. We reduce ourselves and others to fixed traits to preserve continuity—psychological, social, narrative.

Audio: NotebookLM podcast on this topic. (Direct)

Audio: NotebookLM podcast on this topic. (Spotify)

In the latest post on RidleyPark.blog, we meet Sarah—a woman who survives by splintering. She has three names, three selves, three economies of interaction. Each persona—Sarah, Stacey, and Pink—fulfils a role. Each protects her in a system that punishes complexity.

Identity Is Compression

Cognitive science suggests that we don’t possess a self—we perform one. Our so-called identity is assembled post-hoc from memory, context, and social cues. It’s recursive. It’s inferred.

We are not indivisible atoms of identity. We are bundled routines, personae adapted to setting and audience.

From Performance to Survival

In Needle’s Edge, Sarah doesn’t use aliases to deceive. She uses them to survive contradictions:

  • Stacey is desirable, stable, and profitable—so long as she appears clean and composed.
  • Pink is a consumer, invisible, stripped of glamour but allowed access to the block.
  • Sarah is the residue, the name used by those who once knew her—or still believe they do.

Each persona comes with scripts, limitations, and permissions. Sarah isn’t being dishonest. She’s practising domain-specific identity. This is no different from how professionals code-switch at work, or how people self-edit on social media.
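
As a toy illustration – my own sketch, borrowing nothing from the novella beyond the three names – the compression claim can be made literal: a full self-state holds more than any one audience ever sees, and each persona is a lossy projection selected by context.

    # Toy sketch of "identity as compression": each persona exposes only the
    # slice of the full self that its context permits. The traits and contexts
    # are invented for illustration; only the three names come from the story.
    full_self = {
        "history": "long and contradictory",
        "composure": True,
        "dependency": True,
    }

    personae = {
        "client":      {"alias": "Stacey", "visible": ["composure"]},
        "the_block":   {"alias": "Pink",   "visible": ["dependency"]},
        "old_friends": {"alias": "Sarah",  "visible": ["history"]},
    }

    def present(context):
        """Lossy projection: compress the full self to what this context allows."""
        p = personae[context]
        return {"name": p["alias"], **{k: full_self[k] for k in p["visible"]}}

    print(present("client"))     # {'name': 'Stacey', 'composure': True}
    print(present("the_block"))  # {'name': 'Pink', 'dependency': True}

No projection is the “true” one; each is adequate to its domain and lossy everywhere else, which is rather the point.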

The Literary Echo

In character development, we often demand “depth,” by which we mean contradiction. We want to see a character laugh and break. Love and lie. But Sarah shows us that contradiction isn’t depth—it’s baseline reality. Any singular identity would be a narrative failure.

Characters like Sarah expose the poverty of reduction. They resist archetype. They remind us that fiction succeeds when it reflects the multiple, the shifting, the incompatible—which is to say, the real.

What Else Might We Say?

  • That authenticity is a myth: “Just be yourself” presumes you know which self to be.
  • That moral judgment often stems from a failure to see multiple selves in others.
  • That trauma survivors often fracture not because they’re broken, but because fracturing is adaptive.
  • That in a capitalist framework, the ability to fragment and role-play becomes a survival advantage.
  • That fiction is one of the few spaces where we can explore multiple selves without collapse.

The Missing Link

For a concrete, narrative reflection of these ideas, this post on RidleyPark.blog explores how one woman carries three selves to survive three worlds—and what it costs her.

Jordan Peterson: Derivative, Disingenuous, and (Hopefully) Done

I don’t like most of Jordan Peterson’s positions. There – I’ve said it. The man, once ubiquitous, seems to have faded into the woodwork, though no doubt his disciples still cling to his every word as if he were a modern-day oracle. But recently, I caught a clip of him online, and it dredged up the same bad taste, like stumbling upon an old, forgotten sandwich at the back of the fridge.

Audio: NotebookLM podcast on this topic

Let’s be clear. My distaste for Peterson isn’t rooted in petty animosity. It’s because his material is, in my view, derivative and wrong. And by wrong, I mean I disagree with him – a subtle distinction, but an important one. There’s nothing inherently shameful about being derivative. We all are, to some extent. No thinker sprouts fully-formed from the head of Zeus. The issue is when you’re derivative and act as if you’ve just split the atom of human insight.

Peterson tips his hat to Nietzsche – fair enough – but buries his far greater debt to Jung under layers of self-mythologising. He parades his ideas before audiences, many of whom lack the background to spot the patchwork, and gaslights them into believing they’re witnessing originality. They’re not. They’re witnessing a remixed greatest-hits album, passed off as a debut.

Image: Gratuitous, mean-spirited meme.

Now, I get it. My ideas, too, are derivative. Sometimes it’s coincidence – great minds and all that – but when I trace the thread back to its source, I acknowledge it. Nietzsche? Subjectivity of morality. Foucault? Power dynamics. Wittgenstein? The insufficiency of language. I owe debts to many more: Galen Strawson, Richard Rorty, Raymond Geuss – the list goes on, and I’d gladly share my ledger. But Peterson? The man behaves as though he invented introspection.

And when I say I disagree, let’s not confuse that with some claim to divine epistemic certainty. I don’t mean he’s objectively wrong (whatever that means in the grand circus of philosophy); I mean I disagree. If I claimed that kind of certainty, well, we wouldn’t be having this conversation, would we? That’s the tragicomedy of epistemology: so many positions, so little consensus.

But here’s where my patience truly snaps: Peterson’s prescriptivism. His eagerness to spew what I see as bad ideology dressed up as universal truth. Take his stance on moral objectivism—possibly his most egregious sin. He peddles this as if morality were some Platonic form, gleaming and immutable, rather than what it is: a human construct, riddled with contingency and contradiction.

And let’s not even get started on his historical and philosophical cherry-picking. His commentary on postmodern thought alone is a masterclass in either wilful misreading or, more likely, not reading at all. Straw men abound. Bogeymen are conjured, propped up, and ritually slaughtered to rapturous applause. It’s intellectually lazy and, frankly, beneath someone of his ostensible stature.

I can only hope we’ve seen the last of this man in the public sphere. And if not? Well, may he at least reform his ways—though I shan’t be holding my breath.

Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688 (published in Locke’s An Essay Concerning Human Understanding), the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independently of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping – an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.
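
To make the correspondence failure concrete – a deliberately crude sketch of my own, not anything from Locke or Molyneux – treat the tactile concepts as feature bundles and the new visual scene as percepts described in a different vocabulary. Without a learned translation between the two, the lookup simply fails:

    # Toy model: labels anchored in one sensory vocabulary don't transfer to
    # another. All feature names here are invented for the illustration.
    tactile_concepts = {
        "cube":   {"edges": True,  "smooth_curvature": False},
        "sphere": {"edges": False, "smooth_curvature": True},
    }

    # What the newly sighted person receives: unlabelled visual percepts,
    # described in visual terms the tactile dictionary has never seen.
    visual_scene = [
        {"straight_contours": True,  "shading_gradient": False},  # the cube
        {"straight_contours": False, "shading_gradient": True},   # the sphere
    ]

    def identify(percept):
        """Try to name a visual percept using tactile concepts alone."""
        for label, features in tactile_concepts.items():
            if features == percept:   # vocabularies never align, so no match
                return label
        return None

    print([identify(p) for p in visual_scene])  # [None, None]

Nothing in the tactile dictionary matches, not because the objects differ, but because the descriptions do; the translation has to be learned before the words can travel.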

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

The Purpose of Purpose

I’m a nihilist. Possibly always have been. But let’s get one thing straight: nihilism is not despair. That’s a slander cooked up by the Meaning Merchants – the sentimentalists and functionalists who can’t get through breakfast without hallucinating some grand purpose to butter their toast. They fear the void, so they fill it. With God. With country. With yoga.

Audio: NotebookLM podcast on this topic.

Humans are obsessed with function. Seeing it. Creating it. Projecting it onto everything, like graffiti on the cosmos. Everything must mean something. Even nonsense gets rebranded as metaphor. Why do men have nipples? Why does a fork exist if you’re just going to eat soup? Doesn’t matter – it must do something. When we can’t find this function, we invent it.

But function isn’t discovered – it’s manufactured. A collaboration between our pattern-seeking brains and our desperate need for relevance, where function becomes fiction, where language and anthropomorphism go to copulate. A neat little fiction, an ontological fantasy. We ask, “What is the function of the human in this grand ballet of entropy and expansion?” Answer: there isn’t one. None. Nada. Cosmic indifference doesn’t write job descriptions.

And yet we prance around in lab coats and uniforms – doctors, arsonists, firemen, philosophers – playing roles in a drama no one is watching. We build professions and identities the way children host tea parties for dolls. Elaborate rituals of pretend, choreographed displays of purpose. Satisfying? Sometimes. Meaningful? Don’t kid yourself.

We’ve constructed these meaning-machines – society, culture, progress – not because they’re real, but because they help us forget that they’re not. It’s theatre. Absurdist, and often bad. But it gives us something to do between birth and decomposition.

Sisyphus had his rock. We have careers.

But let’s not confuse labour for meaning, or imagination for truth. The boulder never reaches the top, and that’s not failure. That’s the show.

So roll the stone. Build the company. Write the blog. Pour tea for Barbie. Just don’t lie to yourself about what it all means.

Because it doesn’t mean anything.

The Indexing Abyss: A Cautionary Tale in Eight Chapters

There, I said it.

I’m almost finished with A Language Insufficiency Hypothesis, the book I’ve been labouring over for what feels like the gestation period of a particularly reluctant elephant. To be clear: the manuscript is done. Written. Edited. Blessed. But there remains one final circle of publishing hell—the index.

Now, if you’re wondering how motivated I am to return to indexing, consider this: I’m writing this blog post instead. If that doesn’t scream avoidance with an airhorn, nothing will.

Audio: NotebookLM podcast on this topic.

I began indexing over a month ago. I made it through two chapters of eight, then promptly wandered off to write a couple of novellas. As you do. One started as a short story—famous last words—and evolved into a novella. The muse struck again. Another “short story” appeared, and like an unattended sourdough starter, it fermented into a 15,000-word novelette. Apparently, I write short stories the way Americans pour wine: unintentionally generous.

With several unpublished manuscripts loitering on my hard drive like unemployed theatre majors, I figured it was time to release one into the wild. So I did. I published the novelette to Kindle, and just today, the paperback proof landed in my postbox like a smug little trophy.

And then, because I’m an unrepentant completionist (or a masochist—jury’s out), I thought: why not release the novella too? I’ve been told novellas and novelettes are unpopular due to “perceived value.” Apparently, people would rather buy a pound of gristle than 200 grams of sirloin. And yet, in the same breath, they claim no one has time for long books anymore. Perhaps these are different tribes of illiterates. I suppose we’ll find out.

Let’s talk logistics. Writing a book is only the beginning—and frankly, it’s the easy part. Fingers to keyboard, ideas to page. Done. I use Word, like most tragically conventional authors. Planning? Minimal. These were short stories, remember? That was the plan.

Next comes layout. Enter Adobe InDesign—because once you’ve seen what Word does to complex layouts, you never go back. Export to PDF, pray to the typographic gods, and move on.

Then there’s the cover. I lean on Illustrator and Photoshop. Photoshop is familiar, like a worn-in shoe; Illustrator is the smug cousin who turns up late but saves the day with scalable vectors. This time, I used Illustrator for the cover—lesson learnt from past pixelation traumas. Hardback to paperback conversion? A breeze when your artwork isn’t made of crayon scribbles and hope.

Covers, in case you’ve never assembled one, are ridiculous. Front. Back. Spine. Optional dust jacket if you’re feeling fancy (I wasn’t). You need titles, subtitles, your name in a legible font, and let’s not forget the barcode, which you will place correctly on the first attempt exactly never.

Unlike my first novel, where I enlisted someone with a proper design eye to handle the cover text, this time I went full minimalist. Think Scandinavian furniture catalogue meets existential despair. Classy.

Once the cover and interior are done, it’s time to wrestle with the publishing platforms. Everything is automated these days—provided you follow their arcane formatting commandments, avoid forbidden fonts, and offer up your soul. Submitting each book takes about an hour, not including the time lost choosing a price that balances “undervalued labour” and “won’t scare away cheapskates.”

Want a Kindle version? That’s another workflow entirely, full of tortured formatting, broken line breaks, and wondering why your chapter headings are now in Wingdings. Audiobooks? That’s a whole other circus, with its own animals and ringmasters. Honestly, it’s no wonder authors hire publishers. Or develop drinking problems.

But I’m stubborn. Which brings us full circle.

I’ve now got two books heading for daylight, a few more waiting in the wings, and one bloody non-fiction beast that won’t see release until I finish the damn index. No pseudonym this time. No hiding. Just me, owning my sins and hoping the final product lands somewhere between “insightful” and “mercifully short.”

So yes, life may well be a journey. But indexing is the bit where the satnav breaks, the road floods, and the boot falls off the car. Give me the destination any day. The journey can fuck right off.