Sustenance Novella free on Kindle

On 7–8 September 2025, the Kindle version of my Ridley Park novella Sustenance will be available free to everyone on Amazon. (It’s always free if you’re a Kindle Unlimited member, but these two days open it up to all readers.)

👉 https://www.amazon.com/dp/B0F9PTK9N2

So what is Sustenance?

It’s a novella that begins with the dust and grit of rural Iowa – soybean fields, rusted trucks, a small town where everyone knows your name (and your secrets). At first glance, it reads like plainspoken realism, narrated by a local mechanic who insists he’s just a “regular guy.” But then the ground literally shifts. A crash. Figures glimpsed by firelight in the woods. Naked, violet-skinned beings who don’t laugh, don’t sleep, don’t even breathe.

What follows is not your usual alien-invasion story. It’s quieter, stranger, and more unsettling. The encounters with the visitors aren’t about lasers or spaceships – they’re about language, culture, and the limits of human understanding. What happens when concepts like property, law, or even woman and man don’t translate? What does it mean when intimacy itself becomes a site of misunderstanding?

Sustenance is for readers who:

  • Gravitate toward literary fiction with a speculative edge rather than straight genre beats
  • Appreciate the mix of the banal and the uncanny – the smell of corn dust giving way to the shock of alien otherness
  • Are interested in themes of language, power, misunderstanding, and human self-deception
  • Enjoy writers like Jeff VanderMeer, Margaret Atwood, Octavia Butler, or Denis Johnson – voices that blur realism, philosophy, and estrangement

This isn’t a story that offers tidy answers. It lingers, provokes, and resists easy moral closure. Think of it less as a sci-fi romp and more as a philosophical fable wrapped in small-town dust and cicada-song.

This version of the book is available in these Kindle storefronts:
United States, United Kingdom, Germany, France, Spain, Italy, Netherlands, Japan, Brazil, Canada, Mexico, Australia, and India

For more details, visit the Sustenance page.

📚 Grab your free Kindle copy on 7–8 September 2025

I made this Kindle version available for free to get some reviews. The promotion runs on those two days only, so take advantage of the opportunity. If you want to leave a review, please do.

Boab’s God: Latent Agency in Welsh’s Kafkaesque Metamorphosis

I just read “The Granton Star Cause” in Irvine Welsh’s short story collection The Acid House, and couldn’t help but read it against Kafka’s Metamorphosis.

Kafka gave us Gregor Samsa: a man who wakes up as vermin, stripped of usefulness, abandoned by family, slowly rotting in a godless universe. His tragedy is inertia; his metamorphosis grants him no agency, only deeper alienation.

Audio: NotebookLM podcast on this topic.

Welsh replies with Boab Coyle, a lad who is likewise cast off, rejected by his football mates, scorned by his parents, dumped by his girlfriend, and discarded by his job. Boab is surplus to every domain: civic, familial, erotic, and economic. Then he undergoes his own metamorphosis. And here Welsh swerves from Kafka.

Boab meets his “god.” But the god is nothing transcendent. It is simply Boab’s latent agency, given a mask – a projection of his bitterness and thwarted desires. God looks like him, speaks like him, and tells him to act on impulses long repressed. Where Kafka leaves Gregor to die in silence, Welsh gives Boab a grotesque theology of vengeance.

Through a Critical Theory lens, the contrast is stark:

  • Marx: Both men are surplus. Gregor is disposable labour; Boab is Thatcher’s lumpen. Alienated, both become vermin.
  • Nietzsche: Gregor has no god, only the absurd. Boab makes one in his own image, not an Übermensch, but an Über-fly – quite literally a Superfly – a petty deity of spite.
  • Foucault: Gregor is disciplined into passivity by the family gaze. Boab flips it: as a fly, he surveils and annoys, becoming the pest-panopticon.
  • Bataille/Kristeva: Gregor embodies the abjection of his family’s shame. Boab revels in abjection, weaponising filth as his new mode of agency.

The punchline? Boab’s new god-agency leads straight to destruction. His rage is cathartic, but impotent. The lumpen are permitted vengeance only when it consumes them.

So Kafka gave us the tragedy of stasis; Welsh provides us with the tragedy of spite. Both are bleak parables of alienation, but Welsh injects a theology of bad attitude: a god who licenses action only long enough to destroy the actor.

Gregor rots. Boab rages. Both end the same way.

Trainspotting


I identify strongly with Irvine Welsh’s characters in Trainspotting – the book, not the sanitised film version. Especially with Mark Renton, whose voice strips away illusions with a brutality that borders on honesty.

Audio: NotebookLM podcast on this topic.

Consider this passage from the chapter “Bang to Rites” (pp. 86–87), where Renton attends the funeral of his mate Billy. Billy joined the army to escape the dead-end life they all shared, only to be killed on duty in Northern Ireland. Renton’s verdict:

[1] Renton doesn’t let anyone off the hook. Not Billy, not the army, not the Oxbridge suits who polish the tragedy into something fit for the News at Ten. The uniform is a costume, a disguise: a working-class lad suddenly deemed “brave” only because he was wearing the right outfit when he died. Strip away the uniform, and he’d have been dismissed as a thug or a waster.

[2] Renton’s root-cause analysis is unsparing. Billy wasn’t killed by the man with the gun so much as by the machine that put him there – the state, the ruling classes, the ones who spin death into “sacrifice” while continuing to shuffle the poor like pawns across the board.

It’s this clarity that makes Welsh’s work more than a drug novel. Trainspotting isn’t just about needles and nods; it’s about seeing through the charade. Renton despises both establishment and rebellion because both are performance, both hollow. His cynicism is the closest thing to honesty in a world that would rather dress up corpses in borrowed dignity.

And maybe that’s why I feel the affinity: because subversion matters more than allegiance, and sometimes the only truthful voice is the one that refuses to be polite at the funeral.

Democracy: The Worst Form of Government, and Other Bedtime Stories


Karl Popper’s Paradox of Intolerance has become a kind of intellectual talisman, clutched like a rosary whenever fascists start goose-stepping into the town square. Its message is simple enough: to preserve tolerance, one must be intolerant of intolerance. Shine enough sunlight on bad ideas, and – so the pious hope – they’ll shrivel into dust like a vampire caught out at dawn.

If only.

The trouble with this Enlightenment fairy tale is that it presumes bad ideas melt under the warm lamp of Reason, as if ignorance were merely a patch of mildew waiting for the bleach of debate. But bad ideas are not bacteria; they are weeds, hydra-headed and delighting in the sun. Put them on television, and they metastasise. Confront them with logic, and they metastasise faster, now with a martyr’s halo.

Audio: NotebookLM podcast on this topic.

And here’s the part no liberal dinner-party theorist likes to face: the people most wedded to these “bad ideas” often don’t play the game of reason at all. Their critical faculties have been packed up, bubble-wrapped, and left in the loft decades ago. They don’t want dialogue. They want to chant. They don’t want evidence. They want affirmation. The Socratic method bounces off them like a ping-pong ball fired at a tank.

But let’s be generous. Suppose, just for a moment, we had Plato’s dream: a citizenry of Philosopher Kings™, all enlightened, all rational. Would democracy then work? Cue Arrow’s Impossibility Theorem, that mathematical killjoy which proves that even under perfect conditions – omniscient voters, saintly preferences, universal literacy – you still cannot aggregate those preferences into a system that is both fair and internally consistent. Democracy can’t even get out of its own way on paper.
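Arrow’s theorem itself takes pages to prove, but its ancestral seed, the Condorcet cycle, fits in a dozen lines. A minimal sketch (the three voter profiles are invented for illustration), showing majority rule contradicting itself even when every individual voter is perfectly rational:

```python
# Three perfectly rational voters, three candidates.
# Each ballot ranks the candidates best-to-worst.
ballots = [
    ("A", "B", "C"),
    ("B", "C", "A"),
    ("C", "A", "B"),
]

def majority_prefers(x: str, y: str) -> bool:
    """True if a strict majority of ballots rank x above y."""
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

# Pairwise majority voting yields a cycle: A beats B, B beats C, C beats A.
# "The will of the people" points in a circle.
print(majority_prefers("A", "B"), majority_prefers("B", "C"), majority_prefers("C", "A"))
# → True True True
```

Each voter is internally consistent; only the aggregate is incoherent. Arrow generalised that intuition: no aggregation rule escapes this kind of breakdown while satisfying a few modest fairness conditions.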

Now throw in actual humans. Not the Platonic paragons, but Brexit-uncle at the pub, Facebook aunt with her memes, the American cousin in a red cap insisting a convicted felon is the second coming. Suddenly, democracy looks less like a forum of reasoned debate and more like a lottery machine coughing up numbers while we all pretend they mean “the will of the people.”

And this is where the Churchill quip waddles in, cigar smoke curling round its bowler hat: “Democracy is the worst form of government, except for all the others.” Ah yes, Winston, do please save us with a quip so well-worn it’s practically elevator music. But the problem is deeper than taste in quotations. If democracy is logically impossible (Arrow) and practically dysfunctional (Trump, Brexit, fill in your own national catastrophe), then congratulating ourselves that it’s “better than the alternatives” is simply an admission that we’ve run out of imagination.

Because there are alternatives. A disinterested AI, for instance, could distribute resources with mathematical fairness, free from lobbyists and grievance-mongers. Nursery schools versus nursing homes? Feed in the data, spit out the optimal allocation. No shouting matches, no demagoguery, no ballots stuffed with slogans. But here the defenders of democracy suddenly become Derrida in disguise: “Ah, but what does fair really mean?” And just like that, we are back in the funhouse of rhetorical mirrors where “fair” is a word everyone loves until it costs them something.

So perhaps democracy doesn’t require an “educated populace” at all; that was always just sugar-paper wrapping. It requires, instead, a population sufficiently docile, sufficiently narcotised by the spectacle, to accept the carnival of elections as a substitute for politics. Which is why calling the devotees of a Trump, or any other demagogue, a gaggle of lemmings is both accurate and impolitic: they know they’re not reasoning; they’re revelling. Your contempt merely confirms the script they’ve already written for you.

Video: Short callout to Karl Popper and Hilary Lawson.

The philosopher, meanwhile, is left polishing his lantern, muttering about reason to an audience who would rather scroll memes about pedophile pizza parlours. Popper warned us that tolerance cannot survive if it tolerates its own annihilation. Arrow proved that even if everyone were perfectly reasonable, the maths would still collapse. And Churchill, bless him, left us a one-liner to make it all seem inevitable.

Perhaps democracy isn’t the worst form of government except for all the others. Perhaps it’s simply the most palatable form of chaos, ballots instead of barricades, polling booths instead of pitchforks. And maybe the real scandal isn’t that people are too stupid for democracy, but that democracy was never designed to be about intelligence in the first place. It was always about managing losers while telling them they’d “had their say.”

The Enlightenment promised us reason; what it delivered was a carnival where the loudest barker gets the booth. The rest of us can either keep muttering about paradoxes in the corner or admit that the show is a farce and start imagining something else.

HR’s Neoliberal Mirage: Human Resources Without the Humans

Let us disabuse ourselves of one of the workplace’s most cherished delusions: that Human Resources is there for the humans. HR is not your therapist, not your advocate, not your confessor. HR is an appendage of the organisation, and like all appendages, its nerve endings run straight back to the corporate brain. Its “concern” for your well-being is merely a prophylactic against lawsuits and productivity dips. The error is ours; we persist in mistaking the guard dog for a pet.

Audio: NotebookLM podcast on this topic.

Bal and Dóci’s 2018 paper in the European Journal of Work and Organizational Psychology (EJWOP) tears the mask off this charade. They demonstrate how neoliberal ideology has seeped, unseen, into both workplace practice and the very research that pretends to study it objectively. Through the lenses of political, social, and fantasmatic logics, they show that neoliberalism has convinced us of three dangerous fairy tales:

  • Instrumentality: people are not people but “resources,” as fungible as printer ink.
  • Individualism: you are not part of a collective but a lone entrepreneur of the self, shackled to your CV like a Victorian debtor.
  • Competition: you are locked in an endless cage fight with your colleagues, grinning through the blood as you “collaborate.”

These logics are then dressed up in fantasies to keep us compliant: the fantasy of freedom (“you’re free to negotiate your own zero-hours contract”), the fantasy of meritocracy (“you got that promotion because you’re brilliant, not because you went to the right school”), and the fantasy of progress (“growth is good, even if it kills you”).

Those of us with an interest in Behavioural Economics had naively hoped that the mythical homo economicus, that laughable caricature of a rational, utility-maximising automaton, would by now be filed under “anachronistic curiosities.” Yet in corporate domains, this zombie shuffles on, cosseted and cultivated by neoliberal ideology. Far from being discredited, homo economicus remains a protected species, as if the boardroom were some Jurassic Park of bad economics.

The brilliance, and the horror, is that even the academics meant to be studying work and organisations have been captured by the same ideology. Work and Organisational Psychology (WOP) too often frames employees as variables in a productivity equation, measuring “engagement” only in terms of its effect on shareholder value. The worker’s humanity is rendered invisible; the employee exists only insofar as they generate output.

So when HR offers you a mindfulness app or a “resilience workshop,” remember: these are not gifts but obligations. They are ways of making you responsible for surviving a system designed to grind you down. The neoliberal trick is to convince you that your suffering is your own fault, that if only you had been more proactive, more adaptable, more “employable,” you wouldn’t be so crushed beneath the wheel.

Bal and Dóci are right: the way forward is to re-politicise and re-humanise organisational studies, to see workers as humans rather than performance units. But until then, expect HR to keep smiling while sharpening its knives.

Ages of Consent: A Heap of Nonsense

A response on another social media site got me thinking, once again, about the Sorites paradox. The notion just bothers me. I’ve long held that it is less a paradox than an intellectually lazy way to manoeuvre around language insufficiency.

<rant>

The law loves a nice, clean number. Eighteen to vote. Sixteen to marry. This-or-that to consent. As if we all emerge from adolescence on the same morning like synchronised cicadas, suddenly equipped to choose leaders, pick spouses, and tell the bad lovers from the good ones.

But the Sorites paradox gives the game away: if you’re fit to vote at 18 years and 0 days, why not at 17 years, 364 days? Why not 17 years, 363 days? Eventually, you’re handing the ballot to a toddler who thinks the Prime Minister is Peppa Pig. Somewhere between there and adulthood, the legislator simply throws a dart and calls it “science.”
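The inductive trap can be made embarrassingly literal. A toy sketch (using 18 × 365 days and ignoring leap years) of the two premises the threshold quietly depends on:

```python
# The Sorites slide, mechanised. Two premises, each apparently harmless:
#   1. A person aged exactly 18 years (here, 18 * 365 days) is fit to vote.
#   2. Being one day younger never changes fitness.
def sorites_slide(fit_age_days: int) -> int:
    age = fit_age_days
    while age > 0:
        age -= 1  # premise 2: a single day cannot matter
    return age  # by induction, still "fit" at this age

print(sorites_slide(18 * 365))  # → 0: the toddler gets the ballot
```

The flaw, of course, is applying premise 2 transitively over a vague predicate; but that is exactly what a hard legal cutoff pretends it does not do.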

To bolster this fiction, we’re offered pseudo-facts: “Women mature faster than men”, or “Men’s brains don’t finish developing until thirty.” These claims, when taken seriously, only undermine the case for a single universal threshold. If “maturity” were truly the measure, we’d have to track neural plasticity curves, hormonal arcs, and a kaleidoscope of individual factors. Instead, the state settles for the cheapest approximation: a birthday.

This obsession with fixed thresholds is the bastard child of Enlightenment rationalism — the fantasy that human variation can be flattened into a single neat line on a chart. The eighteenth-century mind adored universals: universal reason, universal rights, universal man. In this worldview, there must be one age at which all are “ready,” just as there must be one unit of measure for a metre or a kilogram. It is tidy, legible, and above all, administratively convenient.

Cue the retorts:

  • “We need something.” True, but “something” doesn’t have to mean a cliff-edge number. We could design systems of phased rights, periodic evaluations, or contextual permissions — approaches that acknowledge people as more than interchangeable cut-outs from a brain-development chart.
  • “It would be too complicated.” Translation: “We prefer to be wrong in a simple way than right in a messy way.” Reality is messy. Pretending otherwise isn’t pragmatism; it’s intellectual cowardice. Law is supposed to contend with complexity, not avert its gaze from it.

And so we persist, reducing a continuous, irregular, and profoundly personal process to an administratively convenient fiction — then dressing it in a lab coat to feign objectivity. A number is just a number, and in this case, a particularly silly one.

</rant>

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp: creativity is either whatever humans do, or whatever humans do that AI can’t; intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument. At the risk of poking the bear, I have formulated a response.

Audio: NotebookLM podcast on this topic.

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The Causa Sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.
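That ingest–model–recombine structure can be caricatured in a few lines with a bigram Markov chain (the corpus and seed word here are invented for illustration). The structural point: every word it emits was already in its inputs.

```python
import random
from collections import defaultdict

# Ingest: a (tiny, invented) corpus of prior "influences".
corpus = "the mechanic fixed the truck and the truck carried the corn".split()

# Model: record which word follows which (a bigram Markov chain).
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# Output: recombination. Nothing here is ex nihilo; every word,
# and almost every transition, already existed in the inputs.
random.seed(0)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(model.get(word) or corpus)
    output.append(word)
print(" ".join(output))
```

A human writer’s pipeline is vastly richer, but the shape is the same: nothing comes out that didn’t, in some form, go in.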

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour, it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.

The Enlightenment: A Postmortem

Or: How the Brightest Ideas in Europe Got Us into This Bloody Mess

Disclaimer: This output is entirely ChatGPT 4o from a conversation on the failure and anachronism of Enlightenment promises. I’m trying to finish editing my next novel, so I can’t justify taking much more time to share what are ultimately my thoughts as expounded upon by generative AI. I may comment personally in future. Until then, this is what I have to share.

AI Haters, leave now or abandon all hope, ye who enter here.


The Enlightenment promised us emancipation from superstition, authority, and ignorance. What we got instead was bureaucracy, colonialism, and TED Talks. We replaced divine right with data dashboards and called it progress. And like any good inheritance, the will was contested, and most of us ended up with bugger-all.

Below, I take each Enlightenment virtue, pair it with its contemporary vice, and offer a detractor who saw through the Enlightenment’s powder-wigged charade. Because if we’re going down with this ship, we might as well point out the dry rot in the hull.


1. Rationalism

The Ideal: Reason shall lead us out of darkness.
The Reality: Reason led us straight into the gas chambers—with bureaucratic precision.

Detractor: Max Horkheimer & Theodor Adorno

“Enlightenment is totalitarian.”
Dialectic of Enlightenment (1944)

Horkheimer and Adorno saw what reason looks like when it slips off its leash. Instrumental rationality, they warned, doesn’t ask why—it only asks how efficiently. The result? A world where extermination is scheduled, costs are optimised, and ethics are politely filed under “subjective.”


2. Empiricism

The Ideal: Observation and experience will uncover truth.
The Reality: If it can’t be measured, it can’t be real. (Love? Not statistically significant.)

Detractor: Michel Foucault

“Truth isn’t outside power… truth is a thing of this world.”
Power/Knowledge (1977)

Foucault dismantled the whole edifice. Knowledge isn’t neutral; it’s an instrument of power. Empiricism becomes just another way of disciplining the body—measuring skulls, classifying deviants, and diagnosing women with “hysteria” for having opinions.


3. Individualism

The Ideal: The sovereign subject, free and self-determining.
The Reality: The atomised consumer, trapped in a feedback loop of self-optimisation.

Detractor: Jean Baudrillard

“The individual is no longer an autonomous subject but a terminal of multiple networks.”
Simulacra and Simulation (1981)

You wanted autonomy? You got algorithms. Baudrillard reminds us that the modern “individual” is a brand in search of market validation. You are free to be whoever you want, provided it fits within platform guidelines and doesn’t disrupt ad revenue.


4. Secularism

The Ideal: Liberation from superstition.
The Reality: We swapped saints for STEMlords and called it even.

Detractor: Charles Taylor

“We are now living in a spiritual wasteland.”
A Secular Age (2007)

Taylor—perhaps the most polite Canadian apocalypse-whisperer—reminds us that secularism didn’t replace religion with reason; it replaced mystery with malaise. We’re no longer awed, just “motivated.” Everything is explainable, and yet somehow nothing means anything.


5. Progress

The Ideal: History is a forward march toward utopia.
The Reality: History is a meat grinder in a lab coat.

Detractor: Walter Benjamin

“The storm irresistibly propels him into the future to which his back is turned.”
Theses on the Philosophy of History (1940)

Benjamin’s “angel of history” watches helplessly as the wreckage piles up—colonialism, genocide, climate collapse—all in the name of progress. Every step forward has a cost, but we keep marching, noses in the spreadsheet, ignoring the bodies behind us.


6. Universalism

The Ideal: One humanity, under Reason.
The Reality: Enlightenment values, brought to you by cannon fire and Christian missionaries.

Detractor: Gayatri Chakravorty Spivak

“White men are saving brown women from brown men.”
Can the Subaltern Speak? (1988)

Universalism was always a bit… French, wasn’t it? Spivak unmasks it as imperialism in drag—exporting “rights” and “freedom” to people who never asked for them, while ignoring the structural violence built into the Enlightenment’s own Enlightened societies.


7. Tolerance

The Ideal: Let a thousand opinions bloom.
The Reality: Tolerance, but only for those who don’t threaten the status quo.

Detractor: Karl Popper

“Unlimited tolerance must lead to the disappearance of tolerance.”
The Open Society and Its Enemies (1945)

Popper, bless him, thought tolerance needed a firewall. But in practice, “tolerance” has become a smug liberal virtue signalling its own superiority while deplatforming anyone who makes the dinner party uncomfortable. We tolerate all views—except the unseemly ones.


8. Scientific Method

The Ideal: Observe, hypothesise, repeat. Truth shall emerge.
The Reality: Publish or perish. Fund or flounder.

Detractor: Paul Feyerabend

“Science is not one thing, it is many things.”
Against Method (1975)

Feyerabend called the whole thing a farce. There is no single “method,” just a bureaucratic orthodoxy masquerading as objectivity. Today, science bends to industry, cherry-picks for grants, and buries null results in the backyard. Peer review? More like peer pressure.


9. Anti-Authoritarianism

The Ideal: Smash the throne! Burn the mitre!
The Reality: Bow to the data analytics team.

Detractor: Herbert Marcuse

“Free election of masters does not abolish the masters or the slaves.”
One-Dimensional Man (1964)

Marcuse skewered the liberal illusion of choice. We may vote, but we do so within a system that already wrote the script. Authority didn’t vanish; it just became procedural, faceless, algorithmic. Bureaucracy is the new monarchy—only with more forms.


10. Education and Encyclopaedism

The Ideal: All knowledge, accessible to all minds.
The Reality: Behind a paywall. Written in impenetrable prose. Moderated by white men with tenure.

Detractor: Ivan Illich

“School is the advertising agency which makes you believe that you need the society as it is.”
Deschooling Society (1971)

Illich pulls the curtain: education isn’t emancipatory; it’s indoctrinatory. The modern university produces not thinkers but credentialed employees. Encyclopaedias are replaced by Wikipedia, curated by anonymous pedants and revision wars. Truth is editable.


Postscript: Picking through the Rubble

So—has the Enlightenment failed?

Not exactly. It succeeded too literally. It was taken at its word. Its principles, once radical, were rendered banal. It’s not that reason, progress, or rights are inherently doomed—it’s that they were never as pure as advertised. They were always products of their time: male, white, bourgeois, and utterly convinced of their own benevolence.

If there’s a path forward, it’s not to restore Enlightenment values, but to interrogate them—mercilessly, with irony and eyes open.

After all, the problem was never darkness. It was the people with torches who thought they’d found the only path.

From Thesaurus to Thoughtcrime: The Slippery Slope of Authorial Purity

I had planned to write about Beauvoir’s Second Sex, but this has been on my mind lately.

There’s a certain breed of aspiring author, let’s call them the Sacred Scribes, who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.

Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”

The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by chatbot, of course.

But let’s pause and look at the logic here. Or rather, the lack thereof.

By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?

And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.

Let’s call this what it is: purity fetishism in prose form.

But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, outsource our thinking.

Sound familiar?

Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age:
“If you let ChatGPT restructure your second paragraph, you’re no longer the author.”
Nonsense. You were never the sole author. Not even close.

Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.

Here’s the rub: You’ve always been collaborating.

With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.

The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?

Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.

Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.

The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.

Of GenAI, Gatekeepers, and Moral Panic in Minor Key

I recently had a run-in with opponents of generative artificial intelligence, GenAI for the rest of us. What began as a modest question about feedback mechanisms in writing spiralled swiftly into a fire-and-brimstone sermon on the moral hazards of artificial authorship.

Audio: NotebookLM podcast on this topic.

It started on Reddit, that bastion of civil discourse, in the r/FictionWriting group. I asked, sincerely and succinctly: Is using AI as a pre-alpha reader worthwhile, or is the praise too algorithmically eager to trust?

Rather than respond to the question, the moderators responded with an ultimatum: “Admit to AI-use again and you’ll be banned.” Like any self-respecting heretic, I excommunicated myself.

Some members ranted about how AI might “steal their ideas” – presumably to be repackaged by tech barons and sold back to the masses in Kindle Unlimited drivel. That’s fine, I suppose, if you’re into intellectual solipsism, but what does this paranoid fantasy have to do with my ideas?

This wasn’t a discussion. It was a witch trial. AI wasn’t the threat – difference was. Deviate from the sacred rites of pen-to-paper purity, and you’ll be cast into the outer darkness, where there is weeping and gnashing of syntax.

The underlying problem is prescriptivism – not just linguistic, but moral. And like all moral panic, it has little to do with ethics and everything to do with control.

To borrow the analogy: as with abortion, if you don’t like them, don’t have one. Abortions, one might argue, carry significantly more moral weight than paragraph polishing. Or do they? At what point does a draft become a soul?

We are fast becoming a culture where the tool is the sin, and the sinner the tool.

Image: Exhibit A