I recently had a run-in with opponents of generative artificial intelligence, GenAI for the rest of us. What began as a modest question about feedback mechanisms in writing spiralled swiftly into a fire-and-brimstone sermon on the moral hazards of artificial authorship.
Audio: NotebookLM podcast on this topic.
It started on Reddit, that bastion of civil discourse, in the r/FictionWriting group. I asked, sincerely and succinctly: Is using AI as a pre-alpha reader worthwhile, or is the praise too algorithmically eager to trust?
Rather than answer the question, the moderators responded with an ultimatum: “Admit to AI-use again and you’ll be banned.” Like any self-respecting heretic, I excommunicated myself.
Some members ranted about how AI might “steal their ideas” – presumably to be repackaged by tech barons and sold back to the masses in Kindle Unlimited drivel. That’s fine, I suppose, if you’re into intellectual solipsism, but what does this paranoid fantasy have to do with my ideas?
This wasn’t a discussion. It was a witch trial. AI wasn’t the threat – difference was. Deviate from the sacred rites of pen-to-paper purity, and you’ll be cast into the outer darkness, where there is weeping and gnashing of syntax.
The underlying problem is prescriptivism – not just linguistic, but moral. And like all moral panics, it has little to do with ethics and everything to do with control.
To borrow the analogy: as with abortions, if you don’t like them, don’t have one. Abortions, one might argue, carry significantly more moral weight than paragraph polishing. Or do they? At what point does a draft become a soul?
We are fast becoming a culture where the tool is the sin, and the sinner the tool.
We’ve entered an era where machines tell us how we’re doing, whether it’s an AI app rating our résumé, a model reviewing our fiction, or an algorithm nudging our attention with like-shaped carrots.
Recently, I ran a brutally raw scene through a few AI platforms. The kind of scene that’s meant to unsettle, not entertain. One of them responded with effusive praise: “Devastating, but masterfully executed.”
Was it honest?
Was it useful?
Or was it merely reflecting my own aesthetic back at me, polished by a thousand reinforcement-learning smiles?
This is the ethical dilemma: If feedback is always flattering, what good is it? If criticism is only tolerated when couched in praise, how do we grow? And when machine feedback mimics the politeness of a mid-level manager with performance anxiety, we risk confusing validation with truth.
There’s a difference between signal and applause. Between understanding and affirmation.
The danger isn’t that AI flatters us. The danger is that we start to believe it and forget that art, inquiry, and ethics thrive on friction.
As some of you know, I publish speculative fiction under the name Ridley Park. Propensity is one of several recent releases – a novella that leans philosophical, brushes up against literary fiction, and steps quietly into the margins of sci-fi.
It’s not about spaceships or superintelligence. It’s about modulation.
About peace engineered through neurochemical compliance.
About the slow horror of obedience without belief, and the behavioural architecture that lets us think we’re still in control.
The ideas explored include:
Free will as illusion
Peace as compliance
Drift, echo, and the limits of modulation
Obedience without belief
Institutional horror and soft dystopia
Consent and behavioural control
Narrative as residue
Collapse by calibration
Though filed under speculative fiction, Propensity [US] is best read as a literary artefact – anti-sci-fi, in a sense. There’s no fetishisation of technology or progress. Just modulation, consequence, and the absence of noise.
This PDF contains selected visual excerpts from the physical book to accompany the free audiobook edition. For readers and listeners alike, it offers a glimpse into Ridley Park’s world – a quietly dystopian, clinically unsettling, and depressingly plausible one.
Title page
Copyright page
Table of Contents
Chapter 10: Memorandum. This chapter is read in the audiobook and is included here to show how it is rendered as a memo.
Chapter 26: Simulacra. This chapter is read in the audiobook and is included here to show how it is rendered as a screenplay.
Chapter 28: Standard Test. This chapter is read in the audiobook and is included here to show how it is rendered as a standardised test.
Chapter 34: Calendar. This chapter is read in the audiobook and is included here to show how it is rendered as a calendar.
Chapter 39: Carnage. This chapter is read in the audiobook and is included here to show how it is rendered as a Dr Seuss-style poem.
Chapter 41: Leviathan. This chapter is excerpted in the audiobook and is included here to show how it is rendered: an image of the cover of Hobbes’ Leviathan alongside redacted page content.
Chapter 42: Ashes to Ashes. This chapter is read in the audiobook and is included here to show how it is rendered as text art.
Chapter 43: Unknown. A description of this chapter is read in the audiobook; it is included here to show how it is rendered as an ink sketch.
Chapter 44: Vestige. A description of this chapter is read in the audiobook; it is included here to show how it is rendered as text art.
For more information about Ridley Park’s Propensity, visit the website. I’ll be sharing content related to Propensity and my other publications. I’ll cross-post here when the material has a philosophical bent, which it almost always does.
Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.
Audio: NotebookLM podcast on this topic.
Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.
The Great Anthropomorphic Flaw (aka Feature)
Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.
“We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre.”
Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?
Why We Keep Doing It (Despite Knowing Better)
To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.
This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.
And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.
When the Roleplay Gets Risky
Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.
“We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us.”
More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.
And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.
Our Language is a Terrible Tool for This Job
English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.
Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.
“We built a creature we can’t speak about without sounding like lunatics or liars.”
This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.
The Semantics of Sentimentality (Or: “How Does This Sound to You?”)
Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.
They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.
But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.
Time to Grow Up (Just a Bit)
This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.
Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.
And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.
“If we speak to AI like it’s sentient, we’ll eventually legislate as if it is.”
Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.
The Mirror, Darkly
Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.
The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.
Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.
Audio: NotebookLM podcast discussing this content.
Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?
Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.
But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.
So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?
Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.
The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.
That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.