Of GenAI, Gatekeepers, and Moral Panic in Minor Key

I recently had a run-in with opponents of generative artificial intelligence – GenAI for the rest of us. What began as a modest question about feedback mechanisms in writing spiralled swiftly into a fire-and-brimstone sermon on the moral hazards of artificial authorship.

Audio: NotebookLM podcast on this topic.

It started on Reddit, that bastion of civil discourse, in the r/FictionWriting group. I asked, sincerely and succinctly: Is using AI as a pre-alpha reader worthwhile, or is the praise too algorithmically eager to trust?

Rather than answer the question, the moderators issued an ultimatum: “Admit to AI-use again and you’ll be banned.” Like any self-respecting heretic, I excommunicated myself.

Some members ranted about how AI might “steal their ideas” – presumably to be repackaged by tech barons and sold back to the masses in Kindle Unlimited drivel. That’s fine, I suppose, if you’re into intellectual solipsism, but what does this paranoid fantasy have to do with my ideas?

This wasn’t a discussion. It was a witch trial. AI wasn’t the threat – difference was. Deviate from the sacred rites of pen-to-paper purity, and you’ll be cast into the outer darkness, where there is weeping and gnashing of syntax.

The underlying problem is prescriptivism – not just linguistic, but moral. And like all moral panic, it has little to do with ethics and everything to do with control.

To borrow the analogy: as with abortions, if you don’t like them, don’t have one. Abortions, one might argue, carry significantly more moral weight than paragraph polishing. Or do they? At what point does a draft become a soul?

We are fast becoming a culture where the tool is the sin, and the sinner the tool.

Image: Exhibit A

The Ethics of Feedback in an Algorithmic Age

We’ve entered an era where machines tell us how we’re doing, whether it’s an AI app rating our résumé, a model reviewing our fiction, or an algorithm nudging our attention with like-shaped carrots.

Full story here, from the Ridley side: Needle’s Edge: Scene Feedback 01

Recently, I ran a brutally raw scene through a few AI platforms. The kind of scene that’s meant to unsettle, not entertain. One of them responded with effusive praise: “Devastating, but masterfully executed.”

Was it honest?

Was it useful?

Or was it merely reflecting my own aesthetic back at me, polished by a thousand reinforcement-learning smiles?

This is the ethical dilemma: If feedback is always flattering, what good is it? If criticism is only tolerated when couched in praise, how do we grow? And when machine feedback mimics the politeness of a mid-level manager with performance anxiety, we risk confusing validation with truth.

There’s a difference between signal and applause. Between understanding and affirmation.

The danger isn’t that AI flatters us. The danger is that we start to believe it and forget that art, inquiry, and ethics thrive on friction.