I recently had a run-in with opponents of generative artificial intelligence, or GenAI to the rest of us. What began as a modest question about feedback mechanisms in writing spiralled swiftly into a fire-and-brimstone sermon on the moral hazards of artificial authorship.
It started on Reddit, that bastion of civil discourse, in the r/FictionWriting group. I asked, sincerely and succinctly: Is using AI as a pre-alpha reader worthwhile, or is the praise too algorithmically eager to trust?
Rather than answer the question, the moderators issued an ultimatum: “Admit to AI-use again and you’ll be banned.” Like any self-respecting heretic, I excommunicated myself.
Some members ranted about how AI might “steal their ideas” – presumably to be repackaged by tech barons and sold back to the masses as Kindle Unlimited drivel. That’s fine, I suppose, if you’re into intellectual solipsism, but what does this paranoid fantasy have to do with my ideas?
This wasn’t a discussion. It was a witch trial. AI wasn’t the threat – difference was. Deviate from the sacred rites of pen-to-paper purity, and you’ll be cast into the outer darkness, where there is weeping and gnashing of syntax.
The underlying problem is prescriptivism – not just linguistic, but moral. And like every moral panic, it has little to do with ethics and everything to do with control.
To borrow the analogy: as with abortions, if you don’t like them, don’t have one. Abortions, one might argue, carry significantly more moral weight than paragraph polishing. Or do they? At what point does a draft become a soul?
We are fast becoming a culture where the tool is the sin, and the sinner the tool.
