Enough, Anough, and the Archaeology of Small Mistakes

I have acquired a minor but persistent defect. When I try to type enough, my fingers often produce anough. Not always. Often enough to notice. Enough to be, regrettably, anough.

This is not a simple typo. The e and a keys are not conspirators with shared borders. This is not owned → pwned, where adjacency and gamer muscle memory do the heavy lifting. This is something more embarrassing and more interesting: a quasi-phonetic leak. A schwa forcing its way into print without permission. A clue to how I pronounce the word – like Depeche Mode’s Just Can’t Get Enough.

Audio: NotebookLM summary podcast of this topic.

Internally, the word arrives as something like ənuf, /əˈnʌf/. English, however, offers no schwa key. So the system improvises. It grabs the nearest vowel that feels acoustically honest and hopes orthography won’t notice. Anough slips through. Language looks the other way.

Image: Archaeology of anough
Video: Depeche Mode: Just Can’t Get Enough

Is this revelatory?

Not in the heroic sense. No breakthroughs, no flashing lights. But it is instructive in the way cracked pottery is instructive. You don’t learn anything new about ceramics, but you learn a great deal about how the thing was used.

This is exactly how historians and historical linguists treat misspellings in diaries, letters, and court records. They don’t dismiss them as noise. They mine them. Spelling errors are treated as phonetic fossils, moments where the discipline of standardisation faltered, and speech bled through. Before spelling became prescriptive, it was descriptive. People wrote how words sounded to them, not how an academy later insisted they ought to look.

That’s how vowel shifts are reconstructed. That’s how accents are approximated. That’s how entire sound systems are inferred from what appear, superficially, to be mistakes. The inconsistency is the data. The slippage is the signal.

Anough belongs to this lineage. It’s a microscopic reenactment of pre-standardised writing, occurring inside a modern, over-educated skull with autocorrect turned off. For a brief moment, sound outranks convention. Orthography lags. Then the editor arrives, appalled, to tidy things up.

What matters here is sequence. Meaning is not consulted first. Spelling rules are not consulted first. Sound gets there early, locks the door, and files the paperwork later. Conscious intention, as usual, shows up after the event and claims authorship. That’s why these slips are interesting and why polished language is often less so. Clean prose has already been censored. Typos haven’t. They show the routing. They reveal what cognition does before it pretends to be in charge.

None of this licenses forensic grandstanding. We cannot reconstruct personalities, intentions, or childhood trauma from rogue vowels. Anyone suggesting otherwise is repackaging graphology with better fonts. But as weak traces, as evidence that thought passes through sound before it passes through rules, they’re perfectly serviceable.

Language doesn’t just record history. It betrays it. Quietly. Repeatedly. In diaries, in marginalia, and occasionally, when you’re tired and trying to say you’ve had enough. Or anough.

I’ll spare you a rant on ghoti.

Of GenAI, Gatekeepers, and Moral Panic in a Minor Key

I recently had a run-in with opponents of generative artificial intelligence, GenAI for the rest of us. What began as a modest question about feedback mechanisms in writing spiralled swiftly into a fire-and-brimstone sermon on the moral hazards of artificial authorship.

Audio: NotebookLM podcast on this topic.

It started on Reddit, that bastion of civil discourse, in the r/FictionWriting group. I asked, sincerely and succinctly: Is using AI as a pre-alpha reader worthwhile, or is the praise too algorithmically eager to trust?

Rather than respond to the question, the moderators responded with an ultimatum: “Admit to AI-use again and you’ll be banned.” Like any self-respecting heretic, I excommunicated myself.

Some members ranted about how AI might “steal their ideas” – presumably to be repackaged by tech barons and sold back to the masses in Kindle Unlimited drivel. That’s fine, I suppose, if you’re into intellectual solipsism, but what does this paranoid fantasy have to do with my ideas?

This wasn’t a discussion. It was a witch trial. AI wasn’t the threat – difference was. Deviate from the sacred rites of pen-to-paper purity, and you’ll be cast into the outer darkness, where there is weeping and gnashing of syntax.

The underlying problem is prescriptivism – not just linguistic, but moral. And like all moral panic, it has little to do with ethics and everything to do with control.

To borrow the analogy: as with abortions, if you don’t like them, don’t have one. Abortions, one might argue, carry significantly more moral weight than paragraph polishing. Or do they? At what point does a draft become a soul?

We are fast becoming a culture where the tool is the sin, and the sinner the tool.

Image: Exhibit A

The Ethics of Feedback in an Algorithmic Age

We’ve entered an era where machines tell us how we’re doing, whether it’s an AI app rating our résumé, a model reviewing our fiction, or an algorithm nudging our attention with like-shaped carrots.

Full story here, from the Ridley side: Needle’s Edge: Scene Feedback 01

Recently, I ran a brutally raw scene through a few AI platforms. The kind of scene that’s meant to unsettle, not entertain. One of them responded with effusive praise: “Devastating, but masterfully executed.”

Was it honest?

Was it useful?

Or was it merely reflecting my own aesthetic back at me, polished by a thousand reinforcement-learning smiles?

This is the ethical dilemma: If feedback is always flattering, what good is it? If criticism is only tolerated when couched in praise, how do we grow? And when machine feedback mimics the politeness of a mid-level manager with performance anxiety, we risk confusing validation with truth.

There’s a difference between signal and applause. Between understanding and affirmation.

The danger isn’t that AI flatters us. The danger is that we start to believe it and forget that art, inquiry, and ethics thrive on friction.