In fact, I’ve been involved with ‘artificial intelligence’ since about 1990, when I developed Wave 3 AI — expert systems. Wave 4 is the current incarnation. Still no ‘intelligence’ to speak of, but marketers and hypsters love the term. Perhaps in Wave 5, the name will finally be correct.
Aside from my historical connexion, I want to share how I am using AI in my writing — in this case, ChatGPT 5.1. I’m not going to give much backstory on the setup, but I’ll point out some internal process logic.
Audio: NotebookLM podcast on this topic.
I have completed the manuscript for a Language Insufficiency Hypothesis, so I have been sharing screenshots of each page — usually a spread — and using the GPT as a second set of eyes. I’ll feed it an image and a request, in this case, to find key terms so I can capitalise and italicise them appropriately. In this example, this is the ending paragraph of Chapter 6.
Image 1: Sample chapter copy. In good order.
This first screenshot is an example of output. As is evident, it was looking, among other things, for the capitalisation of the concepts of Presumption Gap and Effectiveness Horizon.
Image 2: Sample GPT output — bad iconography
Notice the iconographic language is a bit off. The red X is a bit out of sync with the rest of the message, which says the entry is already correct. So, two instances; no problems. Next.
In this message, I warned that it was OCRing the screenshots but not retaining the formatting, which is the reason I was sharing images rather than text.
Image 3: Sample GPT output — OCR confusion
What’s interesting is that it informed me that it would now treat the image as canonical. In Image 3 (above), it’s engaging in introspection — or at least self-dialogue. This is evidence that it (1) reviewed the results of the OCR, (2) reviewed the image (as an image), and (3) compared the two to arrive at the conclusion that the OCR had indeed dropped the formatting.
It wasn’t enough to inform me that everything was ok or, better still, not to bother me with noise since it was already in good order. Instead, it’s like an autist talking to itself. It reminds me of Raymond in Rain Man.
Image 4 (next) is the last example. Here, the OCR garbles Horizon as HĎrizon, and the GPT then points out that I should avoid making the same mistake of reading o as Ď.
Image 4: Sample GPT output — OCR corruption
Thanks for the advice. I was losing sleep worrying about this possibility.
Conclusion
This is obviously a late-stage use case. I use GPT for ideation and research. Perhaps I’ll share an example of this later. I might be able to review my earlier notes for this project, but it was started years before the latest Wave arrived.
It seems my latest rant about AI-authorship accusations stirred something in me: the feeling that I need to apologise for being a professional writer — or is that a writing professional? Blame the Enlightenment, blame writing and communication courses, whatevs. I certainly do. But since some people are still waving the pitchforks, insisting that anything too coherent must be artificially tainted, I should address the obvious point everyone keeps missing:
The writing structures people attribute to AI aren’t AI inventions. They’re human inventions. Old ones. Codified ones. And we made the machines copy them. Sure, they have a certain cadence. It’s the cadence you’d have if you, too, followed the patterns you should have been taught in school, or opened a book or two on the topic. I may have read one or two over the years.
Wait for it… The orthodoxy is ours. I hate to be the one to break it to you.
Video: AI Robot Assistant (no audio)
Professional Writing Has Its Own House Rules (And They’re Older Than AI Neural Nets)
I wrote earlier how even talking about AI is censored on Reddit.
Audio: NotebookLM podcast on this topic and the last one.
Long before AI arrived to ruin civilisation and steal everyone’s quiz-night jobs, we’d already built an entire culture around ‘proper writing’. The sort of writing that would make a communications lecturer beam with pride. The Sith may come in twos; good writing comes in threes.
Tell them what youâre going to say.
Say it.
Repeat what you told them.
But wait, there’s more:
Use linear flow, not intellectual jazz.
One idea per paragraph, please.
Support it with sources.
Conclude like a responsible adult.
These aren’t merely classroom antics. They’re the architectural grammar of academic, corporate, scientific, and policy writing. No poetic flourishes. No existential detours. No whimsical cadence. The aim is clarity, predictability, and minimal risk of misinterpretation. It’s the textual equivalent of wearing sensible shoes to a board meeting. So when someone reads a structured piece of prose and yelps, ‘It sounds like AI!’, what they’re really saying is:
It sounds like someone who was properly trained to write in a professional context.
Je m’accuse. AI Didn’t Invent Structure. We Forced It To Learn Ours. Full stop. The problem is that it did whilst most of us didn’t.
If AI tends toward this style — linear, tidy, methodical, lamentably sane — that’s because we fed it millions of examples of ‘proper writing’. It behaves professionally because we trained it on professional behaviour — surprisingly tautological. Quelle surprise, eh?
Just as you don’t blame a mimeograph for producing a perfectly dull office memo, you don’t blame AI for sounding like every competent academic who’s been beaten with the stick of ‘clarity and cohesion’. It’s imitation through ingestion. It’s mimicry through mass exposure.
And Now for the Twist: My Fiction Has None of These Constraints
My fiction roams freely. It spirals, loops, dissolves, contradicts, broods, and wanders through margins where structured writing fears to tread. It chases affect, not clarity. Rhythm, not rubrics. Experience, not exegesis.
No one wants to read an essay that sounds like Dr Seuss, but equally, no one wants a novel that reads like the bylaws of a pension committee.
Different aims, different freedoms: Academic and professional writing must behave itself. Fiction absolutely should not.
This isn’t a value judgement. One isn’t ‘truer’ or ‘better’ than the other — only different tools for different jobs. One informs; the other evokes. One communicates; the other murmurs and unsettles.
Not to come off like Dr Phil (or Dr Seuss), but the accusation itself reveals the real anxiety. When someone accuses a writer of sounding ‘AI-like’, what they usually mean is:
‘Your writing follows the conventions we taught you to follow — but now those conventions feel suspect because a machine can mimic them’.
And that’s not a critique of the writing. It’s a critique of the culture around writing — a panic that the mechanical parts of our craft are now automated and thus somehow ‘impure’.
But structure is not impurity. Professional clarity is not soullessness. Repetition, sequencing, scaffolding — these aren’t telltale signs of AI; they’re the residue of centuries of human pedagogy.
AI mirrors the system. It didn’t create the system. And if the system’s beginning to look uncanny in the mirror, that’s a problem of the system, not the reflection.
In Short: The Craft Is Still the Craft, Whether Human or Machine
Professional writing has rules because it needs them. Fiction abandons them because it can. AI imitates whichever domain you place in front of it.
The accusation that structured writing ‘sounds artificial’ is merely a confusion between form and origin. The form is ours. The origin is irrelevant.
If clarity is now considered suspicious, I fear for the state of discourse. But then again, I’ve feared for that for some time.
And apparently, I’ve still got more to say on the matter.