Accusations of Writing Whilst Artificial

Accusations that writing is AI-generated are becoming more common – an irony so rich it could fund Silicon Valley for another decade. We’ve built machines to detect machines imitating us, and then we congratulate ourselves when they accuse us of being them. It’s biblical in its stupidity.

A year ago, I read an earnest little piece on ‘how to spot AI writing’. The tells? Proper grammar. Logical flow. Parallel structure. Essentially, competence. Imagine that – clarity and coherence as evidence of inhumanity. We’ve spent centuries telling students to write clearly, and now, having finally produced something that does, we call it suspicious.

Audio: NotebookLM podcast on this topic and the next one.

My own prose was recently tried and convicted by Reddit’s self-appointed literati. The charge? Too well-written, apparently. Reddit – where typos go to breed. I pop back there occasionally, against my better judgment, to find the same tribunal of keyboard Calvinists patrolling the comment fields, shouting ‘AI!’ at anything that doesn’t sound like it was composed mid-seizure. The irony, of course, is that most of them wouldn’t recognise good writing unless it came with upvotes attached.

Image: A newspaper entry that may have been generated by an AI with the surname Kahn. 🧐🤣

Now, I’ll admit: my sentences do have a certain mechanical precision. Too many em dashes, too much syntactic symmetry. But that’s not ‘AI’. That’s simply craft. Machines learned from us. They imitate our best habits because we can’t be bothered to keep them ourselves. And yet, here we are, chasing ghosts of our own creation, declaring our children inhuman.

Apparently, there are more diagnostic signs. Incorporating an Alt-26 arrow to represent progress is a telltale infraction → like this. No human, they say, would choose to illustrate A → B that way. Instead, one is faulted for remembering – or at least understanding – that Alt-key combinations exist to reveal a fuller array of options: …, ™, and so on. I was using these symbols long before AI Wave 4 hit the shore.
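
For the pedants keeping score, here’s a quick sketch (Python, purely illustrative) of the contested symbols and their Unicode code points. On a US-English Windows setup, Alt+26 reaches back to the legacy CP437 table, while the zero-prefixed Alt codes map through Windows-1252:

```python
# The contested symbols and their Unicode code points.
# Alt+26 (no leading zero) renders the right arrow via legacy CP437;
# the zero-prefixed Alt codes below map through Windows-1252.
symbols = [
    ("right arrow", "\u2192"),   # Alt+26
    ("ellipsis", "\u2026"),      # Alt+0133
    ("trademark", "\u2122"),     # Alt+0153
    ("en dash", "\u2013"),       # Alt+0150
    ("em dash", "\u2014"),       # Alt+0151
]
for name, ch in symbols:
    print(f"U+{ord(ch):04X}  {ch}  {name}")
```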

Interestingly, I prefer spaced en dashes to em dashes in most cases. The em dash is an Americanism I prefer not to adopt, but it does reveal the American bias in the training data. I can consciously adopt a European spin; AI, lacking intent, finds this harder to remember.

I used to use em dashes freely, but now I almost avoid them—if only to sidestep the mass hysteria. Perhaps I’ll start using AI to randomly misspell words and wreck my own grammar. Or maybe I’ll ask it to output everything in AAVE, or some unholy creole of Contemporary English and Chaucer, and call it a stylistic choice. (For the record, the em dashes in this paragraph were injected by the wee-AI gods and left as a badge of shame.)

Meanwhile, I spend half my time wrestling with smaller, dumber AIs – the grammar-checkers and predictive text gremlins who think they know tone but have never felt one. They twitch at ellipses, squirm at irony, and whimper at rhetorical emphasis. They are the hall monitors of prose, the petty bureaucrats of language.

And the final absurdity? These same half-witted algorithms are the ones deputised to decide whether my writing is too good to be human.

The Prison of Process

This is the proof copy of The Illusion of Light. I reviewed it, approved it, and signalled ‘good to go’. It’s being printed and distributed through KDP. I’ve used them before. They’ve been reliable.

Until now.

EDIT: On the upside, I’ve been notified that the hardback version is available – though not, it seems, in France or Canada, two target regions. Hopefully, it becomes available outside of the U.S. soon.

My approval triggered a workflow. I know workflows. I used to design them. I also know how dumb they can be.

KDP’s process flagged an error: the text on the spine might not be on the spine. ‘Might’. Theoretically. It could be offset, cut off, or printed on a fold. I understand their reasoning – high-speed printers, mechanical variance, and return risk. I also understand statistics, and a single observation doesn’t make a trend. But anyone with eyes can see at least a couple of millimetres of clearance at the top and bottom. This isn’t a case of ‘maybe’. It’s fine.
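
To make the absurdity concrete, here’s a minimal sketch of the sort of clearance check presumably at work. The dimensions and threshold are my assumptions for illustration – KDP doesn’t publish its validator’s internals:

```python
# Hypothetical spine-clearance check. All figures are assumptions
# for illustration; they are not KDP's published tolerances.
SPINE_HEIGHT_MM = 234.0   # assumed trim height of the book
MIN_CLEARANCE_MM = 1.6    # assumed safety margin per edge

def spine_text_fits(text_length_mm: float) -> bool:
    """True if the spine text leaves at least the minimum
    clearance at both the top and bottom of the spine."""
    clearance = (SPINE_HEIGHT_MM - text_length_mm) / 2
    return clearance >= MIN_CLEARANCE_MM

# A couple of millimetres of clearance per edge passes the check,
# provided the threshold was set by someone with eyes.
print(spine_text_fits(229.0))  # True: 2.5 mm clearance each side
```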

What fascinates me here is the ritual of compliance. Once a process is codified, it becomes self-justifying. The rule exists; therefore, it must be obeyed. There is no appeal to reason – only to the flowchart.

In the 1980s, when I was an audio engineer recording to two-inch magnetic tape, some of us liked to record hot, pushing the levels just past the recommended limits. You learned to ride the edge, to court distortion without collapse. That’s how I designed the spine text. Within tolerance. With headroom.

The problem is that modern systems don’t tolerate edges. There’s no ‘override’ button for informed judgment. My remediation path is to shrink the type by half a point, resubmit, and pretend the machine was right.

What’s absurd is the timing. The same system that generated the proof approved this layout days ago. An automated OCR scan could have caught this phantom error earlier. Instead, the machine waits until the human signs off, then throws a flag so the process can justify its existence.

KDP is still faster and saner than IngramSpark. But this is capitalism distilled: survival by being marginally less incompetent than your competitor. Optimisation, not in the sense of ‘best possible’, but of ‘barely better than worst acceptable’.

The lesson, as always, is that processes begin as aids and end as prisons. The workflow, like the Enlightenment, believes itself rational. But the longer it runs, the less it serves the human at the console and the more it worships its own perfection.

Want to talk about meta? This underscores the contents of the book itself. What the Enlightenment once called Reason, modernity now calls Process. Both pretend to neutral objectivity while enshrining obedience as virtue. The bureaucracy of light has become digital – its catechism written in checkboxes, its priests replaced by automated validators. Every workflow promises fairness; each only codifies submission. The real danger isn’t that machines will replace judgment, but that we will stop noticing when they already have.


The Story Continues: Behind the Scenes

Image: Screenshot of Illustrator layout

I’ve reduced the font size on the spine from 14 points to 13.5. It still technically bleeds over a guideline. I hope I’m not forced to reduce it to 13. The point of text on a spine is to be visible. Hopefully, the black-and-white vertical separation will help in this regard. Fingers crossed.

Will the Singularity Be Anticlimactic?

Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.

Audio: NotebookLM podcast discussing this content.

Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?

Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
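
A back-of-the-envelope sketch, assuming the conventional normalisation of IQ to a mean of 100 with a standard deviation of 15, puts a mid-90s average noticeably below the global median:

```python
from statistics import NormalDist

# Conventional IQ scaling: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Where does a mid-90s average fall on the global curve?
print(f"IQ 95 sits at roughly the {iq.cdf(95):.0%} mark")  # ~37%
```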

Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.

But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.

So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?

Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.

The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.
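
For scale, under the same mean-100, SD-15 normalisation (a sketch, not an endorsement of the heuristic), that band translates roughly to IQ 108 through 125:

```python
from statistics import NormalDist

# Conventional IQ scaling: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Translate the percentile band into IQ scores.
for pct in (0.70, 0.95):
    print(f"{pct:.0%} percentile -> IQ {iq.inv_cdf(pct):.0f}")
# 70% percentile -> IQ 108
# 95% percentile -> IQ 125
```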

That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.

What’s Wrong with Anarcho-Syndicalism?

What’s an anarcho-syndicalist supposed to do with the advent of artificial intelligence, process automation, and robots?

Wikipedia describes anarcho-syndicalism as follows:

Anarcho-syndicalism (also referred to as revolutionary syndicalism)[1] is a theory of anarchism that views revolutionary industrial unionism or syndicalism as a method for workers in capitalist society to gain control of an economy and thus control influence in broader society. Syndicalists consider their economic theories a strategy for facilitating worker self-activity and as an alternative co-operative economic system with democratic values and production centered on meeting human needs.

The basic principles of anarcho-syndicalism are solidarity, direct action (action undertaken without the intervention of third parties such as politicians, bureaucrats and arbitrators) and direct democracy, or workers’ self-management. The end goal of syndicalism is to abolish the wage system, regarding it as wage slavery. Anarcho-syndicalist theory therefore generally focuses on the labour movement.[2]

Anarcho-syndicalists view the primary purpose of the state as being the defense of private property, and therefore of economic, social and political privilege, denying most of its citizens the ability to enjoy material independence and the social autonomy that springs from it.[3] Reflecting the anarchist philosophy from which it draws its primary inspiration, anarcho-syndicalism is centred on the idea that power corrupts and that any hierarchy that cannot be ethically justified must either be dismantled or replaced by decentralized egalitarian control.[3]

As a matter of preference, I’ve leaned toward anarcho-syndicalism. I don’t have a lot of faith in humans or humanity to govern or self-govern. The alternatives – monarchies, democracies, plutocracies, even anarchies – are each rife with their own sets of problems. Still, I favour a system in which there is no class of governors, though I am more a fan of Proudhon than of Marx.

Mind you, I don’t think humans make very good judgements; we are as bad in groups as we are as individuals, but for different reasons – especially where complexity or too many choices are involved. That we’ve survived this long is, quite frankly, a miracle.

That said, this isn’t my quarrel. My contention is with the syndicalist aspect. If the displacement of humans as workers occurs as some are forecasting, there will be precious few workers left. I am not saying that this is inevitable or will ever happen; my concern is merely conditional. But if it were to happen, the idea of a worker-centric system would be daft.

Do we just defer to people categorically and arrive at simple anarchism? Without delving too deeply, there are different flavours of anarchism, and I have neither the time nor the inclination to debate, for example, the merits of anarcho-capitalism (an oxymoron if ever there was one) versus anarcho-communism – or anarcho-transhumanism, for that matter.

Although I like how Kant identified four kinds of government…

  • Law and freedom without force (anarchy)
  • Law and force without freedom (despotism)
  • Force without freedom and law (barbarism)
  • Force with freedom and law (republic)

…the whole notion of freedom is another weasel word, and laws without force are unenforceable – pun intended. At least the syndicalism felt intentional, purposeful. I understand why Plato despised the rabble, but as with the sorites paradox and its heap–hill distinction, at what point do the rabble distil into something meaningful?