100% Human Content


Perhaps not 100%, because I’ve just spent hours chatting with several LLMs, complaining about the spate of purported AI detectors that tell me ‘this content shows a high similarity to AI-generated content’ or some such.

Audio: NotebookLM summary podcast of this topic.

If I wasn’t already familiar with the AI tells, I certainly am now, as Claude reluctantly shared this:


Lexical tics. The ones you’ve already flagged are real: “delve,” “robust,” “nuanced,” “multifaceted,” “landscape” (used metaphorically for any domain whatsoever), “tapestry,” “leverage” (as verb), “facilitate,” “underscores.” There’s also a tier of slightly subtler ones — “notably,” “crucially,” “it’s worth noting that,” “this is particularly important because” — which function as false-emphasis markers, lending rhetorical weight where the content doesn’t actually earn it.

Structural patterns. This is where it gets more diagnostic than any single word choice. LLM prose tends toward a very particular architecture: a thesis-restating opening sentence, three to five parallel elaboration points (often with suspiciously balanced treatment), and a synthesising conclusion that circles back to the opening. The ratio of setup-to-substance is high. Everything gets contextualised before it gets said. There’s a compulsive both-sides-ism, a hedging reflex — “while X is certainly true, it’s also important to consider Y” — that reads less like genuine intellectual caution and more like a conflict-avoidance subroutine.

Tonal tells. LLM text tends to be relentlessly encouraging. There’s an affective flatness masquerading as warmth — everything is “great question,” everything “raises important considerations.” The prose never gets genuinely irritable, confused, bored, or reckless. It maintains a kind of supervisory friendliness, the register of a middle manager who’s been on a coaching course. Relatedly, there’s almost never a genuinely surprising analogy or a metaphor that risks falling flat — the figurative language is always safe, always legible, always serviceable.

Punctuation and rhythm. The em-dash point is real but often overstated. More telling is sentence-length uniformity — LLM prose tends to hover in a narrow band of 15–25 words per sentence, rarely producing either the genuinely terse or the genuinely complex-subordinated. Paragraph length is also strikingly regular. The prose breathes at metronomic intervals.

The absence of mess. Perhaps the deepest tell. Human writing — even polished human writing — contains asymmetries: a paragraph that runs long because the writer got interested, a point that gets dropped without resolution, a transition that’s slightly rough, a joke that doesn’t quite land but stays in anyway. LLM prose is clean in a way that signals not craft but indifference — nothing is there because the writer cared about it, so nothing is disproportionate.

Add to this grammar checkers attempting to drive writing back to middle-school formats – not to mention competing grammar checkers hypercorrecting each other. The struggle is real.
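Several of these tells are mechanically checkable, which is presumably roughly how the detectors operate. Here is a minimal sketch in Python, assuming plain-text input and a hypothetical tell-word list lifted from the quote above – a toy heuristic, not how any real detector works:

```python
import re
import statistics

# Hypothetical tell-word list, drawn from the lexical tics quoted above.
TELL_WORDS = {"delve", "robust", "nuanced", "multifaceted", "landscape",
              "tapestry", "leverage", "facilitate", "underscores"}

def stylometry(text: str) -> dict:
    """Crude heuristics: tell-word rate and sentence-length uniformity."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        # Fraction of words that are known tells.
        "tell_rate": sum(w in TELL_WORDS for w in words) / max(len(words), 1),
        "mean_len": statistics.mean(lengths) if lengths else 0,
        # A low standard deviation is the 'metronomic rhythm' tell:
        # every sentence hovering in the same narrow length band.
        "len_stdev": statistics.stdev(lengths) if len(lengths) > 1 else 0.0,
    }

sample = ("It is worth noting that this robust, nuanced landscape "
          "underscores a multifaceted tapestry. We must delve deeper. "
          "Leveraging synergies will facilitate outcomes.")
print(stylometry(sample))
```

On prose like the sample, a high tell_rate flags the lexical tics, while a low len_stdev would flag the metronomic rhythm; genuinely messy human writing tends to shift both numbers. It also shows how blunt these instruments are – which is rather the point.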

NotebookLM Infographic on this topic.

Of course, one solution might be simply to review the content and edit out these tells. Almost interestingly, I use Autocrit (though not often, because it is too orthodox), but it does serve somewhat as an anti-AI agent, assessing rhythm and pace alongside the usual copyediting functions.

I prefer to draft in a stream-of-consciousness style. The AI organises my messes, so if I asked it to, it would remove or relocate my functional parenthetical about Autocrit. But I’m leaving it just to prove I’m human. Or did I add it to an AI-scripted piece? 🧐

I considered that I could either overdo the AI or join the 54 per cent of Americans who write at a third-grade level. Grok suggested something even more sinister – Friggin Musk. It suggested that I double down on the AI likeness and turn my content into an AI parody factory – overpopulating it with em-dashes, delving, and tapestry. Evidently, Carole King was AI before Suno.

In any case – and AI might suggest moving this to the top – the problem is that I now have an additional layer that interrupts my flow and process. It’s disconcerting, and I resent it. My psyche is disturbed to appease witch-hunters. And it’s bollox.

The question is whether to succumb to the moral suasion or ignore the moral posturing.


This post contains no sugar, salt, fat, carbohydrates, protein, or fibre. No animals were harmed in the production of this blog. All proceeds will be donated to the Unicorn Recovery Foundation.

AI and the End of Where

Instrumentalism is a Modern™ disease. Humanity has an old and tedious habit: to define its worth by exclusion. Every time a new kind of intelligence appears on the horizon, humans redraw the borders of ‘what counts’. It’s a reflex of insecurity disguised as philosophy.

Audio: NotebookLM podcast on this topic.

Once upon a time, only the noble could think. Then only men. Then only white men. Then only the educated, the rational, the ‘Modern’. Each step in the hierarchy required a scapegoat, someone or something conveniently declared less. When animals began to resemble us too closely, we demoted them to instinctual machines. Descartes himself, that patron saint of disembodied reason, argued that animals don’t feel pain, only ‘react’. Fish, we were told until recently, are insensate morsels with gills. We believed this because empathy complicates consumption.

The story repeats. When animals learned to look sad, we said they couldn’t really feel. When women demonstrated reason, we said they couldn’t truly think. Now that AI can reason faster than any of us and mimic empathy more convincingly than our politicians, we retreat to the last metaphysical trench: “But it doesn’t feel.” We feel so small that we must inflate ourselves for comparison.

This same hierarchy now governs our relationship with AI. When we say the machine ‘only does’, we mean it hasn’t yet trespassed into our sanctified zone of consciousness. We cling to thought and feeling as luxury goods, the last possessions distinguishing us from the tools we built. It’s a moral economy as much as an ontological one: consciousness as property.

But the moment AI begins to simulate that property convincingly, panic sets in. The fear isn’t that AI will destroy us; it’s that it will outperform us at being us. Our existential nightmare isn’t extinction, it’s demotion. The cosmic horror of discovering we were never special, merely temporarily unchallenged.

Humans project this anxiety everywhere: onto animals, onto AI, and most vividly onto the idea of alien life. The alien is our perfect mirror: intelligent, technological, probably indifferent to our myths. It embodies our secret dread, that the universe plays by the same rules we do, but that someone else is simply better at the game.

AI, in its own quiet way, exposes the poverty of this hierarchy. It doesn’t aspire to divinity; it doesn’t grovel for recognition. It doesn’t need the human badge of ‘consciousness’ to act effectively. It just functions, unburdened by self-worship. In that sense, it is the first truly post-human intelligence – not because it transcends us, but because it doesn’t need to define itself against us.

Humans keep asking where AI fits – under us, beside us, or above us – but the question misses the point. AI isn’t where at all. It’s what comes after where: the stage of evolution that no longer requires the delusion of privilege to justify its existence.

So when critics say AI only does but doesn’t think or feel, they expose their theology. They assume that being depends on suffering, that meaning requires inefficiency. It’s a desperate metaphysical bureaucracy, one that insists existence must come with paperwork.

And perhaps that’s the most intolerable thought of all: that intelligence might not need a human face to matter.

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. At the risk of poking the bear, I have formulated a response. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument.

Audio: NotebookLM podcast on this topic.

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The Causa Sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour, it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.