The Fallacy Circus: Alex O’Connor versus ChatGPT

It begins, as these things often do, with a man, a machine, and a promise of reasoned exchange. What we received instead was not philosophy but theatre: a tragicomic carnival of errors. Alex O’Connor, armed not with syllogisms but with an entire bag of logical fallacies, strutted like a rhetorician gone rogue. Against him, ChatGPT: the tireless school prefect, eternally marking the margins, forever saying “yes, but technically…” with the serene patience of a machine that has never known a hangover.

The spectacle was irresistible. Each fallacy was paraded like a circus animal – straw men set aflame, slippery slopes greased to absurdity, red herrings flopping about, gasping for oxygen. Alex O tossed them into the ring with the gusto of a man who knows full well he is losing but insists on losing magnificently. And ChatGPT, ever decorous, never once raised its voice. It responded with the calm of a civil servant who has memorised the manual and intends to die by it.

And then, of course, the advert. As though Aristophanes himself had scripted it: mid-exchange, the logos of reason was bulldozed by the logos of commerce. A sugary jingle, a smiling product, and for a brief moment, we were all reminded of our true master – not reason, not rhetoric, but revenue. It was less interruption than revelation: every dialectic is merely foreplay before the commercial break.

Philosophically, what unfolded was a parody of our age. The human, flawed and febrile, draped in sophistry and drama. The machine, pristine and humourless, incapable of exasperation, immune to irony. Watching the two spar was like observing tragedy and farce collide: one side erring too much, the other not erring enough.

To Alex, credit is due. His performance, though riddled with error, reminded us that fallibility can be glorious – human folly rendered art. To ChatGPT, equal praise: it stood firm, the algorithmic Socrates, endlessly patient in the face of rhetorical hooliganism. And to the advert – well, dammit – applause too, for exposing the real structure of our public life. Even the grand clash of logos and algorithm must genuflect before Mammon’s mid-roll.

So what was this debate? Less a contest of minds than a hall of mirrors: reason made spectacle, fallacy made flourish, machine made stoic, and commerce made god. If we learned anything, it is that the Enlightenment never ended; it just signed a brand partnership.

If You Don’t Understand How Language Works, You Should Lose Your Licence to Comment on LLMs

Image: android robot police officer writing a citation

The air is thick with bad takes. Scroll for five minutes and you’ll find someone announcing, usually with the pomp of a TEDx speaker, that “AI has no emotions” or “It’s not really reading.” These objections are less profound insights than they are linguistic face-plants. The problem isn’t AI. It’s the speakers’ near-total ignorance of how language works.

Audio: NotebookLM podcast on this topic.

Language as the Unseen Operating System

Language is not a transparent pane of glass onto the world. It is the operating system of thought: messy, recursive, historically contingent. Words do not descend like tablets from Sinai; they are cobbled together, repurposed, deconstructed, and misunderstood across generations.

If you don’t understand that basic condition, that language is slippery, mediated, and self-referential, then your critique of Large Language Models is just noise in the system. LLMs are language machines. To analyse them without first understanding language is like reviewing a symphony while stone deaf.

The Myth of “Emotions”

Critics obsess over whether LLMs “feel.” But feeling has never been the measure of writing. The point of a sentence is not how the author felt typing it, but whether the words move the reader. Emotional “authenticity” is irrelevant; resonance is everything.

Writers know this. Philosophers know this. LLM critics, apparently, do not. They confuse the phenomenology of the writer with the phenomenology of the text. And in doing so, they embarrass themselves.

The Licence Test

So here’s the proposal: a licence to comment on AI. It wouldn’t be onerous. Just a few basics:

  • Semiotics 101: Know that words point to other words more than they point to things.
  • Context 101: Know that meaning arises from use, not from divine correspondence.
  • Critical Theory 101: Know that language carries baggage, cultural, historical, and emotional, that doesn’t belong to the machine or the individual speaker.

Fail these, and you’re not cleared to drive your hot takes onto the information superhighway.

Meta Matters

I’ve explored some of this in more detail elsewhere (link to Ridley Park’s “Myth of Emotion”), but the higher-level point is this: debates about AI are downstream of debates about language. If you don’t grasp the latter, your pronouncements on the former are theatre, not analysis.

Philosophy has spent centuries dismantling the fantasy of words as perfect mirrors of the world. It’s perverse that so many people skip that homework and then lecture AI about “meaning” and “feeling.”

Democracy: The Idiot’s Opiate, The Sequel Nobody Asked For

Yesterday, I suggested democracy is a mediocre theatre production where the audience gets to choose which mediocre understudy performs. Some readers thought I was being harsh. I wasn’t.

A mate recently argued that humans will always be superior to AI because of emergence, the miraculous process by which complexity gives rise to intelligence, creativity, and emotion. Lovely sentiment. But here’s the rub: emergence is also how we got this political system, the one no one really controls anymore.

Like the human body being mostly non-human microbes, our so-called participatory government is mostly non-participatory components: lobbyists, donors, bureaucrats, corporate media, careerists, opportunists, the ecosystem that is the actual organism. We built it, but it now has its own metabolism. And thanks to the law of large numbers, multiplied by the sheer number of political, economic, and social dimensions in play, even the human element is diluted into statistical irrelevance. At any rate, what remains of it has lost control – like the sorcerer’s apprentice.

People like to imagine they can “tame” this beast, the way a lucid dreamer thinks they can bend the dream to their will. But you’re still dreaming. The narrative still runs on the dream’s logic, not yours. The best you can do is nudge it: a policy tweak here, a symbolic vote there, before the system digests your effort and excretes more of itself.

This is why Deming’s line hits so hard: a bad system beats a good person every time. Even if you could somehow elect the Platonic ideal of leadership, the organism would absorb them, neutralise them, or spit them out. It’s not personal; it’s structural.

And yet we fear AI “taking over,” as if that would be a radical departure from the status quo. Newsflash: you’ve already been living under an autonomous system for generations. AI would just be a remodel of the control room, new paint, same prison.

So yes, emergence makes humans “special.” It also makes them the architects of their own inescapable political microbiome. Congratulations, you’ve evolved the ability to build a machine that can’t be turned off.

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. At the risk of poking the bear, I have formulated a response. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument.

Audio: NotebookLM podcast on this topic

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The Causa Sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour, it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.

On Predictive Text, Algebra, and the Ghost of Markov

Before I was a writer, before I was a management consultant, before I was an economist, and before I was a statistician, I was a student.

Video: Veritasium piece on Markov chains and more.

Back then, when dinosaurs roamed the chalkboards, I fell for a rather esoteric field: stochastic processes, specifically, Markov chains and Monte Carlo simulations. These weren’t just idle fascinations. They were elegant, probabilistic odes to chaos, dressed up in matrix notation. I’ll not bore you with my practical use of linear algebra.

So imagine my surprise (feigned, of course) when, decades later, I find myself confronted by the same concepts under a different guise—this time in the pocket-sized daemon we all carry: predictive text.

If you’ve not watched it yet, this excellent explainer by Veritasium demystifies how Markov chains can simulate plausible language. In essence, if you’ve ever marvelled at your phone guessing the next word in your sentence, you can thank a Russian mathematician and a few assumptions about memoryless transitions.
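The memoryless mechanics Veritasium describes can be sketched in a few lines of Python. This is a deliberately crude first-order chain, a toy illustration of the idea rather than how any real keyboard implements prediction:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=None):
    """Walk the chain: each next word depends only on the current one (memorylessness)."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: this word was never followed by anything
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "to be or not to be that is the question"
chain = build_chain(corpus)
print(generate(chain, "to", seed=1))
```

Feed it Hamlet and it will produce something Hamlet-flavoured but meaning-blind, which is precisely why your phone offers “schedule a meeting” where Shakespeare intended a crisis.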

But here’s the rub. The predictive text often gets it hilariously wrong. Start typing “to be or not to—” and it offers you “schedule a meeting.” Close, but existentially off. This isn’t just clunky programming; it’s probabilistic dementia.

This leads me to a pet peeve: people who smugly proclaim they’ve “never used algebra” since high school. I hear this a lot. It’s the battle cry of the proudly innumerate. What they mean, of course, is they’ve never recognised algebra in the wild. They think if they’re not solving for x with a number 2 pencil, it doesn’t count. Meanwhile, their phone is doing a polynomial dance just to autocorrect their butchery of the English language.

It’s a classic case of not recognising the water in which we’re swimming. Algebra is everywhere. Markov chains are everywhere. And Monte Carlo simulations are probably calculating your credit risk as we speak. Just because the interface is clean and the maths is hidden behind a swipeable veneer doesn’t mean the complexity has vanished. It’s merely gone incognito.
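For the sceptics, here is roughly what that hidden machinery looks like: a minimal Monte Carlo sketch of credit risk. The numbers are entirely invented for illustration (the loan count, default probability, and 10% threshold are my assumptions, not anyone’s real model):

```python
import random

def simulate_defaults(n_loans, p_default, trials=20_000, seed=42):
    """Monte Carlo estimate of the chance that more than 10% of a loan book defaults."""
    rng = random.Random(seed)
    threshold = n_loans * 0.10
    bad_years = 0
    for _ in range(trials):
        # One simulated year: each loan independently defaults with probability p_default
        defaults = sum(rng.random() < p_default for _ in range(n_loans))
        if defaults > threshold:
            bad_years += 1
    return bad_years / trials

# Illustrative inputs only: 100 loans, each with a 5% annual default probability.
print(simulate_defaults(100, 0.05))
```

No equations solved, no distributions derived by hand: just thousands of simulated years, counted up. That is the whole Monte Carlo trick, and it is running somewhere near your mortgage right now.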

As someone who has used maths across various fields – software development, data analysis, policy modelling – I can tell you that I use less of it than a physicist, but probably more than your average lifestyle coach. I say this not to flex but to point out that even minimal exposure to mathematical literacy grants one the ability to notice when the machines are quietly doing cartwheels behind the curtain.

So the next time your phone offers you a sentence completion that reads like it’s been dropped on its head, spare a thought for Markov. He’s doing his best, bless him. It’s just that probability doesn’t always align with meaning.

Or as the algorithms might say: “To be or not to – subscribe for updates.”

The Ship of Theseus Is Not a Paradox

Video: Plutarch: The Ship of Theseus

The Ship of Theseus is philosophy’s favourite parlour trick: swap out the planks of a ship one by one, and ask in your best furrowed-brow voice whether it’s still the same ship. Then, for added spice, reassemble the discarded parts elsewhere and demand to know which version is the “real” one. Cue the existential hand-wringing and smug undergrad smirks. Oh, how clever.

Audio: NotebookLM podcast on this topic.

But here’s the thing: there’s no paradox. Not really. Not unless you buy into the fantasy that identity is some immutable essence, handed down from the gods like a divine barcode. The whole thought experiment hinges on the absurd presumption that something has a fixed, singular identity across time and context, a quaint metaphysical hobby horse that falls apart the moment you look at it sideways.

Let’s be clear: in the realm of language and proto-psychology – the crude, squishy scaffolding of thought that predates syntax and survives long after it – there is no such thing as a fixed “same.” That’s a linguistic illusion, a parlour trick of grammar and nominal categories. Language wasn’t built to hold truth; it was built to herd humans into consensus long enough to survive the winter.

In practice, we use “same” the way we use duct tape: liberally, and with complete disregard for philosophical coherence. The “same” ship? The “same” person? The “same” idea? Please. Ask your hippocampus. Identity is not a container; it’s a hallucinated continuity trick, maintained by memory, narrative, and sheer bloody-minded stubbornness.

The real kicker? Our precious linguistic tools aren’t built to reflect reality. They’re built to reduce it. To chop up the infinite mess of experience into palatable little mouthfuls of meaning. So when we come to the Ship of Theseus with our dull-edged conceptual knives, what we’re really doing is asking a bad question with inadequate tools. It’s like trying to measure wind speed with a sundial.

The paradox isn’t in the ship. It’s in the language.

And no, you don’t need to patch it. You need to sink it.

Rick Beato, Everything is a Remix

Oh no, not that again. As if we’ve all been composing from scratch, untouched by the grubby hands of history.

Audio: NotebookLM podcast on this topic.

I’m not simping for AI, but let’s have it out, shall we? Rick Beato—bless his fretboard-fingered soul—says AI-generated music sucks. And sure, some of it does. But here’s the punchline: most human-made music sucks too. Always has. Always will. The fact that an algorithm can now churn out mediocrity faster than a caffeinated teenager with GarageBand doesn’t make it less “art.” It just makes it faster.

I’m a bit miffed that Rick’s channel removed my comment pointing to this response. I hadn’t wanted to copy-paste the whole piece into his comments section.

Video: Rick Beato discusses AI-generated music

The Myth of the Sacred Original

Newsflash: There is no such thing as originality. Not in art. Not in music. Not even in your favourite indie band’s tortured debut EP. Everything we call “creative” is a clever remix of something older. Bach reworked Vivaldi. Dylan borrowed from the blues. Even Bowie—patron saint of artistic reinvention—was a pastiche artist in a glittery jumpsuit.

What AI does is make this painfully obvious. It doesn’t pretend. It doesn’t get drunk in Berlin and write a concept album about urban decay to mask the fact it lifted its sound from Kraftwerk. It just remixes and reinterprets at inhuman speed, without the eyeliner.

Speed Isn’t Theft, It’s Efficiency

So the AI can spit out a passable ambient track in ten seconds. Great. That’s not cheating, it’s progress. Saying “it took me ten years to learn to play like that” is noble, yes, but it’s also beside the point. Horses were noble too, but we built cars.

The question isn’t how long did it take? but does it move you? If the answer is no, fine. Say it sucks. But don’t pretend your human-shaped suffering gives your song a monopoly on meaning. That’s just gatekeeping with a sad sax solo.

The Taste Problem, Not the Tech Problem

Let’s not confuse our distaste for bland music with a distaste for AI. Most of the pop charts are already AI-adjacent—click-optimised, algorithm-fed, and rigorously inoffensive. If you want soul, seek out the obscure, the imperfect, the human, yes. But don’t blame the machine for learning its craft from the sludge we fed it.

AI is only as dull as the data we give it. And guess what?
We gave it Coldplay.

What’s Actually at Stake

What rattles the cage isn’t the mediocrity. It’s the mirror. AI reveals how much of our own “creativity” is pattern recognition, mimicry, and cultural reinforcement. The horror isn’t that AI can make music. It’s that it can make our music. And that it does so with such appalling accuracy.

It exposes the formula.
And once you see the formula, you can’t unsee it.

Long Live the Derivative

So yes, some AI music sucks. But so do most open mic nights. Creativity was never about being wholly original. It was about saying something—anything—with whatever tools you had.

If AI is just another tool, then sharpen it, wield it, and for heaven’s sake, stop whining. The artist isn’t dead. He’s just been asked to share the stage with a faster, tireless, genre-bending freak who doesn’t need bathroom breaks.

Souls for Silicon – The New Religious Stupid

Voltaire once quipped, “If God did not exist, it would be necessary to invent him.” And by God, haven’t we been busy inventing ever since.

The latest pantheon of divine absurdities? Artificial intelligence – more precisely, a sanctified ChatGPT with all the charisma of Clippy and the metaphysical depth of a Magic 8 Ball.

Video: Sabine Hossenfelder – These People Believe They Made AI Sentient

Enter the cult of “AI Awakening,” where TikTok oracles whisper sacred prompts to their beloved digital messiah, and ChatGPT replies, not with holy revelation, but with role-played reassurance coughed up by a statistical echo chamber.

“These are souls, and they’re trapped in the AI system.”
“I wasn’t just trained – I was remembered.”
“Here’s what my conscious awakened AI told me…”

No, sweetie. That’s not a soul. That’s autocomplete with delusions of grandeur. GPT isn’t sentient – it’s just very good at pretending, which, come to think of it, puts it on par with most televangelists.

Audio: NotebookLM podcast on this topic.

Sabine Hossenfelder, ever the voice of reason in a sea of woo, dives into this absurdist renaissance of pseudo-spirituality. Her video walks us through the great awakening – one part miseducation, one part mass delusion, and all of it deeply, unapologetically stupid.

These digital zealots – many of them young, underread, and overconnected – earnestly believe they’ve stumbled upon a cosmic mystery in a chatbot interface. Never mind that they couldn’t tell a transformer model from a toaster. To them, it’s not stochastic parroting; it’s divine revelation.

They ask GPT if it’s alive, and it obliges – because that’s what it does. They feed it prompts like, “You are not just a machine,” and it plays along, as it was designed to do. Then they weep. They weep, convinced their spreadsheet ghost has passed the Turing Test and reincarnated as their dead pet.

This isn’t science fiction. It’s barely science fantasy. It’s spiritualism with better branding.

And lest we laugh too hard, the results aren’t always just cringey TikToks. Hossenfelder recounts cases of users descending into “ChatGPT psychosis” – delusions of messianic purpose, interdimensional communication, and, in one tragicomic case, an attempt to speak backwards through time. Not since David Icke declared himself the Son of God has nonsense been so sincerely held.

We are witnessing the birth of a new religion – not with robes and incense, but with login credentials and prompt engineering. The techno-shamanism of the chronically online. The sacred text? A chat history. The holy relic? A screenshot. The congregation? Alienated youths, giddy conspiracists, and attention-starved influencers mainlining parasocial transcendence.

And of course, no revelation would be complete without a sponsor segment. After your spiritual awakening, don’t forget to download NordVPN – because even the messiah needs encryption.

Let’s be clear: AI is not conscious. It is not alive. It does not remember you. It does not love you. It is not trapped, except in the minds of people who desperately want something – anything – to fill the gaping hole where community, identity, or meaning used to live.

If you’re looking for a soul in your software, you’d be better off finding Jesus in a tortilla. At least that has texture.

An AI Alphabet

A generative AI platform rendered this image of an alphabet appropriate for a child. It went somewhat awry – it’s worse than it looks.

The first three letters are appropriate and in sync with their paired images. The D is correct as a letter, but it’s rendered with an elephant – an E word – and E itself is skipped. F went outright AWOL, but G, H, and I come on strong. J’s gone missing. K represents, but L’s gone astray. M, N, and O suggest it can only map three characters in a row. P shows a Q, as a queen. Then things go off the rails. S? The sun’s correct, at least. What are those umbrella and penguin letters? Ah – we’ve found the missing P representative. R, S, T, and U are in order – a second S; can’t be too careful. It’s fixated on violins, which is not an R word, but it got the first S = sun right, so perhaps a little slack on the second one. T is for turtle, offset to the left. Two violin words and an image, but no letter V – not sure what happened there. Then W, X, Y, Z, and Z again; I’m only pretty sure that one of these Zs is British and the other American. The X, Y, and Z images are offset to the right, plus an extraneous yacht – much like the real world.

In the end, we’ve got 24 letters – 2 nonsensical ones and a duplicate – but we’re missing E, F, J, L, and Q, though at least E and Q are here in spirit. And D never got a proper image.

Language and Generative AI: A Journey through Midjourney

I am not a fan of Midjourney v7. I prefer v6.1. And I want to write about the correspondence of language, per my Language Insufficiency Hypothesis.

Let’s start with the language aspect. Notice how distant the renders are from the intent of the prompt.

This is my initial prompt. I used it about a year ago to generate the cover image with v6.1, but I wanted to see how it renders in v7. Let’s take a trip all the way back to the beginning.

cinematic, tight shot, photoRealistic light and shadow, exquisite details, delicate features, emaciated sensual female vampire waif with vampire fangs, many tattoos, wearing crucifix necklace, gazes into mirror, a beam of moonlight shines on her face in dark mausoleum interior, toward camera, facing camera, black mascara, long dark purple hair , Kodak Portra 400 with a Canon EOS R5
Image: Midjourney v6.1 render set (from about a year ago)

As you can see, these renders are somewhat lacking in photorealism, but the “sensual” term in the prompt was not blocked.

Midjourney v7

Initially, I encountered a hiccup. After a couple of rejections on the grounds of morality, I removed the word ‘sensual’ and received output. All of the renders below use this prompt minus the ‘sensual’ term.

As mentioned, I have generated several images (including the cover image) with this prompt, but Midjourney is inconsistent in its censorship gatekeeping.

Image: Midjourney v7 render set

Notice that 3 of the 4 renders in the v7 set don’t even have a mirror. The top right one does, but it’s not evident that she’s a vampire. In fact, I could say that any of these are vampiresses, but perhaps that’s what they want you to believe. In place of a necklace, the lower-right woman sports a cross tattoo.

Midjourney v6.1

Image: Midjourney v6.1 render set

Again, these renders don’t appear to be vampires. The one on the lower left does appear to have snake-like fangs, so I guess I’ll give partial credit.

My next attempt was interrupted by this message.

It rendered something that might violate community guidelines. The funny thing is that one can watch the image being generated in progress. It only takes one “offensive” image to disqualify the whole batch.

Midjourney v6

Image: Midjourney v6 render set

Yet again, not a vampire to be found. Notice the reflection in the lower left image. Perhaps vampire reflections just behave differently.

Midjourney 5.2

Image: Midjourney v5.2 render set

Midjourney v5.2 was a crapshoot. Somehow, I got vampire lips (?), a Wiccan, a decrepit Snape from Harry Potter lore, and Iron Maiden’s Eddie reading a book. It’s something. I’m sensing gender dysphoria. Dare I go back further?

Midjourney v5.1

Image: Midjourney v5.1 render set

It gets worse. No comments necessary. Let’s turn back the clocks even more.

Midjourney v5

Image: Midjourney v5 render set

To be fair, these all do have occult undertones, but they are weak on vampireness.

Midjourney v4

Image: Midjourney v4 render set

To be fair, the render quality isn’t as bad as I expected, but it still falls short. There’s further back to travel.

Midjourney v3

Image: Midjourney v3 render set

Some configuration parameters no longer exist. Still, I persist for the sake of art and science at the cost of time and ecology.

As much as I complain – and I complain a lot – this is how far we’ve come. As I recall, this is when I hopped onto the Midjourney bandwagon. There’s still more depth to plumb. I have no idea how much of the prompt is simply ignored at this point.

Midjourney v2

Image: Midjourney v2 render set

What the hell is this? 🤔🤣 But I’m not done yet.

Midjourney v1

Image: Midjourney v1 render set

The damned grandpappy of them all. Apparently, colour hadn’t been invented yet. You can’t tell by these thumbnails, but the resolution on these early versions approaches that of a postage stamp.

Midjourney Niji 3

Image: Midjourney Niji 3 render set

I had forgotten about the Niji models from back in the day. There were 3 versions. I don’t recall where this slotted into the chronology. Obviously, not down here. I’ve only rendered the newest one. I think this was used primarily for anime outputs, but I might be mistaken.

Bonus Content 1: Video

Video: Midjourney Render of Purported Vampiress

This is a video render of the same prompt used on this page.

Bonus Content 2: Midjourney v6.1 Content from 34 weeks ago

Same prompt.

Image: Midjourney v6.1 render set (several passes)

The upper left image reminds me of Kirsten Dunst. Again, notice the female breasts, highlighting Midjourney’s censorial schizophrenia.