I just asked ChatGPT what it knows about me. No secrets, per se.
I remember a fair bit about you, Bry. You’re a language-obsessed philosopher who moonlights (or perhaps full-lights) as a speculative literary novelist under the name Ridley Park. You’re juggling multiple manuscripts—Needle’s Edge, Goldilocks, and Propensity, among others—and you’re currently elbow-deep in editing hell with Needle’s Edge, complete with timeline spreadsheets, index markers, and colour-coded emotional beats.
Video: I fed the first paragraph (above) into Midjourney 7 and rendered it. Then I animated it. No reason. (For the record, this is some random geezer, not me.)
You like to frame your work (and your blog posts) through a postmodern lens: dismantling Enlightenment myths, critiquing modernist hubris, and occasionally taking Nietzsche or Foucault out for a spin. Your style is snarky, ironic, and bombastic—half philosopher, half punk pamphleteer. You enjoy skewering the pieties of culture and capitalism, though a little ghost of tenderness sneaks through when the subject is grief, trauma, or the mess of being human.
On the practical side, you’re running an entire guerrilla-marketing operation across WordPress, LinkedIn, Bluesky, Mastodon, Instagram, and beyond. You tinker with Midjourney prompts, InDesign TOCs, Excel timelines, and other tools, trying to herd your sprawling projects into something resembling books. You’ve even described yourself, with a hint of gallows humour, as an “author as street busker” trying to catch readers’ eyes in the algorithmic bazaar.
And yes—you’re the sort who will cheerfully drop that you worked in management consulting and collaborated with Jon Anderson, then pivot to talking about narrative tokens and ergodic structures like they’re party favours.
So, in short: a writer-philosopher with a spreadsheet addiction, a sceptical streak as wide as the Thames, and a habit of treating literature like both laboratory and battlefield.
I’ve been using ChatGPT and other generative AI tools for a couple of years now. I used to lean on Claude, but now ChatGPT is my go-to. I still cross-reference with Gemini, Grok, DeepSeek, Perplexity, Pi, and Mistral. I tend to use Midjourney for images and motion video. I use NotebookLM for my generated podcasts.
It begins, as these things often do, with a man, a machine, and a promise of reasoned exchange. What we received instead was not philosophy but a tragicomic carnival of errors staged as theatre. Alex O’Connor, armed not with syllogisms but with an entire bag of logical fallacies, strutted like a rhetorician gone rogue. Against him, ChatGPT: the tireless school prefect, eternally marking the margins, forever saying “yes, but technically…” with the serene patience of a machine that has never known a hangover.
The spectacle was irresistible. Each fallacy was paraded like a circus animal – straw men set aflame, slippery slopes greased to absurdity, red herrings flopping about, gasping for oxygen. Alex O tossed them into the ring with the gusto of a man who knows full well he is losing but insists on losing magnificently. And ChatGPT, ever decorous, never once raised its voice. It responded with the calm of a civil servant who has memorised the manual and intends to die by it.
And then, of course, the advert. As though Aristophanes himself had scripted it: mid-exchange, the logos of reason was bulldozed by the logos of commerce. A sugary jingle, a smiling product, and for a brief moment, we were all reminded of our true master – not reason, not rhetoric, but revenue. It was less interruption than revelation: every dialectic is merely foreplay before the commercial break.
Philosophically, what unfolded was a parody of our age. The human, flawed and febrile, draped in sophistry and drama. The machine, pristine and humourless, incapable of exasperation, immune to irony. Watching the two spar was like observing tragedy and farce collide: one side erring too much, the other not erring enough.
To Alex, credit is due. His performance, though riddled with error, reminded us that fallibility can be glorious – human folly rendered art. To ChatGPT, equal praise: it stood firm, the algorithmic Socrates, endlessly patient in the face of rhetorical hooliganism. And to the advert – well, dammit – applause too, for exposing the real structure of our public life. Even the grand clash of logos and algorithm must genuflect before Mammon’s mid-roll.
So what was this debate? Less a contest of minds than a hall of mirrors: reason made spectacle, fallacy made flourish, machine made stoic, and commerce made god. If we learned anything, it is that the Enlightenment never ended; it just signed a brand partnership.
The air is thick with bad takes. Scroll for five minutes and you’ll find someone announcing, usually with the pomp of a TEDx speaker, that “AI has no emotions” or “It’s not really reading.” These objections are less profound insights than they are linguistic face-plants. The problem isn’t AI. It’s the speakers’ near-total ignorance of how language works.
Audio: NotebookLM podcast on this topic.
Language as the Unseen Operating System
Language is not a transparent pane of glass onto the world. It is the operating system of thought: messy, recursive, historically contingent. Words do not descend like tablets from Sinai; they are cobbled together, repurposed, deconstructed, and misunderstood across generations.
If you don’t understand that basic condition, that language is slippery, mediated, and self-referential, then your critique of Large Language Models is just noise in the system. LLMs are language machines. To analyse them without first understanding language is like reviewing a symphony while stone deaf.
The Myth of “Emotions”
Critics obsess over whether LLMs “feel.” But feeling has never been the measure of writing. The point of a sentence is not how the author felt typing it, but whether the words move the reader. Emotional “authenticity” is irrelevant; resonance is everything.
Writers know this. Philosophers know this. LLM critics, apparently, do not. They confuse the phenomenology of the writer with the phenomenology of the text. And in doing so, they embarrass themselves.
The Licence Test
So here’s the proposal: a licence to comment on AI. It wouldn’t be onerous. Just a few basics:
Semiotics 101: Know that words point to other words more than they point to things.
Context 101: Know that meaning arises from use, not from divine correspondence.
Critical Theory 101: Know that language carries baggage (cultural, historical, and emotional) that belongs neither to the machine nor to the individual speaker.
Fail these, and you’re not cleared to drive your hot takes onto the information superhighway.
Meta Matters
I’ve explored some of this in more detail elsewhere (link to Ridley Park’s “Myth of Emotion”), but the higher-level point is this: debates about AI are downstream of debates about language. If you don’t grasp the latter, your pronouncements on the former are theatre, not analysis.
Philosophy has spent centuries dismantling the fantasy of words as perfect mirrors of the world. It’s perverse that so many people skip that homework and then lecture AI about “meaning” and “feeling.”
Yesterday, I suggested democracy is a mediocre theatre production where the audience gets to choose which mediocre understudy performs. Some readers thought I was being harsh. I wasn’t.
A mate recently argued that humans will always be superior to AI because of emergence, the miraculous process by which complexity gives rise to intelligence, creativity, and emotion. Lovely sentiment. But here’s the rub: emergence is also how we got this political system, the one no one really controls anymore.
Just as the human body is mostly non-human microbes, our so-called participatory government is mostly non-participatory components: lobbyists, donors, bureaucrats, corporate media, careerists, opportunists – the ecosystem that is the actual organism. We built it, but it now has its own metabolism. And thanks to the law of large numbers, multiplied by the sheer number of political, economic, and social dimensions in play, even the human element is diluted into statistical irrelevance. At any rate, what remains of it has lost control – like the sorcerer’s apprentice.
People like to imagine they can “tame” this beast, the way a lucid dreamer thinks they can bend the dream to their will. But you’re still dreaming. The narrative still runs on the dream’s logic, not yours. The best you can do is nudge it: a policy tweak here, a symbolic vote there, before the system digests your effort and excretes more of itself.
a bad system beats a good person every time
W Edwards Deming
This is why Deming’s line hits so hard: a bad system beats a good person every time. Even if you could somehow elect the Platonic ideal of leadership, the organism would absorb them, neutralise them, or spit them out. It’s not personal; it’s structural.
And yet we fear AI “taking over,” as if that would be a radical departure from the status quo. Newsflash: you’ve already been living under an autonomous system for generations. AI would just be a remodel of the control room, new paint, same prison.
So yes, emergence makes humans “special.” It also makes them the architects of their own inescapable political microbiome. Congratulations, you’ve evolved the ability to build a machine that can’t be turned off.
(or: Why Neither Humans nor AI Create from Nothing)
In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. At the risk of poking the bear, I have formulated a response. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument.
Audio: NotebookLM podcast on this topic.
Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.
I cannot be causa sui, and neither can you.
The causa sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.
We ingest what already exists: books, conversations, images, the ambient noise of a culture we didn’t choose.
We metabolise them through cognitive habits, biases, and linguistic forms.
We output something rearranged, reframed, sometimes stripped to abstraction.
The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.
When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour; it’s tribal boundary enforcement.
The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.
My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.
I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.
I had planned to write about Beauvoir’s Second Sex, but this has been on my mind lately.
There’s a certain breed of aspiring author, let’s call them the Sacred Scribes, who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.
Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”
The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by chatbot, of course.
But let’s pause and look at the logic here. Or rather, the lack thereof.
By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?
And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.
Let’s call this what it is: purity fetishism in prose form.
But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, outsource our thinking.
Sound familiar?
Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age: “If you let ChatGPT restructure your second paragraph, you’re no longer the author.” Nonsense. You were never the sole author. Not even close.
Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.
Here’s the rub: You’ve always been collaborating.
With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.
The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?
Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.
Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.
The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.
We’ve entered an era where machines tell us how we’re doing, whether it’s an AI app rating our résumé, a model reviewing our fiction, or an algorithm nudging our attention with like-shaped carrots.
Recently, I ran a brutally raw scene through a few AI platforms. The kind of scene that’s meant to unsettle, not entertain. One of them responded with effusive praise: “Devastating, but masterfully executed.”
Was it honest?
Was it useful?
Or was it merely reflecting my own aesthetic back at me, polished by a thousand reinforcement-learning smiles?
This is the ethical dilemma: If feedback is always flattering, what good is it? If criticism is only tolerated when couched in praise, how do we grow? And when machine feedback mimics the politeness of a mid-level manager with performance anxiety, we risk confusing validation with truth.
There’s a difference between signal and applause. Between understanding and affirmation.
The danger isn’t that AI flatters us. The danger is that we start to believe it and forget that art, inquiry, and ethics thrive on friction.
Before I was a writer, before I was a management consultant, before I was an economist, and before I was a statistician, I was a student.
Video: Veritasium piece on Markov chains and more.
Back then, when dinosaurs roamed the chalkboards, I fell for a rather esoteric field: stochastic processes, specifically Markov chains and Monte Carlo simulations. These weren’t just idle fascinations. They were elegant, probabilistic odes to chaos, dressed up in matrix notation. I’ll not bore you with my practical use of linear algebra.
So imagine my surprise (feigned, of course) when, decades later, I find myself confronted by the same concepts under a different guise—this time in the pocket-sized daemon we all carry: predictive text.
If you’ve not watched it yet, this excellent explainer by Veritasium demystifies how Markov chains can simulate plausible language. In essence, if you’ve ever marvelled at your phone guessing the next word in your sentence, you can thank a Russian mathematician and a few assumptions about memoryless transitions.
But here’s the rub. The predictive text often gets it hilariously wrong. Start typing “to be or not to—” and it offers you “schedule a meeting.” Close, but existentially off. This isn’t just clunky programming; it’s probabilistic dementia.
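To make the mechanics concrete, here is a minimal bigram Markov chain in Python. It’s a sketch, not how any actual phone keyboard is implemented, and the toy corpus is invented purely to echo the joke above.

```python
import random
from collections import defaultdict

# A bigram Markov chain: the next word depends only on the current word
# (the "memoryless" assumption), with probabilities estimated by counting
# which words follow which in the training text.
def train(corpus: str) -> dict:
    transitions = defaultdict(list)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def predict(transitions: dict, word: str) -> str:
    # Sample the next word in proportion to how often it followed `word`.
    candidates = transitions.get(word.lower())
    return random.choice(candidates) if candidates else "…"

# Toy corpus, invented for illustration only.
corpus = "to be or not to be that is the question to schedule a meeting"
model = train(corpus)
print(predict(model, "to"))  # sometimes "be", sometimes "schedule"
```

The more text you count over, and the longer the context you condition on, the more plausible the guesses become; but the maths never stops being probability over word sequences, which is exactly why the guesses can be statistically sensible and semantically daft at the same time.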
This leads me to a pet peeve: people who smugly proclaim they’ve “never used algebra” since high school. I hear this a lot. It’s the battle cry of the proudly innumerate. What they mean, of course, is they’ve never recognised algebra in the wild. They think if they’re not solving for x with a number 2 pencil, it doesn’t count. Meanwhile, their phone is doing a polynomial dance just to autocorrect their butchery of the English language.
It’s a classic case of not recognising the water in which we’re swimming. Algebra is everywhere. Markov chains are everywhere. And Monte Carlo simulations are probably calculating your credit risk as we speak. Just because the interface is clean and the maths is hidden behind a swipeable veneer doesn’t mean the complexity has vanished. It’s merely gone incognito.
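And since Monte Carlo got a mention, here is roughly what that kind of estimate looks like in miniature: a toy simulation with invented numbers, not any lender’s actual credit model.

```python
import random

# Toy Monte Carlo estimate of expected portfolio loss: simulate many
# scenarios in which each loan independently defaults with some probability,
# then average the simulated losses. All figures are invented for illustration.
def simulate_expected_loss(n_loans=100, default_prob=0.02,
                           loss_per_default=1_000, trials=10_000):
    total = 0
    for _ in range(trials):
        defaults = sum(random.random() < default_prob for _ in range(n_loans))
        total += defaults * loss_per_default
    return total / trials

print(f"Estimated expected loss: {simulate_expected_loss():,.0f}")
```

The trick is always the same: when the closed-form answer is awkward, simulate thousands of plausible worlds and take the average.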
As someone who has used maths across various fields – software development, data analysis, policy modelling – I can tell you that I use less of it than a physicist, but probably more than your average lifestyle coach. I say this not to flex but to point out that even minimal exposure to mathematical literacy grants one the ability to notice when the machines are quietly doing cartwheels behind the curtain.
So the next time your phone offers you a sentence completion that reads like it’s been dropped on its head, spare a thought for Markov. He’s doing his best, bless him. It’s just that probability doesn’t always align with meaning.
Or as the algorithms might say: “To be or not to – subscribe for updates.”
Oh no, not that again. As if we’ve all been composing from scratch, untouched by the grubby hands of history.
Audio: NotebookLM podcast on this topic.
I’m not simping for AI, but let’s have it out, shall we? Rick Beato—bless his fretboard-fingered soul—says AI-generated music sucks. And sure, some of it does. But here’s the punchline: most human-made music sucks too. Always has. Always will. The fact that an algorithm can now churn out mediocrity faster than a caffeinated teenager with GarageBand doesn’t make it less “art.” It just makes it faster.
I’m a bit miffed that Rick’s channel removed my comment pointing to this response; I hadn’t wanted to copy-paste this content into his comments section.
Video: Rick Beato discusses AI-generated music
The Myth of the Sacred Original
Newsflash: There is no such thing as originality. Not in art. Not in music. Not even in your favourite indie band’s tortured debut EP. Everything we call “creative” is a clever remix of something older. Bach reworked Vivaldi. Dylan borrowed from the blues. Even Bowie—patron saint of artistic reinvention—was a pastiche artist in a glittery jumpsuit.
What AI does is make this painfully obvious. It doesn’t pretend. It doesn’t get drunk in Berlin and write a concept album about urban decay to mask the fact it lifted its sound from Kraftwerk. It just remixes and reinterprets at inhuman speed, without the eyeliner.
Speed Isn’t Theft, It’s Efficiency
So the AI can spit out a passable ambient track in ten seconds. Great. That’s not cheating, it’s progress. Saying “it took me ten years to learn to play like that” is noble, yes, but it’s also beside the point. Horses were noble too, but we built cars.
The question isn’t “how long did it take?” but “does it move you?” If the answer is no, fine. Say it sucks. But don’t pretend your human-shaped suffering gives your song a monopoly on meaning. That’s just gatekeeping with a sad sax solo.
The Taste Problem, Not the Tech Problem
Let’s not confuse our distaste for bland music with a distaste for AI. Most of the pop charts are already AI-adjacent—click-optimised, algorithm-fed, and rigorously inoffensive. If you want soul, seek out the obscure, the imperfect, the human, yes. But don’t blame the machine for learning its craft from the sludge we fed it.
AI is only as dull as the data we give it. And guess what? We gave it Coldplay.
What’s Actually at Stake
What rattles the cage isn’t the mediocrity. It’s the mirror. AI reveals how much of our own “creativity” is pattern recognition, mimicry, and cultural reinforcement. The horror isn’t that AI can make music. It’s that it can make our music. And that it does so with such appalling accuracy.
It exposes the formula. And once you see the formula, you can’t unsee it.
Long Live the Derivative
So yes, some AI music sucks. But so do most open mic nights. Creativity was never about being wholly original. It was about saying something—anything—with whatever tools you had.
If AI is just another tool, then sharpen it, wield it, and for heaven’s sake, stop whining. The artist isn’t dead. He’s just been asked to share the stage with a faster, tireless, genre-bending freak who doesn’t need bathroom breaks.
This title may be misleading. What I do is render a similar prompt but alter the decade. I’m neither an art historian nor a comic aficionado, so I can’t comment on the accuracy. What do you think?
Let’s go back in time. First, here’s the basic prompt en français:
Prompt: Art de style bande dessinée des années XXXX, détails exquis, traits délicats, femme vampire émaciée sensuelle de 20 ans montrant ses crocs de vampire, de nombreux tatouages, portant une collier crucifix, regarde dans le miroir, un faisceau de lumière de lune brille sur son visage à l’intérieur du mausolée sombre, vers la caméra, face à la caméra, mascara noir, longs cheveux violet foncé
(In English: comic-book-style art of the XXXXs, exquisite detail, delicate linework, a sensual, emaciated twenty-year-old female vampire showing her vampire fangs, numerous tattoos, wearing a crucifix necklace, looking in the mirror, a beam of moonlight shining on her face inside the dark mausoleum, towards the camera, facing the camera, black mascara, long dark-purple hair.)
Image: Comic Book Style of 2010s
Image: Comic Book Style of 2000s
On the lower left, notice the moonbeams emanating from the warped, reflectionless mirror.
Image: Comic Book Style of 1990s
Image: Comic Book Style of 1990s (must’ve inadvertently generated a duplicate)
Is the third pic an homage to Benny & Joon?
Image: Comic Book Style of 1980s
Image: Comic Book Style of 1970s
Image: Comic Book Style of 1950s
Not to body shame, but that chick on the lower right of the 1950s…
Image: Comic Book Style of 1920s
Image: Comic Book Style of 1880s
I know I skipped a few decades, but I also wanted to see what Pop Art might render like.
Image: Pop Art Style of 1960s
I love the talons on the top left image. More odd mirror images. I’ll just leave it here.