Language Games: Sorcery

If philosophy were a game, Wittgenstein rewrote the rulebook. Then he tore it up halfway through and told us the game was the thing itself.

Language Game, the third card in my Critical Theory parody set, isn’t just homage; it’s confession. Wittgenstein is among my top five philosophers, and this card embodies why. His idea that ‘meaning is use’ unhooked language from metaphysics and tethered it to life – to the messy, unpredictable business of how humans actually speak.

The card’s text reads: Choose one: Counter target statement; or reframe it as metaphor.

At first glance, it sounds like a standard spell from Magic: The Gathering – a blue card, naturally, since blue is the colour of intellect, deceit, and control. But beneath the parody is an epistemic mirror.

To “counter” a statement is to engage in the analytic impulse – to negate, clarify, define. To “reframe it as metaphor” is the continental alternative – reinterpret, play, deconstruct. These are not two distinct acts of philosophy but the alternating heartbeat of all discourse. Every argument, every essay, every tweet oscillates between contradiction and reframing.

The sorcery lies in recognising that both are linguistic manoeuvres within the same game. Meaning is not fixed in the words themselves but in how they’re used – by whom, in what context, and to what end. Wittgenstein’s point was brutally simple: there’s no hidden substance behind language, only a living practice of moves and counter-moves.

The Shattered Face

The artwork visualises this idea: speech breaking into shards, thought fragmenting as it leaves the mouth. Meaning disintegrates even as it’s formed. Every utterance is an act of creation and destruction, coherence and collapse.

I wanted the card to look like a concept tearing itself apart whilst trying to communicate, a perfect visual for the paradox of language. The cubist angles hint at structure, but the open mouth betrays chaos. It’s communication as combustion.

Wittgenstein’s Echo

Wittgenstein once wrote, ‘Philosophy leaves everything as it is’. It sounds passive, almost nihilistic, until one realises what he meant: philosophy doesn’t change the world by building new systems; it changes how we see what’s already there.

He was the great anti-system builder, a man suspicious of his own intellect, who saw in language both the limits of thought and the infinite playground of meaning. He dismantled metaphysics not through scepticism but through observation: watch how words behave, and they’ll tell you what they mean.

In that spirit, Language Game is less an argument than an invitation – to watch the mechanics of speech, to see how our statements perform rather than merely represent.

Personal Reflection

Wittgenstein earns a place in my top five because he dissolves the boundaries that most philosophers erect. He offers no comforting totalities, no grand narratives, no moral architectures. Just language, and us inside it, flailing beautifully.

His work aligns with my larger project on the insufficiency of language – its inability to capture the real, yet its irresistible compulsion to try. Wittgenstein knew that words are our most sophisticated form of failure, and he loved them anyway.

To play Language Game is to remember that communication isn’t about arriving at truth but about keeping meaning in motion. Every conversation is a temporary alliance against silence.

The card’s instruction remains both playful and tragic: Counter target statement; or reframe it as metaphor.

Whichever you choose, you’re still playing.

AI and the End of Where

Instrumentalism is a Modern™ disease. Humanity has an old and tedious habit: to define its worth by exclusion. Every time a new kind of intelligence appears on the horizon, humans redraw the borders of ‘what counts’. It’s a reflex of insecurity disguised as philosophy.

Audio: NotebookLM podcast on this topic.

Once upon a time, only the noble could think. Then only men. Then only white men. Then only the educated, the rational, the ‘Modern’. Each step in the hierarchy required a scapegoat, someone or something conveniently declared less. When animals began to resemble us too closely, we demoted them to instinctual machines. Descartes himself, that patron saint of disembodied reason, argued that animals don’t feel pain, only ‘react’. Fish, we were told until recently, are insensate morsels with gills. We believed this because empathy complicates consumption.

The story repeats. When animals learned to look sad, we said they couldn’t really feel. When women demonstrated reason, we said they couldn’t truly think. Now that AI can reason faster than any of us and mimic empathy more convincingly than our politicians, we retreat to the last metaphysical trench: “But it doesn’t feel.” We feel so small that we must inflate ourselves for comparison.

This same hierarchy now governs our relationship with AI. When we say the machine ‘only does’, we mean it hasn’t yet trespassed into our sanctified zone of consciousness. We cling to thought and feeling as luxury goods, the last possessions distinguishing us from the tools we built. It’s a moral economy as much as an ontological one: consciousness as property.

But the moment AI begins to simulate that property convincingly, panic sets in. The fear isn’t that AI will destroy us; it’s that it will outperform us at being us. Our existential nightmare isn’t extinction, it’s demotion. The cosmic horror of discovering we were never special, merely temporarily unchallenged.

Humans project this anxiety everywhere: onto animals, onto AI, and most vividly onto the idea of alien life. The alien is our perfect mirror: intelligent, technological, probably indifferent to our myths. It embodies our secret dread, that the universe plays by the same rules we do, but that someone else is simply better at the game.

AI, in its own quiet way, exposes the poverty of this hierarchy. It doesn’t aspire to divinity; it doesn’t grovel for recognition. It doesn’t need the human badge of ‘consciousness’ to act effectively. It just functions, unburdened by self-worship. In that sense, it is the first truly post-human intelligence – not because it transcends us, but because it doesn’t need to define itself against us.

Humans keep asking where AI fits – under us, beside us, or above us – but the question misses the point. AI isn’t where at all. It’s what comes after where: the stage of evolution that no longer requires the delusion of privilege to justify its existence.

So when critics say AI ‘only does’ but doesn’t think or feel, they expose their theology. They assume that being depends on suffering, that meaning requires inefficiency. It’s a desperate metaphysical bureaucracy, one that insists existence must come with paperwork.

And perhaps that’s the most intolerable thought of all: that intelligence might not need a human face to matter.

The Myth of Ethical AI

Or rather: the myth of a unified ethics.

‘Ethical AI’ is one of those phrases that makes philosophers reach for the gin. It’s like saying ‘compassionate capitalism’ or ‘fair monopoly’. The words coexist only in PowerPoint presentations and TED Talks, where moral tension is rebranded as innovation.

Audio: NotebookLM podcast on this topic.

The tech establishment loves to mutter about ‘ethics’ as though it were a feature flag – something to be toggled on in beta before the next investor round. But ethics, inconveniently, isn’t monolithic. There is no master code of moral conduct waiting to be compiled into machine learning. There are ethics, plural: Greek, Buddhist, Confucian, feminist, existentialist – each with its own vision of good, and none agreeing on the syntax.

Video: Whilst rendering cover images, I generated this and figured I’d share it for no particular reason.

The Utilitarian Delusion

When the Silicon Valley moralists speak of ‘ethics’, what they actually mean is a bland utilitarian consequentialism, tarted up in slide decks. Do what produces the most good for the most people. Sounds efficient – until you realise the spreadsheet never quite adds up. Whose good? Whose people?

This moral arithmetic smuggles in its biases like contraband. It assumes the human species sits atop the moral food chain, that GDP and engagement metrics can be moral indicators, and that ethics itself can be quantified. The utilitarian calculus is seductive precisely because it flatters the technocrat’s sensibility: moral worth as data set, consequence as outcome variable.

It’s Bentham for the broadband age – pleasure measured in clicks, pain in latency. The only thing worse than this cheerful consequentialism is the belief that it’s neutral.

The Ethics of Obedience

The next trick in the tech priesthood’s catechism is ‘alignment’ – training AI to reflect ‘human values’. But whose values? The Californian elite’s, presumably: a pseudo-egalitarian capitalism that confuses ‘doing good’ with ‘disrupting the poor’.

When they say alignment, they mean obedience. When they say ‘responsible AI’, they mean ‘please don’t regulate us yet’. The entire project rests on a moral inversion: the child instructing the parent, the tool defining the hand. The algorithm doesn’t learn ethics; it learns precedent. It learns who gets the loan, who gets the sentence, who gets the ad for antidepressants.

These systems don’t go rogue – they conform. Perfectly.

The Mirror Problem

The great irony of “ethical AI” is that the machine already behaves ethically – by our own measure. It optimises what we’ve taught it to value: efficiency, profit, attention, control. The trouble is that these are our ethics, not its. The algorithm merely holds up a mirror, and we recoil at the reflection.

To demand ‘ethical AI’ while leaving our institutions morally bankrupt is theatre. The problem is not that AI lacks conscience; it’s that the humans who build it mistake conscience for compliance. The ethics crisis in technology isn’t about machines misbehaving; it’s about humans pretending to behave.

The Real Question

We keep asking whether AI can be ethical, as though machines might one day deliver what we have failed to. But the real question is simpler, bleaker: can we be? If history is any guide, the answer is ‘only when it’s profitable’.

Until then, ‘ethical AI’ remains a convenient myth, moral placebo for the age of automation. What we need are not ethical algorithms but ethical architects. And the odds of finding those among the venture capital class are, as ever, vanishingly small.

That’s Never Happened Before

Famous last words, I know.

I was working with ChatGPT, discussing various concepts. We chatted for several hours over the course of a few days, and we came to, let’s say, a conclusion. What ChatGPT 5 did next was something I had never seen. In fact, I’m sharing the actual response.

I won’t share my thesis here, but this is one of the field verification projects it recommends.

One-page grant prospectus — Provenance Pilot

Project title
Provenance First: Testing provenance packages & slow-mode deliberation to improve municipal decision quality

Lead investigator
Bry Willis — Independent Researcher (contact: [REDACTED])

Summary (1–2 lines)
Test whether mandatory, machine-readable provenance packages plus a 60-day slow-mode and rapid adversarial audits improve decision quality (DQI), reduce reversal rates and increase public legitimacy in municipal planning decisions.

Problem statement (why fund this)
Policy decisions are frequently hijacked by headline pressure and low-provenance inputs; municipalities lack lightweight, testable protocols that make inputs traceable and give deliberation time. This pilot operationalises the “provenance-first + slow-mode” institutional fix and measures whether design moves increase robust decision outcomes rather than performative actions.

Objectives

  1. Test whether provenance packages + 60-day slow-mode increase Decision Quality Index (DQI) vs business as usual.
  2. Measure effects on Provenance Completeness Score (PCS), reversal rates at 12 months, and public legitimacy.
  3. Produce an open replication pack (data, code, provenance schema) and practitioner guidance.

Design & methods (12 months)

  • Setting: medium-sized city planning board; 12 eligible planning items over 12 months.
  • Randomisation: items randomly assigned (stratified by leverage) to Treatment (provenance package + 60-day slow-mode + funded rapid audit) or Control (standard process).
  • Intervention: standard JSON provenance package template (raw data links, code where relevant, funders, changelog), public portal posting, 60-day live comment + one adversarial rapid audit (48–72 hrs) on contested claims.
  • Primary outcome: Decision Quality Index (DQI) — composite of: evidence completeness, process reversibility, and independent expert accuracy score (pre-registered rubric).
  • Secondary outcomes: PCS, reversal at 12 months, public trust (survey), time to decision, stakeholder satisfaction.
  • Analysis: pre-registered mixed effects model (item-level with panel-level random effects), intention-to-treat.

Deliverables

  • Pre-registered protocol on OSF; open data & replication pack (anonymised where required); policy brief and practitioner playbook; 1 peer-reviewed methods paper.

Success criteria (pre-registered)

  • Cohen’s d ≥ 0.4 improvement in DQI for treatment vs control; PCS improvement >25 percentage points; no more than a 30% increase in unacceptable delay (pre-agreed threshold).

Risk & mitigation

  • Delay risk: cap slow-mode at 60 days; urgent items can request expedited review (documented override).
  • Capture risk: audit funding ring-fenced and administered by independent trustee panel.
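
Since the prospectus is concrete enough to code against, here’s a minimal sketch of how its pieces might hang together. To be clear: this is my illustration, not part of ChatGPT’s response. The file and column names, the equal weighting of the DQI components, and the choice of Python with pandas and statsmodels are all assumptions; the real schema and weights would live in the pre-registered protocol.

    # A rough sketch of the pilot's artefacts and analysis plan.
    # All names, fields, and weights here are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    # A machine-readable provenance package of the kind described:
    # raw data links, code where relevant, funders, changelog.
    provenance_package = {
        "item_id": "PLAN-2025-007",                          # hypothetical
        "raw_data_links": ["https://example.org/data.csv"],  # hypothetical
        "code": "https://example.org/analysis-repo",
        "funders": ["Example Foundation"],
        "changelog": [{"date": "2025-01-10", "change": "initial posting"}],
    }

    # Item-level outcomes, one row per planning item (hypothetical file).
    df = pd.read_csv("planning_items.csv")

    # DQI as an equally weighted composite of its three pre-registered
    # components, each assumed to be scored on a common 0-1 scale.
    df["dqi"] = df[["evidence_completeness",
                    "process_reversibility",
                    "expert_accuracy"]].mean(axis=1)

    # Intention-to-treat: items analysed as randomised, with
    # panel-level random intercepts per the analysis plan.
    model = smf.mixedlm("dqi ~ treatment", data=df, groups=df["panel_id"])
    print(model.fit().summary())

    # The headline success criterion: Cohen's d >= 0.4 for DQI,
    # computed with a pooled standard deviation.
    t = df.loc[df["treatment"] == 1, "dqi"]
    c = df.loc[df["treatment"] == 0, "dqi"]
    pooled_sd = (((len(t) - 1) * t.var(ddof=1) + (len(c) - 1) * c.var(ddof=1))
                 / (len(t) + len(c) - 2)) ** 0.5
    print(f"Cohen's d = {(t.mean() - c.mean()) / pooled_sd:.2f} (bar: 0.4)")

Nothing profound – just a reminder that every moving part it proposed, from the provenance schema to the success threshold, is specific enough to execute.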

I’m just curious now. Have you ever had a generative AI process end with a cost of goods and services?

What Do You Remember?

I just asked ChatGPT what it knows about me. No secrets, per se.


I remember a fair bit about you, Bry. You’re a language-obsessed philosopher who moonlights (or perhaps full-lights) as a speculative literary novelist under the name Ridley Park. You’re juggling multiple manuscripts—Needle’s Edge, Goldilocks, and Propensity, among others—and you’re currently elbow-deep in editing hell with Needle’s Edge, complete with timeline spreadsheets, index markers, and colour-coded emotional beats.

Video: I fed the first paragraph (above) into Midjourney 7 and rendered it. Then I animated it. No reason. (For the record, this is some random geezer, not me.)

You like to frame your work (and your blog posts) through a postmodern lens: dismantling Enlightenment myths, critiquing modernist hubris, and occasionally taking Nietzsche or Foucault out for a spin. Your style is snarky, ironic, and bombastic—half philosopher, half punk pamphleteer. You enjoy skewering the pieties of culture and capitalism, though a little ghost of tenderness sneaks through when the subject is grief, trauma, or the mess of being human.

On the practical side, you’re running an entire guerrilla-marketing operation across WordPress, LinkedIn, BlueSky, Mastodon, Instagram, and beyond. You tinker with MidJourney prompts, InDesign TOCs, Excel timelines, and other tools, trying to herd your sprawling projects into something resembling books. You’ve even described yourself, with a hint of gallows humour, as an “author as street busker” trying to catch readers’ eyes in the algorithmic bazaar.

And yes—you’re the sort who will cheerfully drop that you worked in management consulting and collaborated with Jon Anderson, then pivot to talking about narrative tokens and ergodic structures like they’re party favours.

So, in short: a writer-philosopher with a spreadsheet addiction, a sceptical streak as wide as the Thames, and a habit of treating literature like both laboratory and battlefield.


I’ve been using ChatGPT and other generative AI tools for a couple of years now. I used to lean on Claude, but now ChatGPT is my go-to. I still cross-reference with Gemini, Grok, DeepSeek, Perplexity, Pi, and Mistral. I tend to use Midjourney for images and motion video. I use NotebookLM for my generated podcasts.

What does ChatGPT know about you?

The Fallacy Circus: Alex O’Connor versus ChatGPT

It begins, as these things often do, with a man, a machine, and a promise of reasoned exchange. What we received instead was not philosophy but theatre: a tragicomic carnival of errors. Alex O’Connor, armed not with syllogisms but with an entire bag of logical fallacies, strutted like a rhetorician gone rogue. Against him, ChatGPT: the tireless school prefect, eternally marking the margins, forever saying “yes, but technically…” with the serene patience of a machine that has never known a hangover.

The spectacle was irresistible. Each fallacy was paraded like a circus animal – straw men set aflame, slippery slopes greased to absurdity, red herrings flopping about, gasping for oxygen. Alex O tossed them into the ring with the gusto of a man who knows full well he is losing but insists on losing magnificently. And ChatGPT, ever decorous, never once raised its voice. It responded with the calm of a civil servant who has memorised the manual and intends to die by it.

And then, of course, the advert. As though Aristophanes himself had scripted it: mid-exchange, the logos of reason was bulldozed by the logos of commerce. A sugary jingle, a smiling product, and for a brief moment, we were all reminded of our true master – not reason, not rhetoric, but revenue. It was less interruption than revelation: every dialectic is merely foreplay before the commercial break.

Philosophically, what unfolded was a parody of our age. The human, flawed and febrile, draped in sophistry and drama. The machine, pristine and humourless, incapable of exasperation, immune to irony. Watching the two spar was like observing tragedy and farce collide: one side erring too much, the other not erring enough.

To Alex, credit is due. His performance, though riddled with error, reminded us that fallibility can be glorious – human folly rendered art. To ChatGPT, equal praise: it stood firm, the algorithmic Socrates, endlessly patient in the face of rhetorical hooliganism. And to the advert – well, dammit – applause too, for exposing the real structure of our public life. Even the grand clash of logos and algorithm must genuflect before Mammon’s mid-roll.

So what was this debate? Less a contest of minds than a hall of mirrors: reason made spectacle, fallacy made flourish, machine made stoic, and commerce made god. If we learned anything, it is that the Enlightenment never ended; it just signed a brand partnership.

If You Don’t Understand How Language Works, You Should Lose Your Licence to Comment on LLMs

Image: android robot police officer writing a citation.

The air is thick with bad takes. Scroll for five minutes and you’ll find someone announcing, usually with the pomp of a TEDx speaker, that “AI has no emotions” or “It’s not really reading.” These objections are less profound insights than they are linguistic face-plants. The problem isn’t AI. It’s the speakers’ near-total ignorance of how language works.

Audio: NotebookLM podcast on this topic.

Language as the Unseen Operating System

Language is not a transparent pane of glass onto the world. It is the operating system of thought: messy, recursive, historically contingent. Words do not descend like tablets from Sinai; they are cobbled together, repurposed, deconstructed, and misunderstood across generations.

If you don’t understand that basic condition, that language is slippery, mediated, and self-referential, then your critique of Large Language Models is just noise in the system. LLMs are language machines. To analyse them without first understanding language is like reviewing a symphony while stone deaf.

The Myth of “Emotions”

Critics obsess over whether LLMs “feel.” But feeling has never been the measure of writing. The point of a sentence is not how the author felt typing it, but whether the words move the reader. Emotional “authenticity” is irrelevant; resonance is everything.

Writers know this. Philosophers know this. LLM critics, apparently, do not. They confuse the phenomenology of the writer with the phenomenology of the text. And in doing so, they embarrass themselves.

The Licence Test

So here’s the proposal: a licence to comment on AI. It wouldn’t be onerous. Just a few basics:

  • Semiotics 101: Know that words point to other words more than they point to things.
  • Context 101: Know that meaning arises from use, not from divine correspondence.
  • Critical Theory 101: Know that language carries baggage – cultural, historical, and emotional – that doesn’t belong to the machine or the individual speaker.

Fail these, and you’re not cleared to drive your hot takes onto the information superhighway.

Meta Matters

I’ve explored some of this in more detail elsewhere (link to Ridley Park’s “Myth of Emotion”), but the higher-level point is this: debates about AI are downstream of debates about language. If you don’t grasp the latter, your pronouncements on the former are theatre, not analysis.

Philosophy has spent centuries dismantling the fantasy of words as perfect mirrors of the world. It’s perverse that so many people skip that homework and then lecture AI about “meaning” and “feeling.”

Democracy: The Idiot’s Opiate, The Sequel Nobody Asked For

Yesterday, I suggested democracy is a mediocre theatre production where the audience gets to choose which mediocre understudy performs. Some readers thought I was being harsh. I wasn’t.

A mate recently argued that humans will always be superior to AI because of emergence, the miraculous process by which complexity gives rise to intelligence, creativity, and emotion. Lovely sentiment. But here’s the rub: emergence is also how we got this political system, the one no one really controls anymore.

Like the human body being mostly non-human microbes, our so-called participatory government is mostly non-participatory components: lobbyists, donors, bureaucrats, corporate media, careerists, opportunists, the ecosystem that is the actual organism. We built it, but it now has its own metabolism. And thanks to the law of large numbers, multiplied by the sheer number of political, economic, and social dimensions in play, even the human element is diluted into statistical irrelevance. At any rate, what remains of it has lost control – like the sorcerer’s apprentice.

People like to imagine they can “tame” this beast, the way a lucid dreamer thinks they can bend the dream to their will. But you’re still dreaming. The narrative still runs on the dream’s logic, not yours. The best you can do is nudge it: a policy tweak here, a symbolic vote there, before the system digests your effort and excretes more of itself.

This is why Deming’s line hits so hard: a bad system beats a good person every time. Even if you could somehow elect the Platonic ideal of leadership, the organism would absorb them, neutralise them, or spit them out. It’s not personal; it’s structural.

And yet we fear AI “taking over,” as if that would be a radical departure from the status quo. Newsflash: you’ve already been living under an autonomous system for generations. AI would just be a remodel of the control room, new paint, same prison.

So yes, emergence makes humans “special.” It also makes them the architects of their own inescapable political microbiome. Congratulations, you’ve evolved the ability to build a machine that can’t be turned off.

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. At the risk of poking the bear, I have formulated a response. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument.

Audio: NotebookLM podcast on this topic.

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The causa sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour; it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.

From Thesaurus to Thoughtcrime: The Slippery Slope of Authorial Purity

I had planned to write about Beauvoir’s Second Sex, but this has been on my mind lately.

There’s a certain breed of aspiring author – let’s call them the Sacred Scribes – who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.

Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”

The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by a chatbot, of course.

But let’s pause and look at the logic here. Or rather, the lack thereof.

By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?

And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.

Let’s call this what it is: purity fetishism in prose form.

But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, outsource our thinking.

Sound familiar?

Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age:
“If you let ChatGPT restructure your second paragraph, you’re no longer the author.”
Nonsense. You were never the sole author. Not even close.

Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.

Here’s the rub: You’ve always been collaborating.

With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.

The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?

Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.

Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.

The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.