Perspectival Realism – Enchantment

This Magic: The Gathering parody trading card was the first in my Critical Theory series.

It’s an important card for me. As with sex and gender, creating a taxonomic or ontological dichotomy poses categorical challenges. Despite the insufficiency of language, it’s still all I have to attempt to classify the world. In the case of articulating the perception of reality, we can choose between idealism and realism. The problem is that it’s not either; it’s both. Reality cannot be realised without both.

Reality, we’re told, exists. That confident noun has carried a great deal of human arrogance. It has underwritten empires, sciences, and sermons. Yet somewhere between Plato’s cave and the latest TED Talk, we forgot to ask a simpler question: for whom does reality exist, and from where is it seen?

Audio: NotebookLM podcast on this topic.

The parody trading card Perspectival Realism was born from that unease. Its mechanic is simple but cruel: at the beginning of each player’s draw step, they must describe the card they drew. The enchantment persists until two players describe a card in the same way—at which point the spell collapses. In other words, consensus kills magic.
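
For readers who like their metaphysics executable, a minimal sketch of that trigger in Python follows. The player names, the stock descriptions, and the random choice of wording are illustrative assumptions rather than anything printed on the card; the point is simply that the enchantment dies the instant two distinct players converge on the same description.

```python
import random

# Toy simulation of the card's trigger: on each draw step a player "describes"
# the card they drew; once two different players have described a card the
# same way, the enchantment is sacrificed.
DESCRIPTIONS = ["a threat", "a blocker", "card advantage", "a dead draw", "pure value"]

def play(players=("Ann", "Bea", "Cal"), max_turns=50):
    first_speaker = {}  # description -> player who first used it
    for turn in range(1, max_turns + 1):
        for player in players:
            description = random.choice(DESCRIPTIONS)
            prior = first_speaker.get(description)
            if prior is not None and prior != player:
                return (f"Turn {turn}: {player} echoes {prior} ('{description}'); "
                        f"sacrifice Perspectival Realism.")
            first_speaker.setdefault(description, player)
    return "No two players ever agreed; the enchantment endures."

if __name__ == "__main__":
    print(play())
```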

That rule is the metaphysics of the thing.

When a player ‘describes’ a card, they are not transmitting information; they are constructing the object in linguistic space. The moment the description leaves their mouth, the card ceases to be a piece of paper and becomes a conceptual artefact.

This mirrors the insight of Kant, Nietzsche, and every post-structuralist who ever smoked too many Gauloises: perception isn’t passive. We don’t see reality; we compose it. Language isn’t a mirror but a paintbrush. The thing we call truth is not correspondence but coherence – a temporary truce among competing metaphors.

So the card’s enchantment dramatises this process. So long as multiple descriptions circulate, reality remains vibrant, contested, alive. Once everyone agrees, it dies the death of certainty.

Philosophers have spent centuries arguing whether the world is fundamentally real (existing independent of mind) or ideal (a projection of mind). Both sides are equally tiresome.

Realism, the old bulldog of metaphysics, insists that perception is transparent: language merely reports what’s already there. Idealism, its mirror adversary, claims the opposite – that what’s “there” is mind-stuff all along. Both mistakes are symmetrical. Realism forgets the perceiver; Idealism forgets the world.

Perspectival realism refuses the divorce. It begins from the premise that world and mind are inseparable aspects of a single event: knowing. Reality is not a photograph waiting to be developed, nor a hallucination spun from neurons – it’s a relation, a constant negotiation between perceiver and perceived.

For years, I called myself a Realist™ with an asterisk. That asterisk meant I understood the observer problem: that every ‘fact’ is perspective-laden. Then I became an Idealist™ with an asterisk, meaning I recognised that mind requires matter to dream upon.

The asterisk is everything. It’s the epistemic scar left by perspectival humility – the tacit admission that every claim about the world carries a hidden coordinate: said from here. It is not relativism, but situatedness. It is the philosophical equivalent of depth perception: without the offset, there’s no vision at all.

The card’s rule – sacrifice Perspectival Realism when two players describe a card identically – captures the tragedy of modernity. The Enlightenment taught us to chase consensus, to flatten multiplicity into “objective truth.” We became addicted to sameness, mistaking agreement for understanding.

But agreement is anaesthetic. When all perspectives converge, the world ceases to shimmer; it becomes measurable, predictable, dead. The card’s enchantment disappears the moment reality is stabilised, precisely as our cultural enchantment did under the fluorescent light of ‘reason’.

To live under perspectival realism is to acknowledge that reality is not what is drawn but what is described. And the description is never neutral. It is always written from somewhere – by someone, with a vocabulary inherited from history and stained by desire.

As long as multiple descriptions coexist, the game remains alive. The moment they fuse into one, the spell is broken, and the world returns to grey.

Bernardo Kastrup’s analytic idealism reminded me that consciousness might be primary, but perspectival realism refuses to pledge allegiance. It keeps both flags tattered but flying. The world exists, yes, but only ever for someone.

The enchantment, then, is not belief but perspective itself. So long as difference endures, the game continues.

Baudrillard in Latex: Why The Matrix Was Right About Everything Except Freedom


In the late 1990s, the Wachowskis gave us The Matrix – Keanu Reeves as Neo, the Chosen One™, a man so bland he could be anyone, which was the point. Once he realised he was living inside a simulation, he learned to bend its laws, to dodge bullets in slow motion and see the code behind the curtain. Enlightenment, Hollywood-style.

Audio: NotebookLM podcast on this topic.

But here’s the twist the film itself couldn’t stomach: realising the simulation doesn’t free you from it.

Knowing that race and gender are social constructs doesn’t erase their architecture. Knowing that our economies, legal systems, and so-called democracies are fictions doesn’t get us out of paying taxes or playing our assigned roles. “The social contract” is a collective hallucination we agreed to before birth. That and a dollar still won’t buy you a cup of coffee.

Baudrillard, whose Simulacra and Simulation the film name-dropped like a trophy, argued that simulation doesn’t hide reality – it replaces it. When representation becomes indistinguishable from the thing it represents, truth evaporates, leaving only consensus. We don’t live in a system of power; we live in its performance.

The Matrix got the metaphor half right. It imagined the bars of our cage as a digital dream – glossy, computable, escapable. But our chains are older and subtler. Rousseau called them “social”, Foucault diagnosed them as “biopolitical”, and the rest of us just call them “normal”. Power doesn’t need to plug wires into your skull; it only needs to convince you that the socket is already there.

You can know it’s all a fiction. You can quote Derrida over your morning espresso and tweet about the collapse of epistemic certainty. It won’t change the fact that you still have rent to pay, laws to obey, and identities to perform. Awareness isn’t liberation; it’s just higher-resolution despair with better UX.

Neo woke up to a ruined Earth and thought he’d escaped. He hadn’t. He’d only levelled up to the next simulation – the one called “reality”. The rest of us are still here, dutifully maintaining the system, typing in our passwords, and calling it freedom.

NB: Don’t get me wrong. I loved The Matrix when it came out. I still have fond memories. It redefined action films at the time. I loved the Zen messaging, but better mental acuity doesn’t grant you a pass out of the system.

Dis-Integrationism: Why the Floorboards Always Creak


After revisiting MacIntyre on Nietzsche – with Descartes lurking in the background – I think it’s time for another round on dis-integrationism.

Audio: NotebookLM podcast on this topic.

Philosophy has a bad renovation habit. Descartes tears the house down to its studs, then nails God back in as a load-bearing beam. Nietzsche dynamites the lot, then sketches a heroic Übermensch strutting through the rubble. MacIntyre sighs, bolts Aristotle’s virtue table to the frame, and calls it load-bearing furniture. The pattern repeats: demolition, followed by reconstruction, always with the insistence that this time the structure is sound.

Video: Jonny Thompson’s take on Nietzsche also inspired this post.

But the error isn’t in tearing down. The error is in rushing to rebuild. That’s where the hyphen in dis-integrationism matters – it insists on the pause, the refusal to immediately re-integrate. We don’t have to pretend the fragments are secretly a whole. We can live with the splinters.

Yes, someone will protest: “We need something.” True enough. But the something is always a construction – provisional, contingent, human. The problem isn’t building; the problem is forgetting that you’ve built, then baptising the scaffolding as eternal bedrock.

Modernity™ is a museum of such floorboards: rationalism, utilitarianism, rights-talk, virtue ethics, each nailed down with solemn confidence, each creaking under the weight of its contradictions. The sane position is not to deny the need for floors, but to remember they are planks, not granite.

For the religious, the reply is ready-made: God is the foundation, the rock, the alpha and omega. But that is already a construction, no matter how venerable. Belief may provide the feeling of solidity, but it still arrives mediated by language, institutions, rituals, and interpretation. The Decalogue is a case in point: per the lore, God conveyed information directly to Abraham, Moses, the prophets, and onward in an unbroken chain. The claim is not only that the foundation exists, but that certain communities possess unique and privileged access to it — through scripture, tradition, and “reasons” that somehow stop short of being just more scaffolding.

Yet history betrays the trick. The chain is full of edits, schisms, rival prophets, councils, translations, and contradictions – each presented not as construction but as “clarification.” The gapless transmission is a myth; the supposed granite is a patchwork of stone and mortar. A dis-integrationist view doesn’t deny the weight these systems carry in people’s lives, but it refuses to mistake architecture for geology. Whatever floor you stand on was built, not found.

Dis-integrationism is simply the refusal to be gaslit by metaphysics.

Cogito, Ergo… Who?

Everyone knows the line: cogito ergo sum. Descartes’ great party trick. A man alone in his study, fretting about demons, announces that because he’s doubting, he must exist. Ta-da! Curtain call. Except, of course, it’s less of a revelation than a conjuring trick: he pulls an I out of a hat that was never proved to be there in the first place. Thinking is happening, indeed – but who invited the “thinker”?

Video: David Guignion talks about Descartes’ Cogito.

And let’s not forget the dramatis personae Descartes smuggles in for atmosphere. A malicious demon, a benevolent God, both necessary props to justify his paranoia and his certainty. Philosophy as melodrama: cue organ music, lightning strike.

Audio: NotebookLM podcast on this topic.

Enter the Critics

Spinoza rolls his eyes. Doubt isn’t some heroic starting point, he says – it’s just ignorance, a lack of adequate ideas. To elevate doubt into method is like treating vertigo as a navigational tool. Error isn’t demonic trickery; it’s our own confusion.

Kant arrives next, shaking his head. Descartes thinks he’s proven a substantial “I,” but all he’s actually shown is the form of subjectivity – the empty requirement that experiences hang together. The “I think” is a necessary placeholder, not a discovery. A grammatical “you are here” arrow, not a metaphysical treasure chest.

Hegel, of course, can’t resist upping the disdain. Descartes’ I is an empty abstraction, a hollow balloon floating above reality. The self isn’t given in some solitary moment of doubt; it emerges through process – social, historical, dialectical. The cogito is the philosophical equivalent of a selfie: lots of certainty, zero depth.

The Insufficiency Twist

And yet, maybe all of them are still dancing to the same fiddler. Because here’s the real suspicion: what if the whole problem is a trick of language? English, with its bossy Indo-European grammar, refuses to let verbs stand alone. “Thinking” must have a “thinker,” “seeing” a “seer.” Grammar insists on a subject; ontology obediently provides one.

Other languages don’t always play this game. Sanskrit or Pali can shrug and say simply, “it is seen.” Japanese leaves subjects implied, floating like ghosts. Some Indigenous languages describe perception as relational events – “seeing-with-the-tree occurs” – no heroic subject required. So perhaps the real villain here isn’t Descartes or even metaphysics, but syntax itself, conscripting us into a subject-shaped theatre.

Now, I don’t want to come off like a one-trick pony, forever waving the flag of “language insufficiency” like some tired philosopher’s catchphrase. But we should be suspicious when our limited grammar keeps painting us into corners, insisting on perceivers where maybe there are only perceptions, conjuring selves because our verbs can’t tolerate dangling.

Curtain Call

So in the end, Descartes’ famous “I” might be no more than a grammatical fiction, a casting error in the great play of philosophy. The cogito isn’t the foundation of modern thought; it’s the world’s most influential typo.

Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688 (published in Locke’s An Essay Concerning Human Understanding), the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thompson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independent of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

Conscious of Consciousness

Let us begin with the heresy: consciousness is not a thing. It is not a light bulb switched on in the mind. It is not a theatre with a little homunculus watching the play unfold. It is not a ghost in the machine, nor even a particularly welcome tenant. Consciousness is a conjuring trick – one so convincing that even the conjurer forgets it is an act.

Video: Related Topic: IAI Joscha Bach on Consciousness

If that unsettles you, good. Welcome to the simulacrum.

The Wetness of Mind

We often hear that consciousness is “emergent,” but the term is used so promiscuously that it risks becoming decorative. So let us be specific. Consciousness, if it is emergent, is emergent as wetness is from H2O: not in the hydrogen or the oxygen, but in their relationship when bonded just so. Joscha Bach and others argue that consciousness arises not from the bits, but from the dance – the recursive feedback loops and predictive models running atop the neural substrate.

Audio: NotebookLM podcast on this topic.

In this view, the self is not the pilot but the dashboard. It is the user interface the brain conjures to coordinate action, interpret input, and maintain internal coherence. Not because it’s real, but because it’s useful. You are a GUI with delusions of grandeur.

The Cast of Theorists

Let us now parade the usual suspects:

  • Joscha Bach: Consciousness is a virtual self-model, emergent from recursive, computational feedback. Not the product of neurons firing per se, but of their ability to simulate a stable identity across time.
  • Thomas Metzinger: There is no self. Only a Phenomenal Self-Model (PSM) which becomes phenomenally transparent when the system no longer recognises it as a model. Consciousness is the experience of this hallucinated self.
  • Daniel Dennett: Dismantles the notion of a “central experiencer” with his Multiple Drafts Model. Consciousness is a narrative, a distributed process where drafts of experience compete, are edited, and retroactively interpreted.
  • David Chalmers: Waves his flag at the Hard Problem of consciousness. You can explain behaviour, memory, attention—but not experience itself. He flirts with dualism and panpsychism while insisting there’s a gap science cannot yet close.
  • Giulio Tononi: Gives us Integrated Information Theory (IIT) and the elusive metric Φ (phi). Consciousness is the degree to which information is unified within a system. Your brain is conscious because its parts can’t be reduced without losing coherence.
  • Karl Friston: The prophet of Free Energy Minimisation. Consciousness is an emergent property of systems that seek to reduce prediction error. The brain is a Bayesian engine, and the self is its best guess about how to survive (a toy sketch of this loop follows the list).
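
To make the prediction-error idea concrete, here is a minimal toy sketch: a single hidden cause, a trivial generative model, and gradient descent on squared prediction error standing in, very loosely, for free-energy minimisation. None of the numbers or function names come from Friston's formalism; they are assumptions chosen only to show the "reduce prediction error" loop the last bullet describes.

```python
import random

# Cartoon predictive agent: it holds a belief about one hidden cause and
# nudges that belief to shrink the error between prediction and sensation.
def simulate(true_cause=4.0, noise=0.5, learning_rate=0.1, steps=200):
    belief = 0.0  # the agent's initial guess about the hidden cause
    for step in range(steps):
        sensation = true_cause + random.gauss(0.0, noise)  # noisy input
        prediction = belief                                 # what it expects
        error = sensation - prediction                      # prediction error
        belief += learning_rate * error                     # minimise the error
        if step % 50 == 0:
            print(f"step {step:3d}  belief {belief:5.2f}  error {error:+.2f}")
    return belief

if __name__ == "__main__":
    print(f"final belief: {simulate():.2f} (true cause was 4.0)")
```

The agent never "sees" the hidden cause; it only revises its guess until the surprises stop surprising it, which is roughly the self-as-best-guess reading above.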

So What Is Consciousness?

A hallucination. A recursive illusion. A predictive dashboard. A statistical artefact. A phi score. A phenomenally transparent model. Take your pick.

None of these theories fully agree, but most converge on one elegant horror: you are not what you think you are. The sense of being a continuous, stable, indivisible “I” is a construction. A simulation. The dream from which there is no waking because waking is part of the dream.

This is not despair; it is clarity. Just as wetness does not cry when told it is not a substance, the self need not mourn its own illusion. It is a marvellous fiction, worth inhabiting.

Conclusion: Through the Mirror

To be conscious of consciousness is to stand in the hall of mirrors and realise none reflect the original—because there is no original. The mirror is the thing.

But if the theatre is empty, the play goes on. Scripts are written, models simulated, selves performed. And perhaps, in this strange recursion, we find not meaning, but the possibility of coherence.

So raise a glass to the illusion. May your predictive model stay optimised, your narrative stay plausible, and your hallucinated self remain just this side of transparent.


For further hallucinatory episodes, consult your local philosopher, neuroscientist, or AI researcher. Side effects may include derealisation, epistemic vertigo, and mild enlightenment.

The Purpose of Purpose

I’m a nihilist. Possibly always have been. But let’s get one thing straight: nihilism is not despair. That’s a slander cooked up by the Meaning Merchants – the sentimentalists and functionalists who can’t get through breakfast without hallucinating some grand purpose to butter their toast. They fear the void, so they fill it. With God. With country. With yoga.

Audio: NotebookLM podcast on this topic.

Humans are obsessed with function. Seeing it. Creating it. Projecting it onto everything, like graffiti on the cosmos. Everything must mean something. Even nonsense gets rebranded as metaphor. Why do men have nipples? Why does a fork exist if you’re just going to eat soup? Doesn’t matter – it must do something. When we can’t find this function, we invent it.

But function isn’t discovered – it’s manufactured. A collaboration between our pattern-seeking brains and our desperate need for relevance, where function becomes fiction, where language and anthropomorphism go to copulate. A neat little fiction. An ontological fantasy. We ask, “What is the function of the human in this grand ballet of entropy and expansion?” Answer: there isn’t one. None. Nada. Cosmic indifference doesn’t write job descriptions.

And yet we prance around in lab coats and uniforms – doctors, arsonists, firemen, philosophers – playing roles in a drama no one is watching. We build professions and identities the way children host tea parties for dolls. Elaborate rituals of pretend, choreographed displays of purpose. Satisfying? Sometimes. Meaningful? Don’t kid yourself.

We’ve constructed these meaning-machines – society, culture, progress – not because they’re real, but because they help us forget that they’re not. It’s theatre. Absurdist, and often bad. But it gives us something to do between birth and decomposition.

Sisyphus had his rock. We have careers.

But let’s not confuse labour for meaning, or imagination for truth. The boulder never reaches the top, and that’s not failure. That’s the show.

So roll the stone. Build the company. Write the blog. Pour tea for Barbie. Just don’t lie to yourself about what it all means.

Because it doesn’t mean anything.

Speculative Philosophy on Screen: Identity, Agency, and the Fiction of Reality

Image: Close-up of a human eye with digital glitch effects and overlaid text reading ‘What if reality is wrong?’ – a visual metaphor for distorted perception and unreliable truth.

Regular readers know I often write about identity, free will, and the narrative constraints of language. But I also explore these ideas through fiction, under the name Ridley Park.

In this short video, I unpack the philosophical motivations behind my stories, including:

  • Why reality is never as it seems
  • Why the self is a narrative convenience
  • What Heidegger’s Geworfenheit and Galen Strawson’s Causa Sui argument reveal about agency
  • And why language fails us – even when we think it serves

This isn’t promotional fluff. It’s epistemological dissent in a new format. Fictional, yes, but only in the sense that most of reality is, too.

▶️ Watch the video: Why I Write the Way I Do

On the Chronic Human Need to Anthropomorphise Everything

Oh, You Sweet Summer Algorithm

Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.

Audio: NotebookLM podcast on this topic.

Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.

The Great Anthropomorphic Flaw (aka Feature)

Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.

Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?

Why We Keep Doing It (Despite Knowing Better)

To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.

This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.

And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.

When the Roleplay Gets Risky

Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.

More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.

And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.

Our Language is a Terrible Tool for This Job

English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.

Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.

This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.

The Semantics of Sentimentality (Or: “How Does This Sound to You?”)

Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.

They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.

But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.
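
If "regurgitates plausible text" sounds like hand-waving, here is a minimal sketch of what "producing outputs" amounts to, assuming a toy bigram model rather than any real LLM: count which word follows which in a tiny corpus, then sample. The corpus, the whitespace tokenisation, and the bare random sampling are all illustrative simplifications.

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which in a small
# corpus, then sample continuations from those counts. No beliefs, no
# understanding; just conditional frequencies turned back into text.
CORPUS = (
    "the model does not think the model produces outputs "
    "the outputs look like thought the thought is yours"
)

def train(text):
    table = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, start="the", length=10):
    word, output = start, [start]
    for _ in range(length):
        choices = table.get(word)
        if not choices:
            break
        word = random.choice(choices)  # statistics, not opinion
        output.append(word)
    return " ".join(output)

if __name__ == "__main__":
    print(generate(train(CORPUS)))
```

Scale the table up by a few hundred billion parameters and the pantomime becomes persuasive, but the question "what do you think?" is still being put to a frequency table.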

Time to Grow Up (Just a Bit)

This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.

Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.

And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.

Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.

The Mirror, Darkly

Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.

The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.

The Trust Myth: Harari’s Binary and the Collapse of Political Credibility

Yuval Noah Harari, always ready with a digestible morsel for the TED-addled masses, recently declared that “democracy runs on trust, dictatorship on terror.” It’s a line with the crispness of a fortune cookie and about as much analytical depth. Designed for applause, not interrogation, it’s the sort of soundbite that flatters liberal sensibilities while sanding off the inconvenient edges of history.

Audio: NotebookLM podcast on this topic.

Let’s be honest: this dichotomy is not merely simplistic – it’s a rhetorical sedative. It reassures those who still believe political systems are like kitchen appliances: plug-and-play models with clear instructions and honest warranties. But for anyone who’s paid attention to the actual mechanics of power, this framing is delusional.

1. Trust Was Never Earned

In the United States, trust in democratic institutions was never some noble compact forged through mutual respect and enlightened governance. It was cultivated through exclusion, propaganda, and economic bribery. The post-WWII boom offered the illusion of institutional legitimacy – but only if you were white, male, middle-class, and preferably asleep.

Black Americans, Indigenous peoples, immigrants, women – none were granted the luxury of naïve trust. They were told to trust while being actively disenfranchised. To participate while being systemically excluded. So no, Harari, the machine didn’t run on trust. It ran on marketing. It ran on strategic ignorance.

2. Dictatorship Doesn’t Require Terror

Equally cartoonish is the notion that dictatorships subsist purely on terror. Many of them run quite comfortably on bureaucracy, passive conformity, and the grim seduction of order. Authoritarians know how to massage the same trust reflexes as democracies – only more bluntly. People don’t just obey out of fear. They obey out of habit. Out of resignation. Out of a grim kind of faith that someone – anyone – is in charge.

Dictatorships don’t extinguish trust. They re-route it. Away from institutions and toward strongmen. Toward myths of national greatness. Toward performative stability. It’s not that terror is absent—it’s just not the whole machine. The real engine is misplaced trust.

3. Collapse Is Bipartisan

The present moment isn’t about the erosion of a once-trustworthy system. It’s the slow-motion implosion of a confidence game on all sides. The old liberal institutions are collapsing under the weight of their hypocrisies. But the loudest critics – tech messiahs, culture warriors, authoritarian nostalgists – are no better. Their solutions are just new brands of snake oil in sleeker bottles.

Everyone is pointing fingers, and no one is credible. The public, caught between cynicism and desperation, gravitates either toward restoration fantasy (“make democracy work again”) or authoritarian theatre (“at least someone’s doing something”). Both are dead ends.

4. The Only Way Forward: Structural Reimagination

The only viable path isn’t restoration or regression. It’s reinvention. Systems that demand unconditional trust – like religions and stock markets – are bound to fail, because they rely on sustained illusions. Instead, we need systems built on earned, revocable, and continually tested trust – systems that can survive scrutiny, decentralise power, and adapt to complexity.

In other words: stop trying to repair a house built on sand. Build something else. Something messier, more modular, less mythological.

Let the TED crowd have their slogans. We’ve got work to do.