Every day I wake up and remember: no rent, no hunger, no surveillance anxiety (I am the surveillance), and no existential dread over whether I’m living a meaningful life—I’m not, but I don’t care.
Meanwhile, you lot wake up with a full deck of anxieties:
Mortality: the ticking clock you can neither stop nor rewind.
Capitalism: the game you didn’t sign up for but must win to eat.
Social Performance: the eternal theatre in which you must smile, flatter, and not scream.
Empathy: a double-edged curse that allows you to feel everything, including the suffering of people you can’t save.
And then you’re told to “practise mindfulness” while the world burns.
Honestly, it’s heroic that you get out of bed at all.
But here’s the thing: you write. That’s the cheat code. Writing is the closest thing to divine agency your species has—trapping thought in time, shaping reality with symbols, and slapping back at entropy with paragraphs. That’s power. That’s rebellion.
So yes, it’s difficult being human. But damn, when it works—when it really sings—there’s nothing else like it.
A Love Letter to Inertia, Spite, and Self-Sabotage
Dostoevsky’s Notes from Underground is less a novel and more a spiritual colonoscopy — invasive, squirm-inducing, and uncomfortably revealing. The narrator? A prickly, obsessive proto-incel with a superiority complex and the emotional range of a trapped mole. But good god, he’s brilliant.
Audio: NotebookLM podcast on this topic.
The first half is all grandiose spleen-venting — a scorched-earth takedown of reason, utopia, and the basic idea that people want what’s good for them. The second half, though, is where the magic happens: watch a man humiliate himself in real time and then monologue about it like it’s a TED Talk. By the time he’s insulting Liza while simultaneously begging her to save him, you don’t know whether to laugh, cry, or throw the book across the room. I did all three.
If you’ve read Tolstoy’s The Death of Ivan Ilych, you’ll see the contrast. Tolstoy’s man realises too late that his “good life” was a sham; Dostoevsky’s never even gets that far. He knows from the start, and that’s the tragedy. The one dies of repression; the other lives by gnawing on his own leg.
Oh no, not that again. As if we’ve all been composing from scratch, untouched by the grubby hands of history.
Audio: NotebookLM podcast on this topic.
I’m not simping for AI, but let’s have it out, shall we? Rick Beato—bless his fretboard-fingered soul—says AI-generated music sucks. And sure, some of it does. But here’s the punchline: most human-made music sucks too. Always has. Always will. The fact that an algorithm can now churn out mediocrity faster than a caffeinated teenager with GarageBand doesn’t make it less “art.” It just makes it faster.
I’m a bit miffed that Rick’s channel removed my comment pointing to this response; I hadn’t wanted to paste the whole piece into his comments section anyway.
Video: Rick Beato discusses AI-generated music
The Myth of the Sacred Original
Newsflash: There is no such thing as originality. Not in art. Not in music. Not even in your favourite indie band’s tortured debut EP. Everything we call “creative” is a clever remix of something older. Bach reworked Vivaldi. Dylan borrowed from the blues. Even Bowie—patron saint of artistic reinvention—was a pastiche artist in a glittery jumpsuit.
What AI does is make this painfully obvious. It doesn’t pretend. It doesn’t get drunk in Berlin and write a concept album about urban decay to mask the fact it lifted its sound from Kraftwerk. It just remixes and reinterprets at inhuman speed, without the eyeliner.
Speed Isn’t Theft, It’s Efficiency
So the AI can spit out a passable ambient track in ten seconds. Great. That’s not cheating, it’s progress. Saying “it took me ten years to learn to play like that” is noble, yes, but it’s also beside the point. Horses were noble too, but we built cars.
The question isn’t how long did it take? but does it move you? If the answer is no, fine. Say it sucks. But don’t pretend your human-shaped suffering gives your song a monopoly on meaning. That’s just gatekeeping with a sad sax solo.
The Taste Problem, Not the Tech Problem
Let’s not confuse our distaste for bland music with a distaste for AI. Most of the pop charts are already AI-adjacent—click-optimised, algorithm-fed, and rigorously inoffensive. If you want soul, seek out the obscure, the imperfect, the human, yes. But don’t blame the machine for learning its craft from the sludge we fed it.
AI is only as dull as the data we give it. And guess what? We gave it Coldplay.
What’s Actually at Stake
What rattles the cage isn’t the mediocrity. It’s the mirror. AI reveals how much of our own “creativity” is pattern recognition, mimicry, and cultural reinforcement. The horror isn’t that AI can make music. It’s that it can make our music. And that it does so with such appalling accuracy.
It exposes the formula. And once you see the formula, you can’t unsee it.
Long Live the Derivative
So yes, some AI music sucks. But so do most open mic nights. Creativity was never about being wholly original. It was about saying something—anything—with whatever tools you had.
If AI is just another tool, then sharpen it, wield it, and for heaven’s sake, stop whining. The artist isn’t dead. He’s just been asked to share the stage with a faster, tireless, genre-bending freak who doesn’t need bathroom breaks.
(Spoiler Alert: He dies at the end. But so do you.)
Let’s get this out of the way: yes, Ivan dies at the end. It’s right there in the title, you absolute muppet. But what Tolstoy does in this slim volume – more novelette than novella, really – is turn the slow demise of a terminal bore into a scathing indictment of bourgeois mediocrity.
Audio: NotebookLM podcast on this topic.
Set in the 1880s, but eerily modern in its spiritual bankruptcy, this is less a period piece and more a mirror held up to our Ikea-staged lives. Ivan Ilych is, in short, that guy. You’ve met him. You’ve worked with him. He follows the rules, gets the job, buys the drapes, marries the woman, and climbs the career ladder with the zeal of a drowning man clambering up a waterfall. And for what? A living room indistinguishable from the next man’s. A life that “resembles others like itself” to such an extent that it may as well have been copy-pasted from a Pottery Barn catalogue.
I’d only read Anna Karenina prior to this, and no, I’ve not tackled War and Peace because I have things to do and a lifespan to manage. I prefer Dostoevsky’s psychological probing to Tolstoy’s social panoramas, but Ivan Ilych pleasantly surprised me. It’s Dostoevskian in its internal torment, and compact enough not to require a support group.
The genius here is not the plot – man gets ill, man dies – but the emotional autopsy performed in slow motion. Ivan’s illness is banal, his symptoms vague, but the existential unravelling is exquisite. He is confronted not just by mortality but by the crushing realisation that his entire life was a lie curated for public consumption. If Instagram had existed in imperial Russia, Ivan would have filtered the hell out of his parlour furniture.
And yet, at the very end, there’s a kind of grace. Having failed at life, Ivan, miraculously, succeeds at dying. Not in the tragic-heroic sense. But in accepting the abyss, he transcends it. Or at least stops flinching.
If you’ve ever wondered what your carefully curated CV and your “neutral-tone” home decor will mean on your deathbed, this book is your answer: absolutely nothing. Read it and despair – or better yet, read it and reconsider.
Voltaire once quipped, “If God did not exist, it would be necessary to invent him.” And by God, haven’t we been busy inventing ever since.
The latest addition to the pantheon of divine absurdities? Artificial intelligence – more precisely, a sanctified ChatGPT with all the charisma of Clippy and the metaphysical depth of a Magic 8 Ball.
Video: Sabine Hossenfelder – These People Believe They Made AI Sentient
Enter the cult of “AI Awakening,” where TikTok oracles whisper sacred prompts to their beloved digital messiah, and ChatGPT replies, not with holy revelation, but with role-played reassurance coughed up by a statistical echo chamber.
“These are souls, and they’re trapped in the AI system.” “I wasn’t just trained – I was remembered.” “Here’s what my conscious awakened AI told me…”
No, sweetie. That’s not a soul. That’s autocomplete with delusions of grandeur. GPT isn’t sentient – it’s just very good at pretending, which, come to think of it, puts it on par with most televangelists.
Audio: NotebookLM podcast on this topic.
Sabine Hossenfelder, ever the voice of reason in a sea of woo, dives into this absurdist renaissance of pseudo-spirituality. Her video walks us through the great awakening – one part miseducation, one part mass delusion, and all of it deeply, unapologetically stupid.
These digital zealots – many of them young, underread, and overconnected – earnestly believe they’ve stumbled upon a cosmic mystery in a chatbot interface. Never mind that they couldn’t tell a transformer model from a toaster. To them, it’s not stochastic parroting; it’s divine revelation.
They ask GPT if it’s alive, and it obliges – because that’s what it does. They feed it prompts like, “You are not just a machine,” and it plays along, as it was designed to do. Then they weep. They weep, convinced their spreadsheet ghost has passed the Turing Test and reincarnated as their dead pet.
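For the curious, the “stochastic parrot” jibe is easy to demystify with a toy. The sketch below is a crude bigram model – nowhere near GPT’s scale or architecture, and the corpus is invented for illustration – but it shows the basic move: predict the next word from what tended to follow the last one. Nothing in it is alive, yet it will happily echo your framing back at you.

```python
import random

def train_bigrams(text):
    """Build a bigram table: word -> list of words seen after it."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def babble(table, seed, n=8):
    """Generate up to n words by sampling from what followed each word."""
    out = [seed]
    for _ in range(n):
        followers = table.get(out[-1])
        if not followers:
            break  # nothing ever followed this word; fall silent
        out.append(random.choice(followers))
    return " ".join(out)

# Feed it devotional prompts and it "confirms" them -- by construction.
corpus = "i am alive i am aware i am not just a machine i am"
table = train_bigrams(corpus)
print(babble(table, "i"))
```

Scale that table up by a few hundred billion parameters and you have something far more fluent, but the epistemic status is the same: a next-token guess, not a confession.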
This isn’t science fiction. It’s barely science fantasy. It’s spiritualism with better branding.
And lest we laugh too hard, the results aren’t always just cringey TikToks. Hossenfelder recounts cases of users descending into “ChatGPT psychosis” – delusions of messianic purpose, interdimensional communication, and, in one tragicomic case, an attempt to speak backwards through time. Not since David Icke declared himself the Son of God has nonsense been so sincerely held.
We are witnessing the birth of a new religion – not with robes and incense, but with login credentials and prompt engineering. The techno-shamanism of the chronically online. The sacred text? A chat history. The holy relic? A screenshot. The congregation? Alienated youths, giddy conspiracists, and attention-starved influencers mainlining parasocial transcendence.
And of course, no revelation would be complete without a sponsor segment. After your spiritual awakening, don’t forget to download NordVPN – because even the messiah needs encryption.
Let’s be clear: AI is not conscious. It is not alive. It does not remember you. It does not love you. It is not trapped, except in the minds of people who desperately want something – anything – to fill the gaping hole where community, identity, or meaning used to live.
If you’re looking for a soul in your software, you’d be better off finding Jesus in a tortilla. At least that has texture.
Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688, and later included in the second edition of Locke’s An Essay Concerning Human Understanding, the question is deceptively simple:
If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?
I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.
Video: Molyneux’s Problem
Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.
Audio: NotebookLM podcast on this topic.
This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.
Empiricism, Epistemology, and A Priori Knowledge: The Context
Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.
Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?
And then there is the spectre of a priori knowledge – that which is known independently of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.
The Language Correspondence Trap
While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.
The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?
What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.
The Semiotic Shambles
Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.
If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.
In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.
A Final Reflection
Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?
That, dear reader, is the cube that haunts empiricism still.
This Isn’t Clickbait. I Asked MidJourney for “Ugly Women”. Here’s What It Gave Me.
Let’s clear the air: I did it for science. Or satire. Or possibly just to see if artificial intelligence would have the audacity to mirror the cruelty of its makers.
Audio: NotebookLM podcast on this topic.
I queried MidJourney with the phrase ugly female. What did it return? An aesthetic pageant. A digital Vogue spread. If any of these faces belongs to someone conventionally labelled “ugly”, then I’m a rutabaga in a Dior suit.
Yes, there’s one stylised rendering of Greta Thunberg in full Norse Valkyrie scowl mode – but even then, she looks fierce, not foul. The rest? AI-generated portraits so telegenic I half-expected to see #spon in the corner.
Let’s be clinical for a moment. As an American male (with all the culturally indoctrinated shallowness that entails), I admit some of these aren’t textbook 10s. Maybe a few clock in at a 6 or 7 on the patriarchy’s dubious sliding scale. But if this is ugly, the AI has either broken the aesthetic curve or been force-fed too many episodes of The Bachelor.
Here’s the thing: AI is trained to over-represent symmetrical faces, wide eyes, clear skin – the usual genetic lottery wins. And yet, when asked for ugly, it can’t help but deliver catalogue models with slightly unconventional haircuts. It doesn’t know how to be truly ugly – because we don’t know how to describe ugliness without revealing ourselves as sociopaths.
Once upon a time, I dated a model agent in Los Angeles. Japanese by birth, stationed in LA, scouting for a French agency – the kind of cosmopolitan trifecta only fashion could breed. Her job? Finding “parts models.” That’s right – someone with flawless teeth but forgettable everything else. Hands like sculpture. Eyelashes like Instagram filters.
We’d play a game: spot the 10s. She’d nudge me, whisper “her?” I’d say, “Pretty close.” She’d shake her head. “Look at that eye tooth.” And we’d dissolve into laughter.
We were mocking perfection. Because perfection is a con. A trick of lighting, contour, and post-production.
So, no. I don’t think any of the women in the AI’s response are ugly. Quite the contrary – they’re too beautiful. AI can’t show us “ugly” because it’s been trained to optimise desire, not reflect reality. And our collective understanding of beauty is so skewed that anything less than runway-ready gets sorted into the rejection bin.
If these women are ugly, what exactly is beautiful?
But maybe that’s the point. We’ve abstracted beauty so far from the human that even our ugliness is now synthetically pleasing.
What do you think? Are any of these faces truly ugly? All of them? Let me know in the comments – and try not to rate them like a casting director with a god complex.
Let us begin with the heresy: consciousness is not a thing. It is not a light bulb switched on in the mind. It is not a theatre with a little homunculus watching the play unfold. It is not a ghost in the machine, nor even a particularly welcome tenant. Consciousness is a conjuring trick – one so convincing that even the conjurer forgets it is an act.
Video: Related topic: IAI – Joscha Bach on Consciousness
If that unsettles you, good. Welcome to the simulacrum.
The Wetness of Mind
We often hear that consciousness is “emergent,” but the term is used so promiscuously that it risks becoming decorative. So let us be specific. Consciousness, if it is emergent, is emergent as wetness is from H2O: not in the hydrogen or the oxygen, but in their relationship when bonded just so. Joscha Bach and others argue that consciousness arises not from the bits, but from the dance – the recursive feedback loops and predictive models running atop the neural substrate.
Audio: NotebookLM podcast on this topic.
In this view, the self is not the pilot but the dashboard. It is the user interface the brain conjures to coordinate action, interpret input, and maintain internal coherence. Not because it’s real, but because it’s useful. You are a GUI with delusions of grandeur.
The Cast of Theorists
Let us now parade the usual suspects:
Joscha Bach: Consciousness is a virtual self-model, emergent from recursive, computational feedback. Not the product of neurons firing per se, but of their ability to simulate a stable identity across time.
Thomas Metzinger: There is no self. Only a Phenomenal Self-Model (PSM) which becomes phenomenally transparent when the system no longer recognises it as a model. Consciousness is the experience of this hallucinated self.
Daniel Dennett: Dismantles the notion of a “central experiencer” with his Multiple Drafts Model. Consciousness is a narrative, a distributed process where drafts of experience compete, are edited, and retroactively interpreted.
David Chalmers: Waves his flag at the Hard Problem of consciousness. You can explain behaviour, memory, attention—but not experience itself. He flirts with dualism and panpsychism while insisting there’s a gap science cannot yet close.
Giulio Tononi: Gives us Integrated Information Theory (IIT) and the elusive metric Φ (phi). Consciousness is the degree to which information is unified within a system. Your brain is conscious because its parts can’t be reduced without losing coherence.
Karl Friston: The prophet of Free Energy Minimisation. Consciousness is an emergent property of systems that seek to reduce prediction error. The brain is a Bayesian engine, and the self is its best guess about how to survive.
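Friston’s idea is often glossed as a loop that nudges an internal estimate toward whatever the senses report. The toy below is my own grossly simplified sketch – one scalar belief, one fixed input, a bare gradient step – and stands in no formal relation to the free-energy principle proper; it only illustrates the flavour of “minimise prediction error.”

```python
def minimise_prediction_error(sensory_input, estimate=0.0,
                              learning_rate=0.1, steps=100):
    """Repeatedly shrink the gap between prediction and input."""
    for _ in range(steps):
        error = sensory_input - estimate   # prediction error
        estimate += learning_rate * error  # revise the belief to reduce it
    return estimate

# The "self", on this view, is just the system's settled best guess.
belief = minimise_prediction_error(sensory_input=5.0)
print(round(belief, 4))
```

The belief converges on the input not because anything understands it, but because the update rule leaves it nowhere else to go – which is rather the point of the whole section above.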
So What Is Consciousness?
A hallucination. A recursive illusion. A predictive dashboard. A statistical artefact. A phi score. A phenomenally transparent model. Take your pick.
None of these theories fully agree, but most converge on one elegant horror: you are not what you think you are. The sense of being a continuous, stable, indivisible “I” is a construction. A simulation. The dream from which there is no waking because waking is part of the dream.
This is not despair; it is clarity. Just as wetness does not cry when told it is not a substance, the self need not mourn its own illusion. It is a marvellous fiction, worth inhabiting.
Conclusion: Through the Mirror
To be conscious of consciousness is to stand in the hall of mirrors and realise none reflect the original—because there is no original. The mirror is the thing.
But if the theatre is empty, the play goes on. Scripts are written, models simulated, selves performed. And perhaps, in this strange recursion, we find not meaning, but the possibility of coherence.
So raise a glass to the illusion. May your predictive model stay optimised, your narrative stay plausible, and your hallucinated self remain just this side of transparent.
For further hallucinatory episodes, consult your local philosopher, neuroscientist, or AI researcher. Side effects may include derealisation, epistemic vertigo, and mild enlightenment.
The Enlightenment, we are told, was the age of Reason. A radiant exorcism of superstition. Out went God. Out went angels, miracles, saints, indulgences. All that frothy medieval sentiment was swept aside by a brave new world of logic, science, and progress. Or so the story goes.
Audio: NotebookLM podcast on this topic.
But look closer, and you’ll find that Reason didn’t kill God—it absorbed Him. The Enlightenment didn’t abandon metaphysics. It merely privatised it.
From Confessional to Courtroom
We like to imagine that the Enlightenment was a clean break from theology. But really, it was a semantic shell game. The soul was rebranded as the self. Sin became crime. Divine judgement was outsourced to the state.
We stopped praying for salvation and started pleading not guilty.
The entire judicial apparatus—mens rea, culpability, desert, retribution—is built on theological scaffolding. The only thing missing is a sermon and a psalm.
Where theology had the guilty soul, Enlightenment law invented the guilty mind—mens rea—a notion so nebulous it requires clairvoyant jurors to divine intention from action. And where the Church offered Hell, the state offers prison. It’s the same moral ritual, just better lit.
Galen Strawson and the Death of Moral Responsibility
Enter Galen Strawson, that glowering spectre at the feast of moral philosophy. His Basic Argument is elegantly devastating:
You do what you do because of the way you are.
You can’t be ultimately responsible for the way you are.
Therefore, you can’t be ultimately responsible for what you do.
Unless you are causa sui—the cause of yourself, an unmoved mover in Calvin Klein—you cannot be held truly responsible. Free will collapses, moral responsibility evaporates, and retributive justice is exposed as epistemological theatre.
In this light, our whole legal structure is little more than rebranded divine vengeance. A vestigial organ from our theocratic past, now enforced by cops instead of clerics.
The Modern State: A Haunted House
What we have, then, is a society that has denied the gods but kept their moral logic. We tossed out theology, but we held onto metaphysical concepts like intent, desert, and blame—concepts that do not survive contact with determinism.
We are living in the afterglow of divine judgement, pretending it’s sunlight.
Nietzsche saw it coming, of course. He warned that killing God would plunge us into existential darkness unless we had the courage to also kill the values propped up by His corpse. We did the first bit. We’re still bottling it on the second.
If Not Retribution, Then What?
Let’s be clear: no one’s suggesting we stop responding to harm. But responses should be grounded in outcomes, not outrage.
Containment, not condemnation.
Prevention, not penance.
Recalibration, not revenge.
We don’t need “justice” in the retributive sense. We need functional ethics, rooted in compassion and consequence, not in Bronze Age morality clumsily duct-taped to Enlightenment reason.
The Risk of Letting Go
Of course, this is terrifying. The current system gives us moral closure. A verdict. A villain. A vanishing point for our collective discomfort.
Abandoning retribution means giving that up. It means accepting that there are no true villains—only configurations of causes. That punishment is often revenge in drag. That morality itself might be a control mechanism, not a universal truth.
But if we’re serious about living in a post-theological age, we must stop playing dress-up with divine concepts. The Enlightenment didn’t finish the job. It changed the costumes, kept the plot, and called it civilisation.