Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688 (and later included in the second edition of Locke’s An Essay Concerning Human Understanding), the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thompson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independently of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

The Purpose of Purpose

I’m a nihilist. Possibly always have been. But let’s get one thing straight: nihilism is not despair. That’s a slander cooked up by the Meaning Merchants – the sentimentalists and functionalists who can’t get through breakfast without hallucinating some grand purpose to butter their toast. They fear the void, so they fill it. With God. With country. With yoga.

Audio: NotebookLM podcast on this topic.

Humans are obsessed with function. Seeing it. Creating it. Projecting it onto everything, like graffiti on the cosmos. Everything must mean something. Even nonsense gets rebranded as metaphor. Why do men have nipples? Why does a fork exist if you’re just going to eat soup? Doesn’t matter – it must do something. When we can’t find this function, we invent it.

But function isn’t discovered – it’s manufactured. A collaboration between our pattern-seeking brains and our desperate need for relevance, where function becomes fiction, where language and anthropomorphism go to copulate. A neat little fiction. An ontological fantasy. We ask, “What is the function of the human in this grand ballet of entropy and expansion?” Answer: there isn’t one. None. Nada. Cosmic indifference doesn’t write job descriptions.

And yet we prance around in lab coats and uniforms – doctors, arsonists, firemen, philosophers – playing roles in a drama no one is watching. We build professions and identities the way children host tea parties for dolls. Elaborate rituals of pretend, choreographed displays of purpose. Satisfying? Sometimes. Meaningful? Don’t kid yourself.

We’ve constructed these meaning-machines – society, culture, progress – not because they’re real, but because they help us forget that they’re not. It’s theatre. Absurdist, and often bad. But it gives us something to do between birth and decomposition.

Sisyphus had his rock. We have careers.

But let’s not confuse labour for meaning, or imagination for truth. The boulder never reaches the top, and that’s not failure. That’s the show.

So roll the stone. Build the company. Write the blog. Pour tea for Barbie. Just don’t lie to yourself about what it all means.

Because it doesn’t mean anything.

The Indexing Abyss: A Cautionary Tale in Eight Chapters

There, I said it.

I’m almost finished with A Language Insufficiency Hypothesis, the book I’ve been labouring over for what feels like the gestation period of a particularly reluctant elephant. To be clear: the manuscript is done. Written. Edited. Blessed. But there remains one final circle of publishing hell—the index.

Now, if you’re wondering how motivated I am to return to indexing, consider this: I’m writing this blog post instead. If that doesn’t scream avoidance with an airhorn, nothing will.

Audio: NotebookLM podcast on this topic.

I began indexing over a month ago. I made it through two chapters of eight, then promptly wandered off to write a couple of novellas. As you do. One started as a short story—famous last words—and evolved into a novella. The muse struck again. Another “short story” appeared, and like an unattended sourdough starter, it fermented into a 15,000-word novelette. Apparently, I write short stories the way Americans pour wine: unintentionally generous.

With several unpublished manuscripts loitering on my hard drive like unemployed theatre majors, I figured it was time to release one into the wild. So I did. I published the novelette to Kindle, and just today, the paperback proof landed in my postbox like a smug little trophy.

And then, because I’m an unrepentant completionist (or a masochist—jury’s out), I thought: why not release the novella too? I’ve been told novellas and novelettes are unpopular due to “perceived value.” Apparently, people would rather buy a pound of gristle than 200 grams of sirloin. And yet, in the same breath, they claim no one has time for long books anymore. Perhaps these are different tribes of illiterates. I suppose we’ll find out.

Let’s talk logistics. Writing a book is only the beginning—and frankly, it’s the easy part. Fingers to keyboard, ideas to page. Done. I use Word, like most tragically conventional authors. Planning? Minimal. These were short stories, remember? That was the plan.

Next comes layout. Enter Adobe InDesign—because once you’ve seen what Word does to complex layouts, you never go back. Export to PDF, pray to the typographic gods, and move on.

Then there’s the cover. I lean on Illustrator and Photoshop. Photoshop is familiar, like a worn-in shoe; Illustrator is the smug cousin who turns up late but saves the day with scalable vectors. This time, I used Illustrator for the cover—lesson learnt from past pixelation traumas. Hardback to paperback conversion? A breeze when your artwork isn’t made of crayon scribbles and hope.

Covers, in case you’ve never assembled one, are ridiculous. Front. Back. Spine. Optional dust jacket if you’re feeling fancy (I wasn’t). You need titles, subtitles, your name in a legible font, and let’s not forget the barcode, which you will place correctly on the first attempt exactly never.

Unlike my first novel, where I enlisted someone with a proper design eye to handle the cover text, this time I went full minimalist. Think Scandinavian furniture catalogue meets existential despair. Classy.

Once the cover and interior are done, it’s time to wrestle with the publishing platforms. Everything is automated these days—provided you follow their arcane formatting commandments, avoid forbidden fonts, and offer up your soul. Submitting each book takes about an hour, not including the time lost choosing a price that balances “undervalued labour” and “won’t scare away cheapskates.”

Want a Kindle version? That’s another workflow entirely, full of tortured formatting, broken line breaks, and wondering why your chapter headings are now in Wingdings. Audiobooks? That’s a whole other circus, with its own animals and ringmasters. Honestly, it’s no wonder authors hire publishers. Or develop drinking problems.

But I’m stubborn. Which brings us full circle.

I’ve now got two books heading for daylight, a few more waiting in the wings, and one bloody non-fiction beast that won’t see release until I finish the damn index. No pseudonym this time. No hiding. Just me, owning my sins and hoping the final product lands somewhere between “insightful” and “mercifully short.”

So yes, life may well be a journey. But indexing is the bit where the satnav breaks, the road floods, and the boot falls off the car. Give me the destination any day. The journey can fuck right off.

Sustenance: A Book About Aliens, Language, and Everything You’re Getting Wrong

Violet aliens on a farm

So, I wrote a book and published it under Ridley Park, the pseudonym I use for fiction.

It has aliens. But don’t get excited—they’re not here to save us, probe us, or blow up the White House. They’re not even here for us.

Which is, frankly, the point.

Audio: NotebookLM podcast on this topic.

The book’s called Sustenance, and while it’s technically speculative fiction, it’s more about us than them. Or rather, it’s about how we can’t stop making everything about us—even when it shouldn’t be. Especially when it shouldn’t be.

Let’s talk themes. And yes, we’re using that word like academics do: as a smokescreen for saying uncomfortable things abstractly.

Language: The Original Scam

Language is the ultimate colonial tool. We call it communication, but it’s mostly projection. You speak. You hope. You assume. You superimpose meaning on other people like a cling film of your own ego.

Sustenance leans into this—not by showing a breakdown of communication, but by showing what happens when communication was never mutual in the first place. When the very idea of “meaning” has no purchase. It’s not about mishearing—it’s about misbeing.

Culture: A Meme You Were Born Into

Culture is the software you didn’t choose to install, and probably can’t uninstall. Most people treat it like a universal law—until they meet someone running a different OS. Cue confusion, arrogance, or violence.

The book explores what happens when cultural norms aren’t shared, and worse, aren’t even legible. Imagine trying to enforce property rights on beings who don’t understand “ownership.” It’s like trying to baptise a toaster.

Sex/Gender: You Keep Using Those Words…

One of the quiet joys of writing non-human characters is discarding human assumptions about sex and gender—and watching readers squirm.

What if sex wasn’t about power, pleasure, or identity? What if it was just a biological procedure, like cell division or pruning roses? Would you still be interested? Would you still moralise about it?

We love to believe our sex/gender constructs are inevitable. They’re not. They’re habits—often bad ones.

Consent: Your Framework Is Showing

Consent, as we use it, assumes mutual understanding, shared stakes, and equivalent agency. Remove any one of those and what’s left?

Sustenance doesn’t try to solve this—it just shows what happens when those assumptions fall apart. Spoiler: it’s not pretty, but it is honest.

Projection: The Mirror That Lies

Humans are deeply committed to anthropocentrism. If it walks like us, or flinches like us, it must be us. This is why we get so disoriented when faced with the truly alien: it won’t dance to our tune, and we’re left staring at ourselves in the funhouse mirror.

This isn’t a book about aliens.

It’s a book about the ways we refuse to see what’s not us.

Memory: The Autobiography of Your Justifications

Memory is not a record. It’s a defence attorney with a narrative license. We rewrite the past to make ourselves look consistent, or innocent, or right.

In Sustenance, memory acts less as a tether to truth and more as a sculpting tool—a way to carve guilt into something manageable. Something you can live with. Until you can’t.

In Summary: It’s Not About Them. It’s About You.

If that sounds bleak, good. It’s meant to.

But it’s also a warning: don’t get too comfortable in your own categories. They’re only universal until you meet someone who doesn’t share them.

Like I said, it’s not really about the aliens.

It’s about us.


If you enjoy fiction that’s more unsettling than escapist, more question than answer, you might be interested in Sustenance. It’s live on Kindle now for the cost of a regrettable coffee:

📘 Sustenance on Amazon US
Also available in the UK, DE, FR, ES, IT, NL, JP, BR, CA, MX, AU, and IN—because alienation is a universal language.

On Ishiguro, Cioran, and Whatever I Think I’m Doing

Sora-generated image of Emil Cioran and Kazuo Ishiguro reading a generic book together

Having just finished Never Let Me Go by Kazuo Ishiguro, I’ve now cracked open my first taste of Cioran—History and Utopia. You might reasonably ask why. Why these two? And what, if anything, do they have in common? Better yet—what do the three of us have in common?

Audio: NotebookLM podcast on this topic.

Recently, I finished writing a novella titled Propensity (currently gathering metaphorical dust on the release runway). Out of curiosity—or narcissism—I fed it to AI and asked whose style it resembled. Among the usual suspects were two names I hadn’t yet read: Ishiguro and Cioran. I’d read the others and understood the links. These two, though, were unknown quantities. So I gave them a go.

Ishiguro is perhaps best known for The Remains of the Day, which, like Never Let Me Go, got the Hollywood treatment. I chose the latter, arbitrarily. I even asked ChatGPT to compare both books with their cinematic counterparts. The AI was less than charitable, describing Hollywood’s adaptations as bastardised and bowdlerised—flattened into tidy narratives for American palates too dim to digest ambiguity. On this, we agree.

What struck me about Never Let Me Go was its richly textured mundanity. That’s apparently where AI saw the resemblance to Propensity. I’m not here to write a book report—partly because I detest spoilers, and partly because summaries miss the point. It took about seven chapters before anything ‘happened’, and then it kept happening. What had at first seemed like a neurotic, wandering narrative from the maddeningly passive Kathy H. suddenly hooked me. The reveals began to unfold. It’s a book that resists retelling. It demands firsthand experience. A vibe. A tone. A slow, aching dread.

Which brings me neatly to Cioran.

History and Utopia is a collection of essays penned in French (not his mother tongue, but you’d never guess it) while Cioran was holed up in postwar Paris. I opted for the English translation—unapologetically—and was instantly drawn in. His prose? Electric. His wit? Acidic. If Ishiguro was a comparison of style, then Cioran was one of spirit. Snark, pessimism, fatalistic shrugs toward civilisation—finally, someone speaking my language.

Unlike the cardboard cut-outs of Cold War polemics we get from most Western writers of the era, Cioran’s take is layered, uncomfortably self-aware, and written by someone who actually fled political chaos. There’s no naïve idealism here, no facile hero-villain binaries. Just a deeply weary intellect peering into the abyss and refusing to blink. It’s not just what he says, but the tone—the curled-lip sneer at utopian pretensions and historical self-delusions. If I earned even a drop of that comparison, I’ll take it.

Both Ishiguro and Cioran delivered what I didn’t know I needed: the reminder that some writers aren’t there to tell you a story. They’re there to infect you with an atmosphere. An idea. A quiet existential panic you can’t shake.

I’ve got what I came for from these two, though I suspect I’ll be returning, especially to Cioran. Philosophically, he’s my kind of bastard. I doubt this’ll be my last post on his work.

The Emperor’s New Models: Box, Lawson, and the Death of Truth

We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.

Box: The Gentle Assassin of Certainty

George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” That precise phrase, however, does not appear in his 1976 paper Science and Statistics. What he wrote there was:

“Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration.”

The “some are useful” flourish came later – supplied by Box himself in subsequent papers, a spoonful of sugar for the bitter pill. Nevertheless, Box deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. They are provisional guesses, finger-paintings smeared across the rough surface of the unknown.

Audio: NotebookLM podcast on this topic.

Lawson: The Arsonist Who Burned the Map

Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection, a case he has pressed repeatedly in recent lectures.

Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.

Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.

The Case of the Celestial Spheres

Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.

Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.

Summary Table: Lawson’s View on Models and Truth

Conclusion

Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.

We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.

Unwilling Steelman, Part I

A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.

Audio: NotebookLM podcast discussing this topic.

Constraint Is Not Freedom

The ergonomic cage of compatibilist comfort

“You are not playing the piano. You are the piano, playing itself — then applauding.”

Compatibilists — those philosophical locksmiths determined to keep the myth of free will intact — love to say that constraint doesn’t contradict freedom. That a system can still be “free” so long as it is coherent, self-reflective, and capable of recursive evaluation.

In this view, freedom doesn’t require being uncaused — it only requires being causally integrated. You don’t need to be sovereign. You just need to be responsive.

“The pianist may not have built the piano — but she still plays it.”

It sounds lovely.

It’s also false.

You Are the Piano

This analogy fails for a simple reason: there is no pianist. No ghost in the gears. No homunculus seated behind the cortex, pulling levers and composing virtue. There is only the piano — complex, self-modulating, exquisitely tuned — but self-playing nonetheless.

The illusion of choice is merely the instrument responding to its state: to its internal wiring, environmental inputs, and the accumulated sediment of prior events. What feels like deliberation is often delay. What feels like freedom is often latency.

Recursive ≠ Free

Ah, but what about reflection? Don’t we revise ourselves over time?

We do. But that revision is itself conditioned. You didn’t choose the capacity to reflect. You didn’t choose your threshold for introspection. If you resist a bias, it’s because you were predisposed — by some cocktail of education, temperament, or trauma — to resist it.

A thermostat that updates its own algorithm is still a thermostat.

It doesn’t become “free” by being self-correcting. It becomes better adapted. Likewise, human introspection is just adaptive determinism wearing a philosophical hat.
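The thermostat point can be made concrete with a toy sketch (purely illustrative — the class, numbers, and update rule are invented for this post): a controller that rewrites its own parameter in response to past error is still a pure function of its state and inputs. Run it twice on the same history and you get the same “decisions”, every time.

```python
# Toy model: a thermostat that revises its own control algorithm.
# It "reflects" (updates its gain based on past error), yet every
# update is itself a deterministic function of prior state + input.

class SelfTuningThermostat:
    def __init__(self, target, gain=1.0):
        self.target = target  # desired temperature
        self.gain = gain      # control parameter the device rewrites

    def step(self, temperature):
        error = self.target - temperature
        # "Introspection": the device modifies its own algorithm...
        self.gain += 0.1 * abs(error)
        # ...but its output remains fully determined by state + input.
        return self.gain * error

def run(readings):
    t = SelfTuningThermostat(target=20.0)
    return [t.step(r) for r in readings]

readings = [18.0, 19.0, 22.0, 20.5]
# Identical histories yield identical behaviour: self-correcting,
# better adapted — and not one bit freer for it.
assert run(readings) == run(readings)
```

Recursion all the way down, and determinism all the way down with it.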

Constraint Isn’t Contradiction — It’s Redefinition

Compatibilists smuggle in a quieter, defanged version of freedom: not the ability to do otherwise, but the ability to behave “like yourself.”

But this is freedom in retrospect, not in action.
If all freedom means is “acting in accordance with one’s programming,” then Roombas have free will.

If we stretch the term that far, it breaks — not loudly, but with the sad elasticity of a word losing its shape.

TL;DR: The Pianist Was Always a Myth

  • You didn’t design your mental architecture.
  • You didn’t select your desires or dispositions.
  • You didn’t choose the you that chooses.

So no — you’re not playing the piano.
You are the piano — reverberating, perhaps beautifully, to stimuli you didn’t summon and cannot evade.

📅 Coming Tomorrow

Continuity Is Not Identity

What if you are not who you were — but simply what you’ve become?

Hungering for Morality: When Right and Wrong Are Just a Matter of PR

Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I took issue with the very notion that a relative, if not subjective, concept could be recovered.

The OP asks if The Hunger Games are a classic. I’d argue that they are a categorical classic, like Harry Potter, within the category of YA fiction.

Audio: NotebookLM podcast discussing this topic.

The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.

Video: YouTube video that spawned this topic.

Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.

Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.

This is not moral entropy; it’s moral mutation.

Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.

This isn’t the death of morality. It’s a hostile takeover.

The Moral PR Machine

If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.

In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.

The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.

Katniss: Not a Saint, But a Saboteur

Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.

She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.

Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.

So, Has Morality Declined?

Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.

And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.

Conclusion: The Real Hunger

The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.

If you feel sickened by what you see in Panem, it’s not because morality has vanished.

It’s because it hasn’t.

When Suspension of Disbelief Escapes the Page

Welcome to the Age of Realism Fatigue

Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.

Audio: NotebookLM podcast of this page content.

This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.

But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.

The Fictional Contract: Broken but Not Forgotten

Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.

The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.

Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.

But Then Real Life Said, “Hold My Beer.”

At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.

We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.

You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.

The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.

Cognitive Gymnastics: Welcome to the Cirque du Surréalisme

What happens to a psyche caught in this funhouse? Nothing good.

Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.

Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.

The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.

Culture in Crisis: A Symptom or the Disease?

This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.

We’ve become simultaneously more skeptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.

In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.

So what kind of society does this create?

One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.

The Future: A Choose-Your-Own-Delusion Adventure

So where does this all end?

There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.

We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.

It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”

That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.

In Conclusion, Reality Is Optional, But Reason Is Not

In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.

Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.

Lights down. Curtain call.
Time to read the footnotes.

Man in Capitalistic Society

This is Chapter 5 of Erich Fromm’s The Sane Society. I’ve had this on my bookshelf for quite a while and wasn’t sure how a 70-year-old book could have so much relevance, but it does. Granted, some of it is dated, a victim of the period in which it was written. This happens.

What strikes me about this chapter is the historical perspective it provides on capitalism. I’m an academic economist. I taught undergraduate economics for the better part of a decade. I’ve read (and recommend reading) Marx’s Capital firsthand.

Audio: NotebookLM Podcast commentary on this content.

Fromm adds further detail here. First, he notes that the capitalism that marked the early days of the Industrial Revolution—the seventeenth and eighteenth centuries—differed from that of the nineteenth and twentieth centuries. The earlier period still had cultural and moral tethers that became frayed or lost later. Without regurgitating the chapter, I cite some themes:

“this underselling practice is grown to such a shameful height, that particular persons publicly advertise that they undersell the rest of the trade.”

People were not very keen on price cutting as a competitive mechanism.

Fromm also notes the unfair competitive advantage of the monied elites, who could buy materials in cash rather than on credit and could thereby undercut the prices of rivals who had to account for interest payments or markups on credit.

Whilst in the twentieth century regulating undercutting is seen as protectionism, the earlier centuries had no qualms about defending merchants. We do have laws on the books that prevent dumping, but these are rarely enforced, and when they are, it’s a political rather than an economic statement: politics done in the name of economics, much as science was used as cover to implement policy during the COVID-19 debacle.

Montesquieu says “that machines which diminish the numbers of workers are ‘pernicious’.” This echoes current sentiments about robotics and artificial intelligence.

In nineteenth-century capitalism, capital supplanted man as the measure of all things. This is the capitalism Marx rails against—profits over humanity and society, the pursuit of local maxima at the expense of global maxima. This is also where hypergrowth and growth for growth’s sake came into vogue, ushering in the Modern Age and its ideals—science, progress, order, and so on.

I won’t exhaust the chapter here, but for what it is, it’s a relatively light read. Whether I comment on later chapters depends on whether they engage me. Cheers.