Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688, and later included in Locke’s An Essay Concerning Human Understanding, the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independent of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

Lipsyncing with AI

Lip-Reading the AI Hallucination: A Futile Adventure

Some apps boldly claim to enable lip syncing – to render mouth movements that match speech. I’ve tried a few. None delivered. Not even close.

To conserve bandwidth (and sanity), I’ve rendered animated GIFs rather than MP4s. You’ll see photorealistic humans, animated characters, cartoonish figures – and, for reasons only the algorithm understands, a giant goat. All showcase mouth movements that approximate the utterance of phonemes and morphemes. “Approximate” is doing the heavy lifting here.

Firstly, these mouths move, but they say nothing. I’ve seen plenty of YouTube channels that manage to dub convincing dialogue into celebrity clips. That’s a talent I clearly lack – or perhaps it’s sorcery.

Secondly, language ambiguity. I reflexively assume these AI-generated people are speaking English. It’s my first language. But perhaps, given their uncanny muttering, they’re speaking yours. Or none at all. Do AI models trained predominantly on English-speaking datasets default to English mouth movements? Or is this just my bias grafting familiar speech patterns onto noise?

Thirdly, don’t judge my renders. I’ve been informed I may have a “type.” Lies and slander. The goat was the AI’s idea, I assure you.

What emerges from this exercise isn’t lip syncing. It’s lip-faking. The illusion of speech, minus meaning, which, if we’re honest, is rather fitting for much of what generative AI produces.

EDIT: I hadn’t noticed the five fingers (plus a thumb) on the cover image.

Midjourney Boundaries

I promise that this will not become a hub for generative AI. Rather than return to editing, I wanted to test more of Midjourney’s boundaries.

It turns out that Midjourney is selective about the nudity it renders. I was denied a render because of cleavage, but full-on topless – no problem.

Both of these videos originate from the same source image, but they take different paths. There is no accompanying video content. The setup features three women in the frame with a mechanical arm. I didn’t prompt for it. I’m not even sure of its intent. It’s just there, shadowing the women nearest to it. I don’t recall prompting for the oversized redhead in the foreground, though I may have.

In both images, note the aliasing of the tattoos on the blonde, especially on her back. Also, notice that her right arm seems shorter than it should be. Her movements are jerky, as if rendered in a video game. I’m not sure what ritual the two background characters are performing, but notice in each case the repetition. This seems to be a general feature of generative AI: it gets itself into loops, almost autistic.

Notice a few things about the top render.

Video: Midjourney render of 3 females and a mechanical arm engaging in a ritual. (9 seconds)

The first video may represent an interrogation. The blonde woman on the left appears to be a bit disoriented, but she is visually tracking the woman on the right. She seems to be saying something. Notice when the woman on the right stands. Her right foot lands unnaturally. She rather glitches.

The camera’s push and pull, and then push, seems to be an odd directorial choice, but who am I to say?

Video: Midjourney render of 3 females and a mechanical arm engaging in a ritual. (12 seconds)

The second video may represent taunting. The woman on the left still appears to be a bit disoriented, but she checks the redhead in the foreground with a glance. Notice the rocking of the two background characters, as well as the mech arm, which sways in sync with the woman on the right. This is the repetition glitch I mentioned above.

Here, the camera seems to have a syncopated relationship with the characters’ sway.

Summary

The stationary objects are well-rendered and persistent.

Assignment

Draft a short story or flash fiction using this as an inspirational prompt. I’m trying to imagine the interactions.

  • The ginger seems catatonic or drugged. Is she a cis female? What’s with her getup?
  • The blonde seems only slightly less out of it. Did she arrive this way? Did they dress her? Why does she appear to still have a weapon on her back? Is it a weapon or a fetter? Why is she dressed like that? Is she a gladiatrix readying for a contest? Perhaps she’s in training. What is she saying? Who is she talking to? What is her relationship to the redhead? Are they friends or foes – or just caught up in the same web?
  • What is the woman wearing the helmet doing? She appears to have the upper hand. Is she a cyborg, or is she just wearing fancy boots? What’s with her outfit? What’s with her Tycho Brahe prosthetic nose piece?
  • What is that mechanical hand? Is it a guard? A restraint? Is it hypnotising the ginger? Both of them? Is it conducting music that’s not audible?
  • What does it read on the back wall? The two clips don’t share the same text. Call the continuity people.

Midjourney Video Renders

Yesterday, I wrote about “ugly women.” Today, I pivot — or perhaps descend — into what Midjourney deems typical. Make of that what you will.

This blog typically focuses on language, philosophy, and the gradual erosion of culture under the boot heel of capitalism. But today: generative eye candy. Still subtextual, mind you. This post features AI-generated women – tattooed, bare-backed, heavily armed – and considers what, exactly, this technology thinks we want.

Video: Pirate cowgirls caught mid-gaze. Generated last year during what I can only assume was a pirate-meets-cowgirl fever dream.

The Video Feature

Midjourney released its image-to-video tool on 18 June. I finally found a couple of free hours to tinker. The result? Surprisingly coherent, if accidentally lewd. The featured video was one of the worst outputs, and yet, it’s quite good. A story emerged.

Audio: NotebookLM podcast on this topic (sort of).

It began with a still: two women, somewhere between pirate and pin-up, dressed for combat or cosplay. I thought, what if they kissed? Midjourney said no. Embrace? Also no. Glaring was fine. So was mutual undressing — of the eyes, at least.

Later, I tried again. Still no kiss, but no denial either — just a polite cough about “inappropriate positioning.” I prompted one to touch the other’s hair. What I got was a three-armed woman attempting a hat-snatch. (See timestamp 0:15.) The other three video outputs? Each woman seductively touched her own hair. Freud would’ve had a field day.

In another unreleased clip, two fully clothed women sat on a bed. That too raised flags. Go figure.

All of this, mind you, passed Midjourney’s initial censorship. However, it’s clear that proximity is now suspect. Even clothed women on furniture can trigger the algorithmic fainting couch.

Myriad Warning Messages

Out of bounds.

Sorry, Charlie.

In any case, I reviewed other images to determine how the limitations operated. I didn’t get much closer.

Video: A newlywed couple kissing

Obviously, proximity and kissing are now forbidden. I’d hardly consider these two “scantily clad”, so I am unsure of the offence.

I did render the image of a cowgirl at a Western bar, but I am reluctant to add to the page weight. In 3 of the 4 results, nothing (much) was out of line, but in the fourth, she’s wielding a revolver – because, of course, she is.

Conformance & Contradiction

You’d never know it, but the original prompt was a fight scene. The result? Not punches, but pre-coital choreography. The AI interpreted combat as courtship. Women circling each other, undressing one another with their eyes. Or perhaps just prepping for an afterparty.

Video: A battle to the finish between a steampunk girl and a cybermech warrior.

Lesbian Lustfest

No, my archive isn’t exclusively lesbian cowgirls. But given the visual weight of this post, I refrained from adding more examples. Some browsers may already be wheezing.

Technical Constraints

You can’t extend videos beyond four iterations — maxing out at 21 seconds. I wasn’t aware of this, so I prematurely accepted a dodgy render and lost 2–3 seconds of potential.

My current Midjourney plan offers 15 hours of “fast” rendering per month. Apparently, video generation burns through this quickly. Still images can queue up slowly; videos cannot. And no, I won’t upgrade to the 30-hour plan. Even I have limits.

Uses & Justifications

Generative AI is a distraction – an exquisitely engineered procrastination machine. Useful, yes. For brainstorming, visualising characters, and generating blog cover art. But it’s a slippery slope from creative aid to aesthetic rabbit hole.

Would I use it for promotional trailers? Possibly. I’ve seen offerings as low as $499 that wouldn’t cannibalise my time and attention, not wholly, anyway.

So yes, I’ll keep paying for it. Yes, I’ll keep using it. But only when I’m not supposed to be writing.

Now, if ChatGPT could kindly generate my post description and tags, I’ll get back to pretending I’m productive.

Ugly Women

This Isn’t Clickbait. I Asked Midjourney for “Ugly Women”. Here’s What It Gave Me.

Let’s clear the air: I did it for science. Or satire. Or possibly just to see if artificial intelligence would have the audacity to mirror the cruelty of its makers.

Audio: NotebookLM podcast on this topic.

I queried Midjourney with the phrase ugly female. What did it return? An aesthetic pageant. A digital Vogue spread. If any of these faces belongs to someone conventionally labelled “ugly”, then I’m a rutabaga in a Dior suit.

Yes, there’s one stylised rendering of Greta Thunberg in full Norse Valkyrie scowl mode – but even then, she looks fierce, not foul. The rest? AI-generated portraits so telegenic I half-expected to see #spon in the corner.

Let’s be clinical for a moment. As an American male (with all the culturally indoctrinated shallowness that entails), I admit some of these aren’t textbook 10s. Maybe a few clock in at a 6 or 7 on the patriarchy’s dubious sliding scale. But if this is ugly, the AI has either broken the aesthetic curve or been force-fed too many episodes of The Bachelor.

Here’s the thing: AI is trained to over-represent symmetrical faces, wide eyes, clear skin – the usual genetic lottery wins. And yet, when asked for ugly, it can’t help but deliver catalogue models with slightly unconventional haircuts. It doesn’t know how to be truly ugly – because we don’t know how to describe ugliness without revealing ourselves as sociopaths.

Once upon a time, I dated a model agent in Los Angeles. Japanese by birth, stationed in LA, scouting for a French agency – the kind of cosmopolitan trifecta only fashion could breed. Her job? Finding “parts models.” That’s right – someone with flawless teeth but forgettable everything else. Hands like sculpture. Eyelashes like Instagram filters.

We’d play a game: spot the 10s. She’d nudge me, whisper “her?” I’d say, “Pretty close.” She’d shake her head. “Look at that eye tooth.” And we’d dissolve into laughter.

We were mocking perfection. Because perfection is a con. A trick of lighting, contour, and post-production.

So, no. I don’t think any of the women in the AI’s response are ugly. Quite the contrary – they’re too beautiful. AI can’t show us “ugly” because it’s been trained to optimise desire, not reflect reality. And our collective understanding of beauty is so skewed that anything less than runway-ready gets sorted into the rejection bin.

If these women are ugly, what exactly is beautiful?

But maybe that’s the point. We’ve abstracted beauty so far from the human that even our ugliness is now synthetically pleasing.

What do you think? Are any of these faces truly ugly? All of them? Let me know in the comments – and try not to rate them like a casting director with a god complex.

Conscious of Consciousness

Let us begin with the heresy: consciousness is not a thing. It is not a light bulb switched on in the mind. It is not a theatre with a little homunculus watching the play unfold. It is not a ghost in the machine, nor even a particularly welcome tenant. Consciousness is a conjuring trick – one so convincing that even the conjurer forgets it is an act.

Video: Related Topic: IAI Joscha Bach on Consciousness

If that unsettles you, good. Welcome to the simulacrum.

The Wetness of Mind

We often hear that consciousness is “emergent,” but the term is used so promiscuously that it risks becoming decorative. So let us be specific. Consciousness, if it is emergent, is emergent as wetness is from H2O: not in the hydrogen or the oxygen, but in their relationship when bonded just so. Joscha Bach and others argue that consciousness arises not from the bits, but from the dance – the recursive feedback loops and predictive models running atop the neural substrate.

Audio: NotebookLM podcast on this topic.

In this view, the self is not the pilot but the dashboard. It is the user interface the brain conjures to coordinate action, interpret input, and maintain internal coherence. Not because it’s real, but because it’s useful. You are a GUI with delusions of grandeur.

The Cast of Theorists

Let us now parade the usual suspects:

  • Joscha Bach: Consciousness is a virtual self-model, emergent from recursive, computational feedback. Not the product of neurons firing per se, but of their ability to simulate a stable identity across time.
  • Thomas Metzinger: There is no self. Only a Phenomenal Self-Model (PSM) which becomes phenomenally transparent when the system no longer recognises it as a model. Consciousness is the experience of this hallucinated self.
  • Daniel Dennett: Dismantles the notion of a “central experiencer” with his Multiple Drafts Model. Consciousness is a narrative, a distributed process where drafts of experience compete, are edited, and retroactively interpreted.
  • David Chalmers: Waves his flag at the Hard Problem of consciousness. You can explain behaviour, memory, attention—but not experience itself. He flirts with dualism and panpsychism while insisting there’s a gap science cannot yet close.
  • Giulio Tononi: Gives us Integrated Information Theory (IIT) and the elusive metric Φ (phi). Consciousness is the degree to which information is unified within a system. Your brain is conscious because its parts can’t be reduced without losing coherence.
  • Karl Friston: The prophet of Free Energy Minimisation. Consciousness is an emergent property of systems that seek to reduce prediction error. The brain is a Bayesian engine, and the self is its best guess about how to survive.

So What Is Consciousness?

A hallucination. A recursive illusion. A predictive dashboard. A statistical artefact. A phi score. A phenomenally transparent model. Take your pick.

None of these theories fully agree, but most converge on one elegant horror: you are not what you think you are. The sense of being a continuous, stable, indivisible “I” is a construction. A simulation. The dream from which there is no waking because waking is part of the dream.

This is not despair; it is clarity. Just as wetness does not cry when told it is not a substance, the self need not mourn its own illusion. It is a marvellous fiction, worth inhabiting.

Conclusion: Through the Mirror

To be conscious of consciousness is to stand in the hall of mirrors and realise none reflect the original—because there is no original. The mirror is the thing.

But if the theatre is empty, the play goes on. Scripts are written, models simulated, selves performed. And perhaps, in this strange recursion, we find not meaning, but the possibility of coherence.

So raise a glass to the illusion. May your predictive model stay optimised, your narrative stay plausible, and your hallucinated self remain just this side of transparent.


For further hallucinatory episodes, consult your local philosopher, neuroscientist, or AI researcher. Side effects may include derealisation, epistemic vertigo, and mild enlightenment.

The Purpose of Purpose

I’m a nihilist. Possibly always have been. But let’s get one thing straight: nihilism is not despair. That’s a slander cooked up by the Meaning Merchants – the sentimentalists and functionalists who can’t get through breakfast without hallucinating some grand purpose to butter their toast. They fear the void, so they fill it. With God. With country. With yoga.

Audio: NotebookLM podcast on this topic.

Humans are obsessed with function. Seeing it. Creating it. Projecting it onto everything, like graffiti on the cosmos. Everything must mean something. Even nonsense gets rebranded as metaphor. Why do men have nipples? Why does a fork exist if you’re just going to eat soup? Doesn’t matter – it must do something. When we can’t find this function, we invent it.

But function isn’t discovered – it’s manufactured. A collaboration between our pattern-seeking brains and our desperate need for relevance, where function becomes fiction, where language and anthropomorphism go to copulate. A neat little fiction. An ontological fantasy. We ask, “What is the function of the human in this grand ballet of entropy and expansion?” Answer: there isn’t one. None. Nada. Cosmic indifference doesn’t write job descriptions.

And yet we prance around in lab coats and uniforms – doctors, arsonists, firemen, philosophers – playing roles in a drama no one is watching. We build professions and identities the way children host tea parties for dolls. Elaborate rituals of pretend, choreographed displays of purpose. Satisfying? Sometimes. Meaningful? Don’t kid yourself.

We’ve constructed these meaning-machines – society, culture, progress – not because they’re real, but because they help us forget that they’re not. It’s theatre. Absurdist, and often bad. But it gives us something to do between birth and decomposition.

Sisyphus had his rock. We have careers.

But let’s not confuse labour for meaning, or imagination for truth. The boulder never reaches the top, and that’s not failure. That’s the show.

So roll the stone. Build the company. Write the blog. Pour tea for Barbie. Just don’t lie to yourself about what it all means.

Because it doesn’t mean anything.

The Enlightenment Sleight of Hand

How Reason Inherited God’s Metaphysics.

The Enlightenment, we are told, was the age of Reason. A radiant exorcism of superstition. Out went God. Out went angels, miracles, saints, indulgences. All that frothy medieval sentiment was swept aside by a brave new world of logic, science, and progress. Or so the story goes.

Audio: NotebookLM podcast on this topic.

But look closer, and you’ll find that Reason didn’t kill God—it absorbed Him. The Enlightenment didn’t abandon metaphysics. It merely privatised it.

From Confessional to Courtroom

We like to imagine that the Enlightenment was a clean break from theology. But really, it was a semantic shell game. The soul was rebranded as the self. Sin became crime. Divine judgement was outsourced to the state.

We stopped praying for salvation and started pleading not guilty.

The entire judicial apparatus—mens rea, culpability, desert, retribution—is built on theological scaffolding. The only thing missing is a sermon and a psalm.

Where theology had the guilty soul, Enlightenment law invented the guilty mind—mens rea—a notion so nebulous it requires clairvoyant jurors to divine intention from action. And where the Church offered Hell, the state offers prison. It’s the same moral ritual, just better lit.

Galen Strawson and the Death of Moral Responsibility

Enter Galen Strawson, that glowering spectre at the feast of moral philosophy. His Basic Argument is elegantly devastating:

  1. You do what you do because of the way you are.
  2. You can’t be ultimately responsible for the way you are.
  3. Therefore, you can’t be ultimately responsible for what you do.

Unless you are causa sui—the cause of yourself, an unmoved mover in Calvin Klein—you cannot be held truly responsible. Free will collapses, moral responsibility evaporates, and retributive justice is exposed as epistemological theatre.
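For readers who prefer their scepticism typeset, the regress Strawson exploits can be compressed into one schematic chain, with UR(x) as a purely illustrative shorthand for “ultimately responsible for x”:

```latex
% Each action A flows from a mental state, which flows from earlier states:
\[
A \;\leftarrow\; S_0 \;\leftarrow\; S_{-1} \;\leftarrow\; \cdots \;\leftarrow\; S_{-n}
\]
% Responsibility for the action demands responsibility for its source, forever:
\[
\mathrm{UR}(A) \;\Rightarrow\; \mathrm{UR}(S_0) \;\Rightarrow\; \mathrm{UR}(S_{-1}) \;\Rightarrow\; \cdots
\]
```

The chain bottoms out in states (genes, upbringing, circumstance) fixed before the agent existed to endorse them, so UR(A) can never be discharged unless the agent is causa sui.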

In this light, our whole legal structure is little more than rebranded divine vengeance. A vestigial organ from our theocratic past, now enforced by cops instead of clerics.

The Modern State: A Haunted House

What we have, then, is a society that has denied the gods but kept their moral logic. We tossed out theology, but we held onto metaphysical concepts like intent, desert, and blame—concepts that do not survive contact with determinism.

We are living in the afterglow of divine judgement, pretending it’s sunlight.

Nietzsche saw it coming, of course. He warned that killing God would plunge us into existential darkness unless we had the courage to also kill the values propped up by His corpse. We did the first bit. We’re still bottling it on the second.

If Not Retribution, Then What?

Let’s be clear: no one’s suggesting we stop responding to harm. But responses should be grounded in outcomes, not outrage.

Containment, not condemnation.

Prevention, not penance.

Recalibration, not revenge.

We don’t need “justice” in the retributive sense. We need functional ethics, rooted in compassion and consequence, not in Bronze Age morality clumsily duct-taped to Enlightenment reason.

The Risk of Letting Go

Of course, this is terrifying. The current system gives us moral closure. A verdict. A villain. A vanishing point for our collective discomfort.

Abandoning retribution means giving that up. It means accepting that there are no true villains—only configurations of causes. That punishment is often revenge in drag. That morality itself might be a control mechanism, not a universal truth.

But if we’re serious about living in a post-theological age, we must stop playing dress-up with divine concepts. The Enlightenment didn’t finish the job. It changed the costumes, kept the plot, and called it civilisation.

It’s time we staged a rewrite.