How MEOW Turns a Metaphysical Mountain Into a Linguistic Molehill

In the last post, I argued that the so-called ‘hard problem of consciousness’ was never a problem with consciousness. It was a problem with language – specifically, the English language’s unfortunate habit of carving the world into neat little substances and then demanding to know why its own divisions won’t glue back together.

Audio: NotebookLM summary podcast of this topic, on resolving the hard problem of consciousness.

The response was predictable.

  • ‘But what about subjective feel?’
  • ‘What about emergence?’
  • ‘What about ontology?’
  • ‘What about Chalmers?’
  • ‘What about that ineffable thing you can’t quite point at?’

All fair questions. All built atop the very framing that manufactures the illusion of a metaphysical gap.

So here’s the promised demonstration: not yet a full essay (though it may evolve into one), but a clear application of MEOW – the Mediated Encounter Ontology of the World – to the hard problem itself. Consider this a field test of the framework. A tidy autopsy, not the funeral oration.

The Set-Up: Chalmers’ Famous Trick

Chalmers asks: how and why do physical processes give rise to subjective experience?

The question feels profound only because the terms ‘physical’ and ‘experience’ smuggle in the very metaphysics they pretend to interrogate. They look like opposites because the grammar makes them opposites. English loves a comforting binary.

But MEOW doesn’t bother with the front door. It doesn’t assume two substances – ‘mind’ over here, ‘world’ over there – and then panic when they refuse to shake hands. It treats experience as the way an encounter manifests under a layered architecture of mediation. There’s no bridge. Only layers.

T₀ – Biological Mediation

The body is not a barrier. It is the encounter’s first architecture.

At T₀, the world is already transformed: transduction, gating, synchrony, inhibition, adaptation. Organisms don’t receive ‘raw’ physical inputs. They metabolise them. The form of contact is biological before it is anything else.

The hard problem begins by assuming there’s a realm of dumb physical mechanisms that somehow need to ‘produce’ experience. But organisms do not encounter dumb mechanism. They encounter structured contact – biological mediation – from the first millisecond.

If you insist on thinking in substances, T₀ looks like a problem.
If you think in mediations, it looks like the beginning of sense-making.

T₁ – Cognitive Mediation

Where the Enlightenment saw a window, cognition installs a newsroom.

Prediction, priors, memory, inference, attention – all shaping what appears and what never makes it into view. Experience at T₁ is not something ‘added’. It is the organisational structure of the encounter itself.

The hard problem treats ‘experience’ as a mysterious extra – something floating atop neural activity like metaphysical cream. But at T₁, what appears as experience is simply the organisation of biological contact through cognitive patterns.

There is no ‘what emerges from the physical’. There is the way the encounter is organised.

And all of this unfolds under resistance – the world’s persistent refusal to line up neatly with expectation. Prediction errors, perceptual limits, feedback misfires: this constraint structure prevents the entire thing from collapsing into relativist soup.
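
For the computationally inclined, here is a deliberately crude sketch of prediction-error correction in the predictive-processing sense – my toy illustration, not anything MEOW itself specifies. The function name, learning rate, and signal values are all invented; the point is only that an expectation gets dragged toward what the world actually delivers, in proportion to how wrong it was.

```python
# A toy illustration of "resistance" as prediction error (illustrative only):
# an expectation is corrected toward each observation in proportion to the
# gap between what was predicted and what actually arrived.

def update_expectation(prior: float, observation: float, learning_rate: float = 0.3) -> float:
    """Nudge a prediction toward an observation in proportion to the error."""
    prediction_error = observation - prior             # the world's refusal to comply
    return prior + learning_rate * prediction_error    # constraint, not collapse

expectation = 0.0
for signal in [1.0, 1.0, 0.8, 1.2, 1.0]:               # noisy contact with the world
    expectation = update_expectation(expectation, signal)
    print(f"expectation = {expectation:.3f}")
```

The ‘resistance’ lives in the error term: the world constrains the model exactly where the model is wrong.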

T₂ – Linguistic–Conceptual Mediation

Here is where the hard problem is manufactured.

This is the layer that takes an ordinary phenomenon and turns it into a metaphysical puzzle. Words like ‘experience’, ‘physical’, ‘mental’, ‘subjective’, and ‘objective’ pretend to be carved in stone. They aren’t. They slide, drift, and mutate depending on context, grammar, and conceptual lineage.

The hard problem is almost entirely a T₂ artefact – a puzzle produced by a grammar that forces us to treat ‘experience’ and ‘physical process’ as two different substances rather than two different summaries of different mediational layers.

If you inherit a conceptual architecture that splits the world into mind and matter, of course you will look for a bridge. Language hands you the illusion and then refuses to refund the cost of admission.

T₃ – Cultural–Normative Mediation

The Western problem is not the world’s problem.

The very idea that consciousness is metaphysically puzzling is the product of a specific cultural lineage: Enlightenment substance dualism (even in its ‘materialist’ drag), Cartesian leftovers, empiricist habits, and Victorian metaphysics disguised as objectivity.

Other cultures don’t carve the world this way. Other ontologies don’t need to stitch mind back into world. Other languages simply don’t produce this problem.

Reassembling the Encounter

Once you run consciousness through the mediational layers, the hard problem dissolves:

  • Consciousness is not an emergent property of neural complexity.
  • Consciousness is not a fundamental property of the universe.
  • Consciousness is the reflexive mode of certain mediated encounters, the form the encounter takes when cognition, language, and culture become part of what is appearing.

There is no gap to explain because the ‘gap’ is the product of a linguistic–conceptual framework that splits where the world does not.

As for the ever-mystical ‘what-it’s-like’: that isn’t a metaphysical jewel buried in the brain; it is the way a T₀–T₃ architecture manifests when its own structure becomes reflexively available.

A Brief Disclaimer Before the Internet Screams

Pointing out that Chalmers (and most of modern philosophy) operates within a faulty ontology is not to claim MEOW is flawless or final. It isn’t. But if Occam’s razor means anything, MEOW simply removes one unnecessary supposition – the idea that ‘mind’ and ‘world’ are independent substances in need of reconciliation. No triumphalism. Just subtraction.

Where This Leaves Chalmers

Chalmers is not wrong. He’s just asking the wrong question. The hard problem is not a metaphysical insight. It’s the moment our language tripped over its shoelaces and insisted the pavement was mysterious.

MEOW doesn’t solve the hard problem. It shows why the hard problem only exists inside a linguistic architecture that can’t model its own limitations.

This piece could easily grow into a full essay – perhaps it will. But for now, it does the job it needs to: a practical demonstration of MEOW in action.

And, arguably more important, it buys me one more day away from the index.

The Hard Problem Was Never Consciousness


It Was Language All Along.

This whole misadventure began sometime in 2018, when I started documenting what has now metastasised into the Language Insufficiency Hypothesis. If I weren’t typing this, I’d be doing the honourable thing and finishing the index, but here we are, procrastinating with purpose. I had a suspicion, even then, that language was up to something. Something slippery. Something evasive. At first, it was just a motley catalogue of weasel words that refused to sit still long enough to be given a meaning. I should have taken the hint when the list kept expanding like a Victorian railway: terminally over-budget and convinced of its own grandeur.

But, naturally, I pressed on.

Audio: NotebookLM summary podcast and conversation around this topic.

At the time I had that slow-burn itch about identity, selfhood, free will, agency – you know, the usual metaphysical tat we’re reared on like a Victorian child raised on laudanum. It wasn’t that these things didn’t exist; it was that the words simply couldn’t bear the conceptual load we’d been piling onto them. And so I found myself in the company of philosophers who either tried to rescue these terms (Dennett, ever the valiant firefighter with a damp match), complicate them (Searle, constructing houses of cards under wind machines), or dissolve them outright (Parfit, smiling serenely as the rest of us panic).

Meanwhile, Strawson was somewhere in the corner insisting experience is all there is, Putnam was in his perennial retraction phase, and I was merely trying to keep my own conceptual apparatus from collapsing like an undercooked soufflé.

I’ll admit I had a long-standing soft spot for Dennett’s consciousness-as-emergence hypothesis. It made a certain intuitive sense at the time: pile up enough neural machinery, sprinkle in some feedback loops, and consciousness would bubble up like steam from a kettle. It felt elegant. It felt mechanistically honest. And, crucially, it made perfect sense within the inherited Realist framework I was still tacitly lugging around. Of course, experience ‘emerges’ from physical processes if you start from a worldview already partitioned into physical substrates and mental phenomena waiting to be accounted for. Dennett wasn’t wrong so much as operating within the same architectural error the rest of us had been marinating in. Once I began reframing the whole encounter through mediation rather than emergence, the elegance dissolved. What had looked like metaphysics turned out to be a conceptual afterimage generated by a language that couldn’t model its own limitations.

And then there was Chalmers.

Ah, the ‘hard problem’. I lost count of how many times it surfaced. Like mould. Or a debt collector. Chalmers’ dilemma – how physical processes give rise to experience – is purportedly the Mount Everest of metaphysics. Yet the more I thought about it, the more it reeked of a linguistic parlour trick. A conceptual Magic Eye puzzle: stare long enough and a unicorn appears, provided you’ve surrendered your scepticism and a good measure of oxygen.

The problem isn’t that consciousness is ‘hard’. The problem is that the linguistic scaffolding we’re using was never built for this terrain. ‘Experience’. ‘Physical’. ‘Mental’. ‘Explain’. These words pretend to be steel beams when they’re actually damp cardboard.

What remains isn’t a cosmic riddle but a linguistic artefact. A conceptual false path carved by centuries of grammatico-metaphysical enthusiasm – the unfortunate habit of mistaking grammatical symmetry for metaphysical necessity.

Which brings me to the present, having at last gelled the LIH and published the Mediated Encounter Ontology of the World – a relational metaphysics that has the decency not to hallucinate substances it can’t justify. MEOW clears the fog rather neatly: the so-called ‘hard problem’ is only ‘hard’ because we continue to treat ‘mind’ and ‘world’ as two independent substances requiring metaphysical reconciliation. Together, LIH and MEOW provide a double exposure of the problem: LIH shows why the language fails; MEOW shows what the language was failing to describe.

So here we are. I’d like to reconsider Chalmers through the dual lenses of LIH and MEOW – not to ‘solve’ the hard problem, but to show it was never the right problem to begin with. The difficulty isn’t consciousness; it’s the language we’re forced to use, the same language that refuses to sit still, the same language that keeps trying to trick us into mistaking grammatical symmetry for metaphysical necessity.

In a coming post, I intend to pry open that illusion with a crowbar. Delicately, of course. One must be civilised about these things.

Because if language is insufficient – and it is – then perhaps what Chalmers discovered was not the abyss of consciousness, but the limit of the dictionary.

Conscious of Consciousness

Let us begin with the heresy: consciousness is not a thing. It is not a light bulb switched on in the mind. It is not a theatre with a little homunculus watching the play unfold. It is not a ghost in the machine, nor even a particularly welcome tenant. Consciousness is a conjuring trick – one so convincing that even the conjurer forgets it is an act.

Video: Related Topic: IAI, Joscha Bach on Consciousness

If that unsettles you, good. Welcome to the simulacrum.

The Wetness of Mind

We often hear that consciousness is “emergent,” but the term is used so promiscuously that it risks becoming decorative. So let us be specific. Consciousness, if it is emergent, is emergent as wetness is from H₂O: not in the hydrogen or the oxygen, but in their relationship when bonded just so. Joscha Bach and others argue that consciousness arises not from the bits, but from the dance – the recursive feedback loops and predictive models running atop the neural substrate.

Audio: NotebookLM podcast on this topic.

In this view, the self is not the pilot but the dashboard. It is the user interface the brain conjures to coordinate action, interpret input, and maintain internal coherence. Not because it’s real, but because it’s useful. You are a GUI with delusions of grandeur.

The Cast of Theorists

Let us now parade the usual suspects:

  • Joscha Bach: Consciousness is a virtual self-model, emergent from recursive, computational feedback. Not the product of neurons firing per se, but of their ability to simulate a stable identity across time.
  • Thomas Metzinger: There is no self. Only a Phenomenal Self-Model (PSM) which becomes phenomenally transparent when the system no longer recognises it as a model. Consciousness is the experience of this hallucinated self.
  • Daniel Dennett: Dismantles the notion of a “central experiencer” with his Multiple Drafts Model. Consciousness is a narrative, a distributed process where drafts of experience compete, are edited, and retroactively interpreted.
  • David Chalmers: Waves his flag at the Hard Problem of consciousness. You can explain behaviour, memory, attention—but not experience itself. He flirts with dualism and panpsychism while insisting there’s a gap science cannot yet close.
  • Giulio Tononi: Gives us Integrated Information Theory (IIT) and the elusive metric Φ (phi). Consciousness is the degree to which information is unified within a system. Your brain is conscious because its parts can’t be reduced without losing coherence.
  • Karl Friston: The prophet of Free Energy Minimisation. Consciousness is an emergent property of systems that seek to reduce prediction error. The brain is a Bayesian engine, and the self is its best guess about how to survive. (A toy sketch of that engine follows this list.)
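
Since Friston’s is the most nakedly computational story of the lot, here is a minimal sketch of that ‘Bayesian engine’ – my illustration with invented numbers, not Friston’s actual formalism. Fusing a prior belief with a sensation in proportion to their precisions (inverse variances) is one standard way of minimising precision-weighted prediction error:

```python
# A toy "Bayesian engine" (illustrative, not Friston's formalism): a belief and
# a sensation are fused in proportion to their precision (inverse variance),
# so the more confident signal pulls the posterior harder.

def fuse(prior_mean: float, prior_precision: float,
         obs: float, obs_precision: float) -> tuple[float, float]:
    """Precision-weighted fusion of a prior belief with a sensory observation."""
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean + obs_precision * obs) / posterior_precision
    return posterior_mean, posterior_precision

belief, precision = 0.0, 1.0             # vague initial belief
for sensation in [2.1, 1.9, 2.0]:        # a fairly reliable sensory channel
    belief, precision = fuse(belief, precision, sensation, obs_precision=4.0)
    print(f"belief = {belief:.2f}, precision = {precision:.1f}")
```

Nothing mystical happens in those few lines, which is rather the point: the ‘best guess about how to survive’ is arithmetic over confidences.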

So What Is Consciousness?

A hallucination. A recursive illusion. A predictive dashboard. A statistical artefact. A phi score. A phenomenally transparent model. Take your pick.

None of these theories fully agree, but most converge on one elegant horror: you are not what you think you are. The sense of being a continuous, stable, indivisible “I” is a construction. A simulation. The dream from which there is no waking because waking is part of the dream.

This is not despair; it is clarity. Just as wetness does not cry when told it is not a substance, the self need not mourn its own illusion. It is a marvellous fiction, worth inhabiting.

Conclusion: Through the Mirror

To be conscious of consciousness is to stand in the hall of mirrors and realise none reflect the original—because there is no original. The mirror is the thing.

But if the theatre is empty, the play goes on. Scripts are written, models simulated, selves performed. And perhaps, in this strange recursion, we find not meaning, but the possibility of coherence.

So raise a glass to the illusion. May your predictive model stay optimised, your narrative stay plausible, and your hallucinated self remain just this side of transparent.


For further hallucinatory episodes, consult your local philosopher, neuroscientist, or AI researcher. Side effects may include derealisation, epistemic vertigo, and mild enlightenment.

The Hard Problem of Consciousness

If you are reading this, you are likely familiar with David Chalmers’ idea of the Hard Problem of Consciousness—the thorny, maddeningly unsolvable question of why and how subjective experience arises from physical processes. If you’re not, welcome to the rabbit hole. Here, we’ll plunge deeper by examining the perspective of Stuart Hameroff, who, like a philosophical magician, reframes this conundrum as a chicken-and-egg problem: what came first, life or consciousness? His answer? Consciousness. But wait—there’s a slight snag. Neither “life” nor “consciousness” has a universally agreed-upon definition. Oh, the joy of philosophical discourse.

Video: Professor Stuart Hameroff and others promote the idea that consciousness pre-dates life. A fuller version is available at IAI.
Audio: Podcast on this topic.

For the uninitiated, Hameroff’s stance is heavily flavoured with panpsychism—the idea that consciousness is a fundamental feature of the universe, like space or time. In this worldview, consciousness predates life itself. From this vantage, Hameroff’s proposition seems inevitable, a tidy solution that fits neatly into a panpsychistic framework. But let me stop you right there because I’m not signing up for the panpsychism fan club, and I’m certainly not prepared to let Hameroff’s intellectual sleight of hand go unchallenged.

To make his case, Hameroff engages in a curious manoeuvre: he defines both life and consciousness in ways that conveniently serve his argument. Consciousness, for him, is not limited to the complex phenomena of human or even animal experience but is a fundamental property of the universe, embedded in the very fabric of reality. Life, meanwhile, is demoted to a secondary phenomenon: something consciousness eventually orchestrates itself into. With these definitions, his argument clicks together like a self-serving jigsaw puzzle. It’s clever, I’ll grant him that. But cleverness isn’t the same as being correct.

This is the philosophical equivalent of marking your own homework. By defining the terms of debate to fit his narrative, Hameroff ensures that his conclusion will satisfy his fellow panpsychists. The faithful will nod along, their priors confirmed. But for those outside this echo chamber, his framework raises more questions than it answers. How does this universal consciousness work? Why should we accept its existence as a given? And—here’s the kicker—doesn’t this just punt the problem one step back? If consciousness is fundamental, what’s the mechanism by which it “pre-exists” life?

Hameroff’s move is bold, certainly. But boldness isn’t enough. Philosophy demands rigour, and redefining terms to suit your argument isn’t rigorous; it’s rhetorical trickery. Sure, it’s provocative. But does it advance our understanding of the Hard Problem, or does it merely reframe it in a way that makes Hameroff’s preferred answer seem inevitable? For my money, it’s the latter.

The real issue is that panpsychism itself is a philosophical Rorschach test. It’s a worldview that can mean just about anything, from the claim that electrons have a rudimentary kind of awareness to the idea that the universe is a giant mind. Hameroff’s take lands somewhere in this spectrum, but like most panpsychist arguments, it’s long on metaphysical speculation and short on empirical grounding. If you already believe that consciousness is a fundamental aspect of reality, Hameroff’s arguments will feel like a revelation. If you don’t, they’ll feel like smoke and mirrors.

In the end, Hameroff’s chicken-and-egg problem might be better framed as a false dichotomy. Perhaps life and consciousness co-evolved in ways we can’t yet fully understand. Or perhaps consciousness, as we understand it, emerges from the complexity of life, a byproduct rather than a prerequisite. What’s clear is that Hameroff’s solution isn’t as tidy as it seems, nor as universally compelling. It’s a clever sleight of hand, but let’s not mistake cleverness for truth.

Can Zombies Ever Be Conscious?

In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies—hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.

This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.

Dennett’s Zombies and Zimboes: Consciousness as Function

Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property—such as subjective experience—that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.

Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.

In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion—a misunderstanding of what consciousness is.

A Biological Rejection: Zombies Are Impossible

On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies—rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.

The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies—beings that behave like humans without consciousness—are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.
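
To make ‘policy’ concrete, here is a minimal sketch in the reinforcement-learning sense of the word: a rule mapping sensed conditions to survival-relevant actions. The function, thresholds, and behaviours are mine and purely illustrative; the paper’s claim, roughly, is that deploying such rules adaptively already presupposes a first-order self.

```python
# A minimal, purely illustrative "policy": a rule mapping environmental
# stimuli to survival-relevant actions. The thresholds and action names are
# invented for this sketch, not taken from the paper under discussion.

def policy(stimulus: dict) -> str:
    """Map sensed conditions to an adaptive behaviour."""
    if stimulus.get("temperature", 20.0) > 40.0:
        return "retreat"          # avoid damage
    if stimulus.get("nutrient_gradient", 0.0) > 0.0:
        return "move_uphill"      # follow the food
    return "explore"              # default information-seeking behaviour

print(policy({"temperature": 45.0}))          # -> retreat
print(policy({"nutrient_gradient": 0.3}))     # -> move_uphill
```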

In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.

The Open Question: What About Future Technology?

However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies—whether defined as Dennett’s “behaviourally indistinguishable” beings or the biologically impossible entities proposed by the paper—are not real. But could this change?

What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies—performing all the actions and behaviours we associate with consciousness, but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.

Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities—artificial zombies—that are neither alive nor conscious in the biological sense but still perform human-like actions.

I Wonder What’s Next?

This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.

Could we one day create a true zombie—a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.

For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.