When Aliens Speak English: The False Promise of Linguistic Familiarity


Why shared language creates the illusion – not the reality – of shared experience

Human beings routinely assume that if another agent speaks our language, we have achieved genuine mutual understanding. Fluency is treated as a proxy for shared concepts, shared perceptual categories, and even shared consciousness. This assumption appears everywhere: in science fiction, in popular philosophy videos, and in everyday cross-cultural interactions. It is a comforting idea, but philosophically indefensible.

Video: Could You Explain Cold to an Alien? – Hank Green

Recent discussions about whether one could ‘explain cold to an alien’ reveal how deeply this assumption is embedded. Participants in such debates often begin from the tacit premise that language maps transparently onto experience, and that if two interlocutors use the same linguistic term, they must be referring to a comparable phenomenon.

A closer analysis shows that this premise fails at every level.

Audio: NotebookLM summary podcast on this topic.

Shared Language Does Not Imply Shared Phenomenology

Even within the human species, thermal experience is markedly variable. Individuals from colder climates often tolerate temperatures that visitors from warmer regions find unbearable. Acclimation, cultural norms, metabolic adaptation, and learned behavioural patterns all shape what ‘cold’ feels like.

If the same linguistic term corresponds to such divergent experiences within a species, the gap across species becomes unbridgeable.

A reptile, for example, regulates temperature not by feeling cold in any mammalian sense, but by adjusting metabolic output. A thermometer measures cold without experiencing anything at all. Both respond to temperature; neither inhabits the human category ‘cold’.

Thus, the human concept is already species-specific, plastic, and contextually learned — not a universal experiential module waiting to be translated.

Measurement, Behaviour, and Experience Are Distinct

Thermometers and reptiles react to temperature shifts, and yet neither possesses cold-qualia. This distinction illuminates the deeper philosophical point:

  • Measurement registers a variable.
  • Behaviour implements a functional response.
  • Experience is a mediated phenomenon arising from a particular biological and cognitive architecture.

Aliens might measure temperature as precisely as any scientific instrument. That alone tells us nothing about whether they experience anything analogous to human ‘cold’, nor whether the concept is even meaningful within their ecology.
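
To make the distinction concrete, here is a minimal Python sketch – purely illustrative, with the Thermostat class and its setpoint invented for the purpose. It registers a variable and implements a functional response, and there is nowhere in it for anything like ‘cold’ to reside:

```python
# A 'thermostat' that measures and behaves, with nothing resembling
# experience anywhere in its state. Illustrative only.

class Thermostat:
    def __init__(self, setpoint_c: float = 18.0):
        self.setpoint_c = setpoint_c  # a registered variable, nothing more

    def measure(self, ambient_c: float) -> float:
        """Measurement: register a value."""
        return ambient_c

    def respond(self, ambient_c: float) -> str:
        """Behaviour: a functional response to the measured value."""
        return "heater_on" if self.measure(ambient_c) < self.setpoint_c else "heater_off"


if __name__ == "__main__":
    t = Thermostat()
    print(t.respond(5.0))  # 'heater_on' -- temperature handled, nothing felt
```

Adding more sensors or cleverer control logic changes nothing essential: measurement and behaviour scale up without experience ever entering the picture.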

The Problem of Conceptual Export: Why Explanation Fails

Attempts to ‘explain cold’ to hypothetical aliens often jump immediately to molecular description – slower vibrational states, reduced kinetic energy, and so forth. This presumes that the aliens share:

  • our physical ontology,
  • our conceptual divisions,
  • our sense-making framework,
  • and our valuation of molecular explanation as intrinsically clarifying.

But these assumptions are ungrounded.

Aliens may organise their world around categories we cannot imagine. They may not recognise molecules as explanatory entities. They may not treat thermal variation as affectively laden or behaviourally salient. They may not even carve reality at scales where ‘temperature’ appears as a discrete variable.

When the conceptual scaffolding differs, explanation cannot transfer. The task is not translation but category creation, and there is no guarantee that the requisite categories exist on both sides.

The MEOW Framework: MEOWa vs MEOWb

The Mediated Encounter Ontology of the World (MEOW) clarifies this breakdown by distinguishing four layers of mediation:

  • T0: biological mediation
  • T1: cognitive mediation
  • T2: linguistic mediation
  • T3: social mediation

Humans run MEOWa, a world structured through mammalian physiology, predictive cognition, metaphor-saturated language, and social-affective narratives.

Aliens (in fiction or speculation) operate MEOWb, a formally parallel mediation stack but with entirely different constituents.

Two systems can speak the same language (T2 alignment) whilst:

  • perceiving different phenomena (T0 divergence),
  • interpreting them through incompatible conceptual schemas (T1 divergence),
  • and embedding them in distinct social-meaning structures (T3 divergence).

Linguistic compatibility does not grant ontological compatibility.
MEOWa and MEOWb allow conversation but not comprehension.
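
The structural point can be put in a few lines of Python. This is an illustrative toy rather than part of the framework itself, and the layer contents are invented placeholders; all that matters is that the two stacks agree at T2 and nowhere else:

```python
# Two 'mediation stacks' as plain dicts. The entries are invented
# placeholders; the point is structural: identical T2 entries alongside
# divergent T0, T1 and T3 entries.

MEOWa = {
    "T0": "mammalian thermoreception",
    "T1": "predictive, metaphor-laden cognition",
    "T2": "the English word 'cold'",
    "T3": "social-affective narratives about winter",
}

MEOWb = {
    "T0": "unknown alien transduction",
    "T1": "unknown conceptual schema",
    "T2": "the English word 'cold'",  # same term at the linguistic layer
    "T3": "unknown social-meaning structure",
}

shared_layers = [layer for layer in MEOWa if MEOWa[layer] == MEOWb[layer]]
print(shared_layers)  # ['T2'] -- alignment at one layer, divergence at the rest
```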

Fiction as Illustration: Why Aliens Speaking English Misleads Us

In Sustenance, the aliens speak flawless Standard Southern English. Their linguistic proficiency invites human characters (and readers) to assume shared meaning. Yet beneath the surface:

  • their sensory world differs;
  • their affective architecture differs;
  • their concepts do not map onto human categories;
  • and many human experiential terms lack any analogue within their mediation.

The result is not communication but a parallel monologue: the appearance of shared understanding masking profound ontological incommensurability.

The Philosophical Consequence: No Universal Consciousness Template

Underlying all these failures is a deeper speciesist assumption: that consciousness is a universal genus, and that discrete minds differ only in degree. The evidence points elsewhere.

If “cold” varies across humans, fails to apply to reptiles, and becomes meaningless for thermometers, then we have no grounds for projecting it into alien phenomenology. Nor should we assume that other species – biological or artificial – possess the same experiential categories, emotional valences, or conceptual ontologies that humans treat as foundational.

Conclusion

When aliens speak English, we hear familiarity and assume understanding. But a shared phonological surface conceals divergent sensory systems, cognitive architectures, conceptual repertoires, and social worlds.

Linguistic familiarity promises comprehension, but delivers only the appearance of it. The deeper truth is simple: Knowing our words is not the same as knowing our world.

And neither aliens, reptiles, nor thermometers inhabit the experiential space we map with those words.

Afterword

Reflections like these are precisely why my Anti-Enlightenment project exists. Much contemporary philosophical commentary remains quietly speciesist and stubbornly anthropomorphic, mistaking human perceptual idiosyncrasies for universal structures of mind. It’s an oddly provincial stance for a culture that prides itself on rational self-awareness.

To be clear, I have nothing against Alex O’Connor. He’s engaging, articulate, and serves as a gateway for many encountering these topics for the first time. But there is a difference between introducing philosophy and examining one’s own conceptual vantage point. What frustrates me is not the earnestness, but the unexamined presumption that the human experiential frame is the measure of all frames.

Having encountered these thought experiments decades ago, I’m not interested in posturing as a weary elder shaking his stick at the next generation. My disappointment lies elsewhere: in the persistent inability of otherwise intelligent thinkers to notice how narrow their perspective really is. They speak confidently from inside the human mediation stack without recognising it as a location – not a vantage point outside the world, but one local ecology among many possible ones.

Until this recognition becomes basic philosophical hygiene, we’ll continue to confuse linguistic familiarity with shared ontology and to mistake the limits of our own embodiment for the limits of consciousness itself.

How MEOW Turns a Metaphysical Mountain Into a Linguistic Molehill

In the last post, I argued that the so-called ‘hard problem of consciousness’ was never a problem with consciousness. It was a problem with language – specifically, the English language’s unfortunate habit of carving the world into neat little substances and then demanding to know why its own divisions won’t glue back together.

Audio: NotebookLM summary podcast of this topic, on resolving the hard problem of consciousness.

The response was predictable.

  • ‘But what about subjective feel?’
  • ‘What about emergence?’
  • ‘What about ontology?’
  • ‘What about Chalmers?’
  • ‘What about that ineffable thing you can’t quite point at?’

All fair questions. All built atop the very framing that manufactures the illusion of a metaphysical gap.

So here’s the promised demonstration: not yet a full essay (though it may evolve into one), but a clear application of MEOW – the Mediated Encounter Ontology of the World – to the hard problem itself. Consider this a field test of the framework. A tidy autopsy, not the funeral oration.

The Set-Up: Chalmers’ Famous Trick

Chalmers asks why physical processing should give rise to subjective experience at all.

The question feels profound only because the terms ‘physical’ and ‘experience’ smuggle in the very metaphysics they pretend to interrogate. They look like opposites because the grammar makes them opposites. English loves a comforting binary.

But MEOW doesn’t bother with the front door. It doesn’t assume two substances – ‘mind’ over here, ‘world’ over there – and then panic when they refuse to shake hands. It treats experience as the way an encounter manifests under a layered architecture of mediation. There’s no bridge. Only layers.

T₀ – Biological Mediation

The body is not a barrier. It is the encounter’s first architecture.

At T₀, the world is already transformed: transduction, gating, synchrony, inhibition, adaptation. Organisms don’t receive ‘raw’ physical inputs. They metabolise them. The form of contact is biological before it is anything else.

The hard problem begins by assuming there’s a realm of dumb physical mechanisms that somehow need to ‘produce’ experience. But organisms do not encounter dumb mechanism. They encounter structured contact – biological mediation – from the first millisecond.

If you insist on thinking in substances, T₀ looks like a problem.
If you think in mediations, it looks like the beginning of sense-making.

T₁ – Cognitive Mediation

Where the Enlightenment saw a window, cognition installs a newsroom.

Prediction, priors, memory, inference, attention – all shaping what appears and what never makes it into view. Experience at T₁ is not something ‘added’. It is the organisational structure of the encounter itself.

The hard problem treats ‘experience’ as a mysterious extra – something floating atop neural activity like metaphysical cream. But at T₁, what appears as experience is simply the organisation of biological contact through cognitive patterns.

There is no ‘what emerges from the physical’. There is the way the encounter is organised.

And all of this unfolds under resistance – the world’s persistent refusal to line up neatly with expectation. Prediction errors, perceptual limits, feedback misfires: this constraint structure prevents the entire thing from collapsing into relativist soup.

T₂ – Linguistic–Conceptual Mediation

Here is where the hard problem is manufactured.

This is the layer that takes an ordinary phenomenon and turns it into a metaphysical puzzle. Words like ‘experience’, ‘physical’, ‘mental’, ‘subjective’, and ‘objective’ pretend to be carved in stone. They aren’t. They slide, drift, and mutate depending on context, grammar, and conceptual lineage.

The hard problem is almost entirely a T₂ artefact – a puzzle produced by a grammar that forces us to treat ‘experience’ and ‘physical process’ as two different substances rather than two different summaries of different mediational layers.

If you inherit a conceptual architecture that splits the world into mind and matter, of course you will look for a bridge. Language hands you the illusion and then refuses to refund the cost of admission.

T₃ – Cultural–Normative Mediation

The Western problem is not the world’s problem.

The very idea that consciousness is metaphysically puzzling is the product of a specific cultural lineage: Enlightenment substance dualism (even in its ‘materialist’ drag), Cartesian leftovers, empiricist habits, and Victorian metaphysics disguised as objectivity.

Other cultures don’t carve the world this way. Other ontologies don’t need to stitch mind back into world. Other languages simply don’t produce this problem.

Reassembling the Encounter

Once you run consciousness through the mediational layers, the hard problem dissolves:

  • Consciousness is not an emergent property of neural complexity.
  • Consciousness is not a fundamental property of the universe.
  • Consciousness is the reflexive mode of certain mediated encounters, the form the encounter takes when cognition, language, and culture become part of what is appearing.

There is no gap to explain because the ‘gap’ is the product of a linguistic–conceptual framework that splits where the world does not.

As for the ever-mystical ‘what-it’s-like’: that isn’t a metaphysical jewel buried in the brain; it is the way a T₀–T₃ architecture manifests when its own structure becomes reflexively available.

A Brief Disclaimer Before the Internet Screams

Pointing out that Chalmers (and most of modern philosophy) operates within a faulty ontology is not to claim MEOW is flawless or final. It isn’t. But if Occam’s razor means anything, MEOW simply removes one unnecessary supposition – the idea that ‘mind’ and ‘world’ are independent substances in need of reconciliation. No triumphalism. Just subtraction.

Where This Leaves Chalmers

Chalmers is not wrong. He’s just asking the wrong question. The hard problem is not a metaphysical insight. It’s the moment our language tripped over its shoelaces and insisted the pavement was mysterious.

MEOW doesn’t solve the hard problem. It shows why the hard problem only exists inside a linguistic architecture that can’t model its own limitations.

This piece could easily grow into a full essay – perhaps it will. But for now, it does the job it needs to: a practical demonstration of MEOW in action.

And, arguably more important, it buys me one more day of indexing.

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument. At the risk of poking the bear, I have formulated a response.

Audio: NotebookLM podcast on this topic

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The Causa Sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour; it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.

Beware the Bots: A Cautionary Tale on the Limits of Generative AI

Generative AI (Gen AI) might seem like a technological marvel, a digital genie conjuring ideas, images, and even conversations on demand. It’s a brilliant tool, no question; I use it daily for images, videos, and writing, and overall, I’d call it a net benefit. But let’s not overlook the cracks in the gilded tech veneer. Gen AI comes with its fair share of downsides—some of which are as gaping as the Mariana Trench.

First, a quick word on preferences. Depending on the task at hand, I tend to use OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity.ai, with a particular focus on Google’s NotebookLM. For this piece, I’ll use NotebookLM as my example, but the broader discussion holds for all Gen AI tools.

Now, as someone who’s knee-deep in the intricacies of language, I’ve been drafting a piece supporting my Language Insufficiency Hypothesis. My hypothesis is simple enough: language, for all its wonders, is woefully insufficient when it comes to conveying the full spectrum of human experience, especially as concepts become abstract. Gen AI has become an informal editor and critic in my drafting process. I feed in bits and pieces, throw work-in-progress into the digital grinder, and sift through the feedback. Often, it’s insightful; occasionally, it’s a mess. And herein lies the rub: with Gen AI, one has to play babysitter, comparing outputs and sending responses back and forth among the tools to spot and correct errors. Like cross-examining witnesses, if you will.
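
For the curious, that cross-examination loop looks roughly like this in code – a minimal sketch assuming the official openai and anthropic Python packages with API keys already set in the environment; the model names are placeholders and the prompt is only an example:

```python
# Send the same draft to two models and read the answers side by side.
# Assumes the `openai` and `anthropic` Python packages and API keys in the
# environment; model names below are placeholders -- substitute whatever
# is current.

from openai import OpenAI
import anthropic

PROMPT = "Critique this draft paragraph for clarity and accuracy:\n\n{draft}"

def ask_openai(draft: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT.format(draft=draft)}],
    )
    return resp.choices[0].message.content

def ask_claude(draft: str) -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1000,
        messages=[{"role": "user", "content": PROMPT.format(draft=draft)}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    draft = "Language is insufficient to convey abstract experience."
    for name, answer in [("OpenAI", ask_openai(draft)), ("Claude", ask_claude(draft))]:
        print(f"--- {name} ---\n{answer}\n")
```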

But NotebookLM is different from the others. While it’s designed for summarisation, it goes beyond by offering podcasts—yes, podcasts—where it generates dialogue between two AI voices. You have some control over the direction of the conversation, but ultimately, the way it handles and interprets your input depends on internal mechanics you don’t see or control.

So, I put NotebookLM to the test with a draft of my paper on the Language Effectiveness-Complexity Gradient. The model I’m developing posits that as terminology becomes more complex, it also becomes less effective. Some concepts, the so-called “ineffables,” are essentially untranslatable, or at best, communicatively inefficient. Think of describing the precise shade of blue you can see but can’t quite capture in words—or, to borrow from Thomas Nagel, explaining “what it’s like to be a bat.” NotebookLM managed to grasp my model with impressive accuracy—up to a point. It scored between 80 and 100 percent on interpretations, but when it veered off course, it did so spectacularly.

For instance, in one podcast rendition, the AI’s male voice attempted to give an example of an “immediate,” a term I use to refer to raw, preverbal sensations like hunger or pain. Instead, it plucked an example from the ineffable end of the gradient, discussing the experience of qualia. The slip was obvious to me, but imagine this wasn’t my own work. Imagine instead a student relying on AI to summarise a complex text for a paper or exam. The error might go unnoticed, resulting in a flawed interpretation.

The risks don’t end there. Gen AI’s penchant for generating “creative” content is notorious among coders. Ask ChatGPT to whip up some code, and it’ll eagerly oblige—sometimes with disastrous results. I’ve used it for macros and simple snippets, and for the most part, it delivers, but I’m no coder. For professionals, it can and has produced buggy or invalid code, leading to all sorts of confusion and frustration.
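
One habit that keeps the babysitting manageable: never drop an AI-suggested snippet straight into a project; wrap it in a quick check first. The sketch below is hypothetical – the helper function merely stands in for whatever the model produced – but the pattern is the point:

```python
# Wrap an AI-suggested snippet in a quick test before trusting it.
# The function below is a hypothetical stand-in, not code from any
# particular chat.

def dedupe_preserving_order(items):  # AI-suggested helper (stand-in)
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

def test_dedupe_preserving_order():
    assert dedupe_preserving_order([3, 1, 3, 2, 1]) == [3, 1, 2]
    assert dedupe_preserving_order([]) == []

if __name__ == "__main__":
    test_dedupe_preserving_order()
    print("AI-suggested snippet passed its checks")
```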

Ultimately, these tools demand vigilance. If you’re asking Gen AI to help with homework, you might find it’s as reliable as a well-meaning but utterly clueless parent who hasn’t cracked a textbook in years. And as we’ve all learned by now, good intentions rarely translate into accurate outcomes.

The takeaway? Use Gen AI as an aid, not a crutch. It’s a handy tool, but the moment you let it think for you, you’re on shaky ground. Keep it at arm’s length; like any assistant, it can take you far—just don’t ask it to lead.