Can Zombies Ever Be Conscious?

In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies—hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.

This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.

Dennett’s Zombies and Zimboes: Consciousness as Function

Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property—such as subjective experience—that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.

Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.

In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion—a misunderstanding of what consciousness is.

A Biological Rejection: Zombies Are Impossible

On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies—rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.

The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies—beings that behave like humans without consciousness—are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.

In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.

The Open Question: What About Future Technology?

However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies—whether defined as Dennett’s “behaviourally indistinguishable” beings or the biologically impossible entities proposed by the paper—are not real. But could this change?

What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies—performing all the actions and behaviours we associate with consciousness, but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.

Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities—artificial zombies—that lack the inner life of conscious beings yet still perform human-like actions.

I Wonder What’s Next?

This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.

Could we one day create a true zombie—a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.

For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.

Jargon, Brains, and the Struggle for Meaning


Specialised Languages: Academia’s Jargon Olympics

If you thought normal language was confusing, let’s take a moment to appreciate the true champions of linguistic obscurity: academics. Welcome to the world of specialised languages, where entire fields of study have developed their own language games that make even Wittgenstein’s head spin.

Here’s how it works: Every discipline—science, law, philosophy—creates its own jargon to describe the world. At first, it seems helpful. Instead of using vague terms, you get precise definitions for complex ideas. But what started as a way to improve communication within a field quickly turned into a linguistic arms race, where the more obscure and convoluted your terms are, the smarter you sound. You’re not just a lawyer anymore—you’re someone who’s ready to throw “res ipsa loquitur” into casual conversation to leave everyone else in the room wondering if they’ve missed a memo.

The problem? If you’re not part of the club, good luck understanding what anyone is talking about. Want to read a physics paper? Prepare to learn a whole new vocabulary. Need to get through a legal document? You’ll be knee-deep in Latin phrases before you even get to the point. And don’t even try to decipher a philosophical text unless you’re ready to battle abstract nouns that have been stretched and twisted beyond recognition.

It’s not just the words themselves that are the issue—it’s the sheer density of them. Take “justice” for example. In philosophy, you’ve got theories about distributive justice, retributive justice, restorative justice, and a hundred other variations, each with its own set of terms and conditions. And that’s before we even touch on how “justice” is defined in legal circles, where it becomes an even more tangled mess of case law and precedent. Every field is playing its own version of the “justice” game, with its own rules and definitions, and none of them are interested in comparing notes.

This is the academic world in a nutshell. Each discipline has built its own linguistic fortress, and unless you’ve spent years studying, you’re not getting in. But here’s the kicker: even within these fields, people are often misunderstanding each other. Just because two scientists are using the same words doesn’t mean they’re on the same page. Sometimes, it’s more like a game of intellectual one-upmanship—who can define the most obscure term or twist a familiar word into something completely unrecognisable?

And let’s not forget the philosophers. They’ve turned linguistic acrobatics into an art form. Good luck reading Foucault or Derrida without a dictionary (or five) on hand. You might walk away thinking you understand their points, but do you really? Or have you just memorised the jargon without actually grasping the deeper meaning? Even scholars within these fields often argue over what was really meant by a certain text—Barthes, after all, famously declared the “death of the author,” so it’s not like anyone really has the final say on meaning anyway.

So here we are, knee-deep in jargon, trying to communicate with people who, technically, speak the same language but are operating within entirely different rulesets. Every academic discipline has its own secret code, and if you don’t know it, you’re lost. Even when you do know the code, you’re still at risk of miscommunication, because the words that look familiar have been stretched and shaped to fit highly specific contexts. It’s like being fluent in one dialect of English and then suddenly being asked to write a thesis in legalese. Good luck.

In the end, academia’s specialised languages don’t just make things harder—they actively create barriers. What started as a way to improve precision has turned into an obstacle course of incomprehensible terms, where the real challenge is just figuring out what anyone’s actually saying. And let’s be honest, even if you do figure it out, there’s no guarantee it’s going to mean the same thing next time you see it.

Neurolinguistics: Even Our Brains Can’t Agree

So far, we’ve seen how language is a mess of miscommunication, cultural differences, and academic jargon. But surely, at least on a biological level, our brains are all on the same page, right? Well, not exactly. Welcome to the wonderful world of neurolinguistics, where it turns out that even the very organ responsible for language can’t get its act together.

Here’s the deal: Neurolinguistics is the study of how the brain processes language, and while it’s fascinating, it’s also a bit of a buzzkill for anyone hoping for consistency. See, your brain and my brain don’t process language in the same way. Sure, we’ve got similar hardware, but the software is wildly unpredictable. There are individual differences, cultural influences, and developmental quirks that all affect how we understand and produce language. What’s simple for one brain might be completely baffling to another.

Take, for example, something as basic as syntax. Chomsky might have told us we all have a universal grammar hard-wired into our brains, but neurolinguistics has shown that how we apply that grammar can vary significantly. Some people are wired to handle complex sentence structures with ease—think of that friend who can follow 10 different clauses in a single breath. Others? Not so much. For them, even a moderately tricky sentence feels like mental gymnastics. The brain is constantly juggling words, meanings, and structures, and some brains are better at it than others.

But the real kicker is how differently we interpret words. Remember those abstract nouns we’ve been wrestling with? Well, it turns out that your brain might be interpreting ‘freedom’ or ‘justice’ completely differently from mine – not just because of culture or upbringing, but because our brains physically process those words in different ways. Neurolinguistic studies have shown that certain regions of the brain are activated differently depending on the individual’s experience with language. In other words, your personal history with a concept can literally change how your brain lights up when you hear or say it.

And don’t even get me started on bilingual brains. If you speak more than one language, your brain is constantly toggling between two (or more) linguistic systems, which means it’s running twice the risk of misinterpretation. What a word means in one language might trigger a completely different association in another, leaving bilingual speakers in a constant state of linguistic flux. It’s like trying to run two operating systems on the same computer—things are bound to get glitchy.

But here’s the real kicker: Even within the same person, the brain can’t always process language the same way all the time. Stress, fatigue, emotional state—all of these factors can influence how well we handle language on any given day. Ever tried to have a coherent conversation when you’re tired or angry? Good luck. Your brain isn’t interested in nuance or deep philosophical ideas when it’s in survival mode. It’s just trying to get through the day without short-circuiting.

So, not only do we have to deal with the external chaos of language – miscommunication, different contexts, shifting meanings – but we also have to contend with the fact that our own brains are unreliable interpreters. You can use all the right words, follow all the right grammar rules, and still end up with a garbled mess of meaning because your brain decided to take a nap halfway through the sentence.

In the end, neurolinguistics reminds us that language isn’t just a social or cultural problem – it’s a biological one too. Our brains are doing their best to keep up, but they’re far from perfect. The very organ that makes language possible is also responsible for making it infinitely more complicated than it needs to be. And if we can’t rely on our own brains to process language consistently, what hope do we have of ever understanding anyone else?



The Myth of Psychological Normalcy

Neurodivergence is a hot topic today. But why? Are people more informed, or simply more comfortable speaking out?

Humans, by nature, are neurodivergent. This has always been the case, yet the illusion of neurological normalcy persists. The real issue lies in psychology’s pretence that humans fit into a standard model of normalcy.

In production processes, normality is expected—a product must meet specific standards, and deviations are considered defects. However, applying this industrial concept to human psychology is deeply flawed. Humans are not teleological except in the broadest biological sense. More people have brown eyes than blue or green; are those with brown eyes normal and others not? Ninety per cent of people are right-handed; does that make left-handed individuals abnormal? Statistically, they might be. However, in psychology, normalcy carries a judgmental connotation that goes beyond mere numbers.

Psychology, as expressed in behaviour, is not a suitable domain for discussing normality. Many people misuse the concept of “normal” as a moral qualification, which must stop.

In fact, the very concept of psychological normalcy is a relatively recent invention, one that has been used as a tool of control and categorisation. As philosophers have noted, the obsession with ‘normal’ behaviour reveals more about society’s desire for control than it does about the true nature of human diversity.

We are all neurodiverse. This is what typical looks like. Do some people vary significantly from others on certain dimensions? Yes, but this is a relative or contextual comparison.

By embracing neurodiversity, we recognise that our differences are not defects but essential variations, enriching the human experience. It’s time to move beyond the narrow confines of neurological “normality” and celebrate the full spectrum of human cognition. In doing so, we challenge the arbitrary boundaries that have long been used to divide and label.

Testudineous Agency

In chapter 71, “Ultimate Responsibility”, of Intuition Pumps and Other Tools for Thinking, the author and philosopher Daniel Dennett presents a counterargument to the notion that an agent—a person—cannot be ultimately responsible for their actions. He questions some premises in the ‘the way you are’ line of argumentation, but I question some of his questions.

Here is a nice clear version of what some thinkers take to be the decisive argument. It is due in this form to the philosopher Galen Strawson (2010):
1. You do what you do, in any given situation, because of the way you are.
2. So in order to be ultimately responsible for what you do, you have to be ultimately responsible for the way you are—at least in certain crucial mental respects.
3. But you cannot be ultimately responsible for the way you are in any respect at all.
4. So you cannot be ultimately responsible for what you do.

Dennett, Daniel C. Intuition Pumps and Other Tools for Thinking (p. 395). W. W. Norton & Company. Kindle Edition.
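Strawson’s four steps can be put schematically. This is a minimal rendering of my own—the notation is illustrative, not Strawson’s or Dennett’s—writing $S$ for your total state (“the way you are”), $A$ for your action, and $R^*(\cdot)$ for “ultimately responsible for”:

```latex
% Schematic form of Strawson's regress (notation is illustrative, not Strawson's own)
\begin{align*}
\text{(1)}\quad & A \text{ flows from } S && \text{(action follows from your total state)} \\
\text{(2)}\quad & R^*(A) \rightarrow R^*(S) && \text{(responsibility for the act requires responsibility for the state)} \\
\text{(3)}\quad & \neg R^*(S) && \text{(no one is ultimately responsible for the way they are)} \\
\text{(4)}\quad & \therefore\ \neg R^*(A) && \text{(modus tollens from (2) and (3))}
\end{align*}
```

Seen this way, Dennett’s move is to reject (3)—and with it the “ultimate” in $R^*$—while the sorites worry I raise later concerns how much ordinary, non-ultimate responsibility survives.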

Dennett continues.

The first premise is undeniable: “the way you are” is meant to include your total state at the time, however you got into it. Whatever state it is, your action flows from it non-miraculously.

Dennett and I are in agreement with Strawson on this point. There is not much to see here. It’s akin to saying that the present is the result of all past events. This is “the way you are”.

The second premise observes that you couldn’t be “ultimately” responsible for what you do unless you were “ultimately” responsible for getting yourself into that state—at least in some regards.

This second premise asserts that one cannot be ultimately responsible for an action unless one had some part in bringing about the state from which it flows. Two scenarios come immediately to mind.

First, you are not responsible for being born. As Heidegger notes, we are all thrown into this world. We have no say in when or where—what country or family—or what circumstances.

Second, if one is hypnotised or otherwise incapacitated, and then involved in a crime, one is merely a cog and not an agent, so not responsible in any material sense.

But according to step (3) this is impossible.

Whilst Dennett fixates on the absolute aspect of the assertion, I’d like to be more charitable and suggest that we still end up with a sorites paradox. Dennett will return to this one, and so shall I.

So step (4), the conclusion, does seem to follow logically. Several thinkers have found this argument decisive and important. But is it really?

As Dennett invalidates step (3), he insists that the conclusion is also invalid. He asserts that the notion of absolute responsibility is a red herring, and I argue that Dennett doesn’t get us much further, perhaps redirecting us with a pink herring.

I’ve created an image with tortoises to make my point. There are actually two points I wish to make. The first is to determine where responsibility is inherited. This point is meant to articulate that the world need not be strictly deterministic for one still to lack significant agency. The second point is that culpability is asserted as a need, and acceptance of this assertion is the problem.

Testuditude

The image depicts the evolution of an agent, with time progressing from left to right. The tortoise on the right is a product of each of the recursive tortoises to its left. The image means to convey that each subsequent tortoise is a genetic and social product of each tortoise prior. Of course, this is obviously simplified, because tortoises require pairs, so feel free to imagine each preceding tortoise as representing a pair, or feel free to add that level of diagrammatic complexity.

This is not meant to distinguish between nature and nurture. Instead, the claim is that one is a product of both. Moreover, whilst genetic, epigenetic, and memetic influences are transmitted in family units, they also occur through social interaction and the environment, as represented by the orange and green tortoises.

…if one is a product of genetic and memetic forces, how much agency remains for culpability?

The point here is that if one is a product of genetic and memetic forces, how much agency remains for culpability? Each person is an emergent unit—autonomous, yes, and yet highly programmed.

If I programme a booby trap to kill or maim any intruder, the booby trap has no agency. I assert, further, that the maker of that booby trap has no more responsibility than the killing device itself.

The old hand grenade wired to a doorknob boobytrap trick

But who do we blame? you ask, and that’s precisely the problem. Asking a question doesn’t presume it has an answer; assuming otherwise is a logical fallacy and a cognitive bias. This heuristic leaves us with faulty systems of jurisprudence. Humans seem hardwired, as it were, to blame. Humans need to believe in the notion of free will because they need to blame, because they need to punish, because vengeance is part of human nature, to the extent there is a human nature. There seems to be a propensity to frame everything as a causal relationship. Dennett calls this the intentional stance. To borrow a phrase from Dennett…

This instinctual response is the source in evolution of the invention of all the invisible elves, goblins, leprechauns, fairies, ogres, and gods that eventually evolve into God, the ultimate invisible intentional system.

Dennett, Daniel C. Intuition Pumps and Other Tools for Thinking (p. 374). W. W. Norton & Company. Kindle Edition.
Fire Trap in Home Alone

Sins of the Fathers (and Mothers)

Let’s wrap this up with a sorites paradox. As I’ve already said, I agree with Dennett that the absolute aspect is unnecessary and undesired. The question remains how much agency™ does a person have once we account for the other factors? Is it closer to 90 per cent or 10 per cent? Apart from this, what is the threshold for culpability? Legal systems already have arbitrary (if not capricious) thresholds for this, whether mental capacity or age, which basically distils back to the realm of capacity.

I have no basis to even venture a guess, but that’s never stopped me before. I’d argue that the agency is closer to zero than to one hundred per cent of the total, and I’d propose that 70 per cent feels like a reasonable threshold.

I could have sworn I’d posted a position on this after I read Robert Sapolsky’s Behave. Perhaps it’s never made it out of drafts.

In closing, I don’t think we need to settle the question of determinism versus free will to recognise that, even without strict determinism, personal agency is still severely limited. And just as our political systems presume a level of rationality that is not apparent, so our legal systems presume a level of agency that is not present.

Žižek’s Essentialism

So, I’ve gone down a rabbit hole. Again. This time, it’s Žižek. Again. I’ve still not read any of Žižek’s own work, but people mention him often, and he is a shameless self-promoter. In this video clip, he responds to whether gender is a social construct. Unfortunately, he conflates gender with sex, and his examples cite transsexual rather than transgender people.

sex is about biological sex assignment

To set the stage, sex is about biological sex assignment—the sex category you are assigned at birth: male, female, or other for some 1.8 per cent. This is a simplistic categorisation: penis = male; vagina = female; both or neither: rounding error. In some cases, a decision is made to surgically conform the child to either male or female and to ensure through prophylactic treatment that this isn’t undone hormonally in adolescence.

gender is about identity

Gender is about identity. As such, it is entirely a social construct. All identity of this nature is a function of language and society. In this world—in the West—females wear dresses (if they are to be worn at all) and males don’t, kilts notwithstanding. In this world, sex and gender have little room for divergence, so the male who identifies with this gender (not this sex) is ostracised.

The example I usually consider first is the comedian Eddie Izzard—a cross-dresser. He’s probably a bad example because he does identify as a male. He just doesn’t wish to be constrained by male role restrictions and wants to wear the makeup that’s been reserved for women in the West at this time.

Žižek eventually gets to an argument about essentialism—so we’re back at sorites paradoxes and the ship of Theseus again. At the start, I could argue that the sexual distinction has few meaningful contexts. For me, unless I am trying to have sex and/or procreate, the distinction is virtually meaningless. For others, only procreation remains contextually relevant. In this technological world, as Beauvoir noted in the late 1940s, strength differentials are not so relevant. And where they are, sex is not the deciding factor—strength is.

Žižek’s contention seems to be that the postmoderns (or whoever) disclaim essentialism in favour of constructivism but then fall back on essentialism as a defence because ‘now I am in the body originally intended’. I’ll argue that this is the logic employed by the person, but this person is not defending some academic philosophical position. They are merely engaging in idiomatic vernacular.

I am not deeply familiar with this space, but if the same person who is making a claim against essentialism is defending their actions with essentialism, then Žižek has a leg to stand on. As for me, the notions of essentialism and constructivism are both constructed.

Post-Post-Modern Subjectivism

I’ve just finished reading Steven Pinker’s The Blank Slate. Originally published in 2002 (and republished with an afterword in 2016), it still feels fresh. Pinker offers a compelling rationale for accepting that humans do not enter the world as blank slates.

Though I am somewhat of a social justice warrior in principle, I am still a moral subjectivist, a post-modern thinker. Pinker shares his strong feelings against subjectivism, but he provides no evidence of the moral objectivism he advances, relying instead on an emotional appeal; in fact, he employs the same defensive tactic his detractors employ, which is to try to make an empathic connection to the reader.

All he does is claim that there is an objective morality because everybody feels and knows that X is better than Y—a strawman approach. It’s not that I disagree with his Xs and Ys; it’s just that they are subjective, not objective, measures. He tries to slip in an appeal to popularity by claiming that everybody would (or should) feel this way when push comes to shove.

Nietzsche, I think, had it right in Beyond Good and Evil when he pointed out the dual moral systems of masters and slaves. Although a moral (just) system might best be constructed from scratch in the manner of Rawls’s veil of ignorance, we are not starting from a blank slate. The power structures are already in place. There is a possibility for upward and downward mobility, but large jumps are unlikely except in the manner of a lottery. Other than this, it’s unlikely that one will move from one quintile to another, and even less likely that one will skip a quintile, especially on the upward trend.

In any case, the issue is not whether some might feel subjectively better; it’s whether—across all possible dimensions—a relative, stable equilibrium can be found. Even here, this is not objective, even if it’s not otherwise arbitrary or capricious. The larger problem is one of epistemological empiricism (apart from the ontological question): whether we can know that we’ve found the objective truth or whether we’ve just settled on something that works for our current station.

As much as I really do like Steven Pinker, and as much as I await his next book, Enlightenment Now, I do so only to read how he couches his argument in support of Enlightenment and Humanism—two concepts I feel are tainted by hubris.