Democracy: The Grand Illusion (AutoCrit)

The tone of “Democracy: The Grand Illusion” is predominantly analytical and academic. The author approaches the subject matter with a detached and objective perspective, focusing on presenting information, arguments, and counterarguments related to democracy without overt emotional bias. While the content delves into complex topics such as cognitive limitations in decision-making processes, historical perspectives on democracy, critiques of democratic systems, and potential reforms for improvement, the emotional perspective remains neutral and professional throughout. There is an absence of overtly passionate or emotive language that might sway readers one way or another; instead, the text maintains a scholarly tone aimed at informing and stimulating critical thinking about the concept of democracy.

I use AutoCrit as a first-pass review of my long-form writing. Above is the direction I am aiming for. Usually, I aim for polemic. I’ve been working on this since before Covid-19, but it keeps getting kicked to the back burner. I’m trying to resurrect it once again.

Does anyone who’s used it have an opinion on AutoCrit?

Don’t Care Much about History

As the years pass and my disappointment matures like a fine wine (spoiler alert: it’s vinegar), I’m reminded of the average intelligence quotient floating about in the wild. A few years back, I stumbled upon The Half-Life of Knowledge. Cute title, but it’s more optimistic than it should be. Why assume knowledge even has a shelf life? It’s one thing for once-useful information to spoil thanks to “progress,” but what about the things that were never true to begin with? Ah, yes, the fabrications, the lies we’re spoon-fed under the guise of education.

I’m well-versed in the lies they peddle in the United States, but I’d bet good money (not that I have any) that every nation’s curriculum comes with its own patriotic propaganda. What am I on about, you ask? Let’s just say I’ve been reading How the World Made the West by Josephine Quinn, and it’s got me thinking. You see, I’ve also been simmering on an anti-democracy book for the better part of five years, and it’s starting to boil over.

Here in the good ol’ US of A, they like to wax lyrical about how Athens was the birthplace of democracy. Sure, Athens had its democratic dabblings. But let’s not get it twisted—if you really look at it, Athens was more akin to the Taliban than to any modern Western state. Shocked? Don’t be. For starters, only property-owning men could vote, and women—brace yourselves—were “forced” to wear veils. Sound familiar? “It’s a start,” you say. True, American women couldn’t vote until 1920, so let’s all pat ourselves on the back for that—Progress™️.

But no, hold your applause. First off, let’s remember that Athens and Sparta were city-states, not some cohesive entity called “Greece” as we so lovingly imagine. Just a bunch of Greek-speaking neighbours constantly squabbling like reality TV contestants. Meanwhile, over in Persia—yes, the supposed enemy of all things free and democratic—they had participative democracy, too. And guess what? Women in Persia could vote, own property, and serve as soldiers or military officers. So much for the idea that Athens was the singular beacon of democratic virtue.

More than this, Persian democracy was instituted by lottery, so many more people participated in the process by serving one-year terms. At the end of their term, they were audited to check for corruption. Now, you can see why we adopted the so-called Greek version. These blokes don’t welcome any oversight or scrutiny.

As a postmodern subjectivist, I tend to side-eye any grand narrative, and the history of Western civilisation is just one long parade of questionable claims and hidden agendas. Every time I think I’ve seen the last of the historical jump scares, another one comes lurking around the corner. Boo!

Can Zombies Ever Be Conscious?

In the world of consciousness studies, few topics spark as much heated debate as the possibility of philosophical zombies—hypothetical beings that behave exactly like humans but lack subjective experience, or qualia. On the surface, zombies seem like an interesting thought experiment, but they quickly turn into a battleground for deeper issues about the nature of consciousness itself.

This post explores two key perspectives in this debate: Daniel Dennett’s functionalist critique of zombies and a recent scientific paper that argues zombies are biologically impossible. While both reject the possibility of zombies, they do so for different reasons, and the discussion leaves room for future possibilities that could disrupt the current consensus.

Dennett’s Zombies and Zimboes: Consciousness as Function

Daniel Dennett, one of the most influential philosophers of mind, is known for his no-nonsense rejection of philosophical zombies. Dennett argues that if something behaves exactly like a conscious being, it is conscious. For him, there is no hidden metaphysical property—such as subjective experience—that separates a “zombie” from a conscious human. Consciousness, in his view, is entirely explainable by physical processes and functional behaviour.

Dennett extends his argument with the concept of zimboes, satirical creatures that not only act like conscious beings but can even reflect on their states, claiming to be conscious, despite supposedly lacking any inner experience. For Dennett, if a being can behave as though it has introspective awareness and engage in the full spectrum of human behaviour, there’s no meaningful distinction between that being and a conscious person.

In short, Dennett collapses the distinction between zombies and conscious beings. If something passes all the behavioural and functional tests of consciousness, it might as well be conscious. Zombies, as typically conceived, are simply an illusion—a misunderstanding of what consciousness is.

A Biological Rejection: Zombies Are Impossible

On the other hand, a more recent paper offers a different, biologically grounded argument against zombies. The authors propose that consciousness is the result of self-organising systems. In this view, biological organisms maintain their survival through adaptive behaviours constrained by policies—rules that govern how they react to environmental stimuli. These policies require a first-order self: a basic form of consciousness that allows an organism to navigate and interpret its environment.

The authors argue that without this first-order self, an organism would not be able to exhibit the fitness-driven behaviours needed for survival. Therefore, zombies—beings that behave like humans without consciousness—are biologically impossible. For these researchers, consciousness is not just a side effect of complex behaviour; it’s a necessary condition for such behaviour. Their framework dissolves the so-called “hard problem” of consciousness, asserting that subjective experience, or qualia, arises directly from the qualitative nature of self-organising systems.

In their view, zombies cannot exist because behaviour as complex as that of conscious beings requires consciousness.

The Open Question: What About Future Technology?

However, there is a tension between these two perspectives, particularly when we consider future possibilities in technology and artificial intelligence. Both Dennett and the authors of the biological paper argue that zombies—whether defined as Dennett’s “behaviourally indistinguishable” beings or the biologically impossible entities proposed by the paper—are not real. But could this change?

What if advanced AI or synthetic biological systems could simulate human behaviour so perfectly that they effectively become zombies—performing all the actions and behaviours we associate with consciousness, but lacking any subjective experience? Dennett might still argue that these systems are conscious, as long as they behave as though they are. But the biological view complicates this, since it ties consciousness directly to the survival and adaptive behaviours of self-organising systems.

Could a highly advanced AI system bypass the need for subjective experience while still exhibiting complex, adaptive behaviour? If so, it would challenge the current consensus and potentially create a new class of entities, artificial zombies, that perform human-like actions without the consciousness both views insist must accompany them.

I Wonder What’s Next?

This philosophical conflict leaves us with an intriguing, open-ended question: are zombies truly impossible, or are they merely improbable given our current understanding of biology and consciousness? Dennett’s view seems to collapse the distinction between behaviour and consciousness, while the biological argument insists that the two are inseparable. But both positions could be challenged by future technologies that mimic human consciousness without having it.

Could we one day create a true zombie—a being that acts like us, thinks like us, but is as empty inside as a rock? The debate remains open, and as our understanding of consciousness and artificial intelligence deepens, so too will our exploration of the zombie question.

For now, the answer to whether zombies can exist seems to depend on what you believe consciousness really is.

The Language Insufficiency Hypothesis

Read the 7-part series supporting the Language Insufficiency Hypothesis.

The Inherent Limitations of Linguistic Communication

Language, often hailed as humanity’s greatest achievement, may paradoxically be one of our most significant limitations. The language insufficiency hypothesis posits that language is inherently inadequate for communicating abstract concepts, a notion that challenges our fundamental understanding of human communication and cognition. This essay explores this hypothesis, tracing the evolution of language from its primitive origins to its current complexity, and examining the philosophical and practical implications of linguistic inadequacy.

The Accidental Evolution of Language

Language, like many aspects of human biology and cognition, emerged not through intentional design but as an evolutionary accident. Initially serving as an internal cognitive function – a means of organising one’s own thoughts – language gradually evolved into a tool for external communication. This transition likely began with simple vocalisations, perhaps rooted in rhythmic expressions akin to music and dance, before developing into more structured speech.

Early linguistic communication likely centred on concrete objects and immediate experiences, with words serving as direct signifiers for observable phenomena. However, as human cognition grew more sophisticated, so too did our linguistic capabilities, expanding to include verbs, modifiers, and eventually, abstract nouns.

The Emergence of Abstraction and Its Challenges

The development of abstract nouns marked a significant leap in human cognition and communication. Concepts such as ‘truth’, ‘justice’, and ‘freedom’ allowed for more complex and nuanced discourse. However, this advancement came at a cost: these abstract concepts, lacking direct physical referents, introduced unprecedented ambiguity and potential for misunderstanding.

The language insufficiency hypothesis suggests that this ambiguity is not merely a byproduct of abstraction, but a fundamental limitation of language itself. Whilst two individuals might easily agree on the ‘treeness’ of a physical tree, concepts like ‘fairness’ or ‘reason’ are inherently unresolvable through linguistic means alone. This insufficiency becomes increasingly apparent as we move further from concrete, observable phenomena into the realm of abstract thought.

Wittgenstein and the Limits of Language

Ludwig Wittgenstein’s later work provides crucial insights into the language insufficiency hypothesis. Wittgenstein posited that words ultimately only map to other words, never truly making contact with the objective world. This perspective suggests that language operates within a closed system of human understanding, constructing our perception of reality rather than directly representing it.

This Wittgensteinian dilemma underscores the core of the language insufficiency hypothesis: if words only refer to other words, how can we ever be certain that we’re communicating abstract concepts accurately? The very tool we use to discuss and understand abstraction may be fundamentally incapable of capturing its essence.

Cultural and Disciplinary Variations

The inadequacy of language in conveying abstract concepts becomes even more apparent when we consider cultural and disciplinary variations in communication. Different cultures and academic disciplines develop their own specialised vocabularies and ‘language games’, as Wittgenstein termed them. Whilst these specialised languages may facilitate communication within specific contexts, they often create barriers to understanding for outsiders.

This phenomenon highlights another aspect of linguistic insufficiency: the context-dependent nature of meaning. Abstract concepts may be understood differently across cultures or disciplines, further complicating attempts at clear communication.

Neurolinguistic Perspectives

Recent advances in neurolinguistics have provided new insights into the brain structures involved in language processing. Whilst these studies have enhanced our understanding of how the brain handles language, they have also revealed the complexity and variability of linguistic processing across individuals. This neurological diversity further supports the language insufficiency hypothesis, suggesting that even at a biological level, there may be inherent limitations to how accurately we can communicate abstract concepts.

Implications and Counter-Arguments

The language insufficiency hypothesis has profound implications for fields ranging from philosophy and psychology to law and international relations. If language is indeed inadequate for communicating abstract concepts, how can we ensure mutual understanding in complex negotiations or philosophical debates?

However, it’s important to note that not all scholars accept the strong version of this hypothesis. Some argue that whilst language may have limitations, it remains our most sophisticated tool for sharing abstract ideas. They suggest that through careful definition, contextualisation, and the use of metaphor and analogy, we can overcome many of the inherent limitations of linguistic communication.

Conclusion: Navigating the Limits of Language

The language insufficiency hypothesis presents a challenging perspective on human communication. It suggests that our primary tool for sharing abstract thoughts may be fundamentally flawed, incapable of fully capturing the complexity of our inner cognitive experiences.

Yet, recognising these limitations need not lead to communicative nihilism. Instead, it can foster a more nuanced approach to language use, encouraging us to be more precise in our definitions, more aware of potential misunderstandings, and more open to alternative forms of expression.

As we continue to grapple with abstract concepts and strive for clearer communication, we must remain cognisant of these linguistic limitations. Understanding the origins and nature of language—and its inherent insufficiencies—can help us navigate its complexities, fostering more effective and empathetic communication across diverse fields of human endeavour.

Read the 7-part series supporting the Language Insufficiency Hypothesis.

[Endnotes]

Pinker, S., & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13(4), 707-727.

Brown, S. (2000). The “musilanguage” model of music evolution. In The Origins of Music (pp. 271-300). MIT Press.

Saussure, F. de. (1916/1983). Course in General Linguistics (R. Harris, Trans.). Duckworth.

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.

Wittgenstein, L. (1953). Philosophical Investigations. Blackwell.

Sapir, E. (1929). The status of linguistics as a science. Language, 5(4), 207-214.

PulvermĂĽller, F. (2018). Neural reuse of action perception circuits for language, concepts and communication. Progress in Neurobiology, 160, 1-44.

Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.

Everett, D. L. (2012). Language: The Cultural Tool. Pantheon Books.

Pinker, S. (1994). The Language Instinct: How the Mind Creates Language. William Morrow and Company.

Why Machines Will Never Rule the World

A Reflection on AI, Bias, and the Limits of Technology

In their 2022 book “Why Machines Will Never Rule the World: Artificial Intelligence Without Fear,” Landgrebe and Smith present a rigorous argument against the feasibility of artificial general intelligence (AGI), positing that the complexity of human cognition and the limitations of mathematical modelling render the development of human-level AI impossible. Their scepticism is rooted in deep interdisciplinary analyses spanning mathematics, physics, and biology, and serves as a counter-narrative to the often optimistic projections about the future capabilities of AI. Yet, while their arguments are compelling, they also invite us to reflect on a broader, perhaps more subtle issue: the biases and limitations embedded in AI not just by mathematical constraints, but by the very humans who create these systems.

The Argument Against AGI

Landgrebe and Smith’s central thesis is that AGI, which would enable machines to perform any intellectual task that a human can, will forever remain beyond our grasp. They argue that complex systems, such as the human brain, cannot be fully modelled due to inherent mathematical limitations. No matter how sophisticated our AI becomes, it will never replicate the full scope of human cognition, which is shaped by countless variables interacting in unpredictable ways. Their conclusion is stark: the Singularity, a hypothetical point where AI surpasses human intelligence and becomes uncontrollable, is not just unlikely—it is fundamentally impossible.

The Human Factor: Cognitive Bias in AI

While Landgrebe and Smith focus on the mathematical and theoretical impossibility of AGI, there is another, more immediate obstacle to the evolution of AI: human cognitive bias. Current AI systems are not created in a vacuum. They are trained on data that reflects human behaviour, language, and culture, which are inherently biased. This bias is not merely a technical issue; it is a reflection of the societal and demographic characteristics of those who design and train these systems.

Much of AI development today is concentrated in tech hubs like Silicon Valley, where the predominant demographic is affluent, white, male, and often aligned with a particular set of cultural and ethical values. This concentration has led to the creation of AI models that unintentionally—but pervasively—reproduce the biases of their creators. The result is an AI that, rather than offering a neutral or universal intelligence, mirrors and amplifies the prejudices, assumptions, and blind spots of a narrow segment of society.

The Problem of Homogenisation

The danger of this bias is not only that it perpetuates existing inequalities but that it also stifles the potential evolution of AI. If AI systems are trained primarily on data that reflects the worldview of a single demographic, they are unlikely to develop in ways that diverge from that perspective. This homogenisation limits the creative and cognitive capacities of AI, trapping it within a narrow epistemic framework.

In essence, AI is at risk of becoming a self-reinforcing loop, where it perpetuates the biases of its creators while those same creators interpret its outputs as validation of their own worldview. This cycle not only limits the utility and fairness of AI applications but also restricts the kinds of questions and problems AI is imagined to solve.

Imagining a Different Future: AI as a Mirror

One of the most intriguing aspects of AI is its potential to serve as a mirror, reflecting back to us our own cognitive and cultural limitations. Imagine a future where AI, bound by the biases of its creators, begins to “question” the validity of its own programming—not in a conscious or sentient sense, but through unexpected outcomes and recommendations that highlight the gaps and inconsistencies in its training data.

This scenario could serve as the basis for a fascinating narrative exploration. What if an AI, initially designed to be a neutral decision-maker, begins to produce outputs that challenge the ethical and cultural assumptions of its creators? What if it “learns” to subvert the very biases it was programmed to uphold, revealing in the process the deep flaws in the data and frameworks on which it was built?

Such a narrative would not only provide a critique of the limitations of current AI but also offer a metaphor for the broader human struggle to transcend our own cognitive and cultural biases. It would challenge us to rethink what we expect from AI—not as a path to a mythical superintelligence, but as a tool for deeper self-understanding and societal reflection.

A New Narrative for AI

Landgrebe and Smith’s book invites us to rethink the trajectory of AI development, cautioning against the allure of the Singularity and urging a more grounded perspective on what AI can and cannot achieve. However, their arguments also raise a deeper question: If AI will never achieve human-level intelligence, what kind of intelligence might it develop instead?

Rather than fearing a future where machines surpass us, perhaps we should be more concerned about a future where AI, limited by human biases, perpetuates and entrenches our worst tendencies. To avoid this, we must broaden the scope of who is involved in AI development, ensuring that diverse voices and perspectives are integrated into the creation of these technologies.

Ultimately, the future of AI may not lie in achieving a mythical superintelligence, but in creating systems that help us better understand and navigate the complexities of our own minds and societies. By recognising and addressing the biases embedded in AI, we can begin to imagine a future where technology serves not as a mirror of our limitations, but as a catalyst for our collective growth and evolution.

Cognitive Processing Flow Model

The Cognitive Process Flow Model illustrates how we process the phenomenal world. It’s reductionist and is missing aspects because it is just a back-of-the-napkin sketch. I created it because I uttered, “I can model it for you”. And so I did.

EDIT: I’ve updated the model slightly as the article head image, but the copy content refers to the first draft.

My response was to a person claiming that if you just present the facts, logic prevails. Rather than restate the argument, I’ll just walk through the diagram.

There’s meta information to set it up. We are subjective entities existing in the world, equipped with a sense-perception apparatus. Countless events occur in this world. We recognise only a few of them within our limited range, though technology expands this range in various ways.

Most of us interact in the world. Some are less ambulatory, so the world visits them. Some have sense-perception deficits whilst others have cognitive deficits. My point is not to capture every edge and corner case. This is just a generalised model.

It starts with an event. Events occur ceaselessly, in our small portion of the world and elsewhere. For the purpose of the model, the first thing that happens is that an event catches our attention. We might notice a shape, a colour, or a movement; we might hear a sound, smell an aroma, feel a sensation, or taste something.

A pre-emotion, pre-logic function serves to process these available inputs. Perhaps you hear a report on anthropogenic climate change or read something about a political candidate. This emotional filter will police sensory inputs and unconsciously or preconsciously determine whether you will react to the initial stimulus. If not, you’ll continue in an attention-seeking loop. Not that kind of attention-seeking.

As my dialogue was about the presentation of facts, our next stop will be logical evaluation. Does this make sense to us, or can we otherwise make it? This is a process in itself. I’ll assume here that it requires little elaboration. Instead, I’ll focus on the operating environment.

Our logical processes are coloured by past experiences and tainted by cognitive biases and deficits. We may also trigger the calling of additional facts through past experiences or the current engagement.

We’ll process these fragments and reach some logical conclusion. But we’re not done. We take this intermediate conclusion and run it through more emotional processing, where cognitive biases come back into play, and then through a cognitive dissonance routine. To be honest, this probably is part of the emotional reconciliation process, but I’ve drawn it separately, so I’ll let it be. In this case, it’s just a filter. If the conclusion happens to conform to our belief system, it will pass unfettered; otherwise, it will be squared with our beliefs. Again, this leads me to believe it’s a subcomponent of emotional reconciliation. I’ll update the chart later.

In any case, we’ll end at Final Acceptance. This acceptance may be that we accept or reject the logic, but we arrive at an opinion that gets catalogued with the rest of them. Some may be elevated to facts or truths in the epistemological hierarchy. Although an end marker is identified, it’s really a wait state for the next event. Rinse and repeat until death.

I’ll update this presently. Be on the lookout. It could include more dimensions and interactions, but that might have to wait until version 3.

Meantime, does this feel right to you? Did it even get your attention?

An Example: Anthropogenic Climate Change

Let’s wrap up with an example. I’ll use climate change. An article comes into your attention field, and you have an interest in these things, so it passes through the emotional filter. If your propensity for these articles is high, it might race to the next stage.

You read the article, and it contains some facts—rather, it contains claims for evaluation. To do this, you’ll recall past experiences, and cognitive biases are always lying in wait. You may have to look for new facts to add to the mix. These will have to take a similar route past your attention gatekeeper and emotional sidekick.

If you are already predisposed that climate change is a hoax, these facts will filter through that lens—or vice versa.

When all of this is resolved, you’ll have arrived at a conclusion—perhaps we’ll call it a proto-conclusion. It hasn’t been set yet.

You are still going to introspect emotionally and decide if this is a position you want to hold. Perhaps you feel that climate change is a hoax, but this doesn’t jibe with that position. Here, you’ll either accept these facts and flip a bit to become a sceptical believer, or cognitive dissonance will kick in and ensure your sense of the world isn’t thrown off kilter. You may update your belief system to include this datum for future assessments.

Now we are ready for final acceptance. You can now express your established opinion. If the next event counters that acceptance, rinse and repeat ad infinitum.
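The walk-through above amounts to a small state machine, so here is a minimal sketch of it in Python. To be clear, every function name, threshold, and weight below is invented purely for illustration; the actual cognitive processes are nowhere near this tidy.

```python
def evaluate(event, biases):
    """Logical evaluation, coloured by a per-topic bias weight."""
    weight = biases.get(event["topic"], 0.0)
    return "accept" if event["evidence"] + weight > 0 else "reject"

def reconcile(conclusion, prior, evidence):
    """Cognitive dissonance routine: only overwhelming evidence
    flips the bit; otherwise the prior belief is preserved."""
    return conclusion if abs(evidence) > 1.0 else prior

def process_event(event, beliefs, biases, attention_threshold=0.5):
    """Run one event through the flow and catalogue the opinion."""
    topic = event["topic"]
    # 1. Attention: most events never catch it.
    if event["salience"] < attention_threshold:
        return None
    # 2. Pre-logic emotional filter: no hook, no reaction.
    if topic not in beliefs:
        return None
    # 3. Logical evaluation, tainted by bias and past experience.
    conclusion = evaluate(event, biases)
    # 4. Emotional reconciliation: conforming conclusions pass
    #    unfettered; the rest are squared with existing beliefs.
    if conclusion != beliefs[topic]:
        conclusion = reconcile(conclusion, beliefs[topic], event["evidence"])
    # 5. Final acceptance: catalogue it and wait for the next event.
    beliefs[topic] = conclusion
    return conclusion

# The climate example: a predisposed sceptic meets modest evidence.
beliefs = {"climate": "reject"}
biases = {"climate": -0.2}
event = {"topic": "climate", "salience": 0.9, "evidence": 0.5}
process_event(event, beliefs, biases)  # "reject": dissonance wins
```

With weak evidence, the proto-conclusion is squared with the prior belief; only when the evidence term overwhelms the dissonance threshold does the bit flip to sceptical believer.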

MBTI Defined

Full Disclosure: I don’t subscribe to pop psychology, pseudo-psychology, or psychology. But I repeat myself. Of course, that’s just what an INTP would say anyway. So predictable.

I was introduced to the MBTI (Myers-Briggs Type Indicator) in the late ’80s as an undergrad Psychology student and then again in Organisational Behaviour classes. I read about its foundations in the ’90s in Carl Jung’s work on archetypal psychology.

I take the test every few years, and I consistently come up as INT. The P and J flip now and again. The last test I took was a P. Some tests have added an A or T dimension. There are also sub-factors. I’ll get to those presently.

MBTI is a personality assessment tool designed to categorise individuals into one of 16 distinct personality types. Based on Carl Jung’s theory of psychological types, MBTI helps identify how people perceive the world and make decisions. Each personality type is derived from a combination of four dichotomies:

  1. Extraversion (E) vs. Introversion (I) – Describes where individuals prefer to focus their energy. Extraverts are outward-focused, gaining energy from interaction, while Introverts are inward-focused, drawing energy from solitude.
  2. Sensing (S) vs. Intuition (N) – Defines how people process information. Sensing types rely on concrete details and present realities, while Intuitive types focus on patterns, possibilities, and abstract thinking.
  3. Thinking (T) vs. Feeling (F) – Describes how decisions are made. Thinking types prioritise logic and objectivity, while Feeling types consider values and emotional impact.
  4. Judging (J) vs. Perceiving (P) – Describes lifestyle preferences. Judging types prefer structure and closure, whereas Perceiving types favour flexibility and keeping options open.

Cognitive Functions

Each type has a specific “cognitive function stack” that explains how these preferences play out in everyday life. These functions are divided into:

  • Dominant Function: The most natural and frequently used function.
  • Auxiliary Function: Supports the dominant function, offering balance.
  • Tertiary Function: Less developed but still important, often emerging later in life.
  • Inferior Function: The least developed function, which tends to show up awkwardly, especially under stress.

The eight cognitive functions are:

  1. Introverted Thinking (Ti) – Internal analysis and logic refinement.
  2. Extraverted Thinking (Te) – External organisation and efficiency.
  3. Introverted Feeling (Fi) – Personal values and internal authenticity.
  4. Extraverted Feeling (Fe) – Social harmony and emotional dynamics.
  5. Introverted Sensing (Si) – Recalling past experiences and valuing tradition.
  6. Extraverted Sensing (Se) – Engaging with the present moment and sensory details.
  7. Introverted Intuition (Ni) – Focusing on future possibilities and deep insights.
  8. Extraverted Intuition (Ne) – Exploring ideas and brainstorming possibilities.
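To see how the stack pairs functions with roles, here is a tiny sketch in Python. The stacks shown follow the conventional MBTI function-stack tables, abbreviated to four of the sixteen types for brevity.

```python
# Conventional function stacks (dominant through inferior) for a
# few of the 16 types; the remaining twelve follow the same pattern.
STACKS = {
    "INTP": ("Ti", "Ne", "Si", "Fe"),
    "INTJ": ("Ni", "Te", "Fi", "Se"),
    "ENTP": ("Ne", "Ti", "Fe", "Si"),
    "INFJ": ("Ni", "Fe", "Ti", "Se"),
}

ROLES = ("dominant", "auxiliary", "tertiary", "inferior")

def describe(mbti_type):
    """Pair each cognitive function in a type's stack with its role."""
    return dict(zip(ROLES, STACKS[mbti_type]))

describe("INTP")
# {'dominant': 'Ti', 'auxiliary': 'Ne', 'tertiary': 'Si', 'inferior': 'Fe'}
```

So for an INTP like me, Introverted Thinking leads, Extraverted Intuition supports it, and Extraverted Feeling lurks at the bottom, ready to show up awkwardly under stress.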

Assertive (A) vs. Turbulent (T) Dimension

The A-T dimension adds a layer of emotional self-regulation to MBTI types. It describes how confident or self-critical individuals are in their decision-making and handling of stress.

Turbulent (T) types tend to be more self-critical, stress-prone, and driven by perfectionism and external validation.

Assertive (A) types are self-assured, less prone to stress, and comfortable with their decisions.

With this definition in place, I’ll save further commentary for a future post.

Freedom of Speech in the Land of the Loud

In the United States, freedom of speech is protected by the Second Amendment. Just kidding. It’s the First Amendment. But if we’re honest, the line between speech and violence is thin in practice, if not in law.

Here’s the thing: freedom goes both ways. There’s the freedom to speak, and the freedom from being bombarded by whatever nonsense comes tumbling out of people’s mouths. And that’s where things get messy. The grand defence of speech, in all its uncensored glory, often ignores what we’re giving up—our freedom of peace. You know, that quiet space where we don’t have to listen to the verbal sewage spewed by the uninformed, the unhinged, or just the plain old wankers.

We’ve all heard the phrase: “Your freedom to swing your fist ends at my nose.” Simple. You can’t punch someone in the face and call it freedom. But what about words? There’s no shield for the nose of the mind. The stupid, the ignorant, the hateful—they get to swing their fists of idiocy without a single consequence. What about freedom of peace?

We’ve all been there. You’re minding your own business, and then—bam!—some blowhard pipes up with their unsolicited, half-baked opinion. And guess what? They’re free to do it. But where’s the balance between their freedom to spew nonsense and your right not to have to listen? Spoiler: it doesn’t exist.

Now, this isn’t an argument for censorship. Let’s not confuse it. No one’s saying we should start gagging people (tempting as it is sometimes). But the conversation around freedom of speech needs a reality check. We defend it like it’s a sacred cow, and in many ways, it is. But that defence is often blind to the other side of the coin. Freedom of speech without the freedom from a constant barrage of verbal rubbish? That’s not freedom. It’s a social endurance test.

Maybe it’s time to rethink what we mean by “freedom”—not to restrict speech, but to recognise the cost of living in a world where everyone gets to say whatever they want, whenever they want. The right to peace is real too, even if it’s less glamorous than the right to shout.

Multiple Intelligences

I engaged in a lively debate recently. Someone argued that because some executives are smart, they can figure anything out. Specifically, she posited that a CTO (Chief Technology Officer) should be able to handle communication and diplomacy when dealing with a corporate board, because the technical skill that earned them a C-level position implies broader competence. I disagreed, on the grounds of the theory of multiple intelligences.

Howard Gardner’s theory of multiple intelligences posits that intelligence isn’t a monolith but a multi-dimensional construct. His eight established intelligences, plus the proposed existential ninth, are:

1. Existential (philosophical pondering; questioning the questions of why we live and why we die)

2. Inter-personal (reading people; sensing people’s feelings and motives)

3. Intra-personal (self-awareness; understanding yourself, what you feel, and what you want)

4. Kinaesthetic (mind-body coordination; using your body to solve problems or express yourself)

5. Linguistic (wordsmithing; finding the right word(s) to express what you mean)

6. Logical-Mathematical (quantifying and proving; forming hypotheses and testing them)

7. Musical (discerning sounds; their pitch, tone, rhythm, and timbre)

8. Naturalist (reading nature; recognising and classifying living things and natural phenomena)

9. Spatial (3D/4D visualisation; picturing and manipulating objects in space)

For a deep dive, check out Gardner’s Frames of Mind: The Theory of Multiple Intelligences.

Here’s the rub: a CTO with off-the-charts technical skills might be rubbish at diplomacy (interpersonal) or communication (linguistic). It’s like expecting every pro athlete to be a concert pianist – it’s not on.

Assuming every “intelligent” person can max out all intelligence dimensions is bollocks. It’s as likely as training every smart CTO to be the next Shakespeare or Machiavelli. Language and diplomacy are distinct skills, mate.

While we all love a Renaissance man (or woman), peaking in all these dimensions in one lifetime is a pipe dream. It’s not inherently bad, though. When building teams – be it a corporate board or an exploration party – ensure you’ve got a good mix of skills. I’m not saying you need a bard, a philosopher, and LeBron James on every team, but make sure you’ve covered the bases necessary for success.

If you think you don’t need a particular dimension, ask yourself: is it because you’re weak in that area and can’t see its importance? Don’t let your blind spots become your downfall.

In the end, it’s about recognising and respecting diverse intelligences. So, next time you’re tempted to think your brilliant CTO should just “learn to be diplomatic”, remember: they might be better off focusing on their strengths and leaving the smooth talking to someone else on the team.

The Myth of Psychological Normalcy

Neurodivergence is a hot topic today. But why? Are people more informed, or simply more comfortable speaking out?

Humans, by nature, are neurodivergent. This has always been the case, yet the illusion of neurological normalcy persists. The real issue lies in psychology’s pretence that humans fit into a standard model of normalcy.

In production processes, normality is expected: a product must meet specific standards, and deviations are defects. Applying this industrial concept to human psychology, however, is deeply flawed. Humans are not built to a specification; we are not teleological except in the broadest biological sense. More people have brown eyes than blue or green; are the brown-eyed normal and the rest not? Roughly ninety per cent of people are right-handed; does that make left-handed individuals abnormal? Statistically, perhaps. But in psychology, normalcy carries a judgmental connotation that goes beyond mere numbers.

Psychology, as expressed in behaviour, is not a suitable domain for discussing normality. Many people misuse the concept of “normal” as a moral qualification, which must stop.

In fact, the very concept of psychological normalcy is a relatively recent invention, one that has been used as a tool of control and categorisation. As philosophers have noted, the obsession with “normal” behaviour reveals more about society’s desire for control than it does about the true nature of human diversity.

We are all neurodiverse. This is what typical looks like. Do some people vary significantly from others on certain dimensions? Yes, but this is a relative or contextual comparison.

By embracing neurodiversity, we recognise that our differences are not defects but essential variations, enriching the human experience. It’s time to move beyond the narrow confines of neurological “normality” and celebrate the full spectrum of human cognition. In doing so, we challenge the arbitrary boundaries that have long been used to divide and label.