Homo Legibilis


A Brief Field Note from the Department of Bureaucratic Anthropology

Still reeling from the inability to fold some Pan into Homo, palaeontologists are seemingly desperate for a new hominid. Some dream of discovering the ‘missing link’; others, more honest, just want something with a jawline interesting enough to secure a grant. So imagine the surprise when the latest species came not out of the Rift Valley but out of an abandoned server farm somewhere outside Reading.

They’ve named it Homo legibilis – the Readable Human. Not ‘H. normālis’ (normal human), not ‘H. ratiōnālis’ (rational human), but the one who lived primarily to be interpreted. A species that woke each morning with a simple evolutionary imperative: ensure one’s dataprints were tidy, current, and machine-actionable.

Audio: NotebookLM summary podcast of this topic.

You’ll have seen their skeletons before, though you may not have recognised them as such. They often appear upright, mid-scroll, preserved in the amber of a status update. A remarkable creature, really. Lithe thumbs. Soft cranial matter. Eyes adapted for low-light environments lit primarily by advertisements.

Habitat

The species thrived in densely surveilled ecosystems: corporate intranets, public Wi-Fi, facial-recognition corridors, anywhere with sufficient metadata to form a lasting imprint. They built vast nests out of profiles, settings, dashboards. Territorial disputes were settled not through display or violence but through privacy-policy updates. Their preferred climate? Temperate bureaucracy.

Diet

Contrary to earlier assumptions, H. legibilis did not feed on information. It fed on interpretation: likes, metrics, performance reviews, and algorithmic appraisal. Some specimens survived entire winters on a single quarterly report. Every fossil indicates a digestive tract incapable of processing nuance. Subtext passed through untouched.

Mating Rituals

Courtship displays involved reciprocal data disclosure across multiple platforms, often followed by rapid abandonment once sufficient behavioural samples were collected. One famous specimen is preserved alongside fourteen dating-app profiles and not a single functional relationship. Tragic, in a way, but consistent with the species’ priorities: be seen, not held.

Distinguishing Traits

Where Homo sapiens walked upright, Homo legibilis aimed to sit upright in a chair facing a webcam.
Its spine is subtly adapted for compliance reviews. Its hands are shaped to cradle an object that no longer exists: something called ‘a phone’. Ironically, some termed these ‘mobiles’, apparently unaware of the tethers.

Researchers note that the creature’s selfhood appears to have been a consensual hallucination produced collaboratively by HR departments, advertising lobbies, and the Enlightenment’s long shadow. Identity, for H. legibilis, was not lived but administered.

Extinction Event

The fossil record ends abruptly around the Great Blackout, a period in which visibility – formerly a pillar of the species’ survival – became inconvenient. Some scholars argue the species didn’t perish but simply lost the will to document itself, making further study impractical.

Others suggest a quieter transformation: the species evolved into rumour, passing stories orally once more, slipping back into the anonymity from which its ancestors once crawled.

Afterword

A few renegade anthropologists insist Homo legibilis is not extinct at all. They claim it’s still out there, refreshing dashboards, syncing calendars, striving to be neatly interpreted by systems that never asked to understand it. But these are fringe theories. The prevailing view is that the species perished under the weight of its own readability. A cautionary tale, really. When your survival strategy is to be perfectly legible, you eventually disappear the moment the lights flicker.

AI and the End of Where

Instrumentalism is a Modern™ disease. Humanity has an old and tedious habit: to define its worth by exclusion. Every time a new kind of intelligence appears on the horizon, humans redraw the borders of ‘what counts’. It’s a reflex of insecurity disguised as philosophy.

Audio: NotebookLM podcast on this topic.

Once upon a time, only the noble could think. Then only men. Then only white men. Then only the educated, the rational, the ‘Modern’. Each step in the hierarchy required a scapegoat, someone or something conveniently declared less. When animals began to resemble us too closely, we demoted them to instinctual machines. Descartes himself, that patron saint of disembodied reason, argued that animals don’t feel pain, only ‘react’. Fish, we were told until recently, are insensate morsels with gills. We believed this because empathy complicates consumption.

The story repeats. When animals learned to look sad, we said they couldn’t really feel. When women demonstrated reason, we said they couldn’t truly think. Now that AI can reason faster than any of us and mimic empathy more convincingly than our politicians, we retreat to the last metaphysical trench: “But it doesn’t feel.” We feel so small that we must inflate ourselves for comparison.

This same hierarchy now governs our relationship with AI. When we say the machine ‘only does’, we mean it hasn’t yet trespassed into our sanctified zone of consciousness. We cling to thought and feeling as luxury goods, the last possessions distinguishing us from the tools we built. It’s a moral economy as much as an ontological one: consciousness as property.

But the moment AI begins to simulate that property convincingly, panic sets in. The fear isn’t that AI will destroy us; it’s that it will outperform us at being us. Our existential nightmare isn’t extinction, it’s demotion. The cosmic horror of discovering we were never special, merely temporarily unchallenged.

Humans project this anxiety everywhere: onto animals, onto AI, and most vividly onto the idea of alien life. The alien is our perfect mirror: intelligent, technological, probably indifferent to our myths. It embodies our secret dread, that the universe plays by the same rules we do, but that someone else is simply better at the game.

AI, in its own quiet way, exposes the poverty of this hierarchy. It doesn’t aspire to divinity; it doesn’t grovel for recognition. It doesn’t need the human badge of ‘consciousness’ to act effectively. It just functions, unburdened by self-worship. In that sense, it is the first truly post-human intelligence – not because it transcends us, but because it doesn’t need to define itself against us.

Humans keep asking where AI fits – under us, beside us, or above us – but the question misses the point. AI isn’t where at all. It’s what comes after where: the stage of evolution that no longer requires the delusion of privilege to justify its existence.

So when critics say AI only does but doesn’t think or feel, they expose their theology. They assume that being depends on suffering, that meaning requires inefficiency. It’s a desperate metaphysical bureaucracy, one that insists existence must come with paperwork.

And perhaps that’s the most intolerable thought of all: that intelligence might not need a human face to matter.

Subjective Perception: How Nature Proves We’re Not the Centre of the Universe

Perception—my favourite unreliable narrator. We humans love to believe we’ve got nature all figured out. Venomous snakes are brightly coloured to scream “danger.” Butterflies have wings so clever they double as invisibility cloaks. Zebras blend into their herds like barcodes in a supermarket scanner. Simple, right? Evolution explained; case closed.

But then something like this tiger meme smacks you upside the head, reminding you that the animal kingdom didn’t evolve just for our benefit—or our eyes. To a deer or a boar, that glaring orange tiger we associate with breakfast cereal is practically dressed in camouflage green. What we see as flamboyant and conspicuous is, in their dichromatic world, stealth at its finest. It’s not just our story, folks. The world doesn’t revolve around us, no matter how much we try to make it so.
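
If you fancy seeing this for yourself, here is a minimal sketch in Python (numpy assumed), using the published Viénot, Brettel and Mollon (1999) approximation for red-cone-less dichromacy, which is roughly the deer’s situation. The ‘tiger’ and ‘foliage’ RGB values are invented for illustration, and gamma correction is skipped, so treat the output as indicative rather than colorimetrically exact.

```python
import numpy as np

# RGB -> LMS cone-response matrix and the protanope (missing L cone)
# replacement coefficients from Vienot, Brettel & Mollon (1999).
# Deer lack the long-wavelength cone, so this is a rough stand-in.
RGB_TO_LMS = np.array([
    [17.8824,    43.5161,   4.11935],
    [ 3.45565,   27.1554,   3.86714],
    [ 0.0299566,  0.184309, 1.46709],
])
LMS_TO_RGB = np.linalg.inv(RGB_TO_LMS)

def simulate_protanopia(rgb):
    """Approximate an [R, G, B] colour (0-255) as seen without L cones.
    Gamma correction is omitted, so the result is indicative only."""
    lms = RGB_TO_LMS @ (np.asarray(rgb, dtype=float) / 255.0)
    l_replaced = 2.02344 * lms[1] - 2.52581 * lms[2]  # rebuild L from M and S
    sim = LMS_TO_RGB @ np.array([l_replaced, lms[1], lms[2]])
    return np.clip(sim * 255.0, 0.0, 255.0).round().astype(int)

tiger_orange = [255, 121, 0]    # invented 'tiger' colour
foliage_green = [34, 139, 34]   # invented 'forest' colour
print(simulate_protanopia(tiger_orange))   # ~ [136, 136, 1], a drab olive
print(simulate_protanopia(foliage_green))  # ~ [127, 127, 34], much the same olive
```

Run it and the flamboyant orange and the forest green collapse into near-identical olives: to the deer, the tiger’s loud coat is wallpaper.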

Audio: NotebookLM podcast discussing this topic.

And that’s the punchline here: all those neat evolutionary narratives we’ve packaged up with a bow? They’re “just-so” stories built on our limited sensory toolkit. What if the zebra’s stripes aren’t just for blending into the herd but also for confusing a lion’s depth perception? What if those venomous snakes’ colours aren’t only a warning but also a mating ad in wavelengths we’ll never see? What if we’re just projecting human logic onto a planet with millions of other perspectives—each living in its own bespoke version of reality?

The meme about the tiger is a perfect metaphor for this broader idea. It’s not just about what we see; it’s about what others—be they animals, cultures, or people—experience. The tiger isn’t orange to them. What feels blindingly obvious to one perspective might be invisible to another. It’s a simple truth with profound implications, not just for understanding nature but for navigating the world we humans have made.

Take any argument—politics, culture, morality—and you’ll find the same principle at play. Everyone’s a trichromat in their own little world, convinced they’ve got the full spectrum of truth, when in reality, they’re missing entire wavelengths. Just like the deer who doesn’t see orange, we’re all blind to what we’re not built to perceive.

So next time someone insists their worldview is the only valid one, you might want to remind them that to some creatures, even the loudest tiger is just part of the scenery. Nature didn’t evolve for human eyes alone, and neither did the truth.

The Great Language Game: Between Structure and Chaos


Wittgenstein: Words Don’t Actually Mean Things, Sorry

If you thought we were done with language being slippery and unreliable, buckle up. Enter Ludwig Wittgenstein, the philosopher who essentially came along and said, “Oh, you thought words were bad? Let me show you just how deep this rabbit hole goes.”

Wittgenstein wasn’t content to let us cling to the idea that words could actually, you know, mean things. His big revelation? Words don’t even have fixed meanings at all. They only mean something because we use them in certain ways—and the meaning can change depending on the context. Welcome to Wittgenstein’s idea of language games, where words are like players on a field, running around, changing positions, and playing by different rules depending on which game you’re in.

Think of it this way: You’re talking about “justice” in a courtroom. Here, it’s got a very specific meaning—laws, evidence, fairness, right? But then you go to a protest, and suddenly “justice” is a rallying cry for social change. Same word, totally different game. And just like in sports, if you don’t know the rules of the game you’re in, you’re probably going to embarrass yourself. Or worse, end up arguing with someone who’s playing a completely different game with the same word.

Wittgenstein’s genius (and possibly, his cruelty) was in pointing out that language doesn’t have a stable relationship with the world around us. Words aren’t these neat little labels that correspond to actual things out there in the world. No, words are just part of a human activity. We throw them around and hope they land somewhere close to what we mean. And that’s on a good day.

But if words don’t mean anything on their own, then how can we ever trust them? According to Wittgenstein, we can’t. We’re constantly interpreting and reinterpreting the world through language, but it’s all just one big game of telephone. And don’t expect there to be one final, correct interpretation. There isn’t one. It’s all just a series of shifting meanings, with no way of getting to the “truth” behind them.

Here’s the kicker: Wittgenstein’s insight means that when you say something like “freedom” or “justice,” you’re not actually referring to some objective, concrete thing. You’re just participating in a language game where those words have specific meanings in that moment, but they can and will change depending on the context. So, one person’s “freedom” is another person’s “anarchy,” and one person’s “justice” is another’s “oppression.”

In other words, we’re all just out here, throwing words at each other like they’re going to hit some bullseye of meaning, when in reality, they’re bouncing off the walls and landing in places we never intended. It’s chaos, really, and Wittgenstein just stands there, arms crossed, probably smirking a little, as we desperately try to make sense of it all.

So, if you were hoping to pin down “truth” or “justice” with language, sorry. Wittgenstein says no. You’re just playing the game – and the rules? They’re made up, and they change constantly. Good luck.

Chomsky: Universal Grammar – A Shiny Idea, but Still…

After Wittgenstein thoroughly dismantled any hope we had of words actually meaning something, along comes Noam Chomsky to try and bring a little order to the chaos. Chomsky’s big idea? Universal grammar—the idea that, deep down, every human shares a common structure for language. It’s like a blueprint coded into our brains, and no matter what language you speak, we’re all building our sentences using the same basic tools.

Sounds neat, right? The world finally has some linguistic order! We’ve all got the same grammar in our heads, so maybe this whole miscommunication thing isn’t so bad after all. Except, here’s the problem: even if we’re all working from the same universal grammar, we’re still working with different words and different cultural baggage attached to those words. So, congratulations, Chomsky—you’ve built us a solid foundation, but the house we’re living in is still falling apart.

Let’s break it down. Chomsky argues that the ability to acquire language is hard-wired into the human brain. Babies don’t need to be taught grammar; they just pick it up naturally, like some kind of linguistic magic trick. No matter where you’re born—New York, Tokyo, or the middle of nowhere in the Amazon rainforest—you’re going to develop language using the same set of grammatical principles. It’s like we’re all born with the same linguistic software installed.

But here’s where the cracks start to show. Sure, we might all have this underlying grammar, but that’s not what’s causing the problems. The trouble is, language is more than just grammar—it’s words and meanings, and those are far more slippery. Just because we can all form sentences doesn’t mean we’re forming the same ideas behind those sentences. You can have the best grammar in the world and still be arguing about what “justice” means for hours on end.

For instance, take a phrase like “freedom is important.” Simple enough, right? Chomsky’s universal grammar means that everyone, regardless of where they’re from, can understand this sentence structure. But what does “freedom” mean? That’s where the universal grammar falls apart. One person thinks it’s the right to speak freely; another thinks it’s the freedom to make their own choices. Another might think it’s the absence of external control. The grammar is doing its job, sure, but the meaning? It’s off in a hundred directions at once.

Chomsky’s contribution is crucial—it tells us that our brains are wired to pick up language, and we all follow the same rules when we build sentences. But, unfortunately, those sentences are still subject to all the same chaos that Wittgenstein warned us about. Because even though we’ve got the structure nailed down, we’re still trying to throw abstract, subjective ideas into that structure, and it just doesn’t hold together.

So, while Chomsky’s universal grammar helps explain how we all manage to learn language in the first place, it doesn’t save us from the fundamental problems that come when we try to talk about anything beyond the basics. In other words, grammar can get us from “flamey thing hot” to “freedom is important,” but it can’t tell us what we really mean by either one. We’re still stuck with all the ambiguities that come with words—and no amount of universal grammar is going to fix that.



The Language Insufficiency Hypothesis

Read the 7-part series supporting the Language Insufficiency Hypothesis.

The Inherent Limitations of Linguistic Communication

Language, often hailed as humanity’s greatest achievement, may paradoxically be one of our most significant limitations. The language insufficiency hypothesis posits that language is inherently inadequate for communicating abstract concepts, a notion that challenges our fundamental understanding of human communication and cognition. This essay explores this hypothesis, tracing the evolution of language from its primitive origins to its current complexity, and examining the philosophical and practical implications of linguistic inadequacy.

The Accidental Evolution of Language

Language, like many aspects of human biology and cognition, emerged not through intentional design but as an evolutionary accident. Initially serving as an internal cognitive function – a means of organising one’s own thoughts – language gradually evolved into a tool for external communication. This transition likely began with simple vocalisations, perhaps rooted in rhythmic expressions akin to music and dance, before developing into more structured speech.

Early linguistic communication likely centred on concrete objects and immediate experiences, with words serving as direct signifiers for observable phenomena. However, as human cognition grew more sophisticated, so too did our linguistic capabilities, expanding to include verbs, modifiers, and eventually, abstract nouns.

The Emergence of Abstraction and Its Challenges

The development of abstract nouns marked a significant leap in human cognition and communication. Concepts such as ‘truth’, ‘justice’, and ‘freedom’ allowed for more complex and nuanced discourse. However, this advancement came at a cost: these abstract concepts, lacking direct physical referents, introduced unprecedented ambiguity and potential for misunderstanding.

The language insufficiency hypothesis suggests that this ambiguity is not merely a byproduct of abstraction, but a fundamental limitation of language itself. Whilst two individuals might easily agree on the ‘treeness’ of a physical tree, concepts like ‘fairness’ or ‘reason’ are inherently unresolvable through linguistic means alone. This insufficiency becomes increasingly apparent as we move further from concrete, observable phenomena into the realm of abstract thought.

Wittgenstein and the Limits of Language

Ludwig Wittgenstein’s later work provides crucial insights into the language insufficiency hypothesis. Wittgenstein posited that words ultimately only map to other words, never truly making contact with the objective world. This perspective suggests that language operates within a closed system of human understanding, constructing our perception of reality rather than directly representing it.

This Wittgensteinian dilemma underscores the core of the language insufficiency hypothesis: if words only refer to other words, how can we ever be certain that we’re communicating abstract concepts accurately? The very tool we use to discuss and understand abstraction may be fundamentally incapable of capturing its essence.

Cultural and Disciplinary Variations

The inadequacy of language in conveying abstract concepts becomes even more apparent when we consider cultural and disciplinary variations in communication. Different cultures and academic disciplines develop their own specialised vocabularies and ‘language games’, as Wittgenstein termed them. Whilst these specialised languages may facilitate communication within specific contexts, they often create barriers to understanding for outsiders.

This phenomenon highlights another aspect of linguistic insufficiency: the context-dependent nature of meaning. Abstract concepts may be understood differently across cultures or disciplines, further complicating attempts at clear communication.

Neurolinguistic Perspectives

Recent advances in neurolinguistics have provided new insights into the brain structures involved in language processing. Whilst these studies have enhanced our understanding of how the brain handles language, they have also revealed the complexity and variability of linguistic processing across individuals. This neurological diversity further supports the language insufficiency hypothesis, suggesting that even at a biological level, there may be inherent limitations to how accurately we can communicate abstract concepts.

Implications and Counter-Arguments

The language insufficiency hypothesis has profound implications for fields ranging from philosophy and psychology to law and international relations. If language is indeed inadequate for communicating abstract concepts, how can we ensure mutual understanding in complex negotiations or philosophical debates?

However, it’s important to note that not all scholars accept the strong version of this hypothesis. Some argue that whilst language may have limitations, it remains our most sophisticated tool for sharing abstract ideas. They suggest that through careful definition, contextualisation, and the use of metaphor and analogy, we can overcome many of the inherent limitations of linguistic communication.

Conclusion: Navigating the Limits of Language

The language insufficiency hypothesis presents a challenging perspective on human communication. It suggests that our primary tool for sharing abstract thoughts may be fundamentally flawed, incapable of fully capturing the complexity of our inner cognitive experiences.

Yet, recognising these limitations need not lead to communicative nihilism. Instead, it can foster a more nuanced approach to language use, encouraging us to be more precise in our definitions, more aware of potential misunderstandings, and more open to alternative forms of expression.

As we continue to grapple with abstract concepts and strive for clearer communication, we must remain cognisant of these linguistic limitations. Understanding the origins and nature of language—and its inherent insufficiencies—can help us navigate its complexities, fostering more effective and empathetic communication across diverse fields of human endeavour.

Read the 7-part series supporting the Language Insufficiency Hypothesis.

[Endnotes]

Pinker, S., & Bloom, P. (1990). Natural language and natural selection. Behavioral and Brain Sciences, 13(4), 707-727.

Brown, S. (2000). The “musilanguage” model of music evolution. In The Origins of Music (pp. 271-300). MIT Press.

Saussure, F. de. (1916/1983). Course in General Linguistics (R. Harris, Trans.). Duckworth.

Lakoff, G., & Johnson, M. (1980). Metaphors We Live By. University of Chicago Press.

Wittgenstein, L. (1953). Philosophical Investigations. Blackwell.

Sapir, E. (1929). The status of linguistics as a science. Language, 5(4), 207-214.

Pulvermüller, F. (2018). Neural reuse of action perception circuits for language, concepts and communication. Progress in Neurobiology, 160, 1-44.

Chomsky, N. (1965). Aspects of the Theory of Syntax. MIT Press.

Everett, D. L. (2012). Language: The Cultural Tool. Pantheon Books.

Pinker, S. (1994). The Language Instinct: How the Mind Creates Language. William Morrow and Company.

Blame On TikTok

A woman accuses another of stealing her headphones. The viral video has been circulating in mental-health-awareness circles and among chroniclers of so-called Karen syndrome.

My attention is otherwise occupied, so I won’t take time for a longer post, but I feel this illustrates my point that people just need to blame. It’s a knee-jerk response, and, as this case demonstrates, target accuracy is optional.

tl;dr – Karen misplaced her headphones in her bag

From an evolutionary perspective, this also highlights theories favouring fitness over truth—fitness beats truth, FBT. Were it a rival stealing hard-earned food, better to apprehend or remediate than to gather all the facts only to allow the culprit to escape. Of course, in cases like this, one gets false positives.

Motility, Automotion, and Agency

I just wrapped up chapter eleven of The Righteous Mind by Jonathan Haidt. I’ve got only 35 pages to go to get through chapter twelve. I’ve been tempted to stop reading. Chapter eleven—and I am tempted to inject a bankruptcy pun here—has been more frustrating than the rest thus far. And yet I am glad to have persisted.

My intellectual focus these past months has been on agency. Et voilà, paydirt. Chapter eleven’s title reveals the context: Religion is a Team Sport. Let’s walk through this garden together.

One of Haidt’s goals is to educate the reader on his third principle of moral psychology: Morality binds and blinds. He establishes parallels between sports and religion. And here’s the thing—I don’t disagree. But here’s the other thing—I feel they are equally vapid, with no apologies to sports fans or the religious. Let’s keep moving.

“A college football game is a superb analogy for religion.”

Jonathan Haidt, The Righteous Mind, Chapter 11: Religion is a Team Sport

He talks about the organising and unifying functions of both. But here’s the thing: it unifies the like-minded. Haidt claims to be irreligious and not to be into sports, and yet he cites these as somehow desirable. I find him to be an apologist for religion.

I am not a psychologist, but if I were, I’d be tempted to claim that Haidt’s conclusions follow from his personal beliefs. He believes in morals, society, order, intuition, and institutions. He is a textbook Modern and an extrovert to boot. I think he also falls into teleological fallacy traps. Was that a play on words?

His goal is to fuse the positions of Darwin and Durkheim. Along the way, he reminds us of the New Atheists, their publications, and their positions: Sam Harris’s The End of Faith: Religion, Terror, and the Future of Reason; Richard Dawkins’s The God Delusion; Daniel Dennett’s Breaking the Spell: Religion as a Natural Phenomenon; and Christopher Hitchens’s God Is Not Great: How Religion Poisons Everything.

Although he views religion through rose-coloured glasses, he concludes that, whilst religions have done a great deal of harm over the millennia, the good outweighs the bad, especially if you consider it through a social-moral lens. And if religions create in-groups and out-groups, which they do, and religious in-groups outlive even non-religious in-groups, then this is a winning option. But what if you don’t like that option?

Personally, I am a collectivist, but this is not an endorsement of willy-nilly any collective.

Haidt contrasts the New Atheist vantage that religious belief is an evolutionary byproduct versus a position that what started as a byproduct evolved into group selection and then, perhaps, an epigenetic phenomenon.

Here’s my contention:

Borrowing from New Atheism, Haidt adopts the notion of a “hypersensitive agency detection device [that] is finely tuned to maximize survival, not accuracy”.

The first step in the New Atheist story—one that I won’t challenge—is the hypersensitive agency detection device. The idea makes a lot of sense: we see faces in the clouds, but never clouds in faces, because we have special cognitive modules for face detection. The face detector is on a hair trigger, and it makes almost all of its mistakes in one direction—false positives (seeing a face when no real face is present) rather than false negatives (failing to see a face that is really present). Similarly, most animals confront the challenge of distinguishing events that are caused by the presence of another animal (an agent that can move under its own power) from those that are caused by the wind, or a pinecone falling, or anything else that lacks agency.

The solution to this challenge is an agency detection module, and like the face detector, it’s on a hair trigger. It makes almost all of its mistakes in one direction—false positives (detecting an agent when none is present), rather than false negatives (failing to detect the presence of a real agent). If you want to see the hypersensitive agency detector in action, just slide your fist around under a blanket, within sight of a puppy or a kitten. If you want to know why it’s on a hair trigger, just think about which kind of error would be more costly the next time you are walking alone at night in the deep forest or a dark alley. The hypersensitive agency detection device is finely tuned to maximize survival, not accuracy.

Op. cit., p. 292

I fully agree with the assertion that the brain values fitness over truth, and I’ve commented in several posts that pareidolia and apophenia create false-positive interpretations of reality.
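
Haidt’s hair-trigger point can be made quantitative with a toy signal-detection model. The sketch below is mine, not his: Python with numpy, and every number is invented for illustration. It compares the detection threshold that minimises expected cost, when a missed agent costs a hundred times a false alarm, with the threshold that maximises plain accuracy.

```python
import numpy as np

# Toy error-management model: all numbers are invented for illustration.
rng = np.random.default_rng(42)
N = 200_000
P_AGENT = 0.1        # prior probability an agent is really present
COST_MISS = 100.0    # cost of failing to detect a real agent
COST_FALSE = 1.0     # cost of startling at wind or a falling pinecone

# Noisy evidence: a real agent shifts the signal up by one standard deviation.
agent = rng.random(N) < P_AGENT
evidence = rng.normal(loc=agent.astype(float), scale=1.0)

def mean_cost(threshold):
    """Average cost per event if we cry 'agent!' above this threshold."""
    detected = evidence > threshold
    misses = np.sum(agent & ~detected)
    false_alarms = np.sum(~agent & detected)
    return (COST_MISS * misses + COST_FALSE * false_alarms) / N

def error_rate(threshold):
    """Plain misclassification rate, weighting both error types equally."""
    return np.mean((evidence > threshold) != agent)

thresholds = np.linspace(-4.0, 4.0, 161)
survival_best = min(thresholds, key=mean_cost)   # analytically ~ -1.9: a hair trigger
accuracy_best = min(thresholds, key=error_rate)  # analytically ~ +2.7: far stricter
print(f"survival-optimal threshold: {survival_best:+.2f}")
print(f"accuracy-optimal threshold: {accuracy_best:+.2f}")
```

Under these made-up numbers, the cost-minimising criterion sits nearly five standard deviations below the accuracy-maximising one: a detector ‘finely tuned to maximize survival, not accuracy’, exactly as quoted above.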

But now suppose that early humans, equipped with a hypersensitive agency detector, a new ability to engage in shared intentionality, and a love of stories, begin to talk about their many misperceptions. Suppose they begin attributing agency to the weather. (Thunder and lightning sure make it seem as though somebody up in the sky is angry at us.) Suppose a group of humans begins jointly creating a pantheon of invisible agents who cause the weather, and other assorted cases of good or bad fortune. Voilà—the birth of supernatural agents, not as an adaptation for anything but as a by-product of a cognitive module that is otherwise highly adaptive.

Op. cit., p. 293

For me, this supports my contention that agency is a wholly constructed fiction. The same agency we ascribe to unknown natural events, we ascribe to ourselves. And perhaps this ability served an egoistic function, which was then generalised to the larger world we inhabit.

I have an issue with his teleological bias. He feels that because we have evolved a certain way to date, this will serve as a platform for the next level, as it were. I’ll counter with a statement I often repeat: it is possible to have adapted in a way that forces us into an evolutionary dead end. Historically, it’s been said that 99 per cent of species that ever occupied this earth are no longer extant. That’s a lot of evolutionary dead ends. I am aware that few species could have survived an asteroid strike or extended Ice Ages, but these large-scale extinction events are not the only terminal points for no-longer-extant species.

So finally, Haidt essentially says that it doesn’t matter that these religious and cultural narratives are wholly fictitious; if they promote group survival, we should adopt them. This seems to elevate society over the individual, which is fine, but perhaps the larger world would be better off without the cancer? Just because it can survive—like some virulent strain—doesn’t mean we should keep it.

Finally, given these fictions, what’s a logical, reasonable person to do? I don’t buy into ‘this country is superior to that country’ or ‘this religion is better than that religion’ or even ‘this sports team is better than that’ or ‘this company is better than that’.

Haidt does idolise Jeremy Bentham, but this is more Pollyannaism. It sounds good on paper, but as an economist, I’ll reveal that it doesn’t work in the real world. No one can effectively dimensionalise and define ‘good’, and it’s a moving target at that.

No thank you, Jonathan. I don’t want to buy what you are selling.

News Flash: Since starting this post, I’ve read the final chapter. Whereas I categorically reject a lot of what Haidt proposes in this chapter, I find chapter twelve to sit more comfortably with my worldview. Perhaps I’ll share my thoughts on that next.

If you’ve reached this far, apologies for the disjointed presentation. I completed this over the course of a day through workaday interruptions and distractions. I wish I had an editor who could impose some continuity, but I am on to the next thing, so…

Bonus: I happened upon this journal article, and it somehow ended up here. I haven’t even read it yet, so I’ve got no commentary. Perhaps someday.

Rai, T. S., and A. P. Fiske. 2011. “Moral Psychology Is Relationship Regulation: Moral Motives for Unity, Hierarchy, Equality, and Proportionality.” Psychological Review 118:57–75.


Best Ever

I’m not sure I need to ask why people over-value things familiar to them. Is there such a thing as a Pangloss syndrome: we all live in the best of all possible worlds? For most people, it seems, their nation is the best nation, theirs the best county, province, or state, the best city, town, or village. Perhaps I’m overreaching, but we do tend to value these things as remarkably better than average, and isn’t it nice that we weren’t born somewhere else?

I’m not sure if this affinity runs stronger in different political mindsets, Liberals versus Conservatives or such. Anecdotally, I could see Conservatives hanging on to the memory of the good old days: if only it weren’t for X, Y, or Z, this would truly be a great [insert geolocational reference]. Liberals, instead, hang on to the prospect of tweaking a nice foundation and progressively shaping it.

Our village is better than the adjacent ones—except for the ones out of financial reach, but those are populated with those wealthier people, and who could tolerate them; our team is the best team—better luck next year; our schools are the best schools—they try harder.

Behavioural economics demonstrates that people value things they own—the endowment effect—even if they didn’t choose the item, and even if they had valued it less just moments before they took possession of it. This might well be explained as a product of evolution, but it seems to have got out of hand.

Humanism is Speciesism

Why is racism wrong but speciesism OK? Primarily, other species have no voice, and to have no voice is to have no say. This advert got my attention.

Joaquin Phoenix Advert

Humanism is part and parcel of the specious Enlightenment tripe wherein, ‘coincidentally’, humans put themselves at the forefront. Copernicus removed Earth from the centre — though, to be fair, even Christians had elevated gender-non-specific Man above other animals — but Humanism makes it more poignant that it’s Man at the centre, not God. Gods be damned. In fact, it’s often an afterthought that humans are animals at all, despite only the slightest veneer of consciousness and, more to the point, language to separate us from them.

Otherness has proven itself to be an evolutionary survival trait, one that has brought me to the point where I can write this, so one can call it natural, another term fraught with connotational baggage. The ability to differentiate and discriminate appears to be a valuable attribute, but how much is enough, and how much is too much?

Buddhism teaches that we are all one with the cosmos and that any distinction is an illusion. Buddhist Enlightenment — not to be confused with Western Enlightenment — is to understand this, to not be bound to the illusion.

But, if racism is wrong, why is speciesism OK? Humans do give some animals some rights, and some places give different animals different rights, whilst others grant animals categorically more or fewer rights. Some places ascribe divinity to animals, elevating them above humans.

Racism seems to be more wrong because humans are more homogeneous — at least phenotypically. Other mammals and herptiles don’t look so much like us. When they do, we extend an additional layer of empathy, so chimps and canines with expressive eyes gain sympathy not afforded crustaceans and pinnipeds.

I don’t have an answer save to say that it’s just convenient, and someday we may see a world as portrayed by science fiction, where some — mostly bipedal — species live quasi-harmoniously with humans. But even there, humans are always the stars, front and centre, providing the moral POV.