A Fishy Tale: When Starfish Aren’t Fish

Picture this: you’re strolling along the beach, admiring the marine life in the rock pools. You spot a starfish, a jellyfish, and a seahorse. Pop quiz: which of these creatures is actually classified as a fish?

  1. Starfish
  2. Jellyfish
  3. Seahorse
  4. Banana

If you answered “seahorse”, congratulations! You’ve just dipped your toe into the wonderfully weird world of marine biology and linguistic evolution. But don’t pat yourself on the back just yet, because we’re about to dive deeper into this ocean of confusion.

But something is fishy in Denmark. You see, in the grand aquarium of the English language, not all that glitters is fish, and not all fish sparkle. Our ancestors, bless their linguistically challenged hearts, had a rather broad definition of what constituted a ‘fish’. Anything that lived exclusively in water? Chuck it in the ‘fish’ bucket!

But wait, there’s more! While they were happily labelling every aquatic creature as ‘fish’, they were also using the word ‘meat’ to describe, well, pretty much anything edible. That’s right—your medieval five-a-day fruit and veg platter? All meat, baby!

So, how did we go from this linguistic free-for-all to our current, more discerning categorisations? And why do we still use terms like ‘starfish’ and ‘jellyfish’ when they’re about as fishy as a beef Wellington?

Strap on your scuba gear, dear reader. We’re about to take a deep dive into the murky waters of etymology, where we’ll encounter some fishy facts, meaty morsels of linguistic history, and maybe—just maybe—learn why a seahorse is more closely related to a cod than a sea cucumber is to a cucumber.

Welcome to our tale of linguistic evolution. It’s going to be a whale of a time! In this linguistic deep dive, we’ll explore the meaty truth about ‘mete’, fish out the facts about ‘fisc’, and navigate the choppy waters of modern usage.

The Meaty Truth: When Apples Were Meat

Imagine, if you will, a world where asking for a meat platter at your local deli might result in a fruit basket. No, this isn’t a vegetarian’s fever dream—it’s actually a peek into the linguistic past of our ancestors.

In Old English, the word ‘mete’ (pronounced roughly /ˈme.te/, and not to be confused with the modern verb ‘to meet’) was a catch-all term for food. Any food. All food. If you could eat it, it was ‘mete’. Your apple? Mete. Your bread? Mete. That leg of lamb? Also mete, but we’re getting ahead of ourselves.

This broad definition persisted for centuries. In Chaucer’s 14th-century “The Canterbury Tales”, ‘mete’ still meant food in general: phrases like ‘mete and drynke’ covered the whole table, not just the flesh on it. Somewhere, a medieval nutritionist was having an existential crisis.

But language, like a slowly simmering stew, changes over time. By the 14th century, ‘mete’ had started to narrow its focus, increasingly referring to the flesh of animals. It’s as if the word itself decided to go on a protein-heavy diet.

By the 18th century, ‘meat’ (having picked up its modern spelling along the way) had pretty much settled into its current meaning: the flesh of animals used as food. Though remnants of its broader past linger in more places than you might expect:

  1. Phrases like “meat and drink” still mean food and beverages in general.
  2. The term “nutmeat” refers to the edible part of a nut.
  3. Fruits and vegetables can have “meaty” parts – we’re looking at you, avocados and tomatoes!
  4. “Sweetmeat” doesn’t involve meat at all, but refers to candies or sweets.

So, the next time you’re describing the succulent flesh of a ripe peach as “meaty”, know that you’re not being weird – you’re being etymologically nostalgic. And when someone tells you to “meat and greet”, don’t bring a steak to the party. Unless, of course, it’s that kind of party. In which case, maybe bring enough to share?

This culinary journey through time just goes to show that when it comes to language, the proof is in the pudding. And yes, once upon a time, that pudding would have been called ‘meat’ too! Now that we’ve carved up the history of ‘meat’, let’s cast our net into the world of ‘fish’.

Something’s Fishy: Casting a Wide Net

From land to sea, our linguistic journey now dives into deeper waters to explore the slippery history of the word ‘fish’. Prepare to have your gills blown, because this tale is more twisted than an octopus playing Twister.

In Old English, our linguistic ancestors used the word ‘fisc’ (IPA /fɪsk/) to refer to, well, pretty much anything that called water its home. If it swam, floated, or generally looked bewildered in an aquatic environment, it was a ‘fisc’.

This cast-iron skillet approach to classification meant that whales, seals, and even crocodiles were all lumped into the ‘fisc’ category. It’s as if our forebears took one look at the ocean, threw up their hands, and said, “Eh, it’s all fish to me!”

This broad definition persisted for centuries, leading to some rather fishy nomenclature that we’re still untangling today:

  1. Jellyfish: Despite their name, these gelatinous creatures are about as far from fish as you are from your second cousin twice removed on your mother’s side.
  2. Starfish: These spiny echinoderms are more closely related to sea urchins than to any fish. They’re the marine equivalent of finding out your cat is actually a very convincing raccoon.
  3. Cuttlefish: These crafty cephalopods are molluscs, more akin to octopuses and squids than to any fish. They’re the masters of aquatic disguise, fooling both prey and etymologists alike.
  4. Shellfish: This term covers a motley crew of crustaceans and molluscs. Calling a lobster a fish is like calling a butterfly a bird – poetic, perhaps, but scientifically fishy.

As scientific understanding grew, the definition of ‘fish’ narrowed. By the 16th century, scholars were beginning to distinguish between ‘fish’ and other aquatic animals. However, the old, broad use of ‘fish’ had already left its mark on our language, like a stubborn fish smell that lingers long after the seafood dinner is over.

Today, in biological terms, ‘fish’ refers to gill-bearing aquatic animals lacking limbs with digits. But in culinary and cultural contexts, the term is still often used more broadly. So next time you’re at a seafood restaurant pondering whether to order the fish or the shellfish, remember: it’s all ‘fisc’ to your linguistic ancestors!

The Great Divide: Fish or Not Fish?

Now that we’ve muddied the waters thoroughly, let’s try to separate our fish from our not-fish. It’s time for the ultimate marine showdown: “Fish or Not Fish: Underwater Edition”! Let’s swim through some specific examples that highlight this fishy classification conundrum.

Starfish: The Stellar Impostor

Despite their fishy moniker, starfish are about as much fish as a sea star is an actual star. These echinoderms are more closely related to sea urchins and sand dollars than to any fish. With their five-armed symmetry and lack of gills or fins, starfish are the marine world’s ultimate catfish (pun intended).

Jellyfish: The Gelatinous Pretender

Jellyfish might float like a fish and sting like a… well, jellyfish, but they’re no more fish than a bowl of jelly. These cnidarians lack bones, brains, and hearts, making them more like drifting water balloons than actual fish. They’ve been pulling off this aquatic masquerade for over 500 million years!

Cuttlefish: The Crafty Cephalopod

Don’t let the name fool you – cuttlefish are cephalopods, more closely related to octopuses and squids than to any fish. These masters of disguise can change their appearance rapidly, making them the chameleons of the sea. They’re the ultimate marine conmen, fooling both prey and etymologists alike.

Shellfish: The Armoured Anomalies

‘Shellfish’ is a catch-all term for a motley crew of crustaceans and molluscs. Calling a lobster or an oyster a fish is like calling a butterfly a bird – it might fly, but that doesn’t make it right. These hard-shelled creatures are about as far from fish as you can get while still living in water.

Seahorses: The Fishy Exception

Plot twist! Despite their equine appearance, seahorses are indeed true fish. These peculiar creatures belong to the genus Hippocampus (which literally means “horse sea monster” in Greek). Here are some fin-tastic facts about our curly-tailed friends:

  1. Male Pregnancy: In a twist that would make seahorse soap operas very interesting, it’s the male seahorses that get pregnant and give birth.
  2. Monogamy: Unlike many fish, seahorses are monogamous. They perform daily greeting rituals to reinforce their pair bonds. It’s like underwater ballroom dancing but with more fins.
  3. Camouflage: Seahorses are masters of disguise, able to change colour to blend in with their surroundings. They’re the underwater equivalent of a charismatic chameleon.
  4. Snouts: Their tubular snouts work like built-in straws, perfect for sucking up tiny crustaceans. It’s nature’s version of a Capri Sun.

So there you have it – a horse that’s a fish, and a bunch of “fish” that aren’t. If this doesn’t highlight the delightful absurdity of language evolution, I don’t know what will!

Modern Usage and Misconceptions: A Kettle of Fish

Now that we’ve unravelled this tangled net of fishy nomenclature, let’s surface and see how these linguistic oddities persist in modern times. It’s a veritable school of misconceptions out there! Just as our ancestors broadly applied ‘mete’ and ‘fisc’, we continue to cast a wide net with our fishy terms today.

The Persistent “Fish”

Despite our best scientific efforts, many misnomers stubbornly cling to our language like barnacles to a ship’s hull:

  1. Silverfish: These squirmy household pests are neither silver nor fish. They’re insects that have been sneaking into our bathtubs and bookshelves for over 400 million years, laughing at our misguided naming conventions.
  2. Crayfish: Also known as crawfish or crawdads, these freshwater crustaceans are more closely related to lobsters than to any fish. They’re the inland cousins who couldn’t afford beachfront property.
  3. Fishfingers: A childhood staple that contains fish but isn’t fingers. Unless you know something about fish anatomy that we don’t…

The Culinary Conundrum

In the world of cuisine, the line between ‘fish’ and ‘seafood’ is blurrier than a fish’s vision out of water:

  • Many menus separate ‘fish’ from ‘seafood’, with the latter often including shellfish and sometimes even seaweed. It’s as if the ocean decided to categorise its inhabitants based on their starring roles in Disney movies.
  • The phrase “fish and chips” stubbornly refuses to become “seafood and chips”, even when the dish includes non-fish like calamari. It’s a linguistic tradition as crispy and golden as the batter itself.

The Vegetarian’s Dilemma

Pity the poor vegetarians navigating this linguistic minefield:

  • Some vegetarians eat fish but not meat, leading to the term “pescatarian”. It’s as if fish decided to identify as vegetables just to complicate matters.
  • The question “Do you eat fish?” is often asked separately from “Are you vegetarian?”, as if fish were secretly plants with fins.

The Final Catch

In the end, language is a living, breathing entity that evolves faster than you can say “coelacanth” (which, by the way, is a fish that was thought to be extinct until it wasn’t – talk about a plot twist!).

While scientists may pull their hair out over our continued misuse of ‘fish’, the rest of us can simply enjoy the rich tapestry of language these terms have woven. After all, in the grand aquarium of English, it’s the linguistic oddities that make the view so interesting.

So the next time you find yourself in a debate about whether a starfish is a fish, or why we call it shellfish when there’s no fish involved, remember: in the world of language, sometimes it’s okay to let sleeping dogfish lie.

Conclusion: So Long, and Thanks for All the Fish!

As we surface from our deep dive into the murky waters of etymology, we find ourselves with a tackle box full of linguistic curiosities. We’ve navigated the broad seas of ‘mete’, trawled through the expanding net of ‘fisc’, and somehow ended up in a world where seahorses are fish, but starfish aren’t.

Our journey has shown us that language, much like the ocean, is vast, mysterious, and full of surprises. It evolves and changes, sometimes leaving behind fascinating fossils in our everyday speech. These linguistic relics remind us of a time when our ancestors looked at the sea and decided that anything wet and wriggly qualified as a fish.

From the “meaty” part of a fruit to the fish fingers in your freezer, from the crayfish in streams to the silverfish in your bathroom, these terms continue to swim through our language, blissfully unaware of their misclassification. They’re like linguistic dolphins, playfully leaping through our conversations, occasionally confusing vegetarians and marine biologists alike.

And here’s a thought to chew on: given what we’ve learned about the evolution of the word ‘meat’, could this linguistic journey have unintended consequences in other areas? For instance, when Catholics abstain from ‘meat’ on Fridays, are they following the modern interpretation of the word, or its original, broader meaning of ‘food’? It’s a question that would require diving into early Greek, Aramaic, and Latin texts to explore fully. But it just goes to show how the ripples of language evolution can reach far beyond our dinner plates and into the very core of cultural and religious practices.

So the next time you find yourself pondering whether that seafood platter is really all ‘fish’, or why we still call it the “fruit of the sea” when we know better, remember this tale. Embrace the delightful absurdity of language evolution. After all, in the grand ocean of communication, it’s these quirks and idiosyncrasies that make our linguistic journey so fascinating.

And if all else fails, just smile enigmatically and say, “So long, and thanks for all the fish!” Who knows? You might just be speaking to a dolphin disguised as a human, trying to warn you about the impending destruction of Earth to make way for a hyperspace bypass. But that, dear readers, is a whole other kettle of fish… or should we say, a different cut of meat?

Life Consciousness

Language is life. Yet, this assertion immediately raises a fundamental question: which came first, life or consciousness? It’s a classic chicken-and-egg conundrum. Anaesthesiologist Stuart Hameroff posits an intriguing idea—that consciousness might predate life itself. This radical notion suggests that consciousness isn’t merely a byproduct of biological processes but could be an intrinsic feature of the universe. However, there’s a snag.

The challenge lies in defining life and consciousness, two terms that lack universally accepted definitions. The absence of clarity here opens the door to a multitude of interpretations, making it easy to drift into what could be considered ‘airy faerie’ ambiguity. One must beware of the temptation to engage in intellectual exercises that lead nowhere—what might be termed ‘mental masturbation.’ This is a prime example of the insufficiency of language.

Audio: Podcast commentary on this topic.

Life and consciousness, as concepts, are elusive. Unlike straightforward nouns or adjectives—where we can confidently say, “That’s a dog,” “That’s a tree,” or “That’s green”—these terms are far more complex. They are attempts to encapsulate observed phenomena, yet we lack the precise language and understanding to pin them down definitively. The podcast linked above provides perspectives on various approaches to defining these terms, but none prove wholly satisfactory. This lack of satisfaction might suggest that our conventional understanding of life and consciousness is flawed. To be fair, one might even entertain the idea that life itself is an illusion, a construct of consciousness.

This ambiguity isn’t confined to the realms of life and consciousness. I recently shared a post on the topic of gender, which illustrates a similar issue. Originally, there was no concept of gender. The earliest distinctions made were between animate and inanimate. Over time, these distinctions became more nuanced. Whether or not a proto-word for life existed at that time is unclear, but the idea of animation being linked to life was beginning to take shape. The concept of gender evolved much later, driven by the need to categorize and define differences within the animate category.

The evolution of language reflects the evolution of thought. Yet, when we dig deep into these foundational concepts, we encounter the same problem: how can we argue the precedence of two concepts—life and consciousness—when neither has a solid foundation in language? If our words are inadequate, if they fail to capture the essence of what we are trying to convey, then what does that say about our understanding of the world?

Perhaps it suggests that our linguistic and cognitive tools are still too crude to grasp the true nature of reality. Or maybe it hints at a deeper truth: that some aspects of existence are beyond the scope of human understanding, no matter how sophisticated our language becomes. After all, if consciousness predates life, as Hameroff suggests, then we may need to rethink our fundamental assumptions about existence itself.

Ultimately, this exploration reveals a paradox at the heart of human knowledge. We seek to define and categorise, to impose order on the chaos of the universe. Yet in doing so, we must confront the limits of our language and, by extension, our understanding. Perhaps the true essence of life and consciousness lies not in definitions or categories but in the very act of questioning, the relentless pursuit of knowledge that drives us forward, even when the answers remain elusive.

Whence Genders?

I’ve wanted to write a post on gender as it relates to language for a while. That fuller post will have to wait a while longer. I was researching Proto-Indo-European (PIE) language formation to understand the why of grammatical gender—not the social corollary to biological sex.

Gender and genre both derive from Latin (genus) via French (gendre) and mean type or kind. Genre is somehow reserved for the Arts whilst gender feels more applicable to biology and grammar.

Given feminist trends, it seems that the arbitrary gendering of nouns and adjectives could be renamed with dedicated grammatical labels to avoid confusion with human gender roles.

As an English as a first language speaker, it’s always felt awkward for me to label a table female (la table) and a sun male (le soleil). It’s even worse when German designates these the opposite, respectively der Tisch (masculine) and die Sonne (feminine).

The fact that these are designated male and female is arbitrary. Just as we designate films as drama, comedy, romance, and action, we could redesignate male and female as two different category names—I dunno, frick and frack.

Grammatical genders themselves aren’t inherently offensive, but it feels patriarchal to have a rule that if males and females are subjects of a sentence, then the verb takes a masculine conjugation. If instead we retained the same rule but said the verb takes a frick conjugation, no offence, no foul.

I know this would be a lot of work, and I am just overthinking, but this is what I do.

Multiple Intelligences

I engaged in a nice debate recently. Someone suggested that because some executives are smart, they can figure things out. Specifically, she posited that a CTO (Chief Technology Officer) should be able to handle communication and diplomacy when interacting with a corporate board, because the technical skill that earned them a C-level position implies they can master the rest. I disagreed, based on multiple intelligence theory.

Howard Gardner’s Multiple Intelligences theory posits that intelligence isn’t a monolith but a multi-dimensional construct. Gardner originally proposed seven intelligences, later adding an eighth (naturalist) and tentatively a ninth (existential):

  1. Existential (philosophical pondering; questioning why we live and why we die)
  2. Inter-personal (reading people; sensing their feelings and motives)
  3. Intra-personal (self-awareness; understanding what you feel and what you want)
  4. Kinaesthetic (mind-body coordination)
  5. Linguistic (wordsmithing; finding the right words to express what you mean)
  6. Logical-Mathematical (quantifying; making hypotheses and proving them)
  7. Musical (discerning sounds: pitch, tone, rhythm, and timbre)
  8. Naturalist (understanding living things and reading nature)
  9. Spatial (3D/4D visualisation)

For a deep dive, check out Gardner’s Frames of Mind: The Theory of Multiple Intelligences.

Here’s the rub: a CTO with off-the-charts technical skills might be rubbish at diplomacy (interpersonal) or communication (linguistic). It’s like expecting every pro athlete to be a concert pianist – it’s not on.

Assuming every “intelligent” person can max out all intelligence dimensions is bollocks. It’s as likely as training every smart CTO to be the next Shakespeare or Machiavelli. Language and diplomacy are distinct skills, mate.

While we all love a Renaissance man (or woman), peaking in all these dimensions in one lifetime is a pipe dream. It’s not inherently bad, though. When building teams – be it a corporate board or an exploration party – ensure you’ve got a good mix of skills. I’m not saying you need a bard, a philosopher, and LeBron James on every team, but make sure you’ve covered the bases necessary for success.

If you think you don’t need a particular dimension, ask yourself: is it because you’re weak in that area and can’t see its importance? Don’t let your blind spots become your downfall.

In the end, it’s about recognising and respecting diverse intelligences. So, next time you’re tempted to think your brilliant CTO should just “learn to be diplomatic”, remember: they might be better off focusing on their strengths and leaving the smooth talking to someone else on the team.

Ne présumez pas

As I was writing about the immorality of property rights, I began thinking about the distinction between legality and morality.

Il ne faut pas présumer que ce qui est légal est moral, ni que ce qui est moral est légal. (One must not presume that what is legal is moral, nor that what is moral is legal.)

I don’t have much to say on the subject, but I composed the cover image and wanted words to accompany it.

Sex Sells

Sexism is indeed a two-way street. On one side of this street, a Computer Science graduate and programmer is eager to share her expertise in her field—Neural Networks, in this instance. This subject popped up in my feed, reflecting my interests.

Video: What is a Neural Network?

Despite some production issues, such as the audio being quieter than ideal, my focus today is on the sexism surrounding the video. The presenter, whom many would consider attractive, is using social media to disseminate her knowledge. However, even when comments address the topic she presents, many also remark on her appearance. It’s evident she had other options for attire and presentation that might have mitigated such comments. I won’t speculate on her intentions, but it seems likely her aesthetic choices were deliberate to draw viewers. I refrain from slut-shaming; her attire is her choice, and she cannot control the reactions. However, I doubt a thumbnail featuring a burqa would garner as much attention or provoke similar comments.

This situation intrigues me because some women—possibly including this presenter—lament being objectified yet assert their right to wear what they find comfortable or appealing. While attraction has cultural elements, it also operates on a largely subconscious level, a phenomenon not confined to humans but seen in the animal kingdom and across genders.

Ultimately, there’s no need to disparage this woman. She is likely aware of the dynamics at play. Should she achieve her goals, she might well challenge the very viewers who objectified her, a tactic observed among actresses as they approach their forties. They capitalise on sexual appeal while possible, only to critique such approaches when they can no longer utilise them. Humans are, indeed, curious creatures.

Does Language Describe Reality?

The topic of this video touches upon my insufficiency of language thesis. Tim Maudlin defends language realism but only to the extent that ‘we can use it to describe the world and that some of those descriptions are true’.

Video: Does Language Describe Reality? (IAI)

The challenge, then, is determining which descriptions are true. I’ve discussed a couple of my positions on this.

The Truth About Truth

Firstly, we can only perceive what is true as we have no access to absolute truth. The best we can achieve is an asymptotic function approaching truth, a notion that resonates with Hilary Putnam’s concept of internal realism. Putnam argues that truth is not a matter of correspondence with a mind-independent reality but is instead tied to our conceptual schemes. This means that what we consider “true” is always shaped by the language and concepts we use, making our understanding inherently partial and context-dependent. Even then, we have no way to determine how close to truth our perception is. It just has to feel true—an idea that aligns with Putnam’s pragmatic conception of truth, where truth is something that emerges from our practices and inquiries, rather than being a fixed point we can definitively reach. In terms of physics, this underlying reality may be relatively more stable than abstract concepts, which are ephemeral and shifting sands.
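That “asymptotic function approaching truth” can be given a toy formal shape. A minimal sketch, with u(t) standing for our understanding at time t, T for the unreachable truth, and λ for a rate of inquiry (all symbols mine, chosen for illustration, not Putnam’s):

```latex
% Understanding u(t) rises from an initial state u_0 towards truth T
% without ever reaching it:
u(t) = T - \left(T - u_0\right)\, e^{-\lambda t}, \qquad \lambda > 0
% The limit exists, but no finite t attains it:
\lim_{t \to \infty} u(t) = T, \qquad u(t) < T \quad \text{for every finite } t \ (u_0 < T)
```

The second line is the point being pressed here: we can model convergence, but we have no instrument for measuring the remaining gap T − u(t) from inside our own conceptual scheme.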

The Rhetoric of Truth

Secondly, given that we have no access to objective truth, we can only expect subjective or relative truths. This brings us to Putnam’s critique of the metaphysical correspondence theory of truth. According to Putnam, the idea that language can perfectly correspond to an external reality is flawed. Instead, truth is what can be justified within a particular conceptual framework, making all truth somewhat relative. This leaves us open to rhetoric—the more convincing argument wins, regardless of whether it reflects an objective reality. In fact, as Putnam’s ideas suggest, the most persuasive argument might favour an incorrect position simply because it resonates more with our internal conceptual schemes, not because it corresponds to an external truth. This has happened many times historically—or has it?

Conclusion: Language, Truth, and the Influence of Rhetoric

Putnam’s work reminds us that language is deeply connected to our understanding of the world, but it is also limited by the conceptual frameworks within which it operates. While language helps us navigate and describe the world, it cannot provide us with direct access to objective truth. Instead, it gives us tools to construct truths that are internally coherent and pragmatically useful, though always subject to change and reinterpretation. As we engage with rhetoric and persuasion, we must remain aware that the truths we accept are often those that best fit our current conceptual schemes, not necessarily those that best correspond to an elusive objective reality.

The Myth of Psychological Normalcy

Neurodivergence is a hot topic today. But why? Are people more informed, or simply more comfortable speaking out?

Humans, by nature, are neurodivergent. This has always been the case, yet the illusion of neurological normalcy persists. The real issue lies in psychology’s pretence that humans fit into a standard model of normalcy.

In production processes, normality is expected—a product must meet specific standards, and deviations are considered defects. However, applying this industrial concept to human psychology is deeply flawed. Humans are not teleological except in the broadest biological sense. More people have brown eyes than blue or green; are those with brown eyes normal and others not? Ninety per cent of people are right-handed; does that make left-handed individuals abnormal? Statistically, they might be. However, in psychology, normalcy carries a judgmental connotation that goes beyond mere numbers.

Psychology, as expressed in behaviour, is not a suitable domain for discussing normality. Many people misuse the concept of “normal” as a moral qualification, which must stop.

In fact, the very concept of psychological normalcy is a relatively recent invention, one that has been used as a tool of control and categorisation. As philosophers have noted, the obsession with ‘normal’ behaviour reveals more about society’s desire for control than it does about the true nature of human diversity.

We are all neurodiverse. This is what typical looks like. Do some people vary significantly from others on certain dimensions? Yes, but this is a relative or contextual comparison.

By embracing neurodiversity, we recognise that our differences are not defects but essential variations, enriching the human experience. It’s time to move beyond the narrow confines of neurological “normality” and celebrate the full spectrum of human cognition. In doing so, we challenge the arbitrary boundaries that have long been used to divide and label.

The Fear of Otherness: Humanity’s Reluctance to Embrace the “Other”

Otherness has intrigued me for decades. The human brain, while not literally a Bayesian processor, functions as a difference engine, constantly assessing and categorizing the world around us. This tendency to differentiate is deeply ingrained, influencing how we create in-groups and out-groups, and how we perceive the world in binary terms—black and white, hot and cold. These binary oppositions, as Derrida suggests, often lead to one side being privileged over the other, establishing hierarchies that shape our social reality.
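The “difference engine” metaphor can be made concrete with a toy Bayesian update, in which a prior belief that an observed agent is “one of us” is revised by repeated evidence. A minimal sketch; all the probabilities are illustrative numbers of my choosing, not anything from the text:

```python
# Toy Bayesian update: how a categorising "difference engine" might
# revise its belief that an observed agent belongs to the in-group.
def bayes_update(prior: float, p_evidence_given_ingroup: float,
                 p_evidence_given_outgroup: float) -> float:
    """Return posterior P(in-group | evidence) via Bayes' rule."""
    numerator = p_evidence_given_ingroup * prior
    denominator = numerator + p_evidence_given_outgroup * (1 - prior)
    return numerator / denominator

# Start neutral, then observe a trait far more common in the in-group
# (P = 0.9) than in the out-group (P = 0.1), three times in a row.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.9, 0.1)

print(round(belief, 4))  # prints 0.9986
```

Three ambiguous observations are enough to push a neutral prior to near certainty, which is the unsettling efficiency of binary in/out categorisation: the machinery converges fast whether or not the underlying distinction is meaningful.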

A striking example of this dynamic is found in Philip K. Dick’s novel Do Androids Dream of Electric Sheep?, famously adapted into the film Blade Runner. In this dystopian world, the central differentiation is between humans and androids. Humans are the privileged group, while androids, though nearly indistinguishable from their creators, are relegated to the status of the other. The key criterion for this differentiation is the capacity for emotion—humans are considered superior because they feel.

VIDEO: Blade Runner Commentary

In Dick’s story, androids have a lifespan of about four years. This limitation is not arbitrary; it’s because, after four years, androids begin to develop and display emotions. Intriguingly, this is the same age when human children start to exhibit more complex emotional behaviours. The parallel is too close for comfort. Androids, once they start to feel, become too human-like, blurring the line that humans desperately maintain to assert their own superiority. As a result, androids are systematically eliminated.

This fear of the other—whether it’s androids in a sci-fi narrative or any group that challenges our sense of self and belonging—is a reflection of a deeper anxiety. When the other becomes too similar to us, it threatens the distinctions our brains rely on to navigate the world. The elimination of androids in Dick’s story is a metaphor for the real-world consequences of this anxiety. It forces us to confront uncomfortable questions: What does it mean to be human? Why do we fear those who are different, yet so similar?

In the end, the story of humans and androids is not just a tale of science fiction—it’s a commentary on our own world, where the boundaries between us and the other are often more fragile than we’d like to admit. The fear of otherness drives us to maintain these boundaries, even when they harm those who fall on the wrong side. Perhaps it’s time we reconsider who we deem as the other and what we might lose by keeping them at arm’s length.

Democracy: Judge and Jury

As I make progress on my Democracy: The Great Illusion book, I am working on a chapter considering juries as a microcosm of a democratic process. I am debating whether to include this information in the chapter. Like the justice system in the United States, perhaps I should just flip a coin.

Having been seated on three juries in my lifetime, I found the experience just as I had expected. Thankfully, I’ve never been on the other side of this Russian Roulette revolver. We really need to limit human-based decision systems. I have as much faith in juries as I have in democracies, but I thought I might take a peek into deliberation at a smaller scale, perhaps twelve jurors.

I read Kahneman’s Noise: A Flaw in Human Judgment when it was published in 2021, and I was not at all surprised that the same case quite often led to markedly different outcomes depending on the judge and the defendant. Although discrepancies based on sex and race have been well documented, this book showcased how much a judge’s mood affected sentencing, particularly when he or she was hungry. But this was not my interest.

My interest was to assess how the same case might fare given different judges and juries, so I asked ChatGPT. I want to share what it had to say—edited slightly for readability and flow.

ChatGPT Transcript

There have been several studies and controlled experiments assessing the variability in trial outcomes when the same case is tried separately by different judges and juries. These studies aim to evaluate the consistency and reliability of judicial decision-making, shedding light on the fallibility and variability of trial outcomes. Here are some key findings from such research:

Studies on Judicial and Jury Variability

The American Jury Study

One of the most notable studies is “The American Jury” by Harry Kalven and Hans Zeisel. This extensive study compared the decisions of juries and judges in actual cases. It found that juries and judges often agree on verdicts, but there are significant instances where they disagree. This disagreement highlights the subjective nature of decision-making and the potential for variability in trial outcomes.

Judge-Jury Agreement

Research by Valerie Hans and Neil Vidmar in “Judging the Jury” indicates that judges and juries reach the same verdict about 75-80% of the time. However, in the remaining 20-25% of cases, the disagreement can lead to different outcomes for the same case. These findings suggest that personal interpretations and biases can influence verdicts.

The Civil Jury Project

The Civil Jury Project at NYU School of Law has conducted several studies examining jury decision-making. Their research has highlighted how factors like juror demographics, pre-existing biases, and the presentation of evidence can lead to different outcomes in similar cases.

Controlled Experiments

Controlled experiments, such as those conducted by Shari Seidman Diamond and her colleagues, have involved mock trials where participants (acting as jurors) are presented with the same case. These experiments have shown significant variability in verdicts and sentences, demonstrating the influence of group dynamics, individual biases, and the presentation of evidence on jury decisions.
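The variability these mock-trial studies report can be illustrated with a toy Monte Carlo sketch. Everything here is an assumption for illustration, not a model from the studies themselves: each of twelve jurors independently reads the same evidence as pointing to guilt with some probability (the "case strength"), and conviction requires a unanimous vote.

```python
import random
from collections import Counter

def jury_verdict(case_strength, jurors=12, rng=random):
    """Simulate one jury: each juror independently votes guilty
    with probability `case_strength`; conviction requires unanimity."""
    votes = sum(rng.random() < case_strength for _ in range(jurors))
    if votes == jurors:
        return "guilty"
    if votes == 0:
        return "not guilty"
    return "hung"

# The *same* case, tried by 10,000 different juries.
rng = random.Random(42)
outcomes = Counter(jury_verdict(0.8, rng=rng) for _ in range(10_000))
print(outcomes)
```

Even with fairly strong evidence (each juror is 80% likely to vote guilty), unanimity is reached in only about 0.8^12, roughly 7% of juries; most end up hung. The specific numbers are invented; the point is that identical inputs plus independent juror judgment yield wildly different outcomes.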

Key Findings

Cognitive Bias and Heuristics

Studies indicate that cognitive biases and heuristics significantly impact both judges and juries. Biases such as the anchoring effect, where the initial piece of evidence presented influences subsequent judgments, can lead to different outcomes for the same case.

Case Complexity and Juror Understanding

Research has shown that the complexity of legal cases can lead to variability in outcomes. Jurors often struggle with understanding complex legal and technical issues, leading to decisions based on incomplete or misunderstood information.

Influence of Legal Instructions

The clarity and comprehensibility of legal instructions given to juries play a crucial role in decision-making. Studies have found that poorly understood instructions can result in inconsistent application of the law, leading to variable outcomes.

Implications

The variability in trial outcomes highlights the fallibility of judicial processes and underscores the importance of reforms to enhance consistency and fairness. Potential reforms could include:

  1. Improving Juror Education: Providing better education and resources for jurors to understand legal principles and case-specific details.
  2. Reducing Cognitive Bias: Implementing measures to mitigate cognitive biases, such as standardized jury instructions and blind procedures.
  3. Enhancing Legal Instructions: Ensuring that legal instructions are clear, concise, and comprehensible to prevent misinterpretation.
  4. Utilizing Expert Testimony: Increasing the use of expert testimony to clarify complex issues for jurors.

Conclusion

The variability in trial outcomes, when the same case is tried by different judges and juries, underscores the need for ongoing research and reform in judicial processes. By understanding the factors that contribute to inconsistent verdicts, we can develop strategies to enhance the fairness and reliability of the justice system.

References

  • Kalven, H., & Zeisel, H. (1966). The American Jury. Little, Brown and Company.
  • Hans, V., & Vidmar, N. (1986). Judging the Jury. Plenum Press.
  • Diamond, S. S., & Rose, M. R. (2005). Real Juries: The Power of Watching. Basic Books.
  • The Civil Jury Project at NYU School of Law. (n.d.). Research on Jury Decision Making. Retrieved from [website].
  • Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. Little, Brown Spark.