The Trolley Problem of For-Profit Healthcare:

Loops of Death and Denial

The trolley problem is a philosophical thought experiment that pits action against inaction. In the original version, a person faces a choice: a trolley hurtles down a track toward five people tied to the rails, but a lever allows the trolley to be diverted onto another track, where one person is tied. The dilemma is simple in its grotesque arithmetic: let five die or actively kill one to save them. A perennial favourite of ethics classes, the trolley problem is most often used to explore Consequentialism, particularly Utilitarianism, and its cool calculus of harm minimisation. Over the years, countless variations have been conjured, but few approach the nightmarish reality of its real-world application: the for-profit healthcare system in the United States.

With the recent death of UnitedHealthcare CEO Brian Thompson, the trolley dilemma takes on a new and morbid relevance. Let’s reframe the challenge.

The Healthcare Trolley Loop

Picture the trolley again on a bifurcated track. The lever remains, as does the moral agent poised to decide its fate. This time, the agent is Brian Thompson. The setup is simple: one track leads to the deaths of five people, and the other is empty. But here’s the twist: the trolley doesn’t just pass once in this version—it’s on a loop. At every interval, Thompson must decide whether to pull the lever and send the trolley to the empty track or allow it to continue its deadly course, killing five people each time.

But Thompson isn’t just deciding in a vacuum. The track with five people comes with a financial incentive: each life lost means higher profits, better quarterly earnings, and soaring shareholder returns. Diverting the trolley to the empty track, meanwhile, offers no payout. It’s not a single moral quandary; it’s a recurring decision, a relentless calculus of death versus dollars.
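The incentive asymmetry of the loop can be made painfully literal with a toy simulation. All numbers below are hypothetical; the point is the shape of the payoff, not its magnitude.

```python
# Toy model of the "healthcare trolley loop" described above.
# deaths_per_pass and profit_per_pass are invented illustrative values.

def run_loop(intervals: int, divert: bool) -> tuple[int, int]:
    """Return (total_deaths, total_profit) after the given number of passes."""
    deaths_per_pass = 5          # lives lost each time the trolley runs
    profit_per_pass = 1_000_000  # hypothetical payout for not diverting

    deaths = profit = 0
    for _ in range(intervals):
        if not divert:           # the lever is never pulled
            deaths += deaths_per_pass
            profit += profit_per_pass
    return deaths, profit

# Diverting every time saves every life but earns nothing.
print(run_loop(10, divert=True))   # (0, 0)
print(run_loop(10, divert=False))  # (50, 10000000)
```

The model is trivial by design: the only variable the lever-puller controls is `divert`, and the only quantity the system rewards is `profit`.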

This isn’t just a metaphor; it’s a business model. For-profit healthcare doesn’t merely tolerate death—it commodifies it. The system incentivises harm through denial of care, inflated costs, and structural inefficiencies that ensure maximum profit at the expense of human lives.

Enter the Shooter

Now, introduce the wildcard: the shooter. Someone whose loved one may have been one of the countless victims tied to the track. They see Thompson at the lever, his decisions ensuring the endless loop of suffering and death. Perhaps they believe that removing Thompson can break the cycle—that a new lever-puller might divert the trolley to the empty track.

Thompson is killed, but does it change anything? The system remains. Another CEO steps into Thompson’s place, hand on the lever, ready to make the same decision. Why? Because the tracks, the trolley, and the profit motive remain untouched. The system ensures that each decision-maker faces the same incentives, pressures, and chilling rationale: lives are expendable; profits are not.

The Problem of Plausible Deniability

The shooter’s actions are vilified because they are active, visible, and immediate. A single violent act is morally shocking, and rightly so. But what of the quiet violence perpetuated by the healthcare system? The denial of coverage, the refusal of life-saving treatments, the bankruptcy-inducing bills—all are forms of systemic violence, their harm diffused and cloaked in the language of economic necessity.

The for-profit model thrives on this plausible deniability. Its architects and operators can claim they’re simply “following the market,” that their hands are tied by the invisible forces of capitalism. Yet the deaths it causes are no less real, no less preventable. The difference lies in perception: the shooter’s act is direct and visceral, while the system’s violence is passive and bureaucratic, rendered almost invisible by its banality.

A System Built on Death

Let’s not mince words: the current healthcare system is a death loop. It’s not an accident; it’s a feature. Profit-seeking in healthcare means there is always a financial incentive to let people die. During the Affordable Care Act (ACA) debates, opponents of universal healthcare decried the spectre of “death panels,” bureaucrats deciding who lives and who dies. Yet this is precisely what for-profit insurance companies do—only their decisions are driven not by medical necessity or moral considerations, but by spreadsheets and stock prices.

This is the logic of capitalism writ large: maximise profit, externalise harm, and frame systemic failures as unavoidable. Healthcare is merely one example. Across industries, the same dynamic plays out, whether in environmental destruction, labour exploitation, or financial crises. The trolley always runs on tracks built for profit, and the bodies left in its wake are just collateral damage.

How to Break the Loop

The death of Brian Thompson changes nothing. The system will simply produce another Thompson, another lever-puller incentivised to make the same deadly decisions. Breaking the loop requires dismantling the tracks themselves.

  1. Remove the Profit Motive: Healthcare should not be a marketplace but a public good. Universal single-payer systems, as seen in many other developed nations, prioritise care over profit, removing the incentive to let people die for financial gain.
  2. Recognise Passive Harm as Active: We must stop excusing systemic violence as “inevitable.” Denying care, pricing treatments out of reach, and allowing medical bankruptcy are acts of violence, no less deliberate than pulling a trigger.
  3. Hold the System Accountable: It’s not just the CEOs who are at fault; the lawmakers, lobbyists, and corporations that sustain this deadly status quo bear responsibility as well. The blood is on their hands, too.

Conclusion: The Real Villain

The shooter is not the solution, but neither is their act the real crime. The healthcare system—and by extension, capitalism itself—is the true villain of this story. It constructs the tracks, builds the trolley, and installs lever-pullers like Brian Thompson to ensure the loop continues.

When will it end? When we stop debating which track to divert the trolley toward and start dismantling the system that made the trolley inevitable in the first place. Until then, we are all complicit, passengers on a ride that profits from our suffering and death. The question isn’t who’s at the lever; it’s why the trolley is running at all.

The Rise of AI: Why the Rote Professions Are on the Chopping Block

Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.

Rote Professions in AI’s Crosshairs

AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.

  1. Medicine: The Diagnosis Factory
    Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news.
    Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
  2. Law: The Precedent Machine
    Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
  3. Judges: Justice, Blind and Algorithmic
    The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice.
    Wouldn’t justice be more blind without a human in the equation?
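The “diagnosis factory” claim above can be caricatured as a lookup. This is a deliberately crude sketch with invented data, not a real clinical decision-support system; it only illustrates the argument that the core loop is recall, not reasoning.

```python
# Caricature of diagnosis-as-pattern-matching over a symptom table.
# The table entries and labels are entirely invented for illustration.

SYMPTOM_TABLE = {
    frozenset({"fever", "cough", "fatigue"}): "influenza (hypothetical match)",
    frozenset({"thirst", "frequent urination"}): "diabetes (hypothetical match)",
}

def match_diagnosis(symptoms: set[str]) -> str:
    # Pick the table entry sharing the most symptoms with the patient.
    best = max(SYMPTOM_TABLE, key=lambda entry: len(entry & symptoms))
    return SYMPTOM_TABLE[best] if best & symptoms else "no match"

print(match_diagnosis({"fever", "cough"}))  # influenza (hypothetical match)
```

A system of this shape never reasons about a novel presentation; it only retrieves the closest stored pattern — which is exactly the kind of work the essay argues machines do better than people.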

The Techs Will Rise

Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:

  • Collect and input data into AI systems.
  • Act as liaisons between AI outputs and human clients or patients.
  • Provide emotional support—something AI still struggles to deliver effectively.

The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?

But Can AI Handle This? A Call for Logic Layers

AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.

  • In law, this could ensure AI judgments align with precedent and statute.
  • In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.

A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
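The two verification layers proposed above can be sketched in a few lines. Everything here is hypothetical — the `Verdict` type, the rule set, and the checker functions are illustrative stand-ins, not a real AI pipeline — but the structure shows the idea: a conclusion is released only if it passes both a logic check and a fact check.

```python
# Minimal sketch of a two-layer verification pipeline (all names hypothetical).

from dataclasses import dataclass, field

@dataclass
class Verdict:
    conclusion: str
    cited_rules: list[str] = field(default_factory=list)

KNOWN_RULES = {"statute-101", "precedent-Smith-v-Jones"}  # invented rule base

def logic_layer(verdict: Verdict) -> bool:
    """Layer 1: does the conclusion rest only on rules the system actually holds?"""
    return all(rule in KNOWN_RULES for rule in verdict.cited_rules)

def fact_layer(verdict: Verdict, case_facts: set[str]) -> bool:
    """Layer 2: does the conclusion contradict any established case fact?"""
    return not any(
        fact.startswith("contradicts:")
        and fact.removeprefix("contradicts:") in verdict.conclusion
        for fact in case_facts
    )

def verified(verdict: Verdict, case_facts: set[str]) -> bool:
    # Only conclusions that pass both layers are released.
    return logic_layer(verdict) and fact_layer(verdict, case_facts)

v = Verdict("defendant liable under statute-101", ["statute-101"])
print(verified(v, {"signed contract on file"}))  # True
```

In a real system each layer would be far more sophisticated, but the gate itself — nothing ships unless both checks pass — is the safeguard the essay is calling for.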

Resistance and the Real Battle Ahead

Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?

The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. Once safeguards are implemented and AI earns the public’s confidence, the transformation will be swift and irreversible.

Critical Thinking: The Human Stronghold

Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.

It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.

Language: Tool for Clarity or Shaper of Reality?


Pinker: The Optimist Who Thinks Language Works

Enter Steven Pinker, a cognitive scientist and eternal optimist about language. While we’ve been busy pointing out how language is a jumbled mess of misunderstandings, Pinker comes along with a sunny outlook, waving his banner for the language instinct. According to Pinker, language is an evolved tool – something that our brains are wired to use, and it’s good. Really good. So good, in fact, that it allowed us to build civilisations, exchange complex ideas, and, you know, not get eaten by sabre-toothed tigers.

Sounds like a nice break from all the linguistic doom and gloom, right? Pinker believes that language is a powerful cognitive skill, something we’ve developed to communicate thoughts and abstract ideas with remarkable precision. He points to the fact that we’re able to create entire worlds through language – novels, philosophies, legal systems, and scientific theories. Language is, to him, one of the greatest achievements of the human mind.

But here’s where things get a little sticky. Sure, Pinker’s optimism about language is refreshing, but he’s still not solving our core problem: meaning. Pinker may argue that language works wonderfully for most of our day-to-day communication – and in many cases, he’s right. We can all agree that saying, “Hey, don’t touch the flamey thing” is a pretty effective use of language. But once we start using words like ‘freedom’ or ‘justice’, things start to unravel again.

Take a sentence like ‘freedom is essential’. Great. Pinker might say this is a perfectly formed thought, conveyed using our finely tuned linguistic instincts. But the problem? Ask five people what ‘freedom’ means, and you’ll get five different answers. Sure, the grammar is flawless, and everyone understands the sentence structurally. But what they mean by ‘freedom’? That’s a whole other ball game.

Pinker’s language instinct theory helps explain how we learn language, but it doesn’t really account for how we use language to convey abstract, subjective ideas. He might tell us that language has evolved as an efficient way to communicate, but that doesn’t fix the problem of people using the same words to mean wildly different things. You can be the most eloquent speaker in the world, but if your definition of ‘freedom’ isn’t the same as mine, we’re still lost in translation.

And let’s not forget: while language is indeed a fantastic tool for sharing information and surviving in complex societies, it’s also great at creating conflicts. Wars have been fought over differences in how people interpret words like ‘justice’ or ‘rights’. Pinker might say we’ve evolved language to foster cooperation, but history suggests we’ve also used it to argue endlessly about things we can never quite agree on.

So, yes, Pinker’s right – language is a cognitive marvel, and it’s gotten us pretty far. But his optimism doesn’t quite stretch far enough to cover the fact that language, for all its brilliance, still leaves us stuck in a web of interpretation and miscommunication. It’s like having a state-of-the-art GPS that works perfectly – until you get to that roundabout and suddenly no one knows which exit to take.

In the end, Pinker’s got a point: language is one of the most sophisticated tools we’ve ever developed. It’s just a shame that when it comes to abstract concepts, we still can’t agree on which way’s north.

Sapir-Whorf: Language Shapes Reality – Or Does It?

Now it’s time for the Sapir-Whorf hypothesis to take the stage, where things get really interesting – or, depending on your perspective, slightly ridiculous. According to this theory, the language you speak actually shapes the way you see the world. Think of it as linguistic mind control: your perception of reality is limited by the words you have at your disposal. Speak the wrong language, and you might as well be living on another planet.

Sounds dramatic, right? Here’s the gist: Sapir and Whorf argued that the structure of a language affects how its speakers think and perceive the world. If you don’t have a word for something, you’re going to have a hard time thinking about that thing. Inuit languages, for example, are famous for having multiple words for different kinds of snow. If you’re an Inuit speaker, the hypothesis goes, you’re much more attuned to subtle differences in snow than someone who just calls it all ‘snow’.

Now, on the surface, this sounds kind of plausible. After all, we do think using language, don’t we? And there’s some truth to the idea that language can influence the way we categorise and describe the world. But here’s where Sapir-Whorf starts to go off the deep end. According to the stronger version of this hypothesis, your entire reality is shaped and limited by your language. If you don’t have the word for “freedom” in your language, you can’t experience it. If your language doesn’t have a word for “blue,” well, guess what? You don’t see blue.

Let’s take a step back. This sounds like the kind of thing you’d hear at a dinner party from someone who’s just a little too impressed with their first year of linguistics classes. Sure, language can shape thought to a degree, but it doesn’t have a stranglehold on our perception of reality. We’re not prisoners of our own vocabulary. After all, you can still experience freedom, even if you’ve never heard the word. And you can certainly see blue, whether your language has a word for it or not.

In fact, the idea that you’re trapped by your language is a little insulting, when you think about it. Are we really saying that people who speak different languages are living in different realities? That a person who speaks Mandarin sees the world in a fundamentally different way than someone who speaks Spanish? Sure, there might be some subtle differences in how each language breaks down concepts, but we’re all still human. We’re all still sharing the same world, and no matter what language we speak, we still have the cognitive capacity to understand and experience things beyond the limits of our vocabulary.

Let’s also not forget that language is flexible. If you don’t have a word for something, you make one up. If you’re missing a concept, you borrow it from another language or invent a metaphor. The idea that language is some kind of mental prison ignores the fact that we’re constantly evolving our language to keep up with the way we see the world—not the other way around.

And here’s the real kicker: if Sapir and Whorf were right, and we’re all walking around in little linguistic bubbles, then how on earth have we managed to translate anything? How have entire philosophies, religious texts, and scientific theories made their way across cultures and languages for centuries? If language really was shaping our reality that strongly, translation would be impossible – or at least incredibly limited. But here we are, discussing concepts like ‘freedom’, ‘justice’, and ‘truth’ across languages, cultures, and centuries.

So while it’s fun to entertain the idea that your language shapes your reality, let’s not give it too much credit. Yes, language can influence how we think about certain things. But no, it doesn’t define the boundaries of our existence. We’re not all stuck in a linguistic matrix, waiting for the right word to set us free.



From Signs to Abstractions: The Slippery Slope of Meaning


Saussure and the Signified: Words as Slippery Symbols

Fast-forward a few thousand years, and humans are no longer just warning each other about hot flames or toothy predators. We’ve moved on to the exciting world of abstract thought, but the language tools we’re using haven’t quite caught up. Enter Ferdinand de Saussure, who basically waltzed in to tell us, ‘Hey, all those words you’re throwing around? They’re not doing what you think they’re doing.’

Saussure gave us the idea of the signifier and the signified. Now, don’t let the fancy terms fool you. It’s just a way of pointing out that when we say ‘tree’, we’re not actually talking about a tree. No, we’re using the word ‘tree’ as a symbol – a signifier – that points to the idea of a tree. The signified is the actual concept of ‘tree-ness’ floating around in your brain. But here’s the kicker: everyone’s idea of a tree is a little different.

And this isn’t just a language problem – it’s an art problem too. Enter René Magritte, the surrealist artist who really drove this point home with his famous painting, Ceci n’est pas une pipe (‘This is not a pipe’). At first glance, it looks like a straightforward picture of a pipe, but Magritte was making a deeper point. It’s not actually a pipe – it’s an image of a pipe, a representation. You can’t stuff it with tobacco and smoke it, because what you’re looking at is a representation, not the real thing.

Image: La Trahison des Images, René François Ghislain Magritte

In the same way, when we use words, we’re not talking about the thing itself – we’re just waving a flag toward the concept of that thing. So, when you say ‘tree’, you’re really saying ceci n’est pas un arbre – this is not a tree. It’s just a word, a placeholder, a verbal painting of something real. And just like Magritte’s pipe, it’s easy to get confused. You might think you’re talking about the same tree, or the same ‘freedom’, but all you’ve got is a symbol – and everyone’s symbol looks a little different.

This is where things start to unravel. Words are slippery symbols, and as soon as we move away from concrete, physical objects – like trees or, yes, pipes – and into abstract ideas, like ‘justice’ or ‘truth’, the symbols become even harder to hold onto. The cracks in language start to widen, and before you know it, you’re no longer even sure if you’re talking about the same concept at all.

Language, Saussure argues, isn’t this neat, objective system we thought it was. It’s a game we’re playing, and the rules are written in invisible ink. By the time we get to abstract nouns, we’re basically playing with loaded dice. You think you’re communicating clearly, but every word you use is just a placeholder for the idea you hope the other person has in their head. And nine times out of ten? They don’t.

So, while early humans were struggling to agree on the ‘flamey thing’, we’re here trying to agree on concepts that are infinitely more complicated. And Saussure? He’s just sitting in the corner with a smirk, telling us we never had control over language in the first place. ‘Good luck with your “truth”,’ he seems to be saying. ‘I’m sure it’ll mean the same thing to everyone.’

Abstraction: Enter Freedom, Truth, and Confusion

Now that we’ve wrapped our heads around the fact that words are nothing but slippery symbols, let’s take it up a notch. You thought ‘tree’ was tricky? Try something more abstract. Enter: freedom, truth, justice. Things that can’t be seen, touched, or stuffed into a pipe. Here’s where language goes from being slippery to downright treacherous.

See, early language worked because it was tied to concrete things. ‘Toothey thing scary’ wasn’t up for debate. Either you got eaten, or you didn’t. Simple. But then humans, ever the overachievers, decided it wasn’t enough to just label the world around them. They wanted to label ideas, too – things that don’t have any physical form but somehow drive us all crazy.

Take ‘freedom’, for instance. Sounds nice, right? Except, if you ask ten people what it means, you’ll get ten different answers. For some, it’s ‘freedom from’ something – a kind of liberation. For others, it’s ‘freedom to’ do whatever you want, whenever you want. And yet for others, it’s an abstract ideal tied up in political philosophy. Suddenly, you’re not just dealing with different trees – you’re dealing with entirely different forests.

The same goes for truth. Is it objective? Subjective? Relative? Absolute? Everyone’s got a different take. Plato had his own grand ideas about ‘Truth’ with a capital T, while Nietzsche basically rolled his eyes and said, ‘Good luck with that’. You’re out here using the word, assuming it means the same thing to everyone else, but really you’re all just talking past each other.

And don’t even get started on justice. Some say it’s about fairness, others say it’s about the law, and still others think it’s just a nice idea for dinner party debates. The problem with these words – these abstract nouns – is that they represent ideas that live entirely in our heads. Unlike the ‘flamey thing’ or the ‘toothey thing’, there’s no physical reality to pin them to. There’s no universally agreed-upon image of ‘freedom’ that we can all point to and nod along, like Magritte’s pipe. There’s just… vague agreement. Sometimes. On a good day.

This is where language really starts to break down. You might think you’re having a productive conversation about ‘freedom’ or ‘truth’, but half the time, you’re speaking different languages without even realising it. Words like these aren’t just slippery – they’re shapeshifters. They bend and morph depending on who’s using them, when, and why.

So, while early humans were busy with their simple, effective ‘toothey thing scary’, we’re now trying to nail down ideas that refuse to be nailed down. What started as a useful survival tool has turned into a game of philosophical Twister, with everyone tied up in knots trying to define something they can’t even see. And, as usual, language is just standing in the corner, smirking, knowing full well it’s not up to the task.



Democracy: Judge and Jury

As I make progress on my book, Democracy: The Great Illusion, I am working on a chapter that considers juries as a microcosm of the democratic process. I am debating whether to include this material in the chapter. Like the justice system in the United States, perhaps I should just flip a coin.

Having been seated on three juries in my lifetime, I found the experience just as I had expected. Thankfully, I’ve never been on the other side of this Russian-roulette revolver. We really need to limit human-based decision systems. I have as much faith in juries as I have in democracies, but I thought I might take a peek at deliberation on a smaller scale—twelve jurors.

I read Kahneman’s Noise: A Flaw in Human Judgment when it was published in 2021, and I was not at all surprised that the same case quite often led to markedly different outcomes depending on the judge and the defendant. Although discrepancies based on sex and race have been well documented, the book showcased how much a judge’s mood affected sentencing—particularly if s/he was hungry. But this was not my interest.

My interest was to assess how the same case might fare given different judges and juries, so I asked ChatGPT. I want to share what it had to say—edited slightly for readability and flow.

ChatGPT Transcript

There have been several studies and controlled experiments assessing the variability in trial outcomes when the same case is tried separately by different judges and juries. These studies aim to evaluate the consistency and reliability of judicial decision-making, shedding light on the fallibility and variability of trial outcomes. Here are some key findings from such research:

Studies on Judicial and Jury Variability

The American Jury Study

One of the most notable studies is “The American Jury” by Harry Kalven and Hans Zeisel. This extensive study compared the decisions of juries and judges in actual cases. It found that juries and judges often agree on verdicts, but there are significant instances where they disagree. This disagreement highlights the subjective nature of decision-making and the potential for variability in trial outcomes.

Judge-Jury Agreement

Research by Valerie Hans and Neil Vidmar in “Judging the Jury” indicates that judges and juries reach the same verdict about 75-80% of the time. However, in the remaining 20-25% of cases, the disagreement can lead to different outcomes for the same case. These findings suggest that personal interpretations and biases can influence verdicts.
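A back-of-the-envelope calculation shows what that agreement rate implies for retrials. Assume each independently empanelled jury agrees with the judge’s verdict with probability p = 0.78 (a stand-in value inside the reported 75–80% range); this is an illustration of the arithmetic, not a model of the studies themselves.

```python
# If two separate juries each agree with the judge independently with
# probability p, they agree with EACH OTHER when both agree with the
# judge or both disagree with the judge.

p = 0.78  # hypothetical agreement rate, within the reported 75-80% range

same_verdict = p * p + (1 - p) * (1 - p)

print(f"P(two juries reach the same verdict) = {same_verdict:.3f}")
# prints: P(two juries reach the same verdict) = 0.657
```

Under these assumptions, roughly one retrial in three would flip the outcome of the same case — which is precisely the coin-flip quality of the system this chapter worries about.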

The Civil Jury Project

The Civil Jury Project at NYU School of Law has conducted several studies examining jury decision-making. Their research has highlighted how factors like juror demographics, pre-existing biases, and the presentation of evidence can lead to different outcomes in similar cases.

Controlled Experiments

Controlled experiments, such as those conducted by Shari Seidman Diamond and her colleagues, have involved mock trials where participants (acting as jurors) are presented with the same case. These experiments have shown significant variability in verdicts and sentences, demonstrating the influence of group dynamics, individual biases, and the presentation of evidence on jury decisions.

Key Findings

Cognitive Bias and Heuristics

Studies indicate that cognitive biases and heuristics significantly impact both judges and juries. Biases such as the anchoring effect, where the initial piece of evidence presented influences subsequent judgments, can lead to different outcomes for the same case.

Case Complexity and Juror Understanding

Research has shown that the complexity of legal cases can lead to variability in outcomes. Jurors often struggle with understanding complex legal and technical issues, leading to decisions based on incomplete or misunderstood information.

Influence of Legal Instructions

The clarity and comprehensibility of legal instructions given to juries play a crucial role in decision-making. Studies have found that poorly understood instructions can result in inconsistent application of the law, leading to variable outcomes.

Implications

The variability in trial outcomes highlights the fallibility of judicial processes and underscores the importance of reforms to enhance consistency and fairness. Potential reforms could include:

  1. Improving Juror Education: Providing better education and resources for jurors to understand legal principles and case-specific details.
  2. Reducing Cognitive Bias: Implementing measures to mitigate cognitive biases, such as standardized jury instructions and blind procedures.
  3. Enhancing Legal Instructions: Ensuring that legal instructions are clear, concise, and comprehensible to prevent misinterpretation.
  4. Utilizing Expert Testimony: Increasing the use of expert testimony to clarify complex issues for jurors.

Conclusion

The variability in trial outcomes, when the same case is tried by different judges and juries, underscores the need for ongoing research and reform in judicial processes. By understanding the factors that contribute to inconsistent verdicts, we can develop strategies to enhance the fairness and reliability of the justice system.

References

  • Kalven, H., & Zeisel, H. (1966). The American Jury. Little, Brown and Company.
  • Hans, V., & Vidmar, N. (1986). Judging the Jury. Plenum Press.
  • Diamond, S. S., & Rose, M. R. (2005). Real Juries: The Power of Watching. Basic Books.
  • The Civil Jury Project at NYU School of Law. (n.d.). Research on Jury Decision Making. Retrieved from [website].
  • Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. Little, Brown Spark.

Retributive Injustice

I’ve already said that justice is a weasel word, but let’s pretend that it’s actually something more substantial and perhaps even real. I’ve spoken on the notion of blame as well. I have been thinking about how untenable retributive justice is, and the same reasoning seems to apply to restorative justice, too. But let’s focus on the retributive variety for now.

In short, retributive justice is getting the punishment one deserves, and I think desert is the weak link. Without even delving into causa sui territory, I feel there are two possible deserving parties: the agent and society. Let’s regard these in turn.

The Agent

An agent, or more specifically a moral agent, is an entity that can be deemed responsible for its actions on moral grounds. Typically, moral agency assumes that an agent, an actor, is fully aware of the cultural rules of a given society, whether informal norms or legislated statutes. Under this rationale, we tend to exclude inanimate objects with no agency, non-human life forms, children, and persons with diminished cognitive faculties. In some cases, this diminution may have been self-imposed, as in the case of chemically induced impairment by, for example, drugs or alcohol. We might consider these entities as being broken. In any case, they do not qualify as having agency. And an otherwise moral agent under duress or coercion may no longer be expected to retain agency.

Unless an informed and unimpaired agent commits an act with intent … there can be no moral desert

Unless an informed and unimpaired agent commits an act with intent, another weasely word in its own right, there can be no moral desert. But let’s hold this thought for a bit and turn our attention to society.

Society

For the purposes of this commentary, society is a group of like-minded persons who have created norms, customs, laws, and regulations. In most cases, people come into societies whose structure is already formed, and they need to acculturate and adapt, as changing the fabric of society generally takes time. Even in the case of warfare where a society is subsumed, cultural norms will persist for at least a time.

Whilst it is incumbent on a person to become aware of the rules of engagement and interaction within a society, it is reciprocally the responsibility of society to impart its norms through signalling and performance as well as through more formal channels, such as public fora, schools, and activities. Even media and entertainment can serve to reinforce this function.

So What?

I argue that retributive justice is bullshit (to employ the technical term) because if an informed and unimpaired agent does violate some standard or protocol, the society is at least partially to blame, perhaps fully so. Again, if the person is unimpaired, a pivotal question is why s/he is uninformed. If the person has the information but ignores it, to what extent is the person impaired, and what responsibility does society have for being unaware of that impairment?

Special Case?

What if a particularly predacious person from Society A infiltrates Society B? Is the person broken, or is Society A responsible for creating a person who would prey on some other unsuspecting society? Again, the person is never entirely responsible unless s/he is broken, in which case s/he is exempt and not morally responsible.

When Then?

As I’ve said before, a person who commits an act against the interests of a society may be quarantined or perhaps exiled or shunned, as some cultures practise, but these measures are meant to preserve the cohesion of the society, not to exact a pound of flesh in retribution.

In the end, I just don’t see a use case where retribution would fall upon a single actor. If some transgression is made, how then do we ensure society pays its dues as well? In my mind, society is more apt to fail the individual than the other way around, but maybe that’s just me and my world.

What am I missing here?

VIDEO: The Truth about Truth

I wrote about this content in 2019, but I wanted to revisit it for a video as well as create a podcast audio version.

Video: YouTube version of this page content
Podcast: Audio rendition of this page content

In today’s segment, I am going to share my perspectives on the truth about truth. To start, I’ll let the audience know that I do not believe in the notion of truth. I feel the term is ill-defined especially in the realm of metaphysics and morality. I feel that when most people employ the word ‘truth’, what they mean to say is ‘fact.’ That a fire engine is red, for example, may be a fact, if indeed the fire engine happens to be red, but it is not true. This is a misapplication of the term. If you employ truth as a direct synonym for fact, then this is not what’s being discussed here, and perhaps your time might be better spent watching some content by the Critical Drinker.   

My argument is that truth is not objective. Rather, it is subjective and perspectival. I concede that there may be some objective truth out there somewhere, but it is not and will not ever be accessible to us because of limitations in our sense-perception and cognitive faculties. Per Aristotle, we have only five senses with which we can connect to the world, and these senses are limited. If there is anything out there that would require another sense receptor, a receptor not available to us, we would never be able to sense it, or even to know of its existence. Perhaps the universe emits 100 sense signals, but we are only capable of receiving and translating five. We’d be oblivious to 95 per cent of reality.

I am not making any claims that this is the case, but human cognition is so limited, that we can’t even conceive of what another sense might be. If you can, please leave a comment.

To be clear, I am not talking about senses we know other species possess. Bats may have echolocation, and sharks may have electroreception. Some animals may have greater sensory acuity—superior vision and auditory senses, olfactory and gustatory, tactile, or whatever. Some can see into infrared or ultraviolet light spectra. Technology that includes biomimicry provides humans with microscopes for the microworld and telescopes for the macroworld. We have x-rays and sonar and radar, radios and televisions that extend our senses, but these provide no new sensory receptors.

Like the story of the blind people and the elephant, we are left grasping at parts. But even if we are able to step back to view the whole elephant, to hear the elephant, to touch and smell or even taste the elephant, if there is more to the elephant, we cannot know it. The same goes for ourselves.

I know that some people might inject gods or psychic or paranormal energy into this void, and sure, feel free, but I am looking beyond these pedestrian concepts. What else might there be?

But let’s depart this train and head in a different direction. I want us to focus on the senses we do have. For the typical human, sight is our primary arbiter of reality, at least as defined idiomatically. We tend to believe what we see, and what we see, we assume as real—even if we are later mistaken. I guess that wasn’t a unicorn or a pink elephant. I must have been hallucinating or dreaming. I could have sworn that was Auntie Em.

There are several competing theories of truth, but I’ll focus on the Correspondence theory, which is, simply put, the notion that, proxying reality for truth, human perception corresponds with the real world. A pragmatist might argue that’s close enough for government work.

Keep in mind that, historically, humans have contorted themselves to make the calculations work. Remember how long people tied themselves in knots modelling planetary motion in a geocentric system, conjuring epicycles and retrograde motion to map understanding onto a perceived reality.

One might even argue that we’ve progressed. It wasn’t true or accurate then, but now it is. And perhaps it is. Let’s look at some illustrations.

NB: Due to an editorial mishap, this paragraph was dropped in the podcast, hence dropped from the video, which shared the podcast audio source. As such, this image was also not used in the video. This is unfortunate, as it was meant to introduce those with limited maths knowledge to the asymptotic curve, as described. Apologies, and I hope this serves to orient any travellers who may have lost their way at this point.

In this first illustration, we see Truth (or relative truthiness) on the Y-axis and Time on the X-Axis. On the top, we see a threshold representing Reality. In the plane, I’ve rendered an asymptotic curve, where over time, we get closer and closer to the Truth. But we never quite get there. More on this later.
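For readers who want the shape in symbols, here is a minimal sketch of such a curve. The particular function is my own illustrative choice, not one taken from the charts: an exponential approach from an initial vantage \(T_0\) toward the Reality threshold \(R\).

```latex
% Illustrative asymptotic "truth" curve: T(t) rises toward the
% Reality threshold R over time t but never reaches it.
% T_0 is the initial vantage; k > 0 sets the rate of approach.
\[
  T(t) \;=\; R - \left(R - T_0\right) e^{-kt}, \qquad k > 0
\]
% The limit is R, yet the gap stays strictly positive at every
% finite time, which is the point of the illustration:
\[
  \lim_{t \to \infty} T(t) = R,
  \qquad
  R - T(t) \;=\; \left(R - T_0\right) e^{-kt} \;>\; 0
  \quad \text{for all finite } t.
\]
```

The second display captures the gap discussed next: however far along the curve we are, the distance to Reality never closes entirely.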

The next illustration will help to demonstrate what’s happening.

Notice there is a gap between the curve and the Reality cap. For one thing, we don’t really know where we are relative to Reality. In the case of the geocentric system, we might have been at the leftmost position. Once we determined that the system is actually heliocentric, we might have moved right along the curve to close the gap. We might be tempted to claim that we’ve finally reached the truth, but we’d have been equally willing to make the same defence from the geocentric position, so we need to be mindful of the past.

Perhaps, this last example was too obvious. We feel comfortable staking a truth claim—or at least a claim of fact. So let’s look at another example.

Let’s re-use the same axes, Truth and Time, but rather than an asymptotic curve, let’s presume something more polynomial in nature, or at least not monotonic. Rather than retrograde motion in planets, let’s visit the supposed progress of Einsteinian over Newtonian physics.

This takes a bit more setup, but bear with me. In this case, I have taken liberties and illustrated an Einsteinian physics gap that captures an inferior vantage on reality relative to Newtonian physics. Granted, I need to rely on a bit of suspension of disbelief, but in the bigger picture, I am trying to convey a scenario where some new paradigm puts the prior knowledge in perspective.

In this instance, both the Newtonian and Einsteinian flavours of physics are based on a materialistic, particle-based model, which is where the modern physics consensus resides. But let’s say that consensus changes in such a way that something else is determined to underlie reality, say consciousness, per the Analytic Idealism proposed by Bernardo Kastrup, Integrated Information Theory (IIT) as advanced by Giulio Tononi, or the conscious realism of Donald Hoffman. As with retrograde motion, we might end up finding that we were barking up the wrong tree. This case might be a bit different because the particles are a directly perceived manifestation of the underlying consciousness, but I wanted to create a scenario where knowledge thought to have advanced had actually regressed, and this wasn’t revealed until a new perspective was available.

Yet again, an important aspect of note is that we don’t actually know the distance between our perceptions and real Reality.

This last illustration builds upon the first asymptotic chart but has an in-built error margin meant to reflect language insufficiencies. There is some concept that people feel they grasp, but the consensus is not as unified as the group thinks.

I’ll share two examples, the first being the concept of justice. To me, Justice is what I deem a weasel word. It’s a word we commonly use, but it means different things to different people. To me, it’s a euphemism for vengeance by proxy, but for others, it transcends that and mirrors some impartial dispensation of just desert—some good old-fashioned law and order.

[Justice is] a euphemism for vengeance by proxy

Without getting stuck down some rabbit hole, my point is that if we aggregate these beliefs, the asymptotic curve represents an average consensus vantage rather than something as obvious as 2 plus 2 equals 4. On this note, allow me to clear the air.

Some viewers might be clamouring to say, “But 2 plus 2 equals 4 is true.” This, though, is tautologically true, which is to say true by definition. It’s a similar tautology to saying that it’s true that snow is white or that coal is black. We’ve already defined snow, white, coal, and black, so these may be facts, but they are true by definition.

Revisiting the chart, notice that there are two curves in the space. In this case, I illustrate competing truth claims from the perspective of an omniscient narrator. The case is whether the earth is an oblate spheroid or is flat. I am going to go out on a limb and assert the earth is spherical, as represented by the top blue curve—and we have some margin of error as to what that might mean. The bottom red curve depicts the perceived truth of the flat earthers, who also have some room for semantic error.

Given that I am presuming that I am in the right in adopting the majority position (please be right), the blue curve is closer to Reality than the red curve. Of course, in the event that the earth is really flat, then it proves my point that we don’t know where we are relative to truth, so we assume that the state of knowledge at any given time is what’s real.

Again, forgive my fanciful examples. Please don’t tell me that this spheroid-versus-planar earth case is tautological too; you’d be correct, but I am already aware. They are just nonsensical illustrations. Nonetheless, I hope they’ve served to express a point.

I could just as well have created curves that depicted two cohorts’ beliefs on the efficacy of tarot or astrology in predicting the future. I am sure it might render somewhat like the last chart, but I’d also presume that both curves would have very low truth values as seen from an objective observer. Secretly, I hope tarot wins the truth battle.

Before I end our time together, I’d like to convey that for an Analytic Idealist, these charts might be more acceptable at face value. A Realist, Naïve or otherwise, may argue that this curve is not asymptotic and may in fact reach some tangency. I don’t happen to believe this is the case, or I wouldn’t have spent my time assembling and presenting this. Time will tell. Or will it?

Video: Blame and Causa Sui

In this segment, I ponder the interplay between blame and Causa Sui. I’ll discuss the implications for moral responsibility as well as legal responsibility, which are not as in sync as one might imagine.

Video: Blame & Causa Sui

To the uninitiated, Western legal systems have no pretensions about being about morality or justice. Legal systems are designed to maintain power structures and the status quo. They are deontological machines, making them prime targets for automation by the machine learning associated with artificial intelligence. This would also diminish the power of rhetoric over facts to some extent. But, I am no legal scholar, and all of this will have to wait for another segment.

I recently shared a video on causa sui and the basics of blame and blameworthiness, so I want to intersect those topics here.

Peter Strawson suggested that for humans, blame is a reactive response. It’s reflexive like having your knee jerk when tapped. Essentially, his position is that if blame didn’t naturally exist, we’d have to invent it, mirroring Voltaire’s quip, ‘If God did not exist, it would be necessary to invent Him’. Of course, this is because they serve the same power control purpose.

If God did not exist, it would be necessary to invent Him

Voltaire

To be fair, blame is closer to real than God, but the point remains. Strawson’s point is also that humans are saddled with blame and it’s not going anywhere no matter how nebulous it becomes in execution. It’s natural.

To me, this starts to sound suspiciously like a naturalistic fallacy. Humans seem to selectively cherry-pick which so-called natural tendencies they choose to defend. One might use nature to argue that female sexual availability begins at menstruation, and yet we have decided to ignore this on the grounds of civility. It’s obvious that we could consider blame to be an animal instinct we want to domesticate away, but because it serves other purposes, per Strawson’s perspective, it’s a useful tool.
But what’s the causa sui challenge? Let’s quickly recapitulate.

Causa sui argues that one cannot be the cause of oneself, ex nihilo. Being full products of nature and nurture, to adopt the lay parlance, any blameworthiness lies with the sources or creators. Since we are concerned with moral responsibility, we can eliminate nature forthrightly: whilst nature may be responsible (by many estimations approximately 40 per cent responsible), it possesses no moral agency. And if the individual is not responsible, then we are left with the environment and society, including the social environment. Of course, the physical environment gets off the hook in the same manner as the genetic and hereditary factors of nature.

Before we consider society, let’s regard the individual.

Albeit the brain-as-computer analogy is a bit facile, it’s still good enough for illustrative purposes. When you are born, your cognitive hardware is installed, as are your edge peripherals and update protocols. Any of these can become damaged through degenerative processes or external environmental factors, but since my interest is in optimistic rather than pessimistic scenarios, I’ll ignore these instances. Given that blameworthiness is directly related to presumed cognitive processing, factors that diminish these faculties mitigate blameworthiness, and factors that enhance them amplify it.

As a (quote) ‘normal’ child becomes an adolescent and then an adult, the probability it will be deemed blameworthy increases with age, ceteris paribus. A person with cognitive deficits or conditions such as aphasia or dementia carries a lower probability of blame assignment. Even temporary impairment mitigates judgment: oh, she was drunk.

So, following the brain-as-computer analogy, your brain is a CPU with a self-updating cognitive operating system and instruction set. Essentially, there is also short and long-term memory.
In the case of cognitive deficits, one of these components might be effectively broken. The CPU might process too slowly; it might misinterpret what it receives; there may be issues with the sense organs or the nerves that transport signals.

I’ve got a mate who, due to medical malpractice at birth, experienced nerve damage. Although his eyes and brain are normal, his optic nerve cannot carry signals very well, effectively leaving him blind. Neither can he taste nor smell. So there’s that.

But assuming that this processing and storage hardware are intact, the causa sui constraint still applies, but let’s spend some time evaluating societal interactions.

All inputs come from society, its cultures and subcultures. Apart from misinterpreted processing scenarios, if a person doesn’t receive a particular moral instruction set, that person should surely be considered exempt from moral blame. It may be difficult to assess whether an instruction has been input. This is a reason why children are categorically exempted: they may not have received all of the expected moral codes; the codes may not have been stored or effectively indexed; and their processing hardware is still in development (alpha code, if you will). Brain plasticity is another attribute I won’t spend much time on, but the current state of science says that the brain is still not fully developed even by age 30, so this is certainly a mitigating factor, even if we allow leeway for the causa sui argument.

I mention subculture explicitly because the predominant culture is not the only signal source. A child raised by, I don’t know, say pirates, would have an amended moral code. I am sure we can all think of different subcultures that might undermine or come at cross odds with the dominant culture, whether hippies, religious cultists, militia groups, racial purist groups, and so on.

So, a commonly held moral in the subdominant group may counter that of the prevailing one. An example that comes to mind is certain religious organisations that do not agree with human medical intervention. There have been cases where parents have allowed a child to die from an otherwise curable condition. Although the United States claims freedom of religion (a claim that is spotty at best), parents or guardians in situations like these have been convicted and sentenced for following their own moral codes. But as with all people, they are as susceptible to the limitations of causa sui as the rest of us. They are not responsible for creating themselves, yet moral responsibility was asserted based on the beliefs of the prevailing culture. Even outside the legal context, persons in the larger society would likely blame the parents for their neglect, though they may be praised by their in-group for being resolute in their righteousness. This just underscores that morality is a collection of socially constructed conventions rather than something more objective.

Returning to causa sui, let’s say a person commits an act to which society would typically assign blame. Rather than exercising some act of retributive justice (a concept with no foundation in a causa sui universe), suppose the course of action is remediation. In this case, the desired moral instruction would be delivered, thereby seemingly making the moral offender blameworthy thereafter. But would they be?

Presumably, (for what it’s worth) psychologists would evaluate the subject for competency in maintaining the programming. In the case of the aforementioned religious parents, they may be threatened with retribution for not abiding by the superseding rules of the prevailing power structure.

Although I might personally allow some leeway even with causa sui in full force and effect, I can’t say that I have much faith in the ability of humans to make a correct assessment. My impression is that any assessment would be more one of convenience than something sounder.

Perhaps I’ll produce a more robust segment on retributive justice, but my feeling is that retributive justice is an area that legal systems should avoid altogether. If necessary, focus on restorative justice, rehabilitation (or ‘habilitation’ as the case might be) and quarantine models to ensure any bad actors are contained away from society. Again, this puts individuals at the mercy of cultures they find themselves a part of. I am not going to delve into this any further save to remind the listener of gang initiation schemes where a person needs to kill a member of a rival gang to become a trusted member. This is their moral code—quite at odds with the mainstream.

So there you have it. Owing to causa sui constraints, a person cannot be ultimately responsible for their actions. My primary thesis is—apart from metaphorical equipment failures—that any moral responsibility falls wholly on the society or culture. Full stop. And this isn’t as foreign as one might first feel. Although for most people blame is natural, in an individualistic society, people are interested in finding the culprit. In collectivist cultures, any culprit might do. Perhaps I’ll share some stories in a future segment.
Meantime, what are your thoughts on moral responsibility? Can someone be ultimately responsible? Some have said the ‘ultimate responsibility’ is a philosophical red herring and that we can still hold someone responsible, even if not in the ultimate sense, which causa sui disallows. Are you more in this camp? Is this enough to mete out so-called retributive justice? For me, retributive justice is a euphemism for vengeance, and justice is a weasel word. But that’s just me, and perhaps a topic for another segment.

Are there any topics you’d like me to cover? Leave a comment below.

Justice and Intent

When discussing the topic of justice, besides the element of the offence itself, another element is typically intent. In this case, a father inadvertently left an infant in his car. He was supposed to drop the child off at daycare but forgot and instead drove directly to work. The temperatures were high, and this contributed to the death of the child. Upon discovering this, the father took his own life.

I have copied the story below in full, as these things have been known to go missing every now and again.

A Virginia father died by an apparent suicide after finding his child dead inside his hot car, authorities said.

It appears the father accidentally left the 18-month-old in the car for at least three hours on Tuesday, leading to the child’s death, Lt. Col. Christopher Hensley of the Chesterfield Police Department said at a news conference.

When the child didn’t arrive at daycare, the father apparently realized the toddler was in his car, Hensley said.

Around noon, family members called police to report that the father was talking about dying by suicide in the woods behind his house. The father was the only person home at the time, Hensley said.

Responding officers found the car in the driveway with an open door and an empty child seat, Hensley said.

Officers went into the home where they found the dead 18-month-old, he said.

As officers continued to check the perimeter, they found the father dead in the woods from an apparent gunshot wound, he said.

Hensley called it a “horrible tragedy on so many levels.”

This marks the eighth child to die from a hot car this year, according to national nonprofit KidsAndCars.org. More than 1,000 kids have died from hot cars since 1990, the organization said.


ABC News

An interest of mine is justice, hence this post. I’ll get to that, but there is also a narrative of social priorities to extract from here, too.

The first is that we live in a society where 18-month-olds almost need to be separated from their families. Of course, the privileged can counter that they have sitters or au pairs or nannies. In the past, there were extended families and Clinton’s Village. Each has its plusses and minuses. I am not a fan of the idea of women serving as baby factories, pumping out babies whilst enduring their plight as wage slaves, but that’s not my call. I also understand that raising children is not the most mentally stimulating activity, but that’s beside the point.

In this case, the father was more focused on getting to work than the welfare of his child. And given the outcome, it’s obvious that he had feelings for the child—although perhaps it was more the fear of the repercussions of being blamed. One can’t know for sure, but I’ll opt for the charitable rendition.

Let’s return to justice. Justice is the sense that one gets one’s just desert, but what is just and what is desert? In the artificial form of justice purportedly practised by lawyers and jurists, this man would not likely be held responsible for legal reasons without even having to plumb the depths of philosophical reasons.

It’s been said that karma operates with three levers:
intent, action, and reflection.

In this case, intent appears to be absent, and reflection seems apparent in the outcome. The action was the lost life of an infant, a human life. Equally weighted, he’d be one step back and two steps forward, so his register would be in the black. But this is not how he judged himself.

Even given the karmic model, it’s easy to imagine the reactions. As easy as it is for me to sit back behind my keyboard and be dispassionate, I can imagine the mother not being so reserved. Humans are blame-machines. I’ve been spending the past three or four months researching this topic peripherally with a focus on human agency, but in a reductionist model, humans seem to need to blame. And if there is no object, they have no qualms about making one up. Humans are good storytellers, and even better story-receivers, but let’s not get distracted. He knew he would be blamed. Not least of all, he blamed himself.

Although I don’t subscribe to the notion of self, or even of intent, it seems obvious that this father did. I can’t imagine how I’d feel if this were me, and I don’t want to try. But let’s not lose sight of the complicity of a society that forces humans to choose between family and survival.