My philosophical critique, not of the book Why Democrats Are Dangerous, but of the two warring factions in United States politics – mind you, partisanship is not limited to the US – sparked the ire of defenders of their respective turf. ‘You’ve got it wrong. Those other people are either addleheaded or abjectly evil’ is a consolidation of responses from both sides of the aisle. I’ve crafted a response.
Audio: NotebookLM summary podcast on this topic.
It’s perfectly true that I occupy a perspective. Everyone does. This isn’t a confession; it’s a structural feature of being human. Consciousness is perspectival by design. We don’t get to hover above the world like disembodied CCTV cameras. We look from somewhere.
But acknowledging one’s perspective is not the same thing as being trapped in a rut. A rut implies unexamined repetition, reflex, and dogma. A perspective implies angle, interpretation, intellectual stance. The accusation I’m hearing – ‘you’re in a rut too’ – is not actually an argument. It’s an attempt to delegitimise the analysis without engaging with it.
It says nothing about whether my observation is true, coherent, or well-reasoned; it merely notes that I, like every other speaking organism on the planet, occupy a position. And from this banal fact it attempts to smuggle in a conclusion: that my critique is thereby invalid. It’s a sleight of hand, and a clumsy one.
If someone believes I’m wrong, they are welcome – encouraged, even – to demonstrate:
where the logic fails
where the evidence contradicts me
where the symmetry is mischaracterised
where the interpretation distorts rather than illuminates
That is argumentation.
What they are offering instead is a sort of epistemic shrug: ‘You’re in a perspective, therefore you have no authority’. This is an ad hominem in a trench coat, pretending to be profundity.
The irony, of course, is that the people making this charge never seem to apply it to themselves. Their own viewpoint, naturally, is not a rut but a ‘stance’, ‘framework’, ‘tradition’, ‘bedrock’, or ‘fact’. Only the critic has perspective; they merely have truth.
But here’s the critical distinction:
Having a perspective does not invalidate an analysis; pretending not to have one does.
The entire Anti-Enlightenment project rests on this recognition: that all human positions are mediated, situated, incomplete – and yet still capable of meaningful observation. You don’t escape your perspective by denying it; you escape dogma by interrogating it.
If someone wishes to rebut what I’ve written, they should do so directly, with evidence, reasoning, or counterexamples. If all they offer is ‘well, you’re biased too’, then they’ve conceded the argument by refusing to enter it.
A perspective is not a rut. A rut is what happens when you can’t tell the difference.
Accusations that writing is AI-generated are becoming more common – an irony so rich it could fund Silicon Valley for another decade. We’ve built machines to detect machines imitating us, and then we congratulate ourselves when they accuse us of being them. It’s biblical in its stupidity.
A year ago, I read an earnest little piece on ‘how to spot AI writing’. The tells? Proper grammar. Logical flow. Parallel structure. Essentially, competence. Imagine that – clarity and coherence as evidence of inhumanity. We’ve spent centuries telling students to write clearly, and now, having finally produced something that does, we call it suspicious.
Audio: NotebookLM podcast on this topic and the next one.
My own prose was recently tried and convicted by Reddit’s self-appointed literati. The charge? Too well-written, apparently. Reddit – where typos go to breed. I pop back there occasionally, against my better judgment, to find the same tribunal of keyboard Calvinists patrolling the comment fields, shouting ‘AI!’ at anything that doesn’t sound like it was composed mid-seizure. The irony, of course, is that most of them wouldn’t recognise good writing unless it came with upvotes attached.
Image: A newspaper entry that may have been generated by an AI with the surname Kahn. 🧐🤣
Now, I’ll admit: my sentences do have a certain mechanical precision. Too many em dashes, too much syntactic symmetry. But that’s not ‘AI’. That’s simply craft. Machines learned from us. They imitate our best habits because we can’t be bothered to keep them ourselves. And yet, here we are, chasing ghosts of our own creation, declaring our children inhuman.
Apparently, there are more diagnostic signs. Incorporating an Alt-26 arrow to represent progress is a telltale infraction → like this. No human, they say, would choose to illustrate A → B that way. Instead, one is faulted for remembering – or at least understanding – that Alt-key combinations exist to reveal a fuller array of options: …, ™, and so on. I was using these symbols long before AI Wave 4 hit shore.
Interestingly, I prefer spaced en dashes over em dashes in most cases. The em dash is an Americanism I prefer not to adopt, but its prevalence does reveal the American bias in the training data. I can consciously adopt a European spin; AI, lacking intent, finds this harder to remember.
I used to use em dashes freely, but now I almost avoid them—if only to sidestep the mass hysteria. Perhaps I’ll start using AI to randomly misspell words and wreck my own grammar. Or maybe I’ll ask it to output everything in AAVE, or some unholy creole of Contemporary English and Chaucer, and call it a stylistic choice. (For the record, the em dashes in this paragraph were injected by the wee-AI gods and left as a badge of shame.)
Meanwhile, I spend half my time wrestling with smaller, dumber AIs – the grammar-checkers and predictive text gremlins who think they know tone but have never felt one. They twitch at ellipses, squirm at irony, and whimper at rhetorical emphasis. They are the hall monitors of prose, the petty bureaucrats of language.
And the final absurdity? These same half-witted algorithms are the ones deputised to decide whether my writing is too good to be human.
I like to stay updated on the news of the day, so I just registered for a Ground News account. Ground News is a news aggregator: it gathers news and categorises it by political leaning and by each publication’s record on factuality. Its claim is to reveal blind spots and help people avoid getting caught in perspective bubbles. It also shows when a story is picked up predominantly by one side or the other. I’ve seen ads for the service on many channels for a while now, so it’s likely that you have, too. This is not an ad.
This article attracted my attention, not because of the content but because of the headline. As a statistician, I find this bothersome; as a communicator, I find the damage trebled. I receive no compensation if you click the link; I include it for reference for those who are not familiar with the service.
Image: Ground News Screengrab
Notice the choice of writing, ‘1-in-6 parents reject vaccine recommendations’.
Two things shine through.
The use of ‘reject’ – a negative verb.
The use of ‘1-in-6’ – roughly 17% – as the figure accompanying that negative verb.
Framed the other way, this means that 5-in-6 parents – 83% – follow vaccine recommendations.
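The framing arithmetic is trivial but worth making explicit. A quick sketch – the values come from the headline, not from the underlying poll:

```python
# The same survey result, expressed as the negative or the positive fraction.
rejecting = 1 / 6            # the headline's '1-in-6 parents reject...'
following = 1 - rejecting    # the unstated complement, 5-in-6

print(f"Reject: {rejecting:.0%}")  # ~17%
print(f"Follow: {following:.0%}")  # ~83%
```

Same datum, opposite emotional charge – the editor simply chose which fraction to print.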
This is the summary view. Scan down, and notice that the Left-leaning Raw Story references a ‘staggering number’ of parents who reject vaccines. Notice also how the language softens – the claim is revised to ‘delay or reject’. Without clicking into the story, the breakdown between delaying and rejecting is unclear, but this is what sensationalism engineered to attract clicks looks like.
Image: Ground News Summary View
Interestingly, the outlets tend to use different language and give the story different degrees of attention. How much of this is due to political bias and how much is benign editorial licence is unclear.
On balance, the articles – Left, Right, and Centre – unanimously note that vaccine uptake is down, incidences of measles are up, and RFK’s policies appear to be exacerbating the health-management issue. The worst offenders are ‘very’ religious, white, politically conservative people – a cohort that aligns with RFK and the current administration.
The poll also found that parents who have postponed or avoided vaccinating their children tend to be white, conservative, and highly religious, and some choose to homeschool.
For this story, one of the sources was Greek and another French. Some claim to be behind a paywall, but this didn’t pose a problem for me. Perhaps they offer some complementary views.
Separately, on the right-hand side of the top image, there is a bias indicator: it shows that 57% of the reports came from Left-leaning journals and 36% from the Centre, leaving the remaining 7% to Right-leaning sources.
Image: Updated Bias Distribution
When I returned to write this post, I noticed that the reporting had changed as more Centre-focused reports picked up the story.
If I were to guess, this story shines a negative light on the Right, so Right-leaning outlets may just be waiting for the news cycle to pass.
In the (Right-facing) Greek story I read, the reporting wasn’t materially different from the other stories, which is to say it didn’t try to render the story through rose-coloured glasses.
As I make progress on my Democracy: The Great Illusion book, I am working on a chapter considering juries as a microcosm of a democratic process. I am debating whether to include this information in the chapter. Like the justice system in the United States, perhaps I should just flip a coin.
Having been seated on three juries in my lifetime, I found the experience just as I had expected. Thankfully, I’ve never been on the other side of this Russian-roulette revolver. We really need to limit human-based decision systems. I have as much faith in juries as I have in democracies, but I thought I might take a peek into deliberation at a smaller scale – perhaps 12 jurors.
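To make ‘deliberation at a smaller scale’ concrete, here is a toy Monte Carlo in the Condorcet-jury spirit – entirely my own assumption, not a model from Noise or the studies below. Each of 12 jurors independently reaches the ‘correct’ verdict with some probability, and the panel must be unanimous:

```python
import random

def unanimity_rate(n_jurors=12, p_correct=0.8, trials=100_000, seed=42):
    """Toy model (an assumption, not from any cited study): jurors vote
    independently; the panel returns the 'correct' verdict only when
    all n_jurors land on it."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < p_correct for _ in range(n_jurors))
        for _ in range(trials)
    )
    return hits / trials

# Even individually reliable (80%) jurors rarely achieve unanimity:
# the analytic value is 0.8 ** 12, roughly 7%.
```

Hung juries, persuasion, and genuine deliberation are deliberately left out; the only point is that a unanimity rule amplifies individual noise into collective indecision.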
I read Kahneman’s Noise: A Flaw in Human Judgment when it was published in 2021, and I was not at all surprised that the same case quite often led to markedly different outcomes depending on the judge and the defendant. Although discrepancies based on sex and race have been well documented, this book showcased how much the judge’s mood affected sentencing – particularly if he or she was hungry. But this was not my interest.
My interest was to assess how the same case might fare given different judges and juries, so I asked ChatGPT. I want to share what it had to say – edited slightly for readability and flow.
ChatGPT Transcript
There have been several studies and controlled experiments assessing the variability in trial outcomes when the same case is tried separately by different judges and juries. These studies aim to evaluate the consistency and reliability of judicial decision-making, shedding light on the fallibility and variability of trial outcomes. Here are some key findings from such research:
Studies on Judicial and Jury Variability
The American Jury Study
One of the most notable studies is “The American Jury” by Harry Kalven and Hans Zeisel. This extensive study compared the decisions of juries and judges in actual cases. It found that juries and judges often agree on verdicts, but there are significant instances where they disagree. This disagreement highlights the subjective nature of decision-making and the potential for variability in trial outcomes.
Judge-Jury Agreement
Research by Valerie Hans and Neil Vidmar in “Judging the Jury” indicates that judges and juries reach the same verdict about 75-80% of the time. However, in the remaining 20-25% of cases, the disagreement can lead to different outcomes for the same case. These findings suggest that personal interpretations and biases can influence verdicts.
judges and juries reach the same verdict about 75-80% of the time
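A back-of-the-envelope reading of that figure – my arithmetic, not the authors’ – is the number of verdicts that would flip per thousand otherwise identical cases, purely by swapping the bench for the jury box:

```python
# Hypothetical rescaling of the reported 75-80% judge-jury agreement rate.
for agreement in (0.75, 0.80):
    divergent = round(1000 * (1 - agreement))
    print(f"{agreement:.0%} agreement -> ~{divergent} divergent verdicts per 1,000 cases")
```

On that reading, even the optimistic end of the range leaves two hundred defendants per thousand whose outcome depends on who decided, not on what they did.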
The Civil Jury Project
The Civil Jury Project at NYU School of Law has conducted several studies examining jury decision-making. Their research has highlighted how factors like juror demographics, pre-existing biases, and the presentation of evidence can lead to different outcomes in similar cases.
Controlled Experiments
Controlled experiments, such as those conducted by Shari Seidman Diamond and her colleagues, have involved mock trials where participants (acting as jurors) are presented with the same case. These experiments have shown significant variability in verdicts and sentences, demonstrating the influence of group dynamics, individual biases, and the presentation of evidence on jury decisions.
Key Findings
Cognitive Bias and Heuristics
Studies indicate that cognitive biases and heuristics significantly impact both judges and juries. Biases such as the anchoring effect, where the initial piece of evidence presented influences subsequent judgments, can lead to different outcomes for the same case.
Case Complexity and Juror Understanding
Research has shown that the complexity of legal cases can lead to variability in outcomes. Jurors often struggle with understanding complex legal and technical issues, leading to decisions based on incomplete or misunderstood information.
Influence of Legal Instructions
The clarity and comprehensibility of legal instructions given to juries play a crucial role in decision-making. Studies have found that poorly understood instructions can result in inconsistent application of the law, leading to variable outcomes.
Implications
The variability in trial outcomes highlights the fallibility of judicial processes and underscores the importance of reforms to enhance consistency and fairness. Potential reforms could include:
Improving Juror Education: Providing better education and resources for jurors to understand legal principles and case-specific details.
Reducing Cognitive Bias: Implementing measures to mitigate cognitive biases, such as standardized jury instructions and blind procedures.
Enhancing Legal Instructions: Ensuring that legal instructions are clear, concise, and comprehensible to prevent misinterpretation.
Utilizing Expert Testimony: Increasing the use of expert testimony to clarify complex issues for jurors.
Conclusion
The variability in trial outcomes, when the same case is tried by different judges and juries, underscores the need for ongoing research and reform in judicial processes. By understanding the factors that contribute to inconsistent verdicts, we can develop strategies to enhance the fairness and reliability of the justice system.
References
Kalven, H., & Zeisel, H. (1966). The American Jury. Little, Brown and Company.
Hans, V., & Vidmar, N. (1986). Judging the Jury. Plenum Press.
Diamond, S. S., & Rose, M. R. (2005). Real Juries: The Power of Watching. Basic Books.
The Civil Jury Project at NYU School of Law. (n.d.). Research on Jury Decision Making. Retrieved from [website].
Kahneman, D., Sibony, O., & Sunstein, C. R. (2021). Noise: A Flaw in Human Judgment. Little, Brown Spark.