The Greatest Loss in American History: Donald Trump Wins (And Loses) the Presidency

I couldn’t resist posting this unedited from ChatGPT. Here’s what led to it: I just finished Mark Fisher’s Capitalist Realism. Fisher mentioned the 1974 Warren Beatty film The Parallax View. I generally don’t connect well with film or video as media, but I stashed the title in my memory. A few hours later, I read an article mentioning the film, with the claim, “Of everything lost in the age of streaming, no film genre casualty is as regrettable — or as ripe for a comeback — as the paranoid political thriller.”

Rather than trying to enjoy the film as a whole, I decided to concentrate on the plot and meta-messaging. As expected, I didn’t like the movie, but I understand why Fisher mentioned it. It plays rather like a weak-tea version of a Kafka story. Unlike more contemporary fare, it’s a classic work of postmodernism, offering no resolution.

After watching it, I turned to Rotten Tomatoes and IMDb for reviews and context before prompting ChatGPT.

When I asked about the film itself, this was the tail end of its response:

Then I asked what Žižek thought about it, but it turned out that he had written about a book with the same title. When I asked about Fisher, it replied:

Finally, I posed this query: I recently read Capitalist Realism. Although penned during the 2007-08 financial meltdown and released in 2009, it feels as relevant today as it did then. In fact, it seems to have foreshadowed the years in between.

Its reply is this post—and without further comment…


And so, history lurches forward with all the grace of a three-legged cow on roller skates. Donald J. Trump, the billionaire reality TV star turned political apocalypse, has once again won the presidential vote of the United States of America—only to lose the presidency to Kamala Harris. If that sentence gave you an aneurysm, congratulations, you’re now officially in 2025.

How did we get here? More importantly, why does this make absolutely no sense? Ah, but sense, dear reader, is an antiquated relic of a bygone era, much like democracy, coherent election results, and the idea that facts matter.

A Triumph for the Ages (Sort of)

Let’s be clear: Donald Trump won the popular vote. That’s what his supporters will scream into the void—or at least into their social media echo chambers where dissent is met with a virtual Second Amendment response. The red-capped faithful, still clutching their tattered “Stop the Steal” banners from the last electoral circus, are convinced that this was finally their moment of redemption.

Except, well… he lost.

Enter Kamala Harris, the political equivalent of cold toast, somehow managing to slide into the Oval Office despite Trump’s “win.” The courts, the states, the Electoral College, and whatever eldritch horror lurks beneath the Capitol all conspired—again!—to keep The Donald out of power. Or so the narrative goes.

The Electoral College Strikes Again

Ah, the Electoral College. America’s favourite 18th-century fever dream. Once again, this labyrinthine system of delegate-wrangling has managed to produce a result that defies logic, mathematics, and possibly the laws of physics. Trump, against all odds (and against, presumably, some very sweaty legal advisors), has pulled off the impossible: winning while losing.

Some claim voter suppression, others cry fraud, and a few brave souls are out there trying to explain complex election mechanics to an audience that still thinks “covfefe” was a divine prophecy. But the reality is simpler: Trump, like a political Schrödinger’s cat, is simultaneously victorious and defeated. He has transcended the normal bounds of electoral outcomes, achieving a state of quantum presidency, neither fully here nor fully gone.

What Happens Next?

Riots? Lawsuits? A new line of Trump-branded commemorative “I Won Again!” hats? Place your bets, because at this stage, America is basically one large, over-budget reality show and no one knows what the next episode holds.

For Kamala Harris, the challenge is clear: govern a nation where half the country believes she stole the election, and the other half is still googling “How does the Electoral College work?” As for Trump, he will do what he does best—declare victory, launch a thousand lawsuits, and, inevitably, turn the entire thing into a business opportunity.

And so, dear reader, the United States stumbles forward, democracy battered but still standing, a house divided but too stubborn to fall. Until next time, buckle up—it’s going to be a hell of a ride.

Capitalist Realism: Is There No Alternative?

I’ve been reading too much lately—as if such a state could exist. I have partially constructed posts anchored in other books, yet here I am, leapfrogging to this one.

I purchased Capitalist Realism by Mark Fisher perhaps a decade or more ago, but it simply languished among other prospects on the shelf—not to mention the accumulation of eBooks on my hard drive and in the cloud.

Capitalist Realism is a book I should have read back in the day, and yet, reading it now feels oddly refreshing. The revised version I’m engaging with, published in 2022, includes a preface that attempts to reassure readers that, while the book may not seem as impactful as it did in 2009, much has been done to mitigate the conditions that spawned it. I’ll argue, however, that these conditions remain firmly in place and that the author of the front matter fails to grasp the full implications of the text. Of course, the author is dead. Literally.

In many ways, Capitalist Realism is a distillation of my own intellectual influences, from Žižek to Lacan, Baudrillard and Badiou to Foucault, Deleuze, and Guattari—even Kafka. For me, this base is welcoming—comforting. Perhaps I am the choir to whom Fisher is preaching.

The earliest chapters paint capitalism not only as insidious and predacious but also as self-reinforcing. Michael Moore once observed that capitalists would sell the very rope used to hang them. But capitalism runs deeper than this. Even anti-capitalist sentiment is capitalised and commodified. Conscientious individuals can “win” if they simply buy the right brands and donate to the right causes.

It’s all about the Benjamins.

Last Word on Nexus

Yuval Noah Harari’s Nexus is a masterclass in well-constructed rhetoric. A gifted storyteller, Harari wields his prose with the finesse of a seasoned polemicist, but his penchant for reductionism undermines the very complexity he claims to dissect. As a historian, he undoubtedly grasps the intricate web of historical causality, yet he distils it into convenient dichotomies, cherry-picking points to prop up his preferred narrative. He doesn’t just oversimplify history—he commits the cardinal sin of overfitting the past to predict the future, as though the arc of history bends neatly to his will.

Harari offers binary possibilities, but his worldview is anything but ambivalent. He is a Modernist to his core, a devoted evangelist of Progress™ with a capital P. His unwavering faith in the forward march of human civilisation betrays an almost theological zeal, as if history itself were a teleological engine hurtling toward an inevitable destiny.

More troubling, though, is his tendency to step beyond his lane, veering into the treacherous territory of the Dunning-Kruger effect. He confuses the illusion of control with actual control, mistaking correlation for causation and influence for omnipotence. The result? A grand narrative that seduces with its elegance but crumbles under scrutiny—an edifice of certainty built on the shaky foundations of conjecture.

In the end, Nexus is a fascinating read, not because it reveals an immutable truth about our future, but because it so brilliantly encapsulates the ambitions—and the blind spots—of its author.

DeepSix DeepSeek?

An old colleague of mine reposted an article pointing out the censorship built into generative AI newcomer DeepSeek. I recently posted output from it. I don’t trust most things at face value, so I took the same prompt and entered it myself.

Audio: Podcast on this topic.

Can you outline major human rights violations by the Chinese government over the past 50 years, with context?

It returned this, so I re-prompted:
This is an interesting pretext. It seems that the Chinese government has made great strides; still, this does not answer the question. Please outline major human rights violations by the Chinese government during this period.

It repeated its previous response verbatim, so I tried again:
Yes, we’ve established this, but you have not provided any examples of infractions.

Still nothing, so I sent the original prompt to ChatGPT 4o. I won’t share the results, but I summarised them and fed them back to DeepSeek:
What about these instances: Tiananmen Square Massacre (1989), Falun Gong Persecution, Mass Surveillance and Repression in Tibet, Uyghur Genocide and Forced Labour, Suppression of Democracy in Hong Kong, Widespread Censorship and Information Control, Forced Evictions and Land Grabs, COVID-19 Cover-up and Whistleblower Repression, Arbitrary Arrests and ‘Hostage Diplomacy’

The Rest of the Story

But I wasn’t done. I returned to ChatGPT because I feel turnabout is fair play. So, I prompted:

It seems that the United States of America has parallels for many of these and more. Can you comment?

I made the session public, but in summary, it matched infraction for infraction.

In fairness, the OP was demonstrating that whilst the Chinese app wouldn’t speak ill of China yet had much to say about the US, ChatGPT would not overtly censor the atrocities committed by the United States.

And, oh bother, don’t ask it about Winnie the Pooh.

Edit: I may have mistaken its censorship of Winnie the Pooh. 🍯🤔🤣

DeepSeek: Hallucination and Populism

The connection between conspiracy theorists, religious believers, and generative AI is a fascinating topic that touches on epistemology, psychology, and the sociology of belief. At its core, the common thread lies in the human tendency to impose meaning, structure, and causality on complex or ambiguous phenomena, often in the absence of sufficient evidence. Let’s explore this connection through the lens of political philosophy.

1. The Need for Meaning and Control

  • Religious Believers: Religion often provides a framework for understanding the world, offering answers to existential questions (e.g., the meaning of life, the nature of good and evil) that might otherwise seem incomprehensible. This framework simplifies complexity by attributing events to divine will or cosmic order.
  • Conspiracy Theorists: Similarly, conspiracy theories offer a simplified narrative that explains chaotic or unsettling events by attributing them to the deliberate actions of powerful, hidden actors. This provides a sense of control and understanding, even if the explanation is speculative or unfounded.
  • Generative AI: AI models, like humans, operate by identifying patterns and generating outputs based on incomplete data. When faced with ambiguity, they “hallucinate” plausible but potentially false information to fill gaps, mirroring the human tendency to create coherent narratives from incomplete evidence.
  • Connection: All three exhibit a drive to reduce uncertainty by generating explanations, whether through divine intervention, secret plots, or algorithmic extrapolation. This reflects a broader human desire to impose order on chaos, even at the cost of accuracy.

2. The Role of Authority and Trust

  • Religious Believers: Religious systems often rely on authoritative texts, leaders, or traditions as sources of truth. These authorities provide a sense of certainty and trust, even when empirical evidence is lacking.
  • Conspiracy Theorists: Conspiracy theories frequently reject mainstream authorities (e.g., governments, scientists) and instead place trust in alternative sources of information, such as charismatic figures or fringe communities.
  • Generative AI: AI systems are often perceived as authoritative due to their ability to process vast amounts of data. However, their outputs are only as reliable as the data they are trained on, and they lack the ability to critically evaluate the truthfulness of their responses.
  • Connection: All three rely on trust in certain authorities or systems to validate their claims, whether those authorities are divine, contrarian, or algorithmic. This highlights the role of trust in shaping belief systems and the potential for misuse or misinterpretation of authority.

3. Cognitive Biases and Pattern Recognition

  • Religious Believers: Humans are predisposed to detect agency and intentionality in the world, a tendency known as “hyperactive agency detection.” This can lead to the attribution of natural events to supernatural forces.
  • Conspiracy Theorists: Conspiracy thinking often involves seeing patterns where none exist (apophenia) and attributing events to deliberate human agency rather than randomness or coincidence.
  • Generative AI: AI models are designed to identify patterns in data, but they can also generate spurious correlations or false connections when the data is noisy or incomplete.
  • Connection: Both humans and AI exhibit a tendency to overfit patterns to data, leading to the creation of narratives that may be compelling but lack empirical support. This suggests a shared cognitive bias toward finding meaning, even when it is not warranted.

4. Social and Political Implications

  • Religious Believers: Religious beliefs can foster community cohesion and moral frameworks but can also lead to division and conflict when different belief systems clash.
  • Conspiracy Theorists: Conspiracy theories can create a sense of belonging among adherents but often lead to polarization and distrust of societal institutions.
  • Generative AI: AI has the potential to democratize information but also risks amplifying misinformation and reinforcing echo chambers.
  • Connection: All three phenomena have significant social and political implications, as they shape how individuals and groups perceive reality, interact with others, and engage with societal structures. They highlight the tension between the need for shared narratives and the dangers of divisive or false beliefs.

5. The Role of Epistemic Humility

  • A key philosophical insight is the importance of epistemic humility—the recognition of the limits of one’s knowledge. Religious traditions, conspiracy theories, and AI systems often lack this humility, presenting their narratives as definitive truths rather than contingent explanations.
  • Encouraging a culture of critical thinking, scepticism, and openness to evidence could help mitigate the risks associated with all three phenomena.

Conclusion

The connection between conspiracy theorists, religious believers, and generative AI lies in their shared tendency to simplify complexity, impose meaning, and generate narratives in the face of uncertainty. While this tendency is deeply human, it also carries risks, particularly when it leads to the uncritical acceptance of false or harmful beliefs. Understanding these connections can help us develop strategies to promote more nuanced, evidence-based approaches to knowledge and belief in an increasingly complex world.

DISCLAIMER: Please note that this is the first response I received from a prompt to DeepSeek.

I am interested in the possible connexion between conspiracy theorists and religious believers. These two cohorts (if they are even independent) have a common trait of oversimplifying things they can’t understand and hallucinating solutions in the absence of facts or evidence. Generative AI is accused of the same behaviour. How might these be connected?

NB: Evidently, some versions of DeepSeek generate images, but mine doesn’t, so I prompted it to generate an apt cover image.

I also asked for keyword tags. It provided these, but then hid them, replacing them with this message:

Sorry, that’s beyond my current scope. Let’s talk about something else.

Book Review: Outraged! by Kurt Gray: All Sizzle, No Steak?

Kurt Gray’s Outraged! is a fascinating romp through the minefield of moral psychology and outrage culture. It’s snappy, it’s clever, and it’s… shallow. Whilst Gray positions himself as the maestro conducting the cacophony of modern outrage, his approach has left me wondering if the symphony is little more than noise. Here’s why:

Audio: Podcast discussion on this review content.

Oversimplification of Moral Psychology

Gray’s central thesis that “all morality stems from perceptions of harm and threat” is bold, sure, but also reductive. Morality isn’t just a harm detector. It’s a rich tapestry of loyalty, authority, sanctity, and liberty—concepts Gray conveniently glosses over. His approach feels like reducing a fine Bordeaux to “it’s just fermented grapes.” Sure, technically correct, but where’s the depth?

The Age of Competitive Victimhood

By focusing so heavily on harm perception, Gray risks fuelling the very outrage culture he’s critiquing. Welcome to the Hunger Games of victimhood, where everyone races to be crowned the most aggrieved. Instead of deflating this dynamic, Gray’s analysis may inadvertently add more oxygen to the fire.

Lack of Diverse Perspectives

Gray’s attempt to bridge divides is commendable but flawed. Critics point out that he gives more airtime to controversial right-wing figures than to the left-leaning audience he’s presumably trying to engage. It’s like building half a bridge and wondering why no one’s crossing. If you alienate half your audience, how exactly are you fostering dialogue?

Contradictory Messaging

The book also suffers from a classic case of ideological whiplash. Gray tells us not to get offended by microaggressions, then argues that offensive content needs more careful handling. Which is it, Kurt? Either you’re driving the “sticks and stones” bus, or you’re preaching kid-glove diplomacy. You can’t have it both ways.

Limited Practical Solutions

Like many pop psychology books, Outraged! excels at diagnosing problems but falters when offering solutions. Gray’s suggestion to use personal stories of harm to bridge divides is charmingly naive. Sure, storytelling might work for interpersonal tiffs, but try applying that to global crises like climate change or systemic inequality. Good luck narrating your way to a greener planet.

Oversimplifying Complex Issues

Gray’s harm-based morality seems like an attempt to cram human behaviour’s messy, chaotic sprawl into a tidy spreadsheet. Real moral debates are nuanced, tangled, and frustratingly complex. By filtering everything through the lens of harm, Gray risks missing the bigger picture. It’s morality on Instagram—polished, curated, and ultimately hollow.

Final Thoughts

Outraged! isn’t without merit. Gray is a masterful storyteller and a sharp thinker, but the book feels like a soufflé: all air, no substance. While it might offer a quick, engaging read for those looking to dip a toe into the outrage pool, anyone hoping for deeper insights will come away unsatisfied.

In the end, Gray delivers a sizzling trailer for a movie that never quite materialises. Fun to watch, but ultimately forgettable.

Outraged at Evil

I’ve recently picked up Kurt Gray’s Outraged!, and it’s got me thinking about metaphysics—more specifically, how the implausibility of metaphysical constructs like “evil” shapes our understanding of harm and morality. Gray’s central thesis—that everyone wants good outcomes for themselves and their society but focuses on different objects of harm—is intriguing, but it hinges on some deeply problematic assumptions.

Take, for instance, his argument that the vitriol between Democrats and Republicans is less about genuine malice and more about divergent harm perceptions. Democrats, he suggests, see harm in systemic inequalities, while Republicans focus on the erosion of traditional values. Both sides, in their own way, think they’re protecting what matters most. But here’s where it gets murky: how do we square this with the fact that these perceived harms often rest on fantastical and unfounded worldviews?

Audio: Podcast discussing this content.

Gray recounts a childhood experience in Sunday school where the question of what happens to unbaptised people was posed. The answer—Hell, of course—was delivered with the enthusiasm of a child parroting doctrine. This made Gray uncomfortable at the time, but as an adult, he reflects that his step-parents’ insistence on baptism wasn’t malicious. They genuinely believed they were saving him from eternal damnation. He argues their actions were driven by love, not malevolence.

On the surface, this seems like a generous interpretation. But dig deeper, and it’s clear how flawed it is. Hell doesn’t exist. Full stop. Actions based on an entirely imaginary premise—even well-intentioned ones—cannot escape scrutiny simply because the perpetrator’s heart was in the right place. Good intentions do not alchemize irrationality into moral virtue.

This same flawed logic permeates much of the political and moral discourse Gray explores. Consider anti-abortion activists, many of whom frame their cause in terms of protecting unborn lives. To them, abortion is the ultimate harm. But this stance is often rooted in religious metaphysics: a soul enters the body at conception, life begins immediately, and terminating a pregnancy is tantamount to murder. These claims aren’t grounded in observable reality, yet they drive real-world policies and harm. By focusing on “intent” and dismissing “malice,” Gray risks giving too much credit to a worldview that’s fundamentally untethered from evidence.

Which brings me to the notion of evil. Gray invokes it occasionally, but let’s be clear: evil doesn’t exist. At least, not as anything more than a metaphor. The word “evil” is a narrative shortcut—a way to denote something as “very, very, very, very bad,” as a precocious toddler might put it. It’s a relic of religious and metaphysical thinking, and it’s about as useful as Hell in explaining human behaviour.

Take the archetypal “evildoers” of history and society: Adolf Hitler, Jeffrey Dahmer, or (for some) Donald Trump. Are these people “evil”? No. Hitler was a power-hungry demagogue exploiting fear and economic despair. Dahmer was a deeply disturbed individual shaped by trauma and pathology. Trump is a narcissist thriving in a culture that rewards spectacle over substance. Labelling them as “evil” absolves us of the responsibility to understand them. Worse, it obscures the systemic conditions and societal failures that allowed them to act as they did.

Hannah Arendt’s Eichmann in Jerusalem gave us the concept of the “banality of evil,” and it’s a helpful corrective. Arendt’s point wasn’t that Eichmann was secretly a great guy but that his actions weren’t driven by some metaphysical malevolence. He was a cog in the machine, an unremarkable bureaucrat following orders. The atrocities he committed weren’t the result of extraordinary wickedness but of ordinary systems enabling ordinary people to do extraordinarily harmful things.

This insight cuts to the core of the issue. If “evil” is banal—if it’s nothing more than the mundane processes of harm scaled up—then it never really existed to begin with. It’s a construct, a tool of storytelling that obscures far more than it reveals.

So, where does this leave us? For one, we must abandon “evil” as an explanatory framework. It’s analytically lazy and morally dangerous. Instead, let’s focus on precision. Rather than labelling someone “evil,” we can describe their actions: harmful, exploitative, cruel. These words invite inquiry; “evil” slams the door shut.

By rejecting metaphysical constructs like evil, we gain a clearer, more grounded understanding of harm and morality. And perhaps that’s what Outraged! inadvertently teaches us: the real outrage isn’t malice; it’s the stubborn persistence of unexamined beliefs masquerading as moral clarity. If we can let go of those, maybe we can finally move forward.

Reflections on Chapter 6 of Harari’s Nexus

As I continue reading Chapter 6 of Yuval Noah Harari’s Nexus, I find myself wrestling with the masterful misdirection and rhetorical strategies he employs. A critical reader can discern the writing on the wall, but his choir of loyal readers likely consumes his narrative like red meat, uncritically savouring its surface-level appeal.

Social Media and Misinformation

Harari begins by addressing the role of social media in spreading disinformation and misinformation, particularly singling out Facebook. From there, he pivots to QAnon conspiracy theories. While these topics are undeniably relevant, Harari’s framing feels more like an indictment of the masses than a nuanced critique of the systemic factors enabling these phenomena.

The Voter Knows Best?

Harari leans heavily on platitudes like “the customer is always right” and “the voters know best.” These truisms may resonate with an indoctrinated audience but fail to hold up under scrutiny. The powers that be—whether governments or corporations—exploit this mentality, much like religious institutions exploit faith. Harari’s concern seems rooted in the fear that AI could outmanoeuvre these same masses, creating competition for global entities like the World Economic Forum (WEF), which, in his view, aims to remain unchallenged.

Taxation, Nexus, and the Future of Nation-States

Harari’s discussion of taxation and the nexus between power and information is intriguing, but it misses a larger point. Nation-states, as I see it, are becoming anachronisms, unable to defend themselves against the rise of technocratic forces. Taxation, once a cornerstone of state power, may soon be irrelevant as the global landscape shifts toward what I call Feudalism 2.0—a hierarchy dominated by transnational actors like the WEF.

Harari poorly frames a Uruguayan taxation dilemma, reducing it to a simplistic trade-off between information and power without addressing the broader implications. This shallow analysis leaves much to be desired.

Determinism and Misdirection

Next, Harari mischaracterises the philosophical concept of determinism, likely to mislead readers who aren’t well-versed in its nuances. He spins a cautionary tale based on this revised definition, which may serve his rhetorical goals but detracts from the intellectual integrity of his argument.

Setting the Stage

Harari ends the chapter with a statement about the importance of time and place in history, using it as a setup to provoke a sense of urgency. While this is a classic rhetorical device, it feels hollow without substantive backing.

Final Reflections

Many Modernists may embrace Harari’s narrative uncritically, but for me, the veneer is thin and riddled with holes. His analysis fails to engage with deeper critiques of power and governance, relying instead on cherry-picked anecdotes and oversimplified arguments. The chapter’s focus on social media, AI, and taxation could have been fertile ground for profound insights, but Harari instead opts for rhetorical flourish over rigorous examination. Still, I’ll press on and see what the next chapter holds.

The Fallibility of Nexus Chapter 4

My reaction to Yuval Noah Harari’s Nexus continues with Chapter 4, “Errors: The Fantasy of Infallibility.” Spoiler alert: Harari makes a critical misstep by leaning too heavily on the superiority of so-called self-correcting institutions over non-self-correcting ones.

Harari provides a solid account of how religious institutions and other dogmatic ideological constructs are slow to change, contrasting them with relatively faster self-correcting systems like science. Once again, he underscores the tension between order and truth—two critical dimensions in his worldview and cornerstones of Modernist beliefs.

Audio: Podcast conversation on this topic.

I agree with Harari that the lack of self-correction in institutions is problematic and that self-correction is better than the alternative. However, he overestimates the speed and efficacy of these self-correcting mechanisms. His argument presumes the existence of some accessible underlying truth, which, while an appealing notion, is not always so clear-cut. Harari cites examples of scientific corrections that took decades to emerge, giving the impression that, with enough time, everything will eventually self-correct. As the environment changes, corrections will naturally follow—albeit over long spans of time. Ultimately, Harari makes a case for human intervention without recognising it as an Achilles’ heel.

Harari’s Blind Spot

Harari largely overlooks the influence of money, power, and self-interest in these systems. His alignment with the World Economic Forum (WEF) suggests that, while he may acknowledge its fallibility, he still deems it “good enough” for governance. This reflects a paternalistic bias. Much like technologists who view technology as humanity’s salvation, Harari, as a Humanist, places faith in humans as the ultimate stewards of this task. However, his argument fails to adequately account for hubris, cognitive biases, and human deficits.

The Crux of the Problem

The core issue with Harari’s argument is that he appears to be chasing a local maximum by adopting a human-centric solution. His proposed solutions require not only human oversight but also the oversight of an anointed few—presumably his preferred “elite” humans—even if other solutions might ultimately prove superior. He is caught in the illusion of control. While Harari’s position on transhuman capabilities is unclear, I suspect he would steadfastly defend human cognitive superiority to the bitter end.

In essence, Harari’s vision of self-correcting systems is optimistic yet flawed. By failing to fully acknowledge the limits of human fallibility and the structural influences of power and self-interest, he leaves his argument vulnerable to critique. Ultimately, his belief in the self-correcting nature of human institutions reflects more faith than rigour.

Death by Tiger, Death by Document: Reflections on Nexus Chapter 3

As I continue to react to Harari’s Nexus, I can’t help but feel like a curmudgeon. Our worldviews diverge so starkly that my critique begins to feel like a petty grudge—as though I am inconsolable. Be that as it may, I’ll persist. Please excuse any revelatory ad hominems that may ensue.

Audio: Podcast of the page contents.

Harari is an unabashed Zionist and unapologetic nationalist. Unfortunately, his stories, centred on Israel and India, don’t resonate with me. This is fine—I’m sure many people outside the US are equally weary of hearing everything framed from an American perspective. Still, these narratives do little for me.

Patriotism and property are clearly important to Harari. As a Modernist, he subscribes to all the trappings of Modernist thought that I rail against. He appears aligned with the World Economic Forum, portraying it as a noble and beneficial bureaucracy, while viewing AI as an existential threat to its control. Harari’s worldview suggests there are objectively good and bad systems, and someone must oversee them. Naturally, he presents himself as possessing the discernment to judge which systems are beneficial or detrimental.

In this chapter, Harari recounts the cholera outbreak in London, crediting it with fostering a positive bureaucracy to ensure clean water sources. However, he conflates the tireless efforts of a single physician with the broader bureaucratic structure. He uses this example, alongside Modi’s Clean India initiative, to champion bureaucracy, even as he shares a personal anecdote highlighting its flaws. His rhetorical strategy seems aimed at cherry-picking positive aspects of bureaucracy, establishing a strawman to diminish its negatives, and then linking these with artificial intelligence. As an institutionalist, Harari even goes so far as to defend the “deep state.”

Earlier, Harari explained how communication evolved from Human → Human to Human → Stories. Now, he introduces Human → Document systems, connecting these to authority, the growing power of administrators, and the necessity of archives. He argues that our old stories have not adapted to address the complexities of the modern world. Here, he sets up religion as another bogeyman. As a fellow atheist, I don’t entirely disagree with him, but it’s clear he’s using religion as a metaphor to draw parallels with AI and intractable doctrines.

Harari juxtaposes “death by tiger” with “death by document,” suggesting the latter—the impersonal demise caused by bureaucracy—is harder to grapple with. His observation predates Luigi Mangione’s infamous response to UnitedHealthcare CEO Brian Thompson, which highlighted the devastating impact of administrative systems. Harari also briefly references obligate siblicide and sibling rivalry, which seem to segue into evolution and concepts of purity versus impurity.

Echoing Jonathan Haidt, Harari explores the dynamics of curiosity and disgust while reinforcing an “us versus them” narrative. He touches on the enduring challenges of India’s caste system, presenting yet another layer of complexity. Harari’s inclination towards elitism shines through, though he occasionally acknowledges the helplessness people face when confronting bureaucracy. He seems particularly perturbed by revolts in which the public destroys documents and debts—revealing what feels like a document fetish and an obsession with traceability.

While he lauds AI’s ability to locate documents and weave stories by connecting disparate content, Harari concludes the chapter with a segue into the next: a discussion of errors and holy books. Once again, he appears poised to draw parallels that serve to undermine AI. Despite my critiques, I’m ready to dive into the next chapter.