We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.
Box: The Gentle Assassin of Certainty
George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” That precise wording, however, does not appear in his 1976 paper Science and Statistics, where he wrote:
“Since all models are wrong, the scientist must be alert to what is importantly wrong.”
George Box
The “some are useful” flourish came later: Box himself supplied it in a 1979 paper, perhaps to sweeten the bitter pill. Nevertheless, Box deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. They are provisional guesses, finger-paintings smeared across the rough surface of the unknown.
Audio: NotebookLM podcast on this topic.
Lawson: The Arsonist Who Burned the Map
Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. In a recent lecture, Lawson declared:
“You don’t need truth to have a usable model.”
Hilary Lawson
Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.
Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.
The Case of the Celestial Spheres
Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.
Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.
Summary Table: Lawson’s View on Models and Truth
Role of Models: Tools and metaphors for intervention, not truth
Truth: Not required for usefulness
Refinement: Models are improved for practical effectiveness
Reality: Fundamentally open, never fully captured by models
Implication: Focus on utility and adaptability, not final truth
Conclusion
Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.
We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.
A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.
Audio: NotebookLM podcast discussing this topic.
Constraint Is Not Freedom
The ergonomic cage of compatibilist comfort
“You are not playing the piano. You are the piano, playing itself — then applauding.”
Compatibilists — those philosophical locksmiths determined to keep the myth of free will intact — love to say that constraint doesn’t contradict freedom. That a system can still be “free” so long as it is coherent, self-reflective, and capable of recursive evaluation.
In this view, freedom doesn’t require being uncaused — it only requires being causally integrated. You don’t need to be sovereign. You just need to be responsive.
“The pianist may not have built the piano — but she still plays it.”
It sounds lovely.
It’s also false.
You Are the Piano
This analogy fails for a simple reason: there is no pianist. No ghost in the gears. No homunculus seated behind the cortex, pulling levers and composing virtue. There is only the piano — complex, self-modulating, exquisitely tuned — but self-playing nonetheless.
The illusion of choice is merely the instrument responding to its state: to its internal wiring, environmental inputs, and the accumulated sediment of prior events. What feels like deliberation is often delay. What feels like freedom is often latency.
Recursive ≠ Free
Ah, but what about reflection? Don’t we revise ourselves over time?
We do. But that revision is itself conditioned. You didn’t choose the capacity to reflect. You didn’t choose your threshold for introspection. If you resist a bias, it’s because you were predisposed — by some cocktail of education, temperament, or trauma — to resist it.
A thermostat that updates its own algorithm is still a thermostat.
It doesn’t become “free” by being self-correcting. It becomes better adapted. Likewise, human introspection is just adaptive determinism wearing a philosophical hat.
Compatibilists smuggle in a quieter, defanged version of freedom: not the ability to do otherwise, but the ability to behave “like yourself.”
But this is freedom in retrospect, not in action. If all freedom means is “acting in accordance with one’s programming,” then Roombas have free will.
If we stretch the term that far, it breaks — not loudly, but with the sad elasticity of a word losing its shape.
TL;DR: The Pianist Was Always a Myth
You didn’t design your mental architecture.
You didn’t select your desires or dispositions.
You didn’t choose the you that chooses.
So no — you’re not playing the piano. You are the piano — reverberating, perhaps beautifully, to stimuli you didn’t summon and cannot evade.
Full Disclosure: I read the first volume of The Hunger Games just before the film was released. It was OK – certainly better than the film. This video came across my feed, and I skipped through it. Near the end, this geezer references how Katniss saves or recovers deteriorated morality. Me being me, I took issue with the very notion that a relative, if not subjective, concept could be recovered.
The OP asks if The Hunger Games are a classic. I’d argue that they are a categorical classic, like Harry Potter, within the category of YA fiction.
Audio: NotebookLM podcast discussing this topic.
The Hunger Games doesn’t depict the death of morality — it’s a masterclass in how to twist it into a circus act.
Video: YouTube video that spawned this topic.
Let us dispense with the hand-wringing. The Hunger Games is not a parable of moral decay. It is something far more chilling: a vivid portrait of moral engineering — the grotesque contortion of ethical instincts into instruments of domination and spectacle.
Those who bemoan the “decline of morality” in Panem have rather missed the point. There is no absence of morality in the Capitol — only a different version of it. A rebranded, corporatised, state-sanctioned morality, lacquered in lipstick and broadcast in 4K. It is not immorality that reigns, but a hyperactive ideological morality, designed to keep the masses docile and the elites draped in silk.
This is not moral entropy; it’s moral mutation.
Children are not slaughtered because people have forgotten right from wrong — they are slaughtered because a society has been trained to believe that this is what justice looks like. That blood is penance. That fear is unity. That watching it all unfold with a glass of champagne in hand is perfectly civilised behaviour.
This isn’t the death of morality. It’s a hostile takeover.
The Moral PR Machine
If morality is, as many of us suspect, relative — a cultural construct built on consensus, coercion, and convenience — then it can no more “decline” than fashion trends can rot. It simply shifts. One day, shoulder pads are in. The next, it’s child-on-child murder as prime-time entertainment.
In Panem, the moral compass has not vanished. It’s been forcibly recalibrated. Not by reason or revelation, but by propaganda and fear. The Games are moral theatre. A grim ritual, staged to remind the Districts who holds the reins, all under the nauseating guise of tradition, order, and justice.
The citizens of the Capitol aren’t monsters — they’re consumers. Trained to see horror as haute couture. To mistake power for virtue. To cheer while children are butchered, because that’s what everyone else is doing — and, crucially, because they’ve been taught it’s necessary. Necessary evils are the most seductive kind.
Katniss: Not a Saint, But a Saboteur
Enter Katniss Everdeen, not as the moral saviour but as the spanner in the machine. She doesn’t preach. She doesn’t have a grand theory of justice. What she has is visceral disgust — an animal revulsion at the machinery of the Games. Her rebellion is personal, tribal, and instinctive: protect her sister, survive, refuse to dance for their amusement.
She isn’t here to restore some lost golden age of decency. She’s here to tear down the current script and refuse to read her lines.
Her defiance is dangerous not because it’s moral in some abstract, universal sense — but because it disrupts the Capitol’s moral narrative. She refuses to be a pawn in their ethical pageant. She reclaims agency in a world that has commodified virtue and turned ethics into state theatre.
So, Has Morality Declined?
Only if you believe morality has a fixed address — some eternal North Star by which all human actions may be judged. But if, as postmodernity has rather insistently suggested, morality is a shifting social fiction — then Panem’s horror is not a fall from grace, but a recalibration of what counts as “grace” in the first place.
And that’s the real horror, isn’t it? Not that morality has collapsed — but that it still exists, and it likes what it sees.
Conclusion: The Real Hunger
The Hunger Games is not about a society starved of morality — it’s about a world gorging on it, cooked, seasoned, and served with a garnish of guiltless indulgence. It is moral appetite weaponised. Ethics as edict. Conscience as costume.
If you feel sickened by what you see in Panem, it’s not because morality has vanished.
Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.
Audio: NotebookLM podcast of this page content.
This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.
But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.
The Fictional Contract: Broken but Not Forgotten
Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.
The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.
Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.
But Then Real Life Said, “Hold My Beer.”
At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.
We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.
You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.
The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.
What happens to a psyche caught in this funhouse? Nothing good.
Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.
Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.
The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.
Culture in Crisis: A Symptom or the Disease?
This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.
We’ve become simultaneously more sceptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.
In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.
So what kind of society does this create?
One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.
The Future: A Choose-Your-Own-Delusion Adventure
So where does this all end?
There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.
We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.
It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”
That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.
In Conclusion, Reality Is Optional, But Reason Is Not
In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.
Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.
Lights down. Curtain call. Time to read the footnotes.
This is Chapter 5 of Erich Fromm’s The Sane Society. I’ve had this on my bookshelf for quite a while and wasn’t sure how a 70-year-old book could have so much relevance, but it does. Granted, some of it is irrelevant, a victim of the period in which it was written. This happens.
Sidebar: Whilst praising Freud, Fromm also calls him out, essentially accusing him of projecting his mummy-daddy issues onto the world at large. When I first encountered Freud’s work in the 1980s, it was already largely discredited. Some of what should have been wasn’t. Freud’s work as a psychoanalyst heavily overshadowed his other contributions. In fact, Freud offers some strong philosophical insights into society and civilisation. He’s not all about cigars and dreamscapes.
What strikes me about this chapter is the historical perspective it provides on capitalism. I’m an academic economist. I taught undergraduate economics for the better part of a decade. I’ve read (and recommend reading) Marx’s Capital firsthand.
Audio: NotebookLM Podcast commentary on this content.
Fromm adds some useful detail here. Firstly, he notes that the capitalism that marked the early days of the Industrial Revolution—the seventeenth and eighteenth centuries—differed from that of the nineteenth and twentieth centuries. The earlier period still had cultural and moral tethers that became frayed or lost in later periods. Without regurgitating the chapter, I cite some themes:
“this underselling practice is grown to such a shameful height, that particular persons publicly advertise that they undersell the rest of the trade.”
People were not very keen on price cutting as a competitive mechanism.
They also note the unfair competitive advantage of the monied elites, who could buy materials with cash rather than on credit and could thereby undercut the prices of competitors who had to account for interest or markups on credit.
Whilst in the twentieth century regulating undercutting is seen as protectionism, the earlier centuries had no problem defending merchants. We do have laws on the books that prevent dumping, but these are rarely enforced, and when they are, it’s a political rather than an economic statement. In practice, these are politics done in the name of economics, in the same manner as science was used as cover to implement policy during the COVID-19 debacle.
Montesquieu says “that machines which diminish the numbers of workers are ‘pernicious’.” This echoes current sentiments about robotics and artificial intelligence.
Nineteenth-century capitalism saw man as the measure of all things supplanted by capital as the new measure. This is the capitalism Marx rails against—profits over humanity and society, the pursuit of local maxima at the expense of global maxima. This is also where the goal of hypergrowth and growth for growth’s sake came into vogue, ushering us into the Modern Age and its Modern ideals—science, progress, order, and so on.
I won’t exhaust the chapter here, but for what it is, it’s a relatively light read. Whether I comment on later chapters depends on whether they engage me. Cheers.
I’ve got a confession to make: Science Fiction as a genre doesn’t resonate with me. Neither does Fantasy. I enjoy some fiction, but it seems that it’s primarily Literary Fiction – old-school classics like Dostoyevsky, Tolstoy, Nabokov, Kafka, Barthelme, and the like. Mostly, I prefer non-fiction.
I’ve just finished reading William Gibson’s Neuromancer, having read The Peripheral at the end of last year. To be fair, someone recommended Neal Stephenson’s Snow Crash, which is in the same genre – cyberpunk. I’d been advised that Snow Crash is better written, but I thought it might be best to start at the start of that genre.
[EDIT: After writing this post, I read the first two chapters of Snow Crash. The opening scenes of samurai swords had me thinking I needed to give up on this genre once and for all. But no, this was simple character building—fingers crossed. It quickly—and I do mean quickly—changed into a rapid montage dripping with ironic satire. I was in my element. I’ll take sci-fi wrapped in irony. It’s like stuffing a pill in cheese to trick your pet into taking medicine. I don’t want to jinx it, and I don’t know how Stephenson can control the pacing. I’ll let it unfold, and I hope it lands in a happy place. Now, I feel obliged to end with a motivational message on not giving up.]
These writers have good ideas. It often sounds appealing when someone tells me the plot summary, but the details bore me to tears. When I read reviews of these books, I frequently hear how immersive they are, but to me, they are cluttered and chockablock with minutiae. I find myself prodding, “Just get to the point.” But there has to be more than this. Short stories may fare better. I liked Ursula K. Le Guin’s The Ones Who Walk Away from Omelas, but that was more for its philosophical, anti-utilitarian perspective than for the story itself.
It’s not as if Dostoyevsky doesn’t circumlocute and pontificate, but it’s somehow different. I want to like it. I want to read it – first-hand, not just a summary, so I can feel that I’ve engaged with the material.
Over the years, I’ve been consoled by fans of the genre, who say, “I understand. What you need to read is” [fill in the blank]. I read Ender’s Game on this advice.
To be fair, Sci-Fi movies and television don’t resonate with me either. Star Wars? Nope. Star Trek? Nope. Firefly? No, again.
What people find amazing, I find trite. Often, there is some embedded Modernist morality that some view as profound. I roll my eyes. I cringe thinking of old Star Trek episodes about what makes humans so special.
I don’t tend to find movies or television very interesting in general. I’ve never owned a television. My partners always do. “But you watch streaming content,” you say, and you’d be correct. But I watch it on my own time and take a chance, if only to remain connected to contemporary trends.
My last engagement was Arcane on Netflix. I found Season One well done and entertaining, but I’m not sure Anime qualifies as Sci-Fi. I caught The Peripheral on Amazon a couple of months ago, which led me to the book, but they turned out to be different stories, though they were set in the same universe with (generally) the same characters.
What a relief. After trudging through a couple of so-called ‘popular’ books, it’s refreshing to read something that actually reflects the complexity of reality.
The first essay in Slavoj Žižek’s Against Progress is titled Progress and Its Vicissitudes. It opens with a nod to the opening scene of The Prestige, directed by Christopher Nolan. But Žižek, as always, takes it further. He unearths something far more sinister in the magician’s sleight of hand—something unsettlingly perceptible to the young girl who witnesses the trick.
Life isn’t always what it seems on the surface. It isn’t as neat or digestible as our perceptions make it out to be. Žižek embarks on a scathing critique of the concept of progress, dismantling its conventional interpretations and exposing the often-overlooked consequences that lurk beneath its glossy exterior. He unravels the paradoxes and failures inherent in the notion of progress, urging readers to rethink their blind faith in the idea of a linear march towards a utopian future.
Defining ‘progress,’ he argues, is akin to laying claim to the future. But whose future? And at what cost? He interrogates the competing visions that shape human possibility, questioning whether, in the face of our cascading ecological, social, and political crises, things can actually improve—and what ‘better’ even means. He skewers various ideologies—neoliberalism, populism, and the self-improvement industrial complex—for their roles in manipulating and distorting the very concept of progress.
From a postmodernist perspective, Žižek’s analysis aligns with the scepticism towards grand narratives and universal truths that define postmodern thought. He deconstructs the monolithic idea of progress, revealing it as a construct that conveniently conceals underlying power structures and exclusions. By exposing the ‘squashed dead birds’—the inevitable collateral damage of progress—he underscores just how arbitrary and manufactured our notions of advancement really are.
Moreover, Žižek’s critique echoes the postmodernist fixation on desire, denial, and disavowal. He examines how different visions of progress systematically exclude or sacrifice certain elements and how these dynamics manifest across cultural phenomena, from Hollywood blockbusters to decolonisation movements. His analysis reinforces the postmodernist conviction that meanings are never fixed—they are fluid, contested, and often riddled with contradiction.
In the end, Against Progress is not just a critique—it’s an invitation. An invitation to abandon the comforting illusion of an inevitable march towards something better and to face the messy, contingent reality of human existence head-on.
This post explains how to interpret the ternary plot chart’s visualisation. The ternary chart on the survey results page will render something like this. This is an admin page with additional functionality, but it’s similar enough. The blue dot represents the average of all responses. The star represents where I guessed the average would land: mostly modern with some residual premodernity and a touch of postmodernity.
Under the title in the header is a textual assessment of the visualisation. In this case, the response illustrates someone moderately modern with postmodern influences. Although this person also has some premodern tendencies, they are relatively insignificant to the context.
The three possible worldviews are at the vertices (the corners) of the triangle. Each side is a scale progressing from 0% to 100%—100% coincident with the label. For example, the bottom side runs from 0 on the left to 100 on the right, which would indicate a score of 100 per cent Premodern, which the output deems Pure Premodern.
Notice that each vertex has green and yellow shading that serves as visual aids representing the strength of the relationship to the corner. Green is strong, and yellow is moderate. The white section outlined by an interior triangle with a red border is decidedly mixed, showing no strong inclination to any of the extremes.
In the example above, the red plot point illustrates a response (as shown below the chart) that is 20.7% Premodern, 52.1% Modern, and 27.2% Postmodern. These numbers should always sum to 100, though there may be some drift due to rounding. The star, again, marks where I thought the average response would land. Follow the tickmarks on each side, and you’ll notice they correspond with the star’s position as the 3-tuple (20, 70, 10).
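For readers curious how a chart like this maps three percentages onto a flat plane, here is a minimal sketch in Python. The function name, the vertex orientation, and the barycentric formula are my own illustration, not the site’s actual implementation; the real chart may orient its vertices differently.

```python
import math

def ternary_to_xy(pre, mod, post):
    """Map a (premodern, modern, postmodern) composition to 2D
    Cartesian coordinates via barycentric interpolation.

    Assumed vertex layout (not necessarily the site's):
      premodern  -> (0, 0)
      modern     -> (1, 0)
      postmodern -> (0.5, sqrt(3)/2)
    Inputs may be percentages or fractions; they are normalised first,
    which is also why rounded percentages that drift slightly from a
    100 total still land in the right spot.
    """
    total = pre + mod + post
    a, b, c = pre / total, mod / total, post / total
    # Weighted average of the three vertex positions.
    x = b + 0.5 * c
    y = (math.sqrt(3) / 2) * c
    return (x, y)

# The response from the example: 20.7% / 52.1% / 27.2%
print(ternary_to_xy(20.7, 52.1, 27.2))
```

A pure-premodern response lands exactly on its vertex, a pure-modern one on the opposite bottom corner, and an evenly split response at the triangle’s centroid, which is why the central white region reads as “decidedly mixed”.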
In the future, I expect to render a view that plots the average survey response as a reference.
Below this chart is an expository account of the response choices. You can render this content as a PDF for your personal archive.
Final Word
If you have any questions or suggestions related to this topic, please feel free to leave them in the comments below.
Kurt Gray’s Outraged! is a fascinating romp through the minefield of moral psychology and outrage culture. It’s snappy, it’s clever, and it’s… shallow. Whilst Gray positions himself as the maestro conducting the cacophony of modern outrage, his approach has left me wondering if the symphony is little more than noise. Here’s why:
Audio: Podcast discussion on this review content.
Oversimplification of Moral Psychology
Gray’s central thesis that “all morality stems from perceptions of harm and threat” is bold, sure, but also reductive. Morality isn’t just a harm detector. It’s a rich tapestry of loyalty, authority, sanctity, and liberty—concepts Gray conveniently glosses over. His approach feels like reducing a fine Bordeaux to “it’s just fermented grapes.” Sure, technically correct, but where’s the depth?
The Age of Competitive Victimhood
By focusing so heavily on harm perception, Gray risks fuelling the very outrage culture he’s critiquing. Welcome to the Hunger Games of victimhood, where everyone races to be crowned the most aggrieved. Instead of deflating this dynamic, Gray’s analysis may inadvertently add more oxygen to the fire.
Lack of Diverse Perspectives
Gray’s attempt to bridge divides is commendable but flawed. Critics point out that he gives more airtime to controversial right-wing figures than the left-leaning audience he’s presumably trying to engage. It’s like building half a bridge and wondering why no one’s crossing. If you alienate half your audience, how exactly are you fostering dialogue?
Contradictory Messaging
The book also suffers from a classic case of ideological whiplash. Gray tells us not to get offended by microaggressions, then argues that offensive content needs more careful handling. Which is it, Kurt? Either you’re driving the “sticks and stones” bus, or you’re preaching kid-glove diplomacy. You can’t have it both ways.
Limited Practical Solutions
Like many pop psychology books, Outraged! excels at diagnosing problems but falters when offering solutions. Gray’s suggestion to use personal stories of harm to bridge divides is charmingly naive. Sure, storytelling might work for interpersonal tiffs, but try applying that to global crises like climate change or systemic inequality. Good luck narrating your way to a greener planet.
Oversimplifying Complex Issues
Gray’s harm-based morality seems like an attempt to cram human behaviour’s messy, chaotic sprawl into a tidy spreadsheet. Real moral debates are nuanced, tangled, and frustratingly complex. By filtering everything through the lens of harm, Gray risks missing the bigger picture. It’s morality on Instagram—polished, curated, and ultimately hollow.
As I continue reading Chapter 6 of Yuval Noah Harari’s Nexus, I find myself wrestling with the masterful misdirection and rhetorical strategies he employs. A critical reader can discern the writing on the wall, but his choir of loyal readers likely consumes his narrative like red meat, uncritically savouring its surface-level appeal.
Social Media and Misinformation
Harari begins by addressing the role of social media in spreading disinformation and misinformation, particularly singling out Facebook. From there, he pivots to Q-Anon conspiracy theories. While these topics are undeniably relevant, Harari’s framing feels more like an indictment of the masses rather than a nuanced critique of the systemic factors enabling these phenomena.
The Voter Knows Best?
Harari leans heavily on platitudes like “the customer is always right” and “the voters know best.” These truisms may resonate with an indoctrinated audience but fail to hold up under scrutiny. The powers that be—whether governments or corporations—exploit this mentality, much like religious institutions exploit faith. Harari’s concern seems rooted in the fear that AI could outmanoeuvre these same masses, creating competition for global entities like the World Economic Forum (WEF), which, in his view, aims to remain unchallenged.
Taxation, Nexus, and the Future of Nation-States
Harari’s discussion of taxation and the nexus between power and information is intriguing, but it misses a larger point. Nation-states, as I see it, are becoming anachronisms, unable to defend themselves against the rise of technocratic forces. Taxation, once a cornerstone of state power, may soon be irrelevant as the global landscape shifts toward what I call Feudalism 2.0—a hierarchy dominated by transnational actors like the WEF.
Harari poorly frames a Uruguayan taxation dilemma, reducing it to a simplistic trade-off between information and power without addressing the broader implications. This shallow analysis leaves much to be desired.
Determinism and Misdirection
Next, Harari mischaracterises the philosophical concept of determinism, likely to mislead readers who aren’t well-versed in its nuances. He spins a cautionary tale based on this revised definition, which may serve his rhetorical goals but detracts from the intellectual integrity of his argument.
Setting the Stage
Harari ends the chapter with a statement about the importance of time and place in history, using it as a setup to provoke a sense of urgency. While this is a classic rhetorical device, it feels hollow without substantive backing.
Final Reflections
Many Modernists may embrace Harari’s narrative uncritically, but for me, the veneer is thin and riddled with holes. His analysis fails to engage with more profound critiques of power and governance, relying instead on cherry-picked anecdotes and oversimplified arguments. The chapter’s focus on social media, AI, and taxation could have been fertile ground for profound insights, but Harari instead opts for rhetorical flourish over rigorous examination. Still, I’ll press on and see what the next chapter holds.