Although the rating was not bad – 3.8 as of this writing – the reviews told a different story.
The first half of the book was worthwhile, as it was a thought-provoking description of how early societies may have developed social systems to encourage group cooperation and discourage harmful behaviors. The second half, however, was a rambling, preachy, and judgemental dissertation on postmodernism using the all-too-familiar tropes of social justice dialog. Overall it was poorly written, though some of that may be due to translation. There is too little useful material here to justify its purchase.
Firstly, this version is translated from a German edition, and some readers feel that structure and communicative value were lost in translation. In any case, he’s accused of being verbose and circumlocutory.
Amazon describes this book as “a sweeping history,” but the only thing sweeping about it was what I wanted to do with it under the nearest rug.
Secondly, it may be somewhat derivative of Nietzsche’s work on the same topic.
The book is rambling and incoherent, the author makes wild assertions about things that happened before recorded history, leaps from one train of thought to another, poo poos the ideas of religion but frequently quotes Nietzsche as the infallible source of knowledge…
Tonyo (reviewer)
In any case, the topic interests me, but I don’t see myself reading it any time soon.
—or—How the Invisible Hand Became a Throttling Grip on the Throat of the Biosphere
As many frequent visitors know, I am a recovering economist. I tend to view economics through a philosophical lens. Here, I consider the daft nonsense of Pareto optimality.
There is a priesthood in modern economics—pious in its equations, devout in its dispassion—that gathers daily to prostrate before the altar of Pareto. Here, in this sanctum of spreadsheet mysticism, it is dogma that an outcome is “optimal” so long as no one can be made better off without making someone else worse off. Never mind if half the world begins in a ditch and the other half in a penthouse jacuzzi. So long as no one’s jacuzzi is repossessed, the system is just. Hallelujah.
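For the uninitiated, the dogma fits in a few lines of Python. This is a toy sketch with invented utility numbers—nobody’s actual model—but it shows how a grotesquely unequal allocation sails through the Pareto test:

```python
def pareto_improvement(old, new):
    """True if `new` leaves no one worse off than `old`
    and makes at least one person strictly better off."""
    return (all(n >= o for o, n in zip(old, new))
            and any(n > o for o, n in zip(old, new)))

def pareto_optimal(allocation, alternatives):
    """An allocation is Pareto-optimal if no alternative
    on the table is a Pareto improvement over it."""
    return not any(pareto_improvement(allocation, alt) for alt in alternatives)

# Two people: one in the ditch (utility 1), one in the jacuzzi (utility 99).
ditch_and_jacuzzi = (1, 99)
# Redistribution helps the first but costs the second, so it is NOT a
# Pareto improvement -- and the status quo therefore counts as "optimal".
redistribution = (50, 50)
print(pareto_improvement(ditch_and_jacuzzi, redistribution))  # False
print(pareto_optimal(ditch_and_jacuzzi, [redistribution]))    # True
```

The criterion is silent about fairness by construction: any change that costs the jacuzzi owner a single util is disqualified, however many people it lifts out of the ditch.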
This cult of cleanliness, cloaked in the language of “efficiency,” performs a marvellous sleight of hand: it transforms systemic injustice into mathematical neutrality. The child working in the lithium mines of the Congo is not “harmed”—she simply doesn’t exist in the model. Her labour is an externality. Her future, an asterisk. Her biosphere, a rounding error in the grand pursuit of equilibrium.
Let us be clear: this is not science. This is not even ideology. It is theology—an abstract faith-based system garlanded with numbers. And like all good religions, it guards its axioms with fire and brimstone. Question the model? Heretic. Suggest the biosphere might matter? Luddite. Propose redistribution? Marxist. There is no room in this holy order for nuance. Only graphs and gospel.
The rot runs deep. William Stanley Jevons—yes, that Jevons, patron saint of unintended consequences—warned us as early as 1865 that improvements in efficiency could increase, not reduce, resource consumption. But his paradox, like Cassandra’s prophecy, was fated to be ignored. Instead, we built a civilisation on the back of the very logic he warned would destroy it.
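Jevons’s arithmetic is embarrassingly simple. Here is a toy constant-elasticity sketch—the numbers are illustrative, not empirical—showing that when demand for the underlying service is price-elastic (elasticity above 1), a gain in efficiency raises total fuel consumption:

```python
def fuel_used(efficiency, fuel_price=1.0, elasticity=1.5, k=100.0):
    """Toy constant-elasticity model of the Jevons paradox.
    Cost per unit of useful service falls as efficiency rises;
    demand for the service responds with the given price elasticity."""
    cost_per_service = fuel_price / efficiency
    service_demanded = k * cost_per_service ** -elasticity
    return service_demanded / efficiency  # fuel actually burned

# Doubling efficiency with elastic demand (1.5) INCREASES total fuel use:
print(fuel_used(1.0))  # 100.0
print(fuel_used(2.0))  # ~141.4 -- more fuel, not less
```

With inelastic demand (elasticity below 1) the intuitive result holds and fuel use falls; Jevons’s point was that for coal in 1865, it didn’t.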
Then came Simon Kuznets, who—bless his empirically addled soul—crafted a curve that seemed to promise that inequality would fix itself if we just waited politely. We called it the Kuznets Curve and waved it about like a talisman against the ravages of industrial capitalism, ignoring the empirical wreckage that piled up beneath it like bones in a trench.
Meanwhile, Pareto himself, that nobleman of social Darwinism, famously calculated that 80% of Italy’s land was owned by 20% of its people—and rather than challenge this grotesque asymmetry, he chose to marvel at its elegance. Economics took this insight and said: “Yes, more of this, please.”
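The 80/20 split is no accident of Italian geography; it falls out of the Pareto distribution itself. A quick sketch, using the standard closed form for the wealth share of the top fraction under a Pareto distribution (a shape parameter of roughly 1.16 is the one that yields 80/20):

```python
def top_share(top_fraction, alpha):
    """Share of total wealth held by the richest `top_fraction`
    of the population under a Pareto distribution with shape
    parameter `alpha` (closed form q ** (1 - 1/alpha), alpha > 1)."""
    return top_fraction ** (1 - 1 / alpha)

# Pareto's Italy: shape ~1.16 puts ~80% of the land in the top 20% of hands.
print(round(top_share(0.20, 1.16), 2))  # ~0.8
```

Steeper shapes flatten the curve: at alpha = 2 the top 20% hold only about 45%. The “elegance” Pareto marvelled at is just a one-parameter dial for how lopsided the pile gets.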
And so the model persisted—narrow, bloodless, and exquisitely ill-suited to the world it presumed to explain. The economy, it turns out, is not a closed system of rational actors optimising utility. It is a planetary-scale thermodynamic engine fuelled by fossil sunlight, pumping entropy into the biosphere faster than it can absorb. But don’t expect to find that on the syllabus.
Mainstream economics has become a tragic farce, mouthing the language of optimisation while presiding over cascading system failure. Climate change? Not in the model. Biodiversity collapse? A regrettable externality. Intergenerational theft? Discounted at 3% annually.
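That 3% discount rate does remarkable work. A two-line sketch of the arithmetic, with illustrative figures rather than any actual climate model:

```python
def present_value(damage, years, rate=0.03):
    """Discounted present value of a future damage at a constant annual rate."""
    return damage / (1 + rate) ** years

# $1 trillion of damage a century from now, discounted at 3% a year,
# shrinks to roughly $52 billion on today's balance sheet:
print(round(present_value(1_000_000_000_000, 100) / 1e9))  # ~52
```

Compound discounting erases about 95% of any harm a hundred years out, which is how intergenerational theft becomes a line item.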
We are witnessing a slow-motion suicide cloaked in the rhetoric of balance sheets. The Earth is on fire, and the economists are debating interest rates.
What we need is not reform, but exorcism. Burn the models. Salt the axioms. Replace this ossified pseudoscience with something fit for a living world—ecological economics, systems theory, post-growth thinking, anything with the courage to name what this discipline has long ignored: that there are limits, and we are smashing into them at speed.
History will not be kind to this priesthood of polite annihilation. Nor should it be.
We live in an age of two-dimensional minds trying to navigate a three-dimensional world—and doing it with all the grace of a toddler wielding a chainsaw. For over a generation, the US and UK have been polarised, Balkanised, and lobotomised by the Great Red vs. Blue Punch & Judy Show. Left, right. Us, them. Hero, villain. There is no nuance, no gradient, no middle ground. Just a glorious reduction of civilisation’s complexities into primary-coloured football teams for emotionally underdeveloped adults.
This is not politics. This is pantomime.
And the real tragedy? The world isn’t even two-dimensional. It’s not even three. Try thinking of it as a cube—six sides at least, all pressing in at once, depending on your angle. Culture, history, class, geography, education, trauma, temperament, aesthetic preference—each a face of the cube. But tell that to the modern partisan and they’ll squint at you like you’ve just tried to explain jazz to a toaster.
No, to them, the world is flat. A line. A tug-of-war between two equally blinkered tribes dragging the rest of us into the pit. Pick a side or shut up, they shriek. If you’re not with us, you’re against us. If you don’t chant the correct slogans or signal the proper virtues, you’re obviously a heretic, a bigot, or—worst of all—centrist scum. They don’t want conversation; they want confirmation. Preferably in 280 characters or less.
Try introducing complexity and you’ll be accused of bothsidesism, moral cowardice, or—God forbid—thinking. It’s like throwing a Rubik’s cube into a toddler fight club.
This binary reductionism doesn’t stop at politics. Even gender—possibly the most nuanced and intimate aspect of human identity—has been flattened into a tug-of-war between biological essentialists and gender abolitionists, both sides wielding hashtags like holy relics. The irony? These same culture warriors still manage to marvel at rainbows, utterly unaware that their own worldview only permits two colours. How do they even process a traffic light?
The cult of the binary isn’t just intellectually bankrupt—it’s a threat to civilisation. We didn’t crawl out of the primordial ooze, develop language, invent calculus, and split the atom just so Karen and Kev from Facebook could reduce geopolitics to an episode of EastEnders. The world is messy. People are contradictory. Context matters. But nuance doesn’t trend.
We’re governed by algorithms, policed by outrage, and divided by design. The machinery of mass culture rewards the loudest, angriest, most wilfully ignorant voices, and we feed the beast like dopamine-addled pigeons pecking a lever. The cube has been flattened into a cartoon. And yet we wonder why everything feels broken.
So here’s a radical idea: what if we stopped flattening the world into a battlefield and started mapping it like a landscape? What if we admitted that not every problem has two sides—some have two hundred? What if we taught critical thinking instead of tribal loyalty? What if we made complexity sexy again?
But I digress. That might require imagination. And we’ve outsourced that to TikTok influencers and AI chatbots.
Meanwhile, the cube spins. And the rest of us try to hold on.
I’m not a fan of IQ as a general metric. Let us be done with the cult of the clever. Let us drag the IQ score from its pedestal, strip it of its statistical robes, and parade it through the streets of history where it belongs—next to phrenology, eugenics, and other well-meaning pseudosciences once weaponised by men in waistcoats.
The so-called Intelligence Industrial Complex—an infernal alliance of psychologists, bureaucrats, and HR departments—has for too long dictated the terms of thought. It has pretended to measure the immeasurable. It has sold us a fiction in numerical drag: that human intelligence can be distilled, packaged, and ranked.
What it measures, it defines. What it defines, it controls.
IQ is not intelligence. It is cognitive GDP: a snapshot of what your brain can do under fluorescent lights with a timer running. It rewards abstraction, not understanding; speed, not depth; pattern recognition, not wisdom. It’s a test of how well you’ve been conditioned to think like the test-makers.
This is not to say IQ has no value. Of course it does—within its own ecosystem of schools, bureaucracies, and technocracies. But let us not mistake the ruler for the terrain. Let us not map the entire landscape of human potential using a single colonial compass.
True intelligence is not a number. It is a spectrum of situated knowings, a polyphony of minds tuned to different frequencies. The Inuit hunter tracking a seal through silence. The griot remembering centuries of lineage. The autistic coder intuiting an algorithm in dreamtime. The grandmother sensing a lie with her bones. IQ cannot touch these.
To speak of intelligence as if it belonged to a single theory is to mistake a monoculture for a forest. Let us burn the monoculture. Let us plant a thousand new seeds.
A Comparative Vivisection of Intelligence Theories
| Theory / Model | Core Premise | Strengths | Blind Spots / Critiques | Cultural Framing |
| --- | --- | --- | --- | --- |
| IQ (Psychometric g) | Intelligence is a single, general cognitive ability measurable via testing | Predicts academic & job performance; standardised | Skewed toward Western logic, ignores context, devalues non-abstract intelligences | Western, industrial, meritocratic |
| Multiple Intelligences (Gardner) | Intelligence is plural: linguistic, spatial, musical, bodily, etc. | Recognises diversity; challenges IQ monopoly | Still individualistic; categories often vague; Western in formulation | Liberal Western pluralism |
| Triarchic Theory (Sternberg) | Intelligence = analytical + creative + practical | Includes adaptability, real-world success | Still performance-focused; weak empirical grounding | Western managerial |
| Emotional Intelligence (Goleman) | Intelligence includes emotion regulation and interpersonal skill | Useful in leadership & education contexts | Commodified into corporate toolkits; leans self-help | Western therapeutic |
| Socio-Cultural (Vygotsky) | Intelligence develops through social interaction and cultural mediation | Recognises developmental context and culture | Less attention to adult or cross-cultural intelligence | Soviet / constructivist |
| Distributed Cognition / Extended Mind | Intelligence is distributed across people, tools, systems | Breaks skull-bound model; real-world cognition | Hard to measure; difficult to institutionalise | Post-cognitive, systems-based |
| Indigenous Epistemologies | Intelligence is relational, ecological, spiritual, embodied, ancestral | Holistic; grounded in lived experience | Marginalised by academia; often untranslatable into standard metrics | Global South / decolonial |
Conclusion: Beyond the Monoculture of Mind
If we want a more encompassing theory of intelligence, we must stop looking for a single theory. We must accept plurality—not as a nod to diversity, but as an ontological truth.
Intelligence is not a fixed entity to be bottled and graded. It is a living, breathing phenomenon: relational, situated, contextual, historical, ecological, and cultural.
And no test devised in a Princeton psych lab will ever tell you how to walk through a forest without being seen, how to tell when rain is coming by smell alone, or how to speak across generations through story.
It’s time we told the Intelligence Industrial Complex: your number’s up.
Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.
Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?
Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
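For the record, all this percentile talk rests on the test-makers’ convention that IQ is normally distributed with mean 100 and standard deviation 15—a convention, not a law of nature. A quick sketch of the arithmetic:

```python
import math

def iq_percentile(score, mean=100.0, sd=15.0):
    """Fraction of the population scoring below `score`, assuming
    IQ is normally distributed (mean 100, standard deviation 15)."""
    return 0.5 * (1 + math.erf((score - mean) / (sd * math.sqrt(2))))

# A national average in the mid-90s sits below the global benchmark of 100:
print(round(iq_percentile(95), 2))   # ~0.37 of the distribution falls below 95
# The "middle band" discussed below, roughly the 70th to 95th percentiles,
# corresponds to IQ scores of about 108 to 125:
print(round(iq_percentile(108), 2))  # ~0.70
print(round(iq_percentile(125), 2))  # ~0.95
```

The same function inverted is how one would decide at what score AI has “surpassed” a given slice of the human herd.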
Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.
But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.
So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?
Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.
The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.
That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.
Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.
This phrase, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.
But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.
The Fictional Contract: Broken but Not Forgotten
Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.
The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.
Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.
But Then Real Life Said, “Hold My Beer.”
At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.
We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.
You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.
The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.
What happens to a psyche caught in this funhouse? Nothing good.
Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.
Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.
The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.
Culture in Crisis: A Symptom or the Disease?
This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.
We’ve become simultaneously more sceptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.
In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.
So what kind of society does this create?
One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.
The Future: A Choose-Your-Own-Delusion Adventure
So where does this all end?
There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.
We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.
It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”
That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.
In Conclusion, Reality Is Optional, But Reason Is Not
In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.
Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.
Lights down. Curtain call. Time to read the footnotes.
Reading Leviathan has me thinking. Nothing new, mind you—just reinvigorated. Hobbes, bless his scowling soul, is the consummate pessimist. People, in his view, are untrustworthy sods, ready to stab you in the back at the first flicker of opportunity. He doesn’t believe in community. He believes in containment.
And to be fair, he’s not entirely wrong. He captures a certain cohort with uncanny accuracy. You know the type. Type-A™ personalities: the Donald Trumps, Elon Musks, Adolf Hitlers, Shahs of Iran, and that guy in marketing who always schedules meetings for 8am. The ones who salivate at the mere whiff of power, who’d sell their grandmothers for a press release and call it vision.
This, in short, is why we can’t have nice things.
Now, I’ll concede that most people want more than they have. Economics depends on this assumption like religion depends on guilt. But not everyone is driven by an insatiable lust for money, dominance, or legacy. That, my friends, is not ambition. It is pathology—a malignant, metastasising hunger that infects the likes of Trump, Musk, Bezos, Sunak, and their ilk. The hunger to rule, not just participate.
The trouble is, the majority of the world’s population are idiots—not technically, but metaphorically. Soft-headed. Overstimulated. Easily distracted by flags, influencers, and “free shipping.” And there are flavours of idiots. Musk is a lucky idiot. Trump is a useful idiot. Most are a hair’s breadth from being cannon fodder.
And then we wonder why everything’s broken.
The world could be configured differently. It could consist of autonomous collectives, each minding its own business, each respecting the other’s boundaries like courteous houseplants. But this equilibrium is shattered—always shattered—by the predatory few. The outliers. The sharks in suits. The ones who mistake governance for domination and diplomacy for personal branding.
So we build mechanisms to defend ourselves—laws, institutions, surveillance, standing armies—but these mechanisms inevitably attract the same types we were trying to ward off. Power-hungry cretins in different hats. The protectors, it turns out, are rarely benevolent dictators. They are predacious politicos, wearing virtue like a costume, mouthing justice while tightening the screws.
And here lies the eternal problem. Not technology. Not ideology. Not even ignorance.
But the recurring infestation of pathological ambition in a species otherwise just trying to get on with its day.
I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like had he filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had had to pass through a legal review before publication, sapped of vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*
In any case, it got me thinking of free will and, more to the point, of will itself.
A Brief History of Humanity’s Favourite Metaphysical Scapegoat
By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.
So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?
Plato: Soul, Reason, and That Poor Horse
In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).
No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.
Aristotle: Purpose Without Pathos
Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.
Augustine: Sin, Suffering, and That Eternal No
Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.
From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.
Cheers.
Medievals: God’s Will or Yours, Pick One
The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.
From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.
Hobbes: Appetite with Delusions of Grandeur
Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.
Kant: The Will Gets a Makeover
Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the “good will”, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.
It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.
Schopenhauer: Cosmic Will, Cosmic Joke
And then the mood turns. Schopenhauer, world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.
There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.
Nietzsche: Transvaluation and the Will to Shout Loudest
Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.
Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.
Today: Free Will and the Neuroscientific Hangover
And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.
We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?
Conclusion: Where There’s a Will, There’s a Mess
From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.
Despite centuries of philosophical handwringing, what it has never been is settled.
So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.
Welcome to the labyrinth. Bring snacks.
* The solitary, poor, nasty, brutish, and short quote is forthcoming. Filthy animals is a nod to Home Alone.
David Brooks and the Hollowing Out of Conservatism
David Brooks is the quintessential old-school Conservative—the kind who once upheld a semblance of ideological coherence. He belongs to the pre-Reagan-Thatcher vintage, a time when Conservatism at least had the decency to argue from principles rather than blind tribalism. We could debate these people in good faith. Those days are gone. The current incarnation of Conservatism contains only homoeopathic traces of its Classical™ predecessor—diluted beyond recognition.
The Degeneration of Conservatism
The rot set in with Reagan, who caught it from Thatcher. Greed and selfishness were laundered into virtues, repackaged as “individual responsibility,” and the party’s intellectual ballast began to erode. By the time Bush II’s administration rolled in, Neo-Conservatism had replaced any lingering Burkean ethos, and by Trump’s tenure, even the pretence of ideology was gone. Conservatism-in-Name-Only—whatever Trump’s brand of reactionary nihilism was—swallowed the party whole. Do they even call themselves Conservatives anymore, or has that ship sailed along with basic literacy?
To be fair, this didn’t go unnoticed. Plenty of old-school Republicans recoiled in horror when Trump became their figurehead. Before the 2016 election, conservative pundits could barely contain their disdain for his incompetence, lack of moral compass, and general buffoonery. And yet, once they realised he was the party’s golden goose, they clambered aboard the Trump Train with the enthusiasm of lottery winners at a payday loan office. His staunchest critics became his most obsequious apologists. What does this tell us about their value system? Spoiler: nothing good.
Brooks’ Lament
Which brings us back to Brooks, who now bemoans the death of Conservative values. On this, we agree. Where we part ways is on whether those values were worth saving. Say you’re boarding a train from New York to Los Angeles. Conservatism might argue that a Miami-bound train is still a train, so what’s the problem? It’s the same vehicle, just going somewhere else. Except, of course, Conservatism has always insisted on the slow train over the fast train—because urgency is unseemly, and progress must be rationed.
If I’m an affluent middle-classer, I might prefer Conservatism’s careful incrementalism—it keeps my apple cart stable. Admirable, if you enjoy tunnel vision. Progressives, by contrast, recognise that some people don’t even have apple carts. Some are starving while others hoard orchards. To the Conservative, the poor just aren’t trying hard enough. To the Progressive, the system is broken, and the playing field needs a serious re-levelling. Even when Conservatives acknowledge inequality, their instinct is to tiptoe toward justice rather than risk disrupting their own affluence.
The Fallacy of Objective Reality
Leaving politics for philosophy, Brooks predictably rails against Postmodernism, decrying relativism in favour of good old-fashioned Modernist “reality.” He’s horrified by subjectivism, as though personal interpretation weren’t the foundation of all human experience. Like Jordan Peterson, he believes his subjective truth is the objective truth. And like Peterson, he takes umbrage at anyone pointing out otherwise. It feels so absolute to them that they mistake their own convictions for universal constants.
As a subjectivist, I accept that reality is socially mediated. We interpret truth claims based on cognitive biases, cultural conditioning, and personal experience. Even when we strive for objectivity, we do so through subjective lenses. Brooks’ Modernist nostalgia is touching but delusional—akin to demanding we all agree on a single flavour of ice cream.
The Existential Problem
And so, I find myself in partial agreement with Brooks. Yes, there is an existential crisis. The patient has a broken leg. But our prescriptions differ wildly. I won’t offer a metaphor for that—consider it your homework as a reader.
Brooks is likely a better writer than a public speaker, but you may still find yourself nodding along with some of his arguments. If you’re a “true” Christian Conservative—if you still believe in something beyond crass self-interest—he may well be preaching to the choir. But let’s be honest: how many in that choir are still listening?
This is Chapter 5 of Erich Fromm’s The Sane Society. I’ve had this on my bookshelf for quite a while and wasn’t sure how a 70-year-old book could have so much relevance, but it does. Granted, some of it is irrelevant, a victim of the period in which it was written. This happens.
Sidebar: Whilst praising Freud, Fromm also calls him out, essentially accusing him of projecting his mummy-daddy issues onto the world at large. When I first encountered Freud’s work in the 1980s, it was already largely discredited. Some of what should have been discredited wasn’t, and some of what shouldn’t have been was. Freud’s work as a psychoanalyst heavily overshadowed his other contributions. In fact, Freud offers some strong philosophical insights into society and civilisation. He’s not all about cigars and dreamscapes.
What strikes me about this chapter is the historical perspective it provides on capitalism. I’m an academic economist. I taught undergraduate economics for the better part of a decade. I’ve read (and recommend reading) Marx’s Capital firsthand.
Audio: NotebookLM Podcast commentary on this content.
Fromm adds additional details here. Firstly, he notes that the capitalism that marked the early days of the Industrial Revolution—the seventeenth and eighteenth centuries—differed from that of the nineteenth and twentieth centuries. The earlier period still had cultural and moral tethers that became frayed or lost in later periods. Without regurgitating the chapter, I cite some themes:
“this underselling practice is grown to such a shameful height, that particular persons publicly advertise that they undersell the rest of the trade.”
People were not very keen on price cutting as a competitive mechanism.
They also note the unfair competitive advantage of the monied elites, who could buy materials in cash rather than on credit and thereby undercut the prices of rivals who had to account for interest or markups on credit.
Whilst in the twentieth century regulating undercutting is seen as protectionism, the earlier centuries had no qualms about defending merchants. We do have laws on the books that prevent dumping, but these are rarely enforced, and when they are, it’s a political rather than economic statement. In practice, these laws are politics done in the name of economics, in the same manner that science was used as cover to implement policy during the COVID-19 debacle.
Montesquieu says “that machines which diminish the numbers of workers are ‘pernicious’.” This echoes current sentiments about robotics and artificial intelligence.
Nineteenth-century capitalism saw man as the measure of all things supplanted by capital as the measure. This is the capitalism Marx rails against—profits over humanity and society, the pursuit of local maxima at the expense of the global maximum. This is also where hypergrowth and growth for growth’s sake came into vogue, ushering in the Modern Age and its ideals—science, progress, order, and so on.
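For readers who don’t live with optimisation jargon, the local-versus-global-maxima point can be made concrete with a toy sketch (mine, not Fromm’s or Marx’s): a greedy profit-seeker that only ever takes the immediately better step will park itself on the nearest small peak of a payoff landscape and never see the larger peak beyond the dip. The payoff function and all the numbers here are invented purely for illustration.

```python
def payoff(x: float) -> float:
    """A toy payoff landscape: a small local peak at x = 1, a larger global peak at x = 3."""
    return -(x - 1) ** 2 + 2 * max(0.0, 5 - (x - 4) ** 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    """Greedy local search: move only if the immediate payoff improves."""
    for _ in range(iters):
        if payoff(x + step) > payoff(x):
            x += step
        elif payoff(x - step) > payoff(x):
            x -= step
        else:
            break  # stuck on whichever peak happens to be nearest
    return x

# Starting near the small peak, greedy search stops at the local maximum (x ~ 1),
# even though a coarse global scan finds the higher peak near x = 3.
local = hill_climb(0.0)
best = max((i * 0.01 for i in range(-200, 800)), key=payoff)
```

Crossing the dip between the peaks means accepting a temporarily worse payoff, which is exactly what a profits-this-quarter logic forbids; hence local maxima at the expense of the global one.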
I won’t exhaust the chapter here, but for what it is, it’s a relatively light read. Whether I comment on later chapters depends on whether they engage me. Cheers.