Although the rating was not bad — 3.8 as of this writing — the reviews told a different story.
The first half of the book was worthwhile: a thought-provoking description of how early societies may have developed social systems to encourage group cooperation and discourage harmful behaviors. The second half, however, was a rambling, preachy, and judgemental dissertation on postmodernism, using the all-too-familiar tropes of social-justice dialogue. Overall it was poorly written, though some of that may be due to translation. There is too little useful material here to justify its purchase.
Firstly, this version is translated from a German edition. Some readers feel that structure and communication value were lost in translation. In any case, the author is accused of being verbose and circumlocutory.
Amazon describes this book as “a sweeping history,” but the only thing sweeping about it was what I wanted to do with it under the nearest rug.
Secondly, it may be somewhat derivative of Nietzsche’s work on the same topic.
The book is rambling and incoherent; the author makes wild assertions about things that happened before recorded history, leaps from one train of thought to another, pooh-poohs the ideas of religion, but frequently quotes Nietzsche as the infallible source of knowledge…
Tonyo (reviewer)
In any case, the topic interests me, but I don’t see myself reading it any time soon.
—or—How the Invisible Hand Became a Throttling Grip on the Throat of the Biosphere
As many frequent visitors know, I am a recovering economist. I tend to view economics through a philosophical lens. Here, I consider the daft nonsense of Pareto optimality.
There is a priesthood in modern economics—pious in its equations, devout in its dispassion—that gathers daily to prostrate before the altar of Pareto. Here, in this sanctum of spreadsheet mysticism, it is dogma that an outcome is “optimal” so long as no one is worse off. Never mind if half the world begins in a ditch and the other half in a penthouse jacuzzi. So long as no one’s jacuzzi is repossessed, the system is just. Hallelujah.
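The dogma is easy to state mechanically: a change counts as a “Pareto improvement” only if nobody loses, however lopsided the starting point. A minimal sketch, with hypothetical utility numbers chosen purely for illustration:

```python
def pareto_improvement(before, after):
    """True if no agent is worse off and at least one is strictly better off."""
    return all(a >= b for a, b in zip(after, before)) and any(
        a > b for a, b in zip(after, before)
    )

# Hypothetical utilities: agent 0 in the ditch, agent 1 in the penthouse.
ditch_and_penthouse = [1, 100]

# Transfer from rich to poor: NOT a Pareto improvement (the rich lose a little).
print(pareto_improvement(ditch_and_penthouse, [50, 51]))   # False
# Hand the rich even more: a textbook Pareto improvement.
print(pareto_improvement(ditch_and_penthouse, [1, 200]))   # True
```

Redistribution fails the test by construction; piling more into the penthouse passes. That asymmetry is the whole critique.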
This cult of cleanliness, cloaked in the language of “efficiency,” performs a marvellous sleight of hand: it transforms systemic injustice into mathematical neutrality. The child working in the lithium mines of the Congo is not “harmed”—she simply doesn’t exist in the model. Her labour is an externality. Her future, an asterisk. Her biosphere, a rounding error in the grand pursuit of equilibrium.
Let us be clear: this is not science. This is not even ideology. It is theology—an abstract faith-based system garlanded with numbers. And like all good religions, it guards its axioms with fire and brimstone. Question the model? Heretic. Suggest the biosphere might matter? Luddite. Propose redistribution? Marxist. There is no room in this holy order for nuance. Only graphs and gospel.
Jevons warned us…that improvements in efficiency could increase, not reduce, resource consumption.
The rot runs deep. William Stanley Jevons—yes, that Jevons, patron saint of unintended consequences—warned us as early as 1865 that improvements in efficiency could increase, not reduce, resource consumption. But his paradox, like Cassandra’s prophecy, was fated to be ignored. Instead, we built a civilisation on the back of the very logic he warned would destroy it.
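Jevons's arithmetic can be sketched directly. Assume a constant-elasticity demand curve for the service the fuel provides; the elasticity values below are illustrative, not historical estimates:

```python
def fuel_use(efficiency, elasticity, base_demand=100.0):
    """Total fuel burned when demand for the service responds to its price.

    Cost per unit of service falls as 1/efficiency; demand follows an
    assumed constant-elasticity curve; fuel burned = demand / efficiency.
    """
    price = 1.0 / efficiency                       # effective price of the service
    demand = base_demand * price ** (-elasticity)  # service demanded at that price
    return demand / efficiency                     # fuel needed to supply it

# Inelastic demand (0.5): doubling efficiency cuts fuel use from 100 to ~71.
print(fuel_use(2.0, elasticity=0.5))
# Elastic demand (1.5): doubling efficiency RAISES fuel use to ~141 (the paradox).
print(fuel_use(2.0, elasticity=1.5))
```

Whenever demand is elastic enough, the efficiency gain is swallowed whole and then some: exactly the rebound Jevons described.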
Then came Simon Kuznets, who—bless his empirically addled soul—crafted a curve that seemed to promise that inequality would fix itself if we just waited politely. We called it the Kuznets Curve and waved it about like a talisman against the ravages of industrial capitalism, ignoring the empirical wreckage that piled up beneath it like bones in a trench.
Meanwhile, Pareto himself, that nobleman of social Darwinism, famously calculated that 80% of Italy’s land was owned by 20% of its people—and rather than challenge this grotesque asymmetry, he chose to marvel at its elegance. Economics took this insight and said: “Yes, more of this, please.”
And so the model persisted—narrow, bloodless, and exquisitely ill-suited to the world it presumed to explain. The economy, it turns out, is not a closed system of rational actors optimising utility. It is a planetary-scale thermodynamic engine fuelled by fossil sunlight, pumping entropy into the biosphere faster than the biosphere can absorb it. But don’t expect to find that on the syllabus.
Mainstream economics has become a tragic farce, mouthing the language of optimisation while presiding over cascading system failure. Climate change? Not in the model. Biodiversity collapse? A regrettable externality. Intergenerational theft? Discounted at 3% annually.
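That 3% figure does real work. At a 3% annual discount rate, a cost borne a century from now shrinks to roughly a twentieth of its face value today. A quick sketch:

```python
def present_value(future_cost, rate, years):
    """Value assigned today to a cost paid `years` from now, discounted annually."""
    return future_cost / (1.0 + rate) ** years

# $1 trillion of damage a century out, discounted at 3% per year:
pv = present_value(1_000_000_000_000, 0.03, 100)
print(f"${pv:,.0f}")  # roughly $52 billion
```

This is how a model can acknowledge a catastrophe and still recommend doing very little about it.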
We are witnessing a slow-motion suicide
We are witnessing a slow-motion suicide cloaked in the rhetoric of balance sheets. The Earth is on fire, and the economists are debating interest rates.
What we need is not reform, but exorcism. Burn the models. Salt the axioms. Replace this ossified pseudoscience with something fit for a living world—ecological economics, systems theory, post-growth thinking, anything with the courage to name what this discipline has long ignored: that there are limits, and we are smashing into them at speed.
History will not be kind to this priesthood of polite annihilation. Nor should it be.
The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.
Let us survey the battlefield:
1. Classical Probability (Laplacean Determinism in a Tuxedo)
This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
2. Frequentist Probability (The Empiricist’s Fantasy)
Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
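The definition can at least be simulated. In the sketch below, the relative frequency of heads in a growing number of simulated fair-coin tosses drifts toward 0.5, but only ever approximately, and it still says nothing about the next single toss:

```python
import random

def relative_frequency(trials, seed=42):
    """Fraction of heads in `trials` simulated tosses of a fair coin."""
    rng = random.Random(seed)  # seeded so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))
# The frequency wanders toward 0.5 as n grows, but only in the limit;
# it tells you nothing about the outcome of any one toss.
```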
3. Bayesian Probability (Subjectivity Dressed as Rigor)
Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
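The prior-dependence is easy to exhibit with the standard Beta-Binomial model (the priors below are deliberately extreme, for illustration): two agents see the same data yet report very different probabilities until the data overwhelms them.

```python
def posterior_mean(prior_a, prior_b, heads, tails):
    """Mean of the Beta(prior_a + heads, prior_b + tails) posterior."""
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

optimist = (20, 1)  # Beta(20, 1): "the coin nearly always lands heads"
sceptic = (1, 20)   # Beta(1, 20): "the coin nearly always lands tails"

# After 10 flips (5 heads, 5 tails) they still disagree wildly:
print(posterior_mean(*optimist, 5, 5))  # about 0.81
print(posterior_mean(*sceptic, 5, 5))   # about 0.19
# After 10,000 flips the data finally drowns both priors:
print(posterior_mean(*optimist, 5000, 5000))  # about 0.50
print(posterior_mean(*sceptic, 5000, 5000))   # about 0.50
```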
4. Propensity Interpretation (The Ontology of Maybes)
Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.
5. Logical Probability (A Sober Attempt at Rationality)
Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.
6. Quantum Probability (Schrödinger’s Definition)
In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
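The recipe itself is at least mechanical: square the moduli of the amplitudes and read off the probabilities. A toy two-state example with illustrative amplitudes:

```python
import math

def born_probabilities(amplitudes):
    """Outcome probabilities from complex amplitudes via the Born rule."""
    probs = [abs(a) ** 2 for a in amplitudes]
    assert math.isclose(sum(probs), 1.0), "state must be normalised"
    return probs

# An equal superposition of two basis states:
amps = [1 / math.sqrt(2), 1j / math.sqrt(2)]
print(born_probabilities(amps))  # each outcome comes out at about 0.5
```

Note what the calculation does not do: it assigns the numbers without saying what they are. That question gets kicked to the interpretation.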
Current Status: War of Attrition
There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.
In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none holds it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.
And so the probability of this contest being resolved any time soon? About zero. Or one. Depending on your interpretation.
I’m not a fan of IQ as a general metric. Let us be done with the cult of the clever. Let us drag the IQ score from its pedestal, strip it of its statistical robes, and parade it through the streets of history where it belongs—next to phrenology, eugenics, and other well-meaning pseudosciences once weaponised by men in waistcoats.
The so-called Intelligence Industrial Complex—an infernal alliance of psychologists, bureaucrats, and HR departments—has for too long dictated the terms of thought. It has pretended to measure the immeasurable. It has sold us a fiction in numerical drag: that human intelligence can be distilled, packaged, and ranked.
What it measures, it defines. What it defines, it controls.
IQ is not intelligence. It is cognitive GDP: a snapshot of what your brain can do under fluorescent lights with a timer running. It rewards abstraction, not understanding; speed, not depth; pattern recognition, not wisdom. It’s a test of how well you’ve been conditioned to think like the test-makers.
This is not to say IQ has no value. Of course it does—within its own ecosystem of schools, bureaucracies, and technocracies. But let us not mistake the ruler for the terrain. Let us not map the entire landscape of human potential using a single colonial compass.
True intelligence is not a number. It is a spectrum of situated knowings, a polyphony of minds tuned to different frequencies. The Inuit hunter tracking a seal through silence. The griot remembering centuries of lineage. The autistic coder intuiting an algorithm in dreamtime. The grandmother sensing a lie with her bones. IQ cannot touch these.
To speak of intelligence as if it belonged to a single theory is to mistake a monoculture for a forest. Let us burn the monoculture. Let us plant a thousand new seeds.
A Comparative Vivisection of Intelligence Theories
| Theory / Model | Core Premise | Strengths | Blind Spots / Critiques | Cultural Framing |
| --- | --- | --- | --- | --- |
| IQ (Psychometric g) | Intelligence is a single, general cognitive ability measurable via testing | Predicts academic & job performance; standardised | Skewed toward Western logic, ignores context, devalues non-abstract intelligences | Western, industrial, meritocratic |
| Multiple Intelligences (Gardner) | Intelligence is plural: linguistic, spatial, musical, bodily, etc. | Recognises diversity; challenges IQ monopoly | Still individualistic; categories often vague; Western in formulation | Liberal Western pluralism |
| Triarchic Theory (Sternberg) | Intelligence = analytical + creative + practical | Includes adaptability, real-world success | Still performance-focused; weak empirical grounding | Western managerial |
| Emotional Intelligence (Goleman) | Intelligence includes emotion regulation and interpersonal skill | Useful in leadership & education contexts | Commodified into corporate toolkits; leans self-help | Western therapeutic |
| Socio-Cultural (Vygotsky) | Intelligence develops through social interaction and cultural mediation | Recognises developmental context and culture | Less attention to adult or cross-cultural intelligence | Soviet / constructivist |
| Distributed Cognition / Extended Mind | Intelligence is distributed across people, tools, systems | Breaks skull-bound model; real-world cognition | Hard to measure; difficult to institutionalise | Post-cognitive, systems-based |
| Indigenous Epistemologies | Intelligence is relational, ecological, spiritual, embodied, ancestral | Holistic; grounded in lived experience | Marginalised by academia; often untranslatable into standard metrics | Global South / decolonial |
Conclusion: Beyond the Monoculture of Mind
If we want a more encompassing theory of intelligence, we must stop looking for a single theory. We must accept plurality—not as a nod to diversity, but as an ontological truth.
Intelligence is not a fixed entity to be bottled and graded. It is a living, breathing phenomenon: relational, situated, contextual, historical, ecological, and cultural.
And no test devised in a Princeton psych lab will ever tell you how to walk through a forest without being seen, how to tell when rain is coming by smell alone, or how to speak across generations through story.
It’s time we told the Intelligence Industrial Complex: your number’s up.
Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.
Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?
Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
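Taking the conventional IQ normalisation (mean 100, standard deviation 15) at face value, claims like these reduce to normal-distribution arithmetic. A sketch, where the mid-90s average is the post's premise rather than an established figure:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # the conventional IQ normalisation

# Percentile on the global scale of someone at a national average of 95:
print(round(iq.cdf(95), 3))   # about 0.369, i.e. the 37th percentile
# Share of the population outperformed by a machine pegged at, say, IQ 115:
print(round(iq.cdf(115), 3))  # about 0.841
```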
Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.
But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.
So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?
Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.
The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.
That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.
Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.
This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.
But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.
The Fictional Contract: Broken but Not Forgotten
Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.
The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.
Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.
But Then Real Life Said, “Hold My Beer.”
At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.
We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.
You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.
The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.
What happens to a psyche caught in this funhouse? Nothing good.
Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.
Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.
The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.
Culture in Crisis: A Symptom or the Disease?
This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.
We’ve become simultaneously more sceptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.
In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.
So what kind of society does this create?
One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.
The Future: A Choose-Your-Own-Delusion Adventure
So where does this all end?
There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.
We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.
It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”
That doesnβt mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.
In Conclusion, Reality Is Optional, But Reason Is Not
In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.
Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.
Lights down. Curtain call. Time to read the footnotes.
Reading Leviathan has me thinking. Nothing new, mind you—just reinvigorated. Hobbes, bless his scowling soul, is the consummate pessimist. People, in his view, are untrustworthy sods, ready to stab you in the back at the first flicker of opportunity. He doesn’t believe in community. He believes in containment.
And to be fair, he’s not entirely wrong. He captures a certain cohort with uncanny accuracy. You know the type. Type-A™ personalities: the Donald Trumps, Elon Musks, Adolf Hitlers, Shahs of Iran, and that guy in marketing who always schedules meetings for 8am. The ones who salivate at the mere whiff of power, who’d sell their grandmothers for a press release and call it vision.
This, in short, is why we canβt have nice things.
Now, I’ll concede that most people want more than they have. Economics depends on this assumption like religion depends on guilt. But not everyone is driven by an insatiable lust for money, dominance, or legacy. That, my friends, is not ambition. It is pathology—a malignant, metastasising hunger that infects the likes of Trump, Musk, Bezos, Sunak, and their ilk. The hunger to rule, not just participate.
The trouble is, the majority of the world’s population are idiots—not technically, but metaphorically. Soft-headed. Overstimulated. Easily distracted by flags, influencers, and “free shipping.” And there are flavours of idiot. Musk is a lucky idiot. Trump is a useful idiot. Most are a hair’s breadth from being cannon fodder.
And then we wonder why everythingβs broken.
The world could be configured differently. It could consist of autonomous collectives, each minding its own business, each respecting the other’s boundaries like courteous houseplants. But this equilibrium is shattered—always shattered—by the predatory few. The outliers. The sharks in suits. The ones who mistake governance for domination and diplomacy for personal branding.
So we build mechanisms to defend ourselves—laws, institutions, surveillance, standing armies—but these mechanisms inevitably attract the same types we were trying to ward off. Power-hungry cretins in different hats. The protectors, it turns out, are rarely benevolent dictators. They are predacious politicos, wearing virtue like a costume, mouthing justice while tightening the screws.
And here lies the eternal problem. Not technology. Not ideology. Not even ignorance.
But the recurring infestation of pathological ambition in a species otherwise just trying to get on with its day.
I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like if he filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had to pass through a legal review before it had been published, sapped of vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug β you nasty, brutish, filthy animals!*
In any case, it got me thinking of free will and, more to the point, of will itself.
A Brief History of Humanity’s Favourite Metaphysical Scapegoat
By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.
So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?
Plato: Soul, Reason, and That Poor Horse
In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).
No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.
Aristotle: Purpose Without Pathos
Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.
Augustine: Sin, Suffering, and That Eternal No
Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.
From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.
Cheers.
Medievals: God’s Will or Yours, Pick One
The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.
From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God's image, inherit the same capacity for irrational choice. The will is now more than moral; it's metaphysical. Less reason’s servant, more chaos goblin.
Hobbes: Appetite with Delusions of Grandeur
Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.
Kant: The Will Gets a Makeover
Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the “good will”, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.
It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.
Schopenhauer: Cosmic Will, Cosmic Joke
And then the mood turns. Schopenhauer, the world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.
There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.
Nietzsche: Transvaluation and the Will to Shout Loudest
Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.
Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.
Today: Free Will and the Neuroscientific Hangover
And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.
We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?
Conclusion: Where There’s a Will, There’s a Mess
From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.
Despite centuries of philosophical handwringing, what it has never been is settled.
So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.
Welcome to the labyrinth. Bring snacks.
* The “solitary, poor, nasty, brutish, and short” quote is forthcoming. “Filthy animals” is a nod to Home Alone.
David Brooks and the Hollowing Out of Conservatism
David Brooks is the quintessential old-school Conservative: the kind who once upheld a semblance of ideological coherence. He belongs to the pre-Reagan-Thatcher vintage, a time when Conservatism at least had the decency to argue from principles rather than blind tribalism. We could debate these people in good faith. Those days are gone. The current incarnation of Conservatism contains only homoeopathic traces of its Classical™ predecessor, diluted beyond recognition.
The Degeneration of Conservatism
The rot set in with Reagan, who caught it from Thatcher. Greed and selfishness were laundered into virtues, repackaged as “individual responsibility,” and the party’s intellectual ballast began to erode. By the time Bush II’s administration rolled in, Neo-Conservatism had replaced any lingering Burkean ethos, and by Trump’s tenure, even the pretence of ideology was gone. Conservatism-in-Name-Only, whatever Trump’s brand of reactionary nihilism was, swallowed the party whole. Do they even call themselves Conservatives anymore, or has that ship sailed along with basic literacy?
To be fair, this didn’t go unnoticed. Plenty of old-school Republicans recoiled in horror when Trump became their figurehead. Before the 2016 election, conservative pundits could barely contain their disdain for his incompetence, lack of moral compass, and general buffoonery. And yet, once they realised he was the party’s golden goose, they clambered aboard the Trump Train with the enthusiasm of lottery winners at a payday loan office. His staunchest critics became his most obsequious apologists. What does this tell us about their value system? Spoiler: nothing good.
Brooks’ Lament
Which brings us back to Brooks, who now bemoans the death of Conservative values. On this, we agree. Where we part ways is on whether those values were worth saving. Say you’re boarding a train from New York to Los Angeles. Conservatism might argue that a Miami-bound train is still a train, so what’s the problem? It’s the same vehicle, just going somewhere else. Except, of course, Conservatism has always insisted on the slow train over the fast train, because urgency is unseemly and progress must be rationed.
If I’m an affluent middle-classer, I might prefer Conservatism’s careful incrementalism: it keeps my apple cart stable. Admirable, if you enjoy tunnel vision. Progressives, by contrast, recognise that some people don’t even have apple carts. Some are starving while others hoard orchards. To the Conservative, the poor just aren’t trying hard enough. To the Progressive, the system is broken, and the playing field needs a serious re-levelling. Even when Conservatives acknowledge inequality, their instinct is to tiptoe toward justice rather than risk disrupting their own affluence.
The Fallacy of Objective Reality
Leaving politics for philosophy, Brooks predictably rails against Postmodernism, decrying relativism in favour of good old-fashioned Modernist “reality.” He’s horrified by subjectivism, as though personal interpretation weren’t the foundation of all human experience. Like Jordan Peterson, he believes his subjective truth is the objective truth. And like Peterson, he takes umbrage at anyone pointing out otherwise. It feels so absolute to them that they mistake their own convictions for universal constants.
As a subjectivist, I accept that reality is socially mediated. We interpret truth claims based on cognitive biases, cultural conditioning, and personal experience. Even when we strive for objectivity, we do so through subjective lenses. Brooks’ Modernist nostalgia is touching but delusional, akin to demanding we all agree on a single flavour of ice cream.
The Existential Problem
And so, I find myself in partial agreement with Brooks. Yes, there is an existential crisis. The patient has a broken leg. But our prescriptions differ wildly. I won’t offer a metaphor for thatβconsider it your homework as a reader.
Brooks is likely a better writer than a public speaker, but you may still find yourself nodding along with some of his arguments. If you’re a “true” Christian Conservative, if you still believe in something beyond crass self-interest, he may well be preaching to the choir. But let’s be honest: how many in that choir are still listening?