What Good Is Morality?

The New Yorker reviewed The Invention of Good and Evil: A World History of Morality by Hanno Sauer. I read the favourable article and then the reviews. Rather than read the book, I asked NotebookLM to discuss the article, the article itself being behind a paywall.

Audio: NotebookLM Podcast of this topic.

Although the rating was not bad – 3.8 as of this writing – the reviews told a different story.

Firstly, this is a translation from the German original. Some readers feel that structure and clarity were lost in translation. In any case, Sauer is accused of being verbose and circumlocutory.

Secondly, it may be somewhat derivative of Nietzsche’s work on the same topic.

In any case, the topic interests me, but I don’t see myself reading it any time soon.

The Church of Pareto: How Economics Learned to Love Collapse

—or—How the Invisible Hand Became a Throttling Grip on the Throat of the Biosphere

As many frequent visitors know, I am a recovering economist. I tend to view economics through a philosophical lens. Here, I consider the daft nonsense of Pareto optimality.

Audio: NotebookLM podcast of this content.

There is a priesthood in modern economics—pious in its equations, devout in its dispassion—that gathers daily to prostrate before the altar of Pareto. Here, in this sanctum of spreadsheet mysticism, it is dogma that an outcome is “optimal” so long as no one is worse off. Never mind if half the world begins in a ditch and the other half in a penthouse jacuzzi. So long as no one’s jacuzzi is repossessed, the system is just. Hallelujah.
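To make the dogma concrete, here is a minimal Python sketch (the function name is mine, purely illustrative) of the textbook Pareto criterion: a reallocation counts as an improvement only if nobody loses, which conveniently blesses any status quo, however lopsided.

```python
def is_pareto_improvement(old, new):
    """True if moving from `old` to `new` makes at least one agent
    better off and no agent worse off (the textbook Pareto criterion)."""
    return (all(n >= o for n, o in zip(new, old))
            and any(n > o for n, o in zip(new, old)))

# Ditch vs. penthouse: agent 0 has 1 unit, agent 1 has 99.
status_quo = [1, 99]

# A modest transfer from rich to poor is NOT a Pareto improvement...
print(is_pareto_improvement(status_quo, [11, 89]))   # False
# ...but showering the rich agent with more is.
print(is_pareto_improvement(status_quo, [1, 199]))   # True
```

The criterion is silent on distribution: [1, 99] is declared “optimal” the moment no free lunch remains, which is precisely the complaint above.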

This cult of cleanliness, cloaked in the language of “efficiency,” performs a marvellous sleight of hand: it transforms systemic injustice into mathematical neutrality. The child working in the cobalt mines of the Congo is not “harmed”—she simply doesn’t exist in the model. Her labour is an externality. Her future, an asterisk. Her biosphere, a rounding error in the grand pursuit of equilibrium.

Let us be clear: this is not science. This is not even ideology. It is theology—an abstract faith-based system garlanded with numbers. And like all good religions, it guards its axioms with fire and brimstone. Question the model? Heretic. Suggest the biosphere might matter? Luddite. Propose redistribution? Marxist. There is no room in this holy order for nuance. Only graphs and gospel.

The rot runs deep. William Stanley Jevons—yes, that Jevons, patron saint of unintended consequences—warned us as early as 1865 that improvements in efficiency could increase, not reduce, resource consumption. But his paradox, like Cassandra’s prophecy, was fated to be ignored. Instead, we built a civilisation on the back of the very logic he warned would destroy it.
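Jevons’ mechanism is easy to sketch. Assuming a constant-elasticity demand curve for the useful service (the numbers here are illustrative, not empirical), greater efficiency lowers the cost per unit of service; when demand is elastic enough, total fuel burned goes up, not down:

```python
def fuel_burned(efficiency, elasticity, base_demand=100.0, fuel_price=1.0):
    # Cost of one unit of useful service falls as efficiency rises.
    service_cost = fuel_price / efficiency
    # Constant-elasticity demand for the service itself.
    service_demand = base_demand * service_cost ** -elasticity
    # Fuel actually required to deliver that much service.
    return service_demand / efficiency

print(fuel_burned(1.0, 1.5))  # baseline: 100.0
print(fuel_burned(2.0, 0.5))  # inelastic demand: fuel use falls
print(fuel_burned(2.0, 1.5))  # elastic demand: fuel use RISES -- the paradox
```

The whole effect hinges on the elasticity: below 1, efficiency gains conserve fuel; above 1, cheaper service stokes so much extra demand that consumption climbs.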

Then came Simon Kuznets, who—bless his empirically addled soul—crafted a curve that seemed to promise that inequality would fix itself if we just waited politely. We called it the Kuznets Curve and waved it about like a talisman against the ravages of industrial capitalism, ignoring the empirical wreckage that piled up beneath it like bones in a trench.

Meanwhile, Pareto himself, that nobleman of social Darwinism, famously calculated that 80% of Italy’s land was owned by 20% of its people—and rather than challenge this grotesque asymmetry, he chose to marvel at its elegance. Economics took this insight and said: “Yes, more of this, please.”

And so the model persisted—narrow, bloodless, and exquisitely ill-suited to the world it presumed to explain. The economy, it turns out, is not a closed system of rational actors optimising utility. It is a planetary-scale thermodynamic engine fuelled by fossil sunlight, pumping entropy into the biosphere faster than it can absorb. But don’t expect to find that on the syllabus.

Mainstream economics has become a tragic farce, mouthing the language of optimisation while presiding over cascading system failure. Climate change? Not in the model. Biodiversity collapse? A regrettable externality. Intergenerational theft? Discounted at 3% annually.
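That 3% is doing monstrous work. A standard present-value calculation (the figures below are illustrative, not a forecast) shows how exponential discounting shrinks catastrophic future damage to a rounding error today:

```python
def present_value(future_damage, rate, years):
    # Standard exponential discounting: PV = FV / (1 + r)^t.
    return future_damage / (1.0 + rate) ** years

# $1 trillion of climate damage a century out, discounted at 3% a year:
pv = present_value(1_000_000_000_000, 0.03, 100)
print(f"${pv / 1e9:.0f} billion")  # roughly $52 billion in today's terms
```

At 3%, a trillion dollars of harm to people born in 2125 weighs about as much in the model as one mid-sized corporate acquisition now.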

We are witnessing a slow-motion suicide cloaked in the rhetoric of balance sheets. The Earth is on fire, and the economists are debating interest rates.

What we need is not reform, but exorcism. Burn the models. Salt the axioms. Replace this ossified pseudoscience with something fit for a living world—ecological economics, systems theory, post-growth thinking, anything with the courage to name what this discipline has long ignored: that there are limits, and we are smashing into them at speed.

History will not be kind to this priesthood of polite annihilation. Nor should it be.

What’s Probability?

The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.

Audio: NotebookLM podcast on this topic.

Let us survey the battlefield:

1. Classical Probability (Laplacean Determinism in a Tuxedo)

This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
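The classical definition really is just counting. A sketch with a fair die makes the circularity visible: the equiprobability assumption is smuggled in simply by enumerating the sample space as a flat list.

```python
from fractions import Fraction

def classical_probability(favourable, possible):
    """Ratio of favourable outcomes to possible outcomes --
    meaningful only if every outcome is assumed equally likely."""
    return Fraction(len(favourable), len(possible))

die = [1, 2, 3, 4, 5, 6]
evens = [outcome for outcome in die if outcome % 2 == 0]
print(classical_probability(evens, die))  # 1/2
```

Nothing in the code justifies treating the six faces as equally likely; that is precisely the heavy lifting the butler is doing.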

2. Frequentist Probability (The Empiricist’s Fantasy)

Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
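In code, the frequentist definition becomes a simulation and a promise: toss long enough and the relative frequency settles down. A seeded sketch (finite, of course; the limit itself is never reachable):

```python
import random

def relative_frequency(trials, seed=42):
    # Simulate fair coin tosses and return the observed frequency of heads.
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))
# The frequencies drift toward 0.5 as n grows, but any finite run
# still says nothing about the *next* toss or a one-off event.
```

Note the irony: the simulation itself needs a probability model (`rng.random() < 0.5`) before it can produce the frequencies that supposedly define probability.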

3. Bayesian Probability (Subjectivity Dressed as Rigour)

Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
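A conjugate-prior sketch makes the point about priors concrete: two Bayesians with different Beta priors on a coin’s bias disagree at first but are dragged together by shared evidence (the numbers are illustrative):

```python
def posterior_mean(prior_a, prior_b, heads, tails):
    # Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails) posterior;
    # the posterior mean is (a + heads) / (a + b + heads + tails).
    return (prior_a + heads) / (prior_a + prior_b + heads + tails)

# The optimist's prior leans heavily toward heads; the sceptic's is flat.
optimist = (10.0, 1.0)
sceptic = (1.0, 1.0)

for heads, tails in [(0, 0), (7, 3), (700, 300)]:
    print(heads + tails,
          round(posterior_mean(*optimist, heads, tails), 3),
          round(posterior_mean(*sceptic, heads, tails), 3))
# With no data they disagree wildly; after 1,000 tosses both sit near 0.7.
```

This is the hedge against solipsism: given enough shared data, reasonable priors wash out. The catch, as the essay notes, is the word “reasonable”.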

4. Propensity Interpretation (The Ontology of Maybes)

Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.

5. Logical Probability (A Sober Attempt at Rationality)

Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.

6. Quantum Probability (Schrödinger’s Definition)

In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
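The calculating part really is trivial: the Born rule turns complex amplitudes into probabilities by taking squared moduli and normalising. A two-state sketch:

```python
def born_probabilities(amplitudes):
    # Born rule: p_i = |amplitude_i|^2, normalised so the p_i sum to 1.
    weights = [abs(a) ** 2 for a in amplitudes]
    total = sum(weights)
    return [w / total for w in weights]

# An unequal superposition of |0> and |1>.
probs = born_probabilities([1 + 1j, 1 + 0j])
print(probs)       # ~[0.667, 0.333]
print(sum(probs))  # 1.0 -- and here the maths ends and interpretation begins
```

Everything up to `print` is uncontroversial; what those numbers *are* (frequencies? propensities? branch weights? credences?) is exactly the battleground the essay describes.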

Current Status: War of Attrition

There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.

In short: the contestation endures because probability is not one thingβ€”it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.

And so the probability of this contest being resolved any time soon?
About zero.
Or one.
Depending on your interpretation.

Against the Intelligence Industrial Complex

Why IQ is Not Enough – and Never Was

I’m not a fan of IQ as a general metric. Let us be done with the cult of the clever. Let us drag the IQ score from its pedestal, strip it of its statistical robes, and parade it through the streets of history where it belongs—next to phrenology, eugenics, and other well-meaning pseudosciences once weaponised by men in waistcoats.

The so-called Intelligence Industrial Complex—an infernal alliance of psychologists, bureaucrats, and HR departments—has for too long dictated the terms of thought. It has pretended to measure the immeasurable. It has sold us a fiction in numerical drag: that human intelligence can be distilled, packaged, and ranked.

Audio: NotebookLM podcast on this topic.

What it measures, it defines. What it defines, it controls.

IQ is not intelligence. It is cognitive GDP: a snapshot of what your brain can do under fluorescent lights with a timer running. It rewards abstraction, not understanding; speed, not depth; pattern recognition, not wisdom. It’s a test of how well you’ve been conditioned to think like the test-makers.

This is not to say IQ has no value. Of course it doesβ€”within its own ecosystem of schools, bureaucracies, and technocracies. But let us not mistake the ruler for the terrain. Let us not map the entire landscape of human potential using a single colonial compass.

True intelligence is not a number. It is a spectrum of situated knowings, a polyphony of minds tuned to different frequencies. The Inuit hunter tracking a seal through silence. The griot remembering centuries of lineage. The autistic coder intuiting an algorithm in dreamtime. The grandmother sensing a lie with her bones. IQ cannot touch these.

To speak of intelligence as if it belonged to a single theory is to mistake a monoculture for a forest. Let us burn the monoculture. Let us plant a thousand new seeds.

A Comparative Vivisection of Intelligence Theories

| Theory / Model | Core Premise | Strengths | Blind Spots / Critiques | Cultural Framing |
| --- | --- | --- | --- | --- |
| IQ (Psychometric g) | Intelligence is a single, general cognitive ability measurable via testing | Predicts academic & job performance; standardised | Skewed toward Western logic; ignores context; devalues non-abstract intelligences | Western, industrial, meritocratic |
| Multiple Intelligences (Gardner) | Intelligence is plural: linguistic, spatial, musical, bodily, etc. | Recognises diversity; challenges IQ monopoly | Still individualistic; categories often vague; Western in formulation | Liberal Western pluralism |
| Triarchic Theory (Sternberg) | Intelligence = analytical + creative + practical | Includes adaptability, real-world success | Still performance-focused; weak empirical grounding | Western managerial |
| Emotional Intelligence (Goleman) | Intelligence includes emotion regulation and interpersonal skill | Useful in leadership & education contexts | Commodified into corporate toolkits; leans self-help | Western therapeutic |
| Socio-Cultural (Vygotsky) | Intelligence develops through social interaction and cultural mediation | Recognises developmental context and culture | Less attention to adult or cross-cultural intelligence | Soviet / constructivist |
| Distributed Cognition / Extended Mind | Intelligence is distributed across people, tools, systems | Breaks skull-bound model; real-world cognition | Hard to measure; difficult to institutionalise | Post-cognitive, systems-based |
| Indigenous Epistemologies | Intelligence is relational, ecological, spiritual, embodied, ancestral | Holistic; grounded in lived experience | Marginalised by academia; often untranslatable into standard metrics | Global South / decolonial |

Conclusion: Beyond the Monoculture of Mind

If we want a more encompassing theory of intelligence, we must stop looking for a single theory. We must accept plurality—not as a nod to diversity, but as an ontological truth.

Intelligence is not a fixed entity to be bottled and graded. It is a living, breathing phenomenon: relational, situated, contextual, historical, ecological, and cultural.

And no test devised in a Princeton psych lab will ever tell you how to walk through a forest without being seen, how to tell when rain is coming by smell alone, or how to speak across generations through story.

It’s time we told the Intelligence Industrial Complex: your number’s up.

Will Singularity Be Anticlimactic?

Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.

Audio: NotebookLM podcast discussing this content.

Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?

Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.

Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.

But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.

So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissanceβ€”provided we don’t burn it all down first?

Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.

The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.

That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.

When Suspension of Disbelief Escapes the Page

Welcome to the Age of Realism Fatigue

Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.

Audio: NotebookLM podcast of this page content.

This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.

But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.

The Fictional Contract: Broken but Not Forgotten

Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.

The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.

Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.

But Then Real Life Said, “Hold My Beer.”

At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.

We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.

You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.

The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.

Cognitive Gymnastics: Welcome to the Cirque du Surréalisme

What happens to a psyche caught in this funhouse? Nothing good.

Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.

Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.

The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.

Culture in Crisis: A Symptom or the Disease?

This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.

We’ve become simultaneously more sceptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.

In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.

So what kind of society does this create?

One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.

The Future: A Choose-Your-Own-Delusion Adventure

So where does this all end?

There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.

We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.

It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”

That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.

In Conclusion, Reality Is Optional, But Reason Is Not

In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.

Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.

Lights down. Curtain call.
Time to read the footnotes.

Why We Can’t Have Nice Things

A Hobbesian Rant for the Disillusioned Masses

Reading Leviathan has me thinking. Nothing new, mind you—just reinvigorated. Hobbes, bless his scowling soul, is the consummate pessimist. People, in his view, are untrustworthy sods, ready to stab you in the back at the first flicker of opportunity. He doesn’t believe in community. He believes in containment.

Audio: NotebookLM discussion about this topic.

And to be fair, he’s not entirely wrong. He captures a certain cohort with uncanny accuracy. You know the type. Type-A™ personalities: the Donald Trumps, Elon Musks, Adolf Hitlers, Shahs of Iran, and that guy in marketing who always schedules meetings for 8am. The ones who salivate at the mere whiff of power, who’d sell their grandmothers for a press release and call it vision.

Now, I’ll concede that most people want more than they have. Economics depends on this assumption like religion depends on guilt. But not everyone is driven by an insatiable lust for money, dominance, or legacy. That, my friends, is not ambition. It is pathology—a malignant, metastasising hunger that infects the likes of Trump, Musk, Bezos, Sunak, and their ilk. The hunger to rule, not just participate.

The trouble is, the majority of the world’s population are idiots—not technically, but metaphorically. Soft-headed. Overstimulated. Easily distracted by flags, influencers, and “free shipping.” And there are flavours of idiots. Musk is a lucky idiot. Trump is a useful idiot. Most are a hair’s breadth from being cannon fodder.

The world could be configured differently. It could consist of autonomous collectives, each minding its own business, each respecting the other’s boundaries like courteous houseplants. But this equilibrium is shattered—always shattered—by the predatory few. The outliers. The sharks in suits. The ones who mistake governance for domination and diplomacy for personal branding.

So we build mechanisms to defend ourselves—laws, institutions, surveillance, standing armies—but these mechanisms inevitably attract the same types we were trying to ward off. Power-hungry cretins in different hats. The protectors, it turns out, are rarely benevolent dictators. They are predacious politicos, wearing virtue like a costume, mouthing justice while tightening the screws.

What remains, then, is the recurring infestation of pathological ambition in a species otherwise just trying to get on with its day.

This is the challenge for all of humanity.

And we’ve yet to rise to it.

Where There’s a Will, There’s a Way

I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like had he filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had been passed through a legal review before publication, sapped of vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*

Audio: NotebookLM podcast conversation on this topic.

In any case, it got me thinking of free will and, more to the point, of will itself.

A Brief History of Humanity’s Favourite Metaphysical Scapegoat

By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.

So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?

Plato: Soul, Reason, and That Poor Horse

In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).

No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.

Aristotle: Purpose Without Pathos

Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.

Augustine: Sin, Suffering, and That Eternal No

Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.

From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.

Cheers.

Medievals: God’s Will or Yours, Pick One

The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.

From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.

Hobbes: Appetite with Delusions of Grandeur

Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.

Kant: The Will Gets a Makeover

Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the “good will”, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.

It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.

Schopenhauer: Cosmic Will, Cosmic Joke

And then the mood turns. Schopenhauer, the world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.

There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.

Nietzsche: Transvaluation and the Will to Shout Loudest

Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.

Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.

Today: Free Will and the Neuroscientific Hangover

And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.

We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?

Conclusion: Where There’s a Will, There’s a Mess

From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.

Despite centuries of philosophical handwringing, what it has never been is settled.

So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.

Welcome to the labyrinth. Bring snacks.

* The β€œsolitary, poor, nasty, brutish, and short” quote is forthcoming. β€œFilthy animals” is a nod to Home Alone.

Elites Ruined It For Everyone

David Brooks and the Hollowing Out of Conservatism

David Brooks is the quintessential old-school Conservativeβ€”the kind who once upheld a semblance of ideological coherence. He belongs to the pre-Reagan-Thatcher vintage, a time when Conservatism at least had the decency to argue from principles rather than blind tribalism. We could debate these people in good faith. Those days are gone. The current incarnation of Conservatism contains only homoeopathic traces of its Classicalβ„’ predecessorβ€”diluted beyond recognition.

The Degeneration of Conservatism

The rot set in with Reagan, who caught it from Thatcher. Greed and selfishness were laundered into virtues, repackaged as “individual responsibility,” and the party’s intellectual ballast began to erode. By the time Bush II’s administration rolled in, Neo-Conservatism had replaced any lingering Burkean ethos, and by Trump’s tenure, even the pretence of ideology was gone. Conservatism-in-Name-Onlyβ€”whatever Trump’s brand of reactionary nihilism wasβ€”swallowed the party whole. Do they even call themselves Conservatives anymore, or has that ship sailed along with basic literacy?


To be fair, this didn’t go unnoticed. Plenty of old-school Republicans recoiled in horror when Trump became their figurehead. Before the 2016 election, conservative pundits could barely contain their disdain for his incompetence, lack of moral compass, and general buffoonery. And yet, once they realised he was the party’s golden goose, they clambered aboard the Trump Train with the enthusiasm of lottery winners at a payday loan office. His staunchest critics became his most obsequious apologists. What does this tell us about their value system? Spoiler: nothing good.

Brooks’ Lament

Which brings us back to Brooks, who now bemoans the death of Conservative values. On this, we agree. Where we part ways is on whether those values were worth saving. Say you’re boarding a train from New York to Los Angeles. Conservatism might argue that a Miami-bound train is still a train, so what’s the problem? It’s the same vehicle, just going somewhere else. Except, of course, Conservatism has always insisted on the slow train over the fast trainβ€”because urgency is unseemly, and progress must be rationed.

If I’m an affluent middle-classer, I might prefer Conservatism’s careful incrementalismβ€”it keeps my apple cart stable. Admirable, if you enjoy tunnel vision. Progressives, by contrast, recognise that some people don’t even have apple carts. Some are starving while others hoard orchards. To the Conservative, the poor just aren’t trying hard enough. To the Progressive, the system is broken, and the playing field needs a serious re-levelling. Even when Conservatives acknowledge inequality, their instinct is to tiptoe toward justice rather than risk disrupting their own affluence.

The Fallacy of Objective Reality

Leaving politics for philosophy, Brooks predictably rails against Postmodernism, decrying relativism in favour of good old-fashioned Modernist “reality.” He’s horrified by subjectivism, as though personal interpretation weren’t the foundation of all human experience. Like Jordan Peterson, he believes his subjective truth is the objective truth. And like Peterson, he takes umbrage at anyone pointing out otherwise. It feels so absolute to them that they mistake their own convictions for universal constants.

As a subjectivist, I accept that reality is socially mediated. We interpret truth claims based on cognitive biases, cultural conditioning, and personal experience. Even when we strive for objectivity, we do so through subjective lenses. Brooks’ Modernist nostalgia is touching but delusionalβ€”akin to demanding we all agree on a single flavour of ice cream.

The Existential Problem

And so, I find myself in partial agreement with Brooks. Yes, there is an existential crisis. The patient has a broken leg. But our prescriptions differ wildly. I won’t offer a metaphor for thatβ€”consider it your homework as a reader.

Brooks is likely a better writer than a public speaker, but you may still find yourself nodding along with some of his arguments. If you’re a β€œtrue” Christian Conservativeβ€”if you still believe in something beyond crass self-interestβ€”he may well be preaching to the choir. But let’s be honest: how many in that choir are still listening?

Perception and Reality

I love this meme despite its lack of basis in reality – sort of like the ten per cent of the brain myth.

I’m busy writing, but this meme crossed my feed, and I thought, “What better time to share?”

I’ve been reading and re-reading The Sane Society, but reflecting on it here is too much of a commitment, so I’ll delay gratification.