The Dubious Art of Reasoning: Why Thinking Is Harder Than It Looks

The Illusion of Clarity in a World of Cognitive Fog

Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.

Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.

Deduction: The Tyrant of Certainty

Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.

Example:

Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.

Strength: Valid conclusions from valid premises.
Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically sound.
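To make the garbage-in, garbage-out point concrete, here is a minimal sketch (mine, not part of the original post) of a syllogism chaser in Python. It validates the form of the inference and nothing else:

```python
# The classic chain: all M are P; S is an M; therefore S is P.
# The function checks only the form; it derives nonsense from
# nonsensical premises just as cheerfully (garbage in, garbage out).

def deduce(all_are: dict[str, str], subject: str) -> set[str]:
    """Follow 'all X are Y' premises transitively from a subject."""
    conclusions, current = set(), subject
    while current in all_are:
        current = all_are[current]
        conclusions.add(current)
    return conclusions

print(deduce({"Socrates": "man", "man": "mortal"}, "Socrates"))
# -> {'man', 'mortal'}: sound premises, sound conclusion.

print(deduce({"Socrates": "man", "man": "made of cheese"}, "Socrates"))
# -> {'man', 'made of cheese'}: perfectly valid, entirely false.
```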

Induction: The Gambler’s Gospel

Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.

Example:

Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.

Strength: Empirically rich and adaptive.
Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.
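A toy version of the swan problem, sketched here for illustration rather than lifted from the original argument, shows how little it takes to detonate a generalisation:

```python
# Enumerative induction and its failure mode: generalise from a sample,
# then watch a single counterexample blow up the rule.

def induce_colour(observed_swans: list[str]) -> str:
    """Naive induction: if every observed swan shares a colour, declare it universal."""
    colours = set(observed_swans)
    if len(colours) == 1:
        return f"All swans are {colours.pop()}."
    return "No tidy generalisation survives this sample."

european_sample = ["white"] * 10_000
print(induce_colour(european_sample))              # All swans are white.
print(induce_colour(european_sample + ["black"]))  # One Australian swan later...
```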

Abduction: Sherlock Holmes’ Drug of Choice

Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.

Example:

It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.

Strength: Useful in messy, real-world contexts.
Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
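If you squint, abduction is a scoring exercise. The sketch below, with numbers invented purely for illustration, shows how “best” quietly depends on the plausibilities you walked in with:

```python
# Inference to the best explanation as a crude score: prior plausibility
# multiplied by how well each hypothesis explains the observation.
# Every number here is an assumption, which is rather the point.

observation = "the lawn is wet"

explanations = {
    "it rained overnight":  {"prior": 0.600, "fit": 0.95},
    "the sprinkler ran":    {"prior": 0.300, "fit": 0.90},
    "divine intervention":  {"prior": 0.001, "fit": 1.00},
}

def best_explanation(candidates: dict) -> str:
    """Pick the hypothesis with the highest prior-times-fit score."""
    return max(candidates, key=lambda h: candidates[h]["prior"] * candidates[h]["fit"])

print(best_explanation(explanations))  # 'it rained overnight' -- plausible, not proven
```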

The Modern Reasoning Crisis: Why We’re All Probably Wrong

Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.

Consider how we navigate information today:

And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.

Weakness by Analogy: The Reasoning Café

Imagine a café.

All three are trying to reason. Only one might get lunch.

The Meta-Problem: Reasoning About Reasoning

The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.

This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.

Conclusion: Reason Is Not a Guarantee, It’s a Wager

None of these modalities offer omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s skepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.

Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.

The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.

Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility. Who recognize that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.

So reason on, fellow thinkers. Just don’t be too smug about it.

Welcome to the Casino of Justice

Welcome to the Grand Casino of Justice, where the chips are your civil liberties, the roulette wheel spins your fate, and the house—ever-smug in its powdered wig of procedural decorum—always wins.

Step right up, citizens! Marvel at the dazzling illusions of “science” as performed by your local constabulary: the sacred polygraph, that magnificent artefact of 1920s snake oil, still trotted out in back rooms like a séance at a nursing home. Never mind that it measures stress, not deception. Never mind that it’s been dismissed by any scientist with a functioning prefrontal cortex. It’s not there to detect truth—it’s there to extract confession. Like a slot machine that only pays out when you agree you’re guilty.

Audio: NotebookLM podcast on this topic.

And oh, the forensic pageantry! The blacklight! The dramatic swabs! The breathless invocations of “trace evidence,” “blood spatter patterns,” and—ooh! ahh!—fingerprints, those curly little whorls of manufactured certainty. You’ve been told since childhood that no two are alike, that your prints are your identity. Rubbish. Human fingerprint examiners disagree with themselves when presented with the same print twice. In blind tests. And yes—this bears repeating with appropriate incredulity—koalas have fingerprints so uncannily similar to ours they’ve confused human forensic analysts. Somewhere, a marsupial walks free while a teenager rots in remand.

You see, it’s not about justice. It’s about control. Control through performance. The legal system, like a casino, isn’t interested in fairness—it’s interested in outcome. It needs to appear impartial, all robes and solemnity, while tipping the odds ever so slightly, perpetually, in its own favour. This is jurisprudence as stagecraft, science as set-dressing, and truth as a collateral casualty.

And who are the croupiers of this great charade? Not scientists, no. Scientists are too cautious, too mired in uncertainty, too concerned with falsifiability and statistical error margins. No, your case will be handled by forensic technicians with just enough training to speak jargon, and just enough institutional loyalty to believe they’re doing the Lord’s work. Never mind that many forensic methods—bite mark analysis, tool mark “matching,” even some blood spatter interpretations—are about as scientifically robust as a horoscope printed on a cereal box.

TV crime dramas, of course, have done their bit to embalm these myths in the cultural subconscious. “CSI” isn’t a genre—it’s a sedative, reassuring the public that experts can see the truth in a hair follicle or the angle of a sneeze. In reality, most convictions hinge on shoddy analysis, flawed assumptions, and a little prosecutorial sleight of hand. But the juries are dazzled by the sciencey buzzwords, and the judges—God bless their robes—rarely know a confidence interval from a cornflake.

So, what do you do when accused in the great Casino of Justice? Well, if you’re lucky, you lawyer up. If you’re not, you take a plea deal, because 90% of cases never reach trial. Why? Because the system is designed not to resolve guilt, but to process bodies. It is a meat grinder that must keep grinding, and your innocence is but a small bone to be crushed underfoot.

This isn’t justice. It’s a theatre of probability management, where the goal is not truth but resolution. Efficiency. Throughput. The house keeps the lights on by feeding the machine, and forensic science—real or imagined—is merely the window dressing. The roulette wheel spins, the dice tumble, and your future hangs on the angle of a smudge or the misreading of a galvanic skin response.

Just don’t expect the koalas to testify. They’re wise enough to stay in the trees.

Technofeudalism: It’s a Wrap

By the time we reach Chapter Seven of Technofeudalism: What Killed Capitalism, Yanis Varoufakis drops the ledger sheets and spreadsheets and starts sketching utopia in crayon. Entitled Escape from Technofeudalism, it proposes—brace yourself—a workplace democracy. It’s aspirational, yes. Compelling? Not particularly. Especially if, like me, you’ve long since stopped believing that democracy is anything more than a feel-good placebo for structural impotence.

Audio: NotebookLM podcast discussing this topic.

To be clear: the preceding chapters, particularly the first six, are sharp, incisive, and frankly, blistering in their indictment of today’s economic disfiguration. But Chapter Seven? It’s less an escape plan, more a group therapy session masquerading as an operational model.

So let’s take his proposal for Democratised Companies apart, one charming layer at a time.

Splendid. One person, one vote. Adorable.

Because there’s nothing more efficient than a hiring committee composed of thirty engineers, two janitors, a receptionist, and Steve from Accounts, whose main contribution is passive-aggressive sighing.

Marvellous. We’ve now digitised the tyranny of the majority and can timestamp every idiotic decision for posterity.

A relief. Until it doesn’t.

Here, dear reader, is where the cake collapses. Why, precisely, should a randomly assembled group of employees—with wildly varying financial literacy—be entrusted to divide post-tax revenue like it’s a birthday cake at a toddler’s party?

And how often are these slices recalibrated? Each fiscal year? Every time someone is hired or fired? Do we amend votes quarterly or wait until the economic ship has already struck an iceberg?

Varoufakis does suggest preference voting to tackle allocation disputes:

Fine. In theory, algorithmic voting procedures sound neat. But they presume voters are rational, informed, and cooperative. If you’ve ever seen a corporate Slack thread devolve into emoji warfare, you’ll know that this is fiction on par with unicorns and meritocracy.
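For the curious, here is roughly what preference voting cashes out to in practice. A Borda-style count is one common scheme; the book doesn’t commit to a specific algorithm, so the mechanism and the ballots below are assumptions, not Varoufakis’s design:

```python
# A Borda count: each ballot ranks the options, and higher rankings
# earn more points. Options and ballots are invented for illustration.
from collections import defaultdict

def borda(ballots: list[list[str]]) -> dict[str, int]:
    """Score each option by its position on every ballot."""
    scores: dict[str, int] = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - position
    return dict(scores)

ballots = [
    ["raise base pay", "hire more staff", "new espresso machine"],
    ["new espresso machine", "raise base pay", "hire more staff"],
    ["raise base pay", "new espresso machine", "hire more staff"],
]
print(borda(ballots))  # 'raise base pay' wins, until the next all-hands meltdown
```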

Ah yes, the ‘equality’ bit. Equal pay, unequal contribution. This isn’t egalitarianism—it’s enforced mediocrity. It might work in a monastery. Less so in a competitive tech firm where innovation requires both vision and differentiated incentive.

Now, on to bonuses, which are democratically determined by:

Welcome to Black Mirror: Workplace Edition. This is less economics, more playground politics. Who gets tokens? The charismatic chatterbox in the break room? The person who shared their lunch? The ghost employee who never shows up but emails back promptly?

And how, pray tell, does one evaluate the receptionist’s contribution relative to the lead engineer’s or the janitor’s? This isn’t peer review—it’s populism with a smiley face.

We’ve all seen “Teacher of the Year” competitions turn into contests of who had the cutest class poster or best cupcakes. Now imagine your livelihood depending on it.

In summary, democracy in the workplace may sound noble, but in practice, it’s the bureaucratic equivalent of herding caffeinated cats. It doesn’t even work in small groups, let alone an organisation of hundreds. Democracy—when applied to every function of an enterprise—is not liberation; it’s dilution. It’s design-by-committee, strategy-by-consensus, and ultimately, excellence-by-accident.

Escape from Technofeudalism? Perhaps. But not by replacing corporate lords with intranet polls and digital tokens. That’s not an exit strategy—it’s a cosplay of collectivism.

Defying Death

I died in March 2023 — or so the rumour mill would have you believe.

Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.

Audio: NotebookLM podcast of this content.

No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.

And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)

There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.

To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.

As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.

What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.”
In a last act of rational agency, I asked an ER nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”

After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.

I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.

Here’s the reality:
There is no coming back from death.
You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.

Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.

Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.

No angels. No harps. No antechamber to the divine.
Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.

If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.

And thus, reader, I lived.
But only in the most vulgar, anticlimactic, and utterly mortal sense.

There will be no afterlife memoir. No second chance to settle the score. No sequel.
Just this: breath, blood, occasional barbed words — and then silence.

Deal with it.

Measure What Matters

I’ve gone entirely off the reservation (send help, or biscuits) and decided, in a fit of masochistic curiosity, to crack open Measure What Matters by John Doerr—a business management tome that’s been gathering dust on my shelf longer than most CEOs last in post.

Full disclosure before we all get the wrong idea: I find self-help books about as nourishing as a rice cake made of existential despair. Add “business” or “management” into the mix, and you’re cooking up something so vapid it could qualify as a greenhouse gas.

Audio: NotebookLM podcast of this content.

Measure What Matters reads less like a serious work of business philosophy and more like a self-important infomercial, peddling the sort of common sense you could overhear in a pub toilet after three pints. And, like any decent infomercial, it’s drenched in “inspirational” stories so grandiose you’d think Moses himself was consulting for Google.

Image: Midjourney’s rendering of a possible cover image. Despite the bell protruding from the crier’s head, I went with a ChatGPT Dall-E render instead.

I’m sure Doerr genuinely believes he’s handing down managerial tablets from Mount Sinai, and I’m equally sure he’s eating his own dog food with a knife and fork. But what gets served up here is a steaming dish of selection bias, smothered with a rich gravy of hand-waving nonsense.

What am I getting my knickers in a twist about? What’s this book actually about?

In short: three letters—OKR. That’s Objectives and Key Results, for those of you not fluent in MBA-speak. These mystical artefacts, these sacred runes, are supposed to propel your company from the gutter to the stars. Intel did it. Google did it. Ergo, you too can join the pantheon of tech demi-gods. (Provided, of course, you were already a billion-dollar operation before you started.)

Nobody’s going to argue that having goals is a bad idea. Nobody’s throwing the baby out with the Gantt chart. But goals are nebulous, wishy-washy things. “I want to travel” is a goal. “I will cycle and kayak my way to Edinburgh by the end of the year, preferably without dying in a ditch”—that’s an objective.

Businesses, being the lumbering beasts they are, naturally have goals. Goals for products, customers, market share, quarterly bonuses, and ritualistic victory dances in front of their crushed competitors. Nothing new there.

According to Doerr and the gospel of OKRs, however, the only thing standing between you and unassailable market dominance is the right set of buzzwords stapled to your quarterly reports. Apparently, Intel crushed Motorola not because of innovation, talent, or dumb luck—but because they set better OKRs. (History books, please update yourselves accordingly.)

Video: John Doerr’s 2018 TED Talk on this topic.

But wait, what’s an OKR again? Ah yes: we’ve done Objectives. Now for the Key Results bit. Basically, you slap some numbers on your wish list. If you’ve survived in business longer than a fruit fly, you’ve already met KPIs (Key Performance Indicators)—another Three Letter Acronym, because we live and die by alphabet soup. Key Results are KPIs wearing slightly trendier trainers.

Example: “We will be number one by the third quarter by prospecting a dozen companies and closing three deals by September.” Marvellous. Life-changing. Nobel-worthy. Now go forth and conquer.
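To show how little machinery is actually involved, here is a minimal sketch of an OKR as a data structure. It is not taken from the book, and the figures are invented:

```python
# An OKR stripped of evangelism: an objective plus a few measurable
# key results, each tracked against a target. Numbers are illustrative.
from dataclasses import dataclass, field

@dataclass
class KeyResult:
    description: str
    target: float
    current: float = 0.0

    @property
    def progress(self) -> float:
        return min(self.current / self.target, 1.0) if self.target else 0.0

@dataclass
class Objective:
    statement: str
    key_results: list[KeyResult] = field(default_factory=list)

    @property
    def progress(self) -> float:
        krs = self.key_results
        return sum(kr.progress for kr in krs) / len(krs) if krs else 0.0

q3 = Objective(
    "Be number one in our segment by the third quarter",
    [KeyResult("Prospect companies", target=12, current=9),
     KeyResult("Close deals", target=3, current=1)],
)
print(f"{q3.progress:.0%}")  # 54% -- the spreadsheet is thrilled; the market is indifferent
```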

Right. Now that I’ve saved you twenty quid and several hours of your life, let’s talk about why this book is still an exercise in masturbatory futility.

First, and most fatally, it’s predicated on selection bias so profound it should come with a health warning. Allow me to paint you a picture. Imagine we’re advising a football league. Every team sets OKRs: target weights, goal tallies, tackles, penalty avoidance—the works. Everyone’s focused. Everyone’s motivated. Everyone’s measuring What Matters™.

Come the end of the season, who wins? One team. Did they win because their OKRs were shinier? Because they ‘wanted it more’? Or, just maybe, did they win because competition is brutal, random, and often unfair?

This is the problem with false meritocracies and the illusion of control. It’s like thanking God for your touchdown while assuming the other team were all godless heathens who deserved to lose. It’s the same nonsense, in a suit and tie.

Will our winning team win next year? Doubtful. Did Intel lose ground later because they forgot how to spell OKR? No. Because the world changes, markets collapse, and sometimes you’re just standing on the wrong bit of deck when the ship goes down.

Then there’s the love affair with plans. In theory, lovely. In practice, arbitrary. You can set as many Objectives as you like, but what counts as a “win”? Is it profit? Market share? Not dying of ennui?

The free market worshippers among us love to preach that governments can’t plan effectively, unlike the rugged gladiators of capitalism. Funny how businesses, in their infinite wisdom, are then urged to behave like microcosmic Soviet Five-Year Planners, drowning in metrics and objectives. Topically, we are living through the charming consequences of governments trying to run themselves like corporations—newsflash: it’s not going splendidly.

In short: companies are not nations, and OKRs are not magic bullets.

What else is wrong with this book?
Well, to start: it’s shallow. It’s smug. It peddles survivorship bias with the zeal of a televangelist. It confuses correlation with causation like an over-eager undergraduate. And most damning of all, it sells you the fantasy that success is just a matter of writing smarter lists, as if strategy, luck, market forces, and human frailty were irrelevant footnotes.

Measure What Matters doesn’t measure anything except the reader’s patience—and mine ran out somewhere around chapter five.

Where There’s a Will, There’s a Way

I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like had he filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had been put through a legal review before publication and sapped of its vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*

Audio: NotebookLM podcast conversation on this topic.

In any case, it got me thinking of free will and, more to the point, of will itself.

A Brief History of Humanity’s Favourite Metaphysical Scapegoat

By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.

So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?

Plato: Soul, Reason, and That Poor Horse

In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).

No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.

Aristotle: Purpose Without Pathos

Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.

Augustine: Sin, Suffering, and That Eternal No

Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.

From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.

Cheers.

Medievals: God’s Will or Yours, Pick One

The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.

From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.

Hobbes: Appetite with Delusions of Grandeur

Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.

Kant: The Will Gets a Makeover

Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the good will, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.

It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.

Schopenhauer: Cosmic Will, Cosmic Joke

And then the mood turns. Schopenhauer, world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason—it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.

There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.

Nietzsche: Transvaluation and the Will to Shout Loudest

Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.

Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.

Today: Free Will and the Neuroscientific Hangover

And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.

We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?

Conclusion: Where There’s a Will, There’s a Mess

From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.

Despite centuries of philosophical handwringing, what it has never been is settled.

So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.

Welcome to the labyrinth. Bring snacks.

* The solitary, poor, nasty, brutish, and short quote is forthcoming. Filthy animals is a nod to Home Alone.

Are We In a Post-Rationality Era?

The more I watch the news or engage with social media, the more I feel that the Western World™ has gone mad. It’s also disconcerting because, as a subjectivist, I should find none of this surprising. I guess I miss the days when the madness was better wrapped in relativism. Perhaps it’s just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can’t bear the load. I prompted seven generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.

Audio: NotebookLM podcast of this content.

ChatGPT

Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.

We now live in a world where feelings trump evidence, where expertise is met with a sneer and a “Well, I heard on TikTok that…” Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing—just trust the meme with the most retweets.

But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up—it’s only going to get weirder.

DeepSeek

The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.

Several factors contribute to this perception:

  1. Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
  2. Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
  3. Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
  4. Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
  5. Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.

However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”

If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.

Meta AI

You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.

Factors contributing to this trend

  1. Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
  2. Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
  3. Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
  4. Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.

Concerns and implications

  1. Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
  2. Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
  3. Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.

What can be done?

  1. Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
  2. Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
  3. Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.

You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.

Claude

I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:

  • The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
  • Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
  • A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
  • “Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
  • Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
  • Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.

At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.

As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.

These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.

Perplexity

There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].

Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].

However, it’s important to note that:

  1. Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
  2. Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
  3. Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].

Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.

Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].

Citations:
[1] https://metarationality.com/post-rationalist-nihilism
[2] https://www.lesswrong.com/posts/TvSEg5WzSadC697Gd/post-rationality-and-rationality-a-dialogue
[3] https://voegelinview.com/living-in-the-post-rational-age-part-ii/
[4] https://www.reddit.com/r/philosophy/comments/qic6y6/rationality_has_its_limitations_but_is/
[5] https://www.lesswrong.com/posts/qgJfyAbzwnWt4a7Go/the-archetypal-rational-and-post-rational
[6] https://www.newyorker.com/magazine/2021/08/23/why-is-it-so-hard-to-be-rational
[7] https://substack.com/home/post/p-62529885
[8] https://philosophy.stackexchange.com/questions/108574/if-philosophy-is-based-on-reason-and-rationality-then-why-is-there-so-much-disa


Pi

You’re certainly not alone in your concern. There’s been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.

It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.

Why do you think we might be in a post-rationality era?

¬◇ Not Diamond

The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:

Loss of Rational Certainty

The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein’s general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].

Post-Rationalism

Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].

Post-Rationalist Nihilism

Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].

Postmodern and Post-Christian Era

There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].

Practical Implications

While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].

In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.

NB: Interestingly, these ¬◇ footnotes don’t link to anywhere.

Surveying Modernity

A Brief, Brutal Experiment in Categorising Your Worldview

This month, I’ve been tinkering with a little project—an elegant, six-question survey designed to assess where you land in the great intellectual mess that is modernity.

Audio: Podcast discussion about this post.

This isn’t some spur-of-the-moment quiz cooked up in a caffeine-fueled haze. No, this project has been simmering for years, and after much consideration (and occasional disdain), I’ve crafted a set of questions and response options that, I believe, encapsulate the prevailing worldviews of our time.

It all began with Metamodernism, a term that, at first, seemed promising—a bold synthesis of Modernism and Postmodernism, a grand dialectic of the ages. But as I mapped it out, it collapsed under scrutiny. A footnote in the margins of intellectual history, at best. I’ll expand on that in due course.

The Setup: A Simple, Slightly Sadistic Ternary Plot

For the visually inclined (or the masochistically curious), I initially imagined a timeline, then a branching decision tree, then a Cartesian plane before landing on a ternary plot—a three-way visual that captures ideological leanings in a way a boring old bar chart never could.

The survey itself is brief: six questions, each with five possible answers. Submit your responses, and voilà—you get a tidy little ternary chart plotting your intellectual essence, along with a breakdown of what your answers signify.

Methodology: Half-Rigorous, Half-Reckless

I am, after all, a (recovering) statistician, so I’ve tried to uphold proper methodology while also fast-tracking certain safeguards for the sake of efficiency. If there’s enough interest, I may expand the survey, adding more questions or increasing response flexibility (tick boxes instead of radio buttons—revolutionary, I know).

Privacy Concerns? Relax. I’m not harvesting your data for some nefarious scheme. No personally identifiable information is collected—just a timestamp, session ID, and your browser’s language setting. I did consider tracking IP addresses to analyze regional trends but ultimately scrapped that idea.

In the future, I may add an optional email feature for those who wish to save and track their responses over time (assuming anyone is unhinged enough to take this more than once).

The Rest of the Story: Your Feedback, My Amusement

Since this is a personal project crafted in splendid isolation, I’d love to hear your thoughts. Are the questions reasonable? Do the response options make sense? Does the summary feel accurate? Is the ternary chart decipherable, or have I constructed a glorified inkblot test?

As an academic, economist, and statistician, I had never encountered a ternary chart before embarking on this, and now I rather enjoy it. That said, I also find Nietzsche “intuitive,” so take that as you will.

If this gains traction, expect follow-up content—perhaps videos, podcasts, or further written explorations.

Your Move

Take the survey. It’s painless, requiring mere minutes of your life (which is, let’s be honest, already wasted online). And because I’m feeling generous, you can even generate a PDF to stick on your fridge, next to your collection of expired coupons and disappointing takeout menus.

Click here to take the survey.

Let’s see where you stand in the grand, chaotic landscape of modernity. Or at least, let’s have a laugh trying to make sense of it.

DISCLAIMER: The Modernity Worldview Survey is not scientific. It is designed as an experiment to provide directional insights. It is hosted on Google Cloud and subject to its availability and performance limitations.

Outrage! Chapter Six

Kurt Gray’s Outraged! attempts to boil morality down to a single principle: harm. This, in his view, is the bedrock of all moral considerations. In doing so, he takes a swing at Jonathan Haidt’s Moral Foundations Theory, trying to reduce its multi-faceted framework to a mere footnote in moral psychology. Amusingly, he even highlights how Haidt quietly modified his own theory after Gray and his colleagues published an earlier work—an intellectual game of cat-and-mouse, if ever there was one.

Audio: Podcast of this topic

Chapter 6: The Intuition Overdose

By the time we reach Chapter 6, Gray is charging full steam into reductio ad absurdum territory. He leans so hard on intuition that I lost count of how many times he invokes it. The problem? He gives it too much weight while conveniently ignoring acculturation.

Yes, intuition plays a role, but it doesn’t exist in a vacuum. Enter Kahneman’s dual-system model: Gray eagerly adopts the System 1 vs. System 2 distinction, forcing his test subjects into snap moral judgments under time pressure to bypass rationalisation. Fair enough. But what he neglects is how even complex tasks can migrate from System 2 (slow, deliberate) to System 1 (fast, automatic) through repeated exposure. Kahneman’s example? Basic arithmetic. A child grappling with 1 + 1 relies on System 2, but an adult answers without effort.

And morality? The same mechanism applies. What starts as deliberation morphs into automatic response through cultural conditioning. But instead of acknowledging this, Gray behaves as if moral intuition is some mystical, spontaneous phenomenon untethered from socialization.

Morality: Subjective, Yes—But Culturally Engineered

Let’s lay cards on the table. I’m a moral subjectivist—actually, a moral non-cognitivist, but for simplicity’s sake, let’s not frighten the children. My stance is that morality, at its core, is subjective. However, no one develops their moral compass in isolation. Culture, upbringing, and societal narratives shape our moral instincts, even if those instincts ultimately reduce to personal sentiment.

Gray does concede that the definition of “harm” is subjective, which allows him to argue that practically any belief or action can be framed as harmful. And sure, if you redefine “harm” broadly enough, you can claim that someone’s mere existence constitutes an existential threat. Religious believers, for example, claim to be “harmed” by the idea that someone else’s non-compliance with their theological fairy tale could lead to eternal damnation.

I don’t disagree with his observation. The problem is that the underlying belief is fundamentally pathological. This doesn’t necessarily refute Gray’s argument—after all, people do experience psychological distress over imaginary scenarios—but it does mean we’re dealing with a shaky foundation. If harm is entirely perception-based, then moral arguments become arbitrary power plays, subject to the whims of whoever is best at manufacturing grievance.

And this brings us to another crucial flaw in Gray’s framework: the way it enables ideological self-perpetuation. If morality is reduced to perceived harm, then groups with wildly different definitions of harm will inevitably weaponize their beliefs. Take the religious fundamentalist who believes gay marriage is a sin that dooms others to eternal suffering. From their perspective, fighting against LGBTQ+ rights isn’t just bigotry—it’s moral duty, a battle to save souls from metaphysical harm. This, of course, leads to moral contagion, where adherents tirelessly indoctrinate others, especially their own children, ensuring the pathology replicates itself like a virus.

The Problem with Mono-Causal Explanations

More broadly, Gray’s attempt to reduce morality to a single principle—harm—feels suspiciously tidy. Morality is messy, contradictory, and riddled with historical baggage. Any theory that purports to explain it all in one neat little package should immediately raise eyebrows.

So, sorry, Kurt. You can do better. Moral psychology is a tangled beast, and trying to hack through it with a single conceptual machete does more harm than good.

Death by Tiger, Death by Document: Reflections on Nexus Chapter 3

As I continue to react to Harari’s Nexus, I can’t help but feel like a curmudgeon. Our worldviews diverge so starkly that my critique begins to feel like a petty grudge—as though I am inconsolable. Be that as it may, I’ll persist. Please excuse any revelatory ad hominems that may ensue.

Audio: Podcast of the page contents

Harari is an unabashed Zionist and unapologetic nationalist. Unfortunately, his stories, centred on Israel and India, don’t resonate with me. This is fine—I’m sure many people outside the US are equally weary of hearing everything framed from an American perspective. Still, these narratives do little for me.

Patriotism and property are clearly important to Harari. As a Modernist, he subscribes to all the trappings of Modernist thought that I rail against. He appears aligned with the World Economic Forum, portraying it as a noble and beneficial bureaucracy, while viewing AI as an existential threat to its control. Harari’s worldview suggests there are objectively good and bad systems, and someone must oversee them. Naturally, he presents himself as possessing the discernment to judge which systems are beneficial or detrimental.

In this chapter, Harari recounts the cholera outbreak in London, crediting it with fostering a positive bureaucracy to ensure clean water sources. However, he conflates the tireless efforts of a single physician with the broader bureaucratic structure. He uses this example, alongside Modi’s Clean India initiative, to champion bureaucracy, even as he shares a personal anecdote highlighting its flaws. His rhetorical strategy seems aimed at cherry-picking positive aspects of bureaucracy, establishing a strawman to diminish its negatives, and then linking these with artificial intelligence. As an institutionalist, Harari even goes so far as to defend the “deep state.”

Earlier, Harari explained how communication evolved from Human → Human to Human → Stories. Now, he introduces Human → Document systems, connecting these to authority, the growing power of administrators, and the necessity of archives. He argues that our old stories have not adapted to address the complexities of the modern world. Here, he sets up religion as another bogeyman. As a fellow atheist, I don’t entirely disagree with him, but it’s clear he’s using religion as a metaphor to draw parallels with AI and intractable doctrines.

Harari juxtaposes “death by tiger” with “death by document,” suggesting the latter—the impersonal demise caused by bureaucracy—is harder to grapple with. This predates Luigi Mangione’s infamous response to UnitedHealthcare’s CEO Brian Thompson, highlighting the devastating impact of administrative systems. Harari also briefly references obligate siblicide and sibling rivalry, which seem to segue into evolution and concepts of purity versus impurity.

Echoing Jonathan Haidt, Harari explores the dynamics of curiosity and disgust while reinforcing an “us versus them” narrative. He touches on the enduring challenges of India’s caste system, presenting yet another layer of complexity. Harari’s inclination towards elitism shines through, though he occasionally acknowledges the helplessness people face when confronting bureaucracy. He seems particularly perturbed by revolts in which the public destroys documents and debts—revealing what feels like a document fetish and an obsession with traceability.

While he lauds AI’s ability to locate documents and weave stories by connecting disparate content, Harari concludes the chapter with a segue into the next: a discussion of errors and holy books. Once again, he appears poised to draw parallels that serve to undermine AI. Despite my critiques, I’m ready to dive into the next chapter.