Time is not fundamental. It is not lurking behind the curtains of reality, counting down the universe like some cosmic metronome. Time is a human construct, a clumsy accounting trick invented so that early farmers could remember when to plant seeds, priests could know when to fleece the flock, and later, managers could know when to dock your wages.
Video: Sabine Hossenfelder discusses the proposed origins of time
Yes, our ancestors tracked cycles: the swing of day and night, the waxing of the moon, the slouch of the seasons. But this is proto-time at best. Call it rhythm, call it recurrence, call it the universe refusing to sit still. It was not time. It was change, and we anthropomorphised it.
Then along came Newton with his stopwatch. He baptised “absolute time,” a divine river that flowed “equably without relation to anything external.” In other words, he built the cosmic grandfather clock and declared it law. This fantasy held just long enough for Einstein to make mischief, weaving time into space with duct tape and calling it spacetime. Romantic, yes, but hardly fundamental. Time, in Einstein’s cosmos, bends and dilates depending on who’s doing the bending. Not exactly the firm bedrock you’d expect of a “dimension.”
Meanwhile, in the quantum world, time is the awkward dinner guest: not an observable, not dynamic, just an external parameter scribbled into the equations because no one knew where else to put it. Like a bad houseplant, it sits in the corner – unmeasurable, unaccountable, but impossible to get rid of.
And yet, not everyone has given up the ghost. One camp – think Julian Barbour and Carlo Rovelli – insists time is an illusion, something emergent from relations, clocks, and counters. Others, like Lee Smolin, flip the script and claim that time is more fundamental than space itself, the real stage upon which the cosmic drama plays out. Philosophical infighting aside, what this tells you is that physics doesn’t actually know what time is. Physicists are as clueless as Aristotle, who called it “the number of motion”, a definition so circular it should’ve been printed on a sundial.
Enter Constructor Theory (Deutsch & Marletto), which simply does away with time entirely in the fundamental laws. No ticking clocks, no background river. Just possible and impossible transformations, with time emerging only when we strap timers onto systems and start counting. Which, of course, makes perfect sense: time is what we measure with clocks – and clocks are just things that change.
The dirty secret is this: every culture, every cosmology, every physics textbook has tried to smuggle “time” in as if it were self-evident, while quietly redefining it to suit the mood. We can’t agree on whether it’s an illusion, a dimension, or the last shred of fundamentality left to cling to. And if that isn’t the mark of a human construct, I don’t know what is.
The Ship of Theseus is philosophy’s favourite parlour trick: swap out the planks of a ship one by one, and ask in your best furrowed-brow voice whether it’s still the same ship. Then, for added spice, reassemble the discarded parts elsewhere and demand to know which version is the “real” one. Cue the existential hand-wringing and smug undergrad smirks. Oh, how clever.
Audio: NotebookLM podcast on this topic.
But here’s the thing: there’s no paradox. Not really. Not unless you buy into the fantasy that identity is some immutable essence, handed down from the gods like a divine barcode. The whole thought experiment hinges on the absurd presumption that something has a fixed, singular identity across time and context, a quaint metaphysical hobby horse that falls apart the moment you look at it sideways.
Let’s be clear: in the realm of language and proto-psychology – the crude, squishy scaffolding of thought that predates syntax and survives long after it – there is no such thing as a fixed “same.” That’s a linguistic illusion, a parlour trick of grammar and nominal categories. Language wasn’t built to hold truth; it was built to herd humans into consensus long enough to survive the winter.
In practice, we use “same” the way we use duct tape: liberally, and with complete disregard for philosophical coherence. The “same” ship? The “same” person? The “same” idea? Please. Ask your hippocampus. Identity is not a container; it’s a hallucinated continuity trick, maintained by memory, narrative, and sheer bloody-minded stubbornness.
The real kicker? Our precious linguistic tools aren’t built to reflect reality. They’re built to reduce it. To chop up the infinite mess of experience into palatable little mouthfuls of meaning. So when we come to the Ship of Theseus with our dull-edged conceptual knives, what we’re really doing is asking a bad question with inadequate tools. It’s like trying to measure wind speed with a sundial.
The paradox isn’t in the ship. It’s in the language.
And no, you don’t need to patch it. You need to sink it.
Let’s start with a thought experiment, because all good existential crises do.
Imagine a ship – Theseus’s ship, to be precise. After a storied career of heroic sea-faring, it’s put on display in a glorious Athenian dockyard. But as time passes, the planks rot. So, bit by bit, they’re replaced. A new mast here, a fresh hull panel there. Eventually, every single part has been swapped out.
Here’s the philosophical conundrum: Is it still the same ship?
Audio: NotebookLM podcast on this topic.
And if you think you’ve got that sorted, hold on. Imagine all the original pieces were saved, and someone reassembled them in a warehouse across town. Now there are two ships. One with the name, the continuity, the dockside real estate. The other with the original lumber and sails.
Which one is the real Ship of Theseus?
The paradox gnaws at our sense of identity. Is continuity enough? Does memory trump material? When everything is replaced – structure, function, even personnel – what makes a thing still that thing?
Now apply that question not to a ship, but to a rock band. A corporation. A country. Yourself.
That’s where things get fun. And slightly horrifying.
I was recently served a video on Facebook, algorithmic ambrosia for the nostalgic mind, showing the band Foreigner performing one of their chart-groping hits from the ‘70s. Polished, crowd-pleasing, competent. And utterly fake.
Not one founding member in sight.
They weren’t bad, mind you. Just… someone else. A Foreigner cover band trading under the original name, like a haunted jukebox stuffed with licensing contracts.
This, friends, is the Ship of Theseus with a tour schedule.
And it’s not just bands. IBM, once the king of typewriters and tabulating machines, now sells cloud services and AI consultancy. Walgreens, which began as a soda fountain and friendly neighbourhood chemist, now sells LED dog collars and pregnancy tests under buzzing fluorescent lights.
These aren’t companies. They’re brands in drag, corporate necromancers chanting the old names to animate new bodies.
But why stop there?
America isn’t America. Not the one of powdered wigs and musketed revolutionaries. No Founding Fathers™ roam the marble halls, only interns, lobbyists, and PR-tested careerists impersonating ideals they no longer understand. Britain? Please. The Queen is dead, and so is the Empire. France has revolted so many times that they’ve essentially speed-run regime change into a lifestyle brand.
And let’s not get too smug. You aren’t even you anymore, not really. Cells replace themselves, beliefs crumble and reform, memories rot and rewrite. You’re a psychological Foreigner tribute band, just with more trauma and less pyrotechnics.
So here’s the rub: everything persists by pretending. That’s the deal. Names survive, structures remain, but the guts are swapped out, piece by piece, until we’re clapping along to something we no longer recognise, wearing merch from a band that no longer exists.
And we call it continuity.
NB: After a dozen Midjourney prompts, I decided to stop and use this one. Ships of Theseus are as rare as centaurs.
On Schopenhauer, Sympathy, and the Unasked-for Gift of Life
“If children were brought into the world by an act of pure reason alone, would the human race continue to exist? Would not a man rather have so much sympathy with the coming generation as to spare it the burden of existence, or at any rate not take it upon himself to impose that burden upon it in cold blood?” ― Arthur Schopenhauer, Studies in Pessimism: The Essays
Arthur Schopenhauer, that delightful black cloud over the 19th century, once asked a question too few parents – or politicians, or pro-natalists – dare entertain: If children were brought into the world by an act of pure reason alone, would the human race continue to exist?
The answer, of course, is no. Not if reason were allowed to have the steering wheel. Not if we truly grasped what existence entails: a lifelong hostage situation punctuated by moments of accidental pleasure, existential debt, and the faint scent of consumer-grade shampoo. To knowingly impose that on another, without their consent, no less, is, Schopenhauer suggests, cold-blooded. Not tragic. Not poetic. Just… callous.
And yet, we romanticise it. We shower it in clichés and bibs. We call it “a miracle” or “the greatest gift.” Gift? With what return policy? And no, a suicide hotline is not customer service.
This isn’t a call to extinction. (Though frankly, who’d notice?) It’s a call to lucidity. To question the reflex to replicate. To wonder, in quiet moments, whether “family planning” is really planning, or just the failure to look past our own narrative instincts.
If sympathy were allowed to speak louder than instinct, Schopenhauer asks, might we choose to spare the next generation from the exquisite burden of becoming? It’s not misanthropy. It’s mercy.
A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.
You Cannot Originate Yourself
The causa sui argument, and the final collapse of moral responsibility
“If you cannot cause yourself, you cannot cause your choices. And if you cannot cause your choices, you cannot own them.”
Audio: NotebookLM podcast on this topic.
Everything until now has pointed to erosion:
Your choices are state-dependent.
Your identity is cumulative, not authored.
Your evaluations are judged by compromised observers.
But here, finally, we strike at the bedrock.
It isn’t merely that you are manipulated. It isn’t merely that you are misperceived. It’s that you never could have been free, even in theory.
Because you did not make yourself.
The Causa Sui Problem
To be ultimately morally responsible, you must be the origin of who you are.
You must have chosen your disposition.
You must have selected your values.
You must have designed your will.
But you didn’t.
You emerged:
With a particular genetic cocktail.
Into a particular historical moment.
Through particular developmental experiences.
With particular neurological quirks and vulnerabilities.
And at no point did you step outside yourself to say:
“I would like to be this kind of agent, with this kind of character.”
You were thrown — as Heidegger might say — into a situation not of your choosing, with equipment you didn’t request, subject to pressures you couldn’t anticipate.
And everything you think of as “yours” — your courage, your laziness, your generosity, your rage — is the unfolding of that original unchosen situation.
No Escape via Reflexivity
Some will protest:
“But I can reflect! I can change myself!”
But this, too, is a mirage.
Because:
The desire to reflect is conditioned.
The capacity to reflect is conditioned.
The courage to act on reflection is conditioned.
You didn’t author your ability to self-correct. You simply inherited it — like a river inheriting a particular gradient.
Even your rebellion is written in your blueprint.
Freedom by Degrees Is Not Freedom
The compatibilist fallback — that freedom is just “acting according to oneself” — collapses under causa sui.
Because the self that acts was never authored. It was configured by prior causes.
If you cannot be the cause of yourself, then you cannot be the cause of your actions in any ultimate sense.
Thus:
No ultimate credit for your virtues.
No ultimate blame for your vices.
Only causal flow, chemical procession, narrative stitching after the fact.
The criminal and the saint are both unlucky configurations of biology and circumstance.
TL;DR: No Self, No Sovereignty
To be responsible, you must be causa sui — the cause of yourself.
You are not.
Therefore, you are not ultimately responsible for your actions.
Therefore, free will — as traditionally imagined — does not exist.
There is choice. But there is no chooser behind the choice. Only the momentum of prior conditions, impersonating agency.
In The Death Lottery, Johnny Thompson of PhilosophyMinis poses this question:
In 1975, the philosopher John Harris gave us one of the most interesting and challenging thought experiments in moral philosophy. It has inspired lots of science fiction since, and it’s a great intuition pump to test how you feel about the value of human life. It goes like this: imagine that, at the hospital down the road, three people are dying from organ failure, and there are no organs to donate. So everybody is given a lottery ticket, and if your ticket is chosen, you are killed, your organs are harvested and given to the dying, and your one life will save three. As Harris puts it, no doubt a suitable euphemism for “killed” could be employed; perhaps we would begin to talk about citizens being “called upon to give life to others”. Harris is keen to add that everybody in this scenario is as innocent as each other: none of the patients did anything in their lives to merit their organ failure. So what is wrong with this system, or this world? If we say that we value human life, then surely saving three lives is three times better than saving just one. It might be said that death shouldn’t be determined by the luck of a draw, but surely this is what happens anyway: one person gets cancer, another does not; one person is in a car crash, another is not. Luck is the biggest single killer of humanity. So what do you think is wrong with Harris’s thought experiment, and is one life ever more valuable than three?
Video: YouTube inspiration for this post.
This fits rather nicely into a recent theme I’ve been dissecting — The Dubious Art of Reasoning: Why Thinking Is Harder Than It Looks — particularly regarding the limitations of deductive logic built upon premises that are, shall we say, a tad suspect. So what’s actually happening in Harris’s tidy moral meat grinder?
Audio: NotebookLM podcast on this topic.
Let us begin at the root, the hallowed dogma no one dares blaspheme: the belief that life has value. Not just any value, mind you, but a sacred, irrefutable, axiomatic kind of value — the sort of thing whispered in holy tones and enshrined in constitutions, as though handed down by divine courier.
But let’s not genuflect just yet. “Value” is not some transcendent essence; it’s an economic artefact. Value, properly speaking, is something tested in a marketplace. So, is there a market for human life?
Historically, yes — but one doubts Harris is invoking the Atlantic slave trade or Victorian child labour auctions. No, what he’s tapping into is a peculiarly modern, unexamined metaphysical presumption: that human beings possess inherent worth because, well, they simply must. We’ve sentimentalised supply and demand.
Now, this notion of worth — where does it come from? Let us not mince words: it’s theological. It is the residue of religious metaphysics, the spiritual afterbirth of the soul. We’re told that all souls are precious. All life is sacred. Cue the soft lighting and trembling organ chords. But if you strip away the divine scaffolding — and I suggest we do — then this “value” collapses like a soufflé in a thunderstorm. Without God, there is no soul; without soul, there is no sacredness. Without sacredness? Just meat. Glorified offal.
So what are we left with?
Null values. A society of blank spreadsheets, human lives as rows with no data in the ‘Value’ column. A radical equality of the meaningless.
Now let’s take a darker turn — because why not, since we’re already plumbing the ethical abyss. The anti-natalists, those morose prophets of philosophical pessimism, tell us not only that life lacks positive value, but that it is intrinsically a burden. A cosmic mistake. A raw deal. The moment one is born, the suffering clock starts ticking.
Flip the moral equation in The Death Lottery, and what you get is this: saving three lives is not a moral victory — it’s a net increase in sentient suffering. If you kill one to save three, you’ve multiplied misery. Congratulations. You’ve created more anguish with surgical efficiency. And yet we call this a triumph of compassion?
According to this formulation, the ethical choice is not to preserve the many at the cost of the few. It is to accelerate the great forgetting. Reduce the volume of suffering, not its distribution.
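The arithmetic of that flip is almost embarrassingly simple. A toy sketch (the unit "value per life" and the `outcome_value` helper are invented for illustration, not anyone's published utility function): flip the sign on the assumed value of a life, and the "right" answer flips with it.

```python
# Toy ledger for Harris's lottery: kill 1 to save 3, or let 3 die.
# Flip the sign on the assumed value of a life and the verdict
# flips with it -- the dilemma lives entirely in the premise.

def outcome_value(lives_remaining, value_per_life):
    return lives_remaining * value_per_life

for value_per_life in (+1, -1):          # sanctity vs anti-natalism
    harvest = outcome_value(3, value_per_life)   # 1 killed, 3 saved
    abstain = outcome_value(1, value_per_life)   # 3 die, 1 lives
    verdict = "harvest" if harvest > abstain else "abstain"
    print(value_per_life, verdict)   # 1 harvest, then -1 abstain
```

The maths never changes; only the axiom does.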
But here’s the deeper problem — and it’s a trick of philosophical stagecraft: this entire thought experiment only becomes a “dilemma” if you first accept the premises. That life has value. That death is bad. That ethics is a numbers game. That morality can be conducted like a cost-benefit spreadsheet in a celestial boardroom.
Yet why do we accept these assumptions? Tradition? Indoctrination? Because they sound nice on a Hallmark card? These axioms go unexamined not because they are true, but because they are emotionally convenient. They cradle us in the illusion that we are important, that our lives are imbued with cosmic significance, that our deaths are tragedies rather than banal statistical certainties.
But the truth — the unvarnished, unmarketable truth — is that The Death Lottery is not a test of morality, but a test of credulity. A rigged game. An illusion dressed in the solemn robes of logic.
And like all illusions, it vanishes the moment you stop believing in it.
Postscript: Notes from the Abyss
David Benatar, in Better Never to Have Been, argues with pitiless logic that coming into existence is always a harm — that birth is a curse disguised as celebration. He offers no anaesthetic. Existence is pain; non-existence, the balm.
Peter Wessel Zapffe, the Norwegian prophet of philosophical despair, likened consciousness to a tragic evolutionary overreach — a cosmic misfire that left humanity acutely aware of its own absurdity, scrambling to muffle it with distraction, denial, and delusion. For him, the solution was elegant in its simplicity: do not reproduce. Shut the trapdoor before more souls tumble in.
And then there is Cioran, who did not so much argue as exhale. “It is not worth the bother of killing yourself, since you always kill yourself too late.” He understood what the rest of us politely ignore — that life is a fever dream from which only death delivers.
So if the question is whether one life is worth more than three, we must first ask whether any of them were worth having in the first place.
The answer, for the brave few staring into the black, may be a shrug — or silence.
The Illusion of Clarity in a World of Cognitive Fog
Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.
Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.
Deduction: The Tyrant of Certainty
Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.
Example:
All humans are mortal.
Socrates is human.
Therefore, Socrates is mortal.
Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.
Strength: Valid conclusions from valid premises. Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically sound.
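The point can be sketched in a few lines, with an invented `syllogism` helper: the machinery checks form, never fact, so a false premise sails straight through to a perfectly "valid" conclusion.

```python
# Minimal sketch: deduction as mechanical rule application.
# The engine checks validity, never truth -- feed it a false premise
# ("all humans are immortal") and it happily derives nonsense.

def syllogism(major, minor):
    """Barbara syllogism: (All M are P, S is M) therefore (S is P)."""
    m1, p = major        # "All m1 are p"
    s, m2 = minor        # "s is m2"
    if m1 == m2:
        return (s, p)    # formally valid conclusion, true or not
    raise ValueError("premises do not share a middle term")

# Sound premises, sound conclusion:
print(syllogism(("human", "mortal"), ("Socrates", "human")))    # ('Socrates', 'mortal')
# Garbage in, garbage out -- still formally valid:
print(syllogism(("human", "immortal"), ("Socrates", "human")))  # ('Socrates', 'immortal')
```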
Induction: The Gambler’s Gospel
Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.
Example:
The sun rose today.
It rose yesterday.
It has risen every day I’ve been alive.
Therefore, the sun will rise tomorrow.
Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.
Strength: Empirically rich and adaptive. Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.
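The black-swan failure mode fits in a few lines (the swan counts are invented for illustration): ten thousand confirming observations, and one counterexample detonates the lot.

```python
# Induction's liability: the generalisation holds right up until it doesn't.

observations = ["white"] * 10_000 + ["black"]   # then Australia happens

def inductive_claim(sample):
    """Generalise from particulars: 'all swans are <colour>', or give up."""
    colours = set(sample)
    return colours.pop() if len(colours) == 1 else None

# The first 10,000 sightings support the generalisation...
print(inductive_claim(observations[:10_000]))   # white
# ...and a single exception destroys it.
print(inductive_claim(observations))            # None
```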
Abduction: Sherlock Holmes’ Drug of Choice
Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.
Example:
The lawn is wet.
It probably rained.
It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.
Strength: Useful in messy, real-world contexts. Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
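The weakness can be made painfully literal: "inference to the best explanation" reduces, in practice, to "whichever candidate I happened to assign the highest prior". The numbers below are invented, which is exactly the point.

```python
# Abduction as a lookup in one's own prejudices: the "best" explanation
# is just the argmax over plausibility scores that I made up.

priors = {
    "rain": 0.60,
    "sprinkler": 0.30,
    "hose": 0.09,
    "divine intervention": 0.01,
}

def best_explanation(candidates):
    return max(candidates, key=candidates.get)

print(best_explanation(priors))  # rain
```

Change the priors and the "best" explanation changes with them; the lawn stays just as wet.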
The Modern Reasoning Crisis: Why We’re All Probably Wrong
Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.
Consider how we navigate information today:
We encounter a headline that confirms our worldview
We accept it without scrutiny (deductive failure)
We see similar headlines repeatedly (inductive trap)
We conclude our worldview is objectively correct (abductive collapse)
And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.
Weakness by Analogy: The Reasoning Café
Imagine a café.
Deduction is the customer who checks the menu and confidently orders “Soup of the Day,” because the chalkboard says “Today’s Soup is Tomato,” and she trusts chalkboards.
Induction is the one who has had tomato soup every Wednesday for months and assumes it’ll be tomato today again—until it isn’t, and now he’s wearing bisque.
Abduction sees the waiter carrying bowls of red liquid to every table and infers it’s probably tomato soup, orders it, and gets… gazpacho. Ice-cold disappointment.
All three are trying to reason. Only one might get lunch.
The Meta-Problem: Reasoning About Reasoning
The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.
This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.
Conclusion: Reason Is Not a Guarantee, It’s a Wager
None of these modalities offer omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s scepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.
Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.
The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.
Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility. Who recognise that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.
So reason on, fellow thinkers. Just don’t be too smug about it.
The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.
Audio: NotebookLM podcast on this topic.
Let us survey the battlefield:
1. Classical Probability (Laplacean Determinism in a Tuxedo)
This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
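The definition itself is a one-liner, provided you grant the equiprobability it cannot justify. A minimal sketch, with a die that is fair purely by fiat:

```python
from fractions import Fraction

def classical_probability(favourable, possible):
    """Laplace: |favourable| / |possible|.
    Valid only if every outcome is equally likely -- the assumption
    doing all the heavy lifting."""
    return Fraction(len(favourable), len(possible))

outcomes = range(1, 7)                          # a fair die, by decree
evens = [o for o in outcomes if o % 2 == 0]
print(classical_probability(evens, outcomes))   # 1/2
```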
2. Frequentist Probability (The Empiricist’s Fantasy)
Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
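The idea can be sketched by simulation, with the caveat that any finite run only gestures at the unreachable limit (the fair-coin model, the seed, and the trial counts are all illustrative):

```python
import random

random.seed(42)  # reproducible illustration

def relative_frequency(n_trials):
    """Frequentism: probability as a limiting relative frequency.
    Any finite n merely approximates a limit we can never reach."""
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

for n in (10, 1_000, 100_000):
    print(n, relative_frequency(n))  # wobbles toward 0.5, never "arrives"
```

And for the singular case – this bridge, this coin, this toss – the simulation has nothing whatsoever to say.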
3. Bayesian Probability (Subjectivity Dressed as Rigour)
Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
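The update rule itself is uncontroversial; the priors are not. A sketch, using an invented two-headed-coin example, of two agents fed identical evidence who still end up in different probabilistic realities:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]"""
    num = likelihood_if_true * prior
    den = num + likelihood_if_false * (1 - prior)
    return num / den

# Two Bayesians watch the same coin land heads 8 times in a row,
# testing H = "the coin is two-headed": P(heads|H) = 1, P(heads|~H) = 0.5.
optimist, cynic = 0.5, 0.001      # wildly different priors
for _ in range(8):
    optimist = bayes_update(optimist, 1.0, 0.5)
    cynic = bayes_update(cynic, 1.0, 0.5)

print(round(optimist, 3), round(cynic, 3))  # 0.996 0.204
```

Same evidence, same arithmetic, different worlds: the optimist is all but certain, the cynic remains unmoved. Epistemology for solipsists with calculators, indeed.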
4. Propensity Interpretation (The Ontology of Maybes)
Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.
5. Logical Probability (A Sober Attempt at Rationality)
Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.
6. Quantum Probability (Schrödinger’s Definition)
In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
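The Born rule itself is a one-line calculation; the philosophy begins only after you ask what the number means. A toy two-state example (the amplitudes are arbitrary but normalised):

```python
# Born rule sketch: probabilities as squared moduli of amplitudes.
# Whether these numbers are frequencies, propensities, or degrees of
# belief is exactly the contested question -- the arithmetic is silent.

amplitudes = [complex(0.6, 0.0), complex(0.0, 0.8)]  # |0.6|^2 + |0.8|^2 = 1
probs = [abs(a) ** 2 for a in amplitudes]
print(probs)  # approximately [0.36, 0.64]
```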
Current Status: War of Attrition
There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.
In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.
And so the probability of this contest being resolved any time soon? About zero. Or one. Depending on your interpretation.
I’ve read Part I of Hobbes’ Leviathan and wonder what it would have been like if he had filtered his thoughts through Hume or Wittgenstein. Hobbes makes Dickens read like Pollyanna. It’s an interesting historical piece, worth reading on that basis alone. It reads as if the Christian Bible had to pass through a legal review before it was published, sapped of vigour. As bad a rap as Schopenhauer seems to get, Hobbes is the consummate Ebenezer Scrooge. Bah, humbug – you nasty, brutish, filthy animals!*
Audio: NotebookLM podcast conversation on this topic.
In any case, it got me thinking of free will and, more to the point, of will itself.
A Brief History of Humanity’s Favourite Metaphysical Scapegoat
By the time Free Will turned up to the party, the real guest of honour—the Will—had already been drinking heavily, muttering incoherently in the corner, and starting fights with anyone who made eye contact. We like to pretend that the “will” is a noble concept: the engine of our autonomy, the core of our moral selves, the brave little metaphysical organ that lets us choose kale over crisps. But in truth, it’s a bloody mess—philosophy’s equivalent of a family heirloom that no one quite understands but refuses to throw away.
So, let’s rewind. Where did this thing come from? And why, after 2,500 years of name-dropping, finger-pointing, and metaphysical gymnastics, are we still not quite sure whether we have a will, are a will, or should be suing it for damages?
Plato: Soul, Reason, and That Poor Horse
In the beginning, there was Plato, who—as with most things—half-invented the question and then wandered off before giving a straight answer. For him, the soul was a tripartite circus act: reason, spirit, and appetite. Will, as a term, didn’t get top billing—it didn’t even get its name on the poster. But the idea was there, muddling along somewhere between the charioteer (reason) and the unruly horses (desire and spiritedness).
No explicit will, mind you. Just a vague sense that the rational soul ought to be in charge, even if it had to beat the rest of itself into submission.
Aristotle: Purpose Without Pathos
Aristotle, ever the tidy-minded taxonomist, introduced prohairesis—deliberate choice—as a sort of proto-will. But again, it was all about rational calculation toward an end. Ethics was teleological, goal-oriented. You chose what aligned with eudaimonia, that smug Greek term for flourishing. Will, if it existed at all, was just reason picking out dinner options based on your telos. No inner torment, no existential rebellion—just logos in a toga.
Augustine: Sin, Suffering, and That Eternal No
Fast-forward a few hundred years, and along comes Saint Augustine, traumatised by his libido and determined to make the rest of us suffer for it. Enter voluntas: the will as the seat of choice—and the scene of the crime. Augustine is the first to really make the will bleed. He discovers he can want two incompatible things at once and feels properly appalled about it.
From this comes the classic Christian cocktail: freedom plus failure equals guilt. The will is free, but broken. It’s responsible for sin, for disobedience, for not loving God enough on Wednesdays. Thanks to Augustine, we’re stuck with the idea that the will is both the instrument of salvation and the reason we’re going to Hell.
Cheers.
Medievals: God’s Will or Yours, Pick One
The Scholastics, never ones to let an ambiguity pass unanalysed, promptly split into camps. Aquinas, ever the reasonable Dominican, says the will is subordinate to the intellect. God is rational, and so are we, mostly. But Duns Scotus and William of Ockham, the original voluntarist hooligans, argue that the will is superior—even in God. God could have made murder a virtue, they claim, and you’d just have to live with it.
From this cheerful perspective, will becomes a force of arbitrary fiat, and humans, made in God’s image, inherit the same capacity for irrational choice. The will is now more than moral; it’s metaphysical. Less reason’s servant, more chaos goblin.
Hobbes: Appetite with Delusions of Grandeur
Then along comes Thomas Hobbes, who looks at the soul and sees a wheezing machine of appetites. Will, in his famously cheery view, is simply “the last appetite before action.” No higher calling, no spiritual struggle—just the twitch that wins. Man is not a rational animal, but a selfish algorithm on legs. For Hobbes, will is where desire stumbles into motion, and morality is a polite euphemism for not getting stabbed.
Kant: The Will Gets a Makeover
Enter Immanuel Kant: powdered wig, pursed lips, and the moral rectitude of a man who scheduled his bowel movements. Kant gives us the “good will”, which acts from duty, not desire. Suddenly, the will is autonomous, rational, and morally legislative—a one-man Parliament of inner law.
It’s all terribly noble, terribly German, and entirely exhausting. For Kant, free will is not the ability to do whatever you like—it’s the capacity to choose according to moral law, even when you’d rather be asleep. The will is finally heroic—but only if it agrees to hate itself a little.
Schopenhauer: Cosmic Will, Cosmic Joke
And then the mood turns. Schopenhauer, the world’s grumpiest mystic, takes Kant’s sublime will and reveals it to be a blind, thrashing, cosmic force. Will, for him, isn’t reason – it’s suffering in motion. The entire universe is will-to-live: a desperate, pointless striving that dooms us to perpetual dissatisfaction.
There is no freedom, no morality, no point. The only escape is to negate the will, preferably through aesthetic contemplation or Buddhist-like renunciation. In Schopenhauer’s world, the will is not what makes us human—it’s what makes us miserable.
Nietzsche: Transvaluation and the Will to Shout Loudest
Cue Nietzsche, who takes Schopenhauer’s howling void and says: yes, but what if we made it fabulous? For him, the will is no longer to live, but to power—to assert, to create, to impose value. “Free will” is a theologian’s fantasy, a tool of priests and moral accountants. But will itself? That’s the fire in the forge. The Übermensch doesn’t renounce the will—he rides it like a stallion into the sunset of morality.
Nietzsche doesn’t want to deny the abyss. He wants to waltz with it.
Today: Free Will and the Neuroscientific Hangover
And now? Now we’re left with compatibilists, libertarians, determinists, and neuroscientists all shouting past each other, armed with fMRI machines and TED talks. Some claim free will is an illusion, a post hoc rationalisation made by brains doing what they were always going to do. Others insist that moral responsibility requires it, even if we can’t quite locate it between the neurons.
We talk about willpower, will-to-change, political will, and free will like they’re real things. But under the hood, we’re still wrestling with the same questions Augustine posed in a North African villa: Why do I do what I don’t want to do? And more importantly, who’s doing it?
Conclusion: Where There’s a Will, There’s a Mess
From Plato’s silent horses to Nietzsche’s Dionysian pyrotechnics, the will has shape-shifted more times than a politician in an election year. It has been a rational chooser, a moral failure, a divine spark, a mechanical twitch, a cosmic torment, and an existential triumph.
Despite centuries of philosophical handwringing, what it has never been is settled.
So where there’s a will, there’s a way. But the way? Twisting, contradictory, and littered with the corpses of half-baked metaphysical systems.
Welcome to the labyrinth. Bring snacks.
* The “solitary, poor, nasty, brutish, and short” quote is forthcoming. Filthy animals is a nod to Home Alone.
In the great philosophical tug-of-war between materialism and idealism, where reality is argued to be either wholly independent of perception or entirely a construct of the mind, there lies an underexplored middle ground—a conceptual liminal space that we might call “Intersectionalism.” This framework posits that reality is neither purely objective nor subjective but emerges at the intersection of the two. It is the terrain shaped by the interplay between what exists and how it is perceived, mediated by the limits of human cognition and sensory faculties.
Audio: Podcast conversation on this topic.
Intersectionalism offers a compelling alternative to the extremes of materialism and idealism. By acknowledging the constraints of perception and interpretation, it embraces the provisionality of knowledge, the inevitability of blind spots, and the productive potential of uncertainty. This essay explores the foundations of Intersectionalism, its implications for knowledge and understanding, and the ethical and practical insights it provides.
Reality as an Intersection
At its core, Intersectionalism asserts that reality exists in the overlapping space between the objective and the subjective. The objective refers to the world as it exists independently of any observer—the “terrain.” The subjective encompasses perception, cognition, and interpretation—the “map.” Reality, then, is not fully contained within either but is co-constituted by their interaction.
Consider the act of seeing a tree. The tree, as an object, exists independently of the observer. Yet, the experience of the tree is entirely mediated by the observer’s sensory and cognitive faculties. Light reflects off the tree, enters the eye, and is translated into electrical signals processed by the brain. This process creates a perception of the tree, but the perception is not the tree itself.
This gap between perception and object highlights the imperfect alignment of subject and object. No observer perceives reality “as it is” but only as it appears through the interpretive lens of their faculties. Reality, then, is a shared but imperfectly understood phenomenon, subject to distortion and variation across individuals and species.
The Limits of Perception and Cognition
Humans, like all organisms, perceive the world through the constraints of their sensory and cognitive systems. These limitations shape not only what we can perceive but also what we can imagine. For example:
Sensory Blind Spots: Humans are limited to the visible spectrum of light (~380–750 nm), unable to see ultraviolet or infrared radiation without technological augmentation. Other animals, such as bees or snakes, perceive these spectra as part of their natural sensory worlds. Similarly, humans lack the electroreception of sharks or the magnetoreception of birds.
Dimensional Constraints: Our spatial intuition is bounded by three spatial dimensions plus time, making it nearly impossible to conceptualise higher-dimensional spaces without resorting to crude analogies (e.g., imagining a tesseract as a 3D shadow of a 4D object).
Cognitive Frameworks: Our brains interpret sensory input through patterns and predictive models. These frameworks are adaptive but often introduce distortions, such as cognitive biases or anthropocentric assumptions.
This constellation of limitations suggests that what we perceive and conceive as reality is only a fragment of a larger, potentially unknowable whole. Even when we extend our senses with instruments, such as infrared cameras or particle detectors, the data must still be interpreted through the lens of human cognition, introducing new layers of abstraction and potential distortion.
The Role of Negative Space
One of the most intriguing aspects of Intersectionalism is its embrace of “negative space” in knowledge—the gaps and absences that shape what we can perceive and understand. A compelling metaphor for this is the concept of dark matter in physics. Dark matter is inferred not through direct observation but through its gravitational effects on visible matter. It exists as a kind of epistemic placeholder, highlighting the limits of our current sensory and conceptual tools.
Similarly, there may be aspects of reality that elude detection altogether because they do not interact with our sensory or instrumental frameworks. These “unknown unknowns” serve as reminders of the provisional nature of our maps and the hubris of assuming completeness. Just as dark matter challenges our understanding of the cosmos, the gaps in our perception challenge our understanding of reality itself.
Practical and Ethical Implications
Intersectionalism’s recognition of perceptual and cognitive limits has profound implications for science, ethics, and philosophy.
Science and Knowledge
In science, Intersectionalism demands humility. Theories and models, however elegant, are maps rather than terrains. They approximate reality within specific domains but are always subject to revision or replacement. String theory, for instance, with its intricate mathematics and reliance on extra dimensions, risks confusing the elegance of the map for the completeness of the terrain. By embracing the provisionality of knowledge, Intersectionalism encourages openness to new paradigms and methods that might better navigate the negative spaces of understanding.
Ethics and Empathy
Ethically, Intersectionalism fosters a sense of humility and openness toward other perspectives. If reality is always interpreted subjectively, then every perspective—human, animal, or artificial—offers a unique and potentially valuable insight into the intersection of subject and object. Recognising this pluralism can promote empathy and cooperation across cultures, species, and disciplines.
Technology and Augmentation
Technological tools extend our sensory reach, revealing previously unseen aspects of reality. However, they also introduce new abstractions and biases. Intersectionalism advocates for cautious optimism: technology can help illuminate the terrain but will never eliminate the gap between map and terrain. Instead, it shifts the boundaries of our blind spots, often revealing new ones in the process.
Conclusion: Navigating the Space Between
Intersectionalism provides a framework for understanding reality as a shared but imperfect intersection of subject and object. It rejects the extremes of materialism and idealism, offering instead a middle path that embraces the limitations of perception and cognition while remaining open to the possibilities of negative space and unknown dimensions. In doing so, it fosters humility, curiosity, and a commitment to provisionality—qualities essential for navigating the ever-expanding terrain of understanding.
By acknowledging the limits of our maps and the complexity of the terrain, Intersectionalism invites us to approach reality not as a fixed and knowable entity but as an unfolding interplay of perception and existence. It is a philosophy not of certainty but of exploration, always probing the space between.