Unwilling Steelman, Part IV

A five-part descent into the illusion of autonomy, where biology writes the script, reason provides the excuse, and the self is merely the echo of its own conditioning. This is a follow-up to a recent post on the implausibility of free will.


“It’s not just that you’re a hallucination of yourself.
It’s that everyone else is hallucinating you, too — through their own fog.”

The Feedback Loop of False Selves

You are being judged — by others who are also compromised

If you are a chemically modulated, state-dependent, narrativising automaton, then so is everyone who evaluates you. The moral courtroom — society, the law, the dinner table — is just a gathering of biased systems confidently misreading each other.

We are taught to believe in things like:

  • “Good character”
  • “Knowing someone”
  • “Getting a read on people”

But these are myths of stability, rituals of judgment, and cognitive vanity projects. There is no fixed you — and there is no fixed them to do the judging.

Judging the Snapshot, Not the Self

Let’s say you act irritable. Or generous. Or quiet.
An observer sees this and says:

“That’s who you are.”

But which version of you are they observing?

  • The you on two hours of sleep?
  • The you on SSRIs?
  • The you grieving, healing, adjusting, masking?

They don’t know. They don’t ask.
They just flatten the moment into character.

One gesture becomes identity.
One expression becomes essence.

This isn’t judgment.
It’s snapshot essentialism — moral conclusion by convenience.

The Observer Is No Less Biased

Here’s the darker truth: they’re compromised, too.

  • If they’re stressed, you’re rude.
  • If they’re lonely, you’re charming.
  • If they’re hungry, you’re annoying.

What they’re perceiving is not you — it’s their current chemistry’s reaction to your presentation, filtered through their history, memory, mood, and assumptions.

It’s not a moral lens.
It’s a funhouse mirror, polished with certainty.

Mutual Delusion in a Moral Marketplace

The tragedy is recursive:

  • You act based on internal constraints.
  • They judge based on theirs.
  • Then you interpret their reaction… and adjust accordingly.
  • And they, in turn, react to your adjustment…

And on it goes — chemical systems calibrating against each other, mistaking interaction for insight, familiarity for truth, coherence for character.

Identity isn’t formed.
It’s inferred, then reinforced.
By people who have no access to your internal states and no awareness of their own.
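
To make the recursion concrete, here is a toy sketch in Python (my illustration, not part of the original argument; every weight and noise level is invented) of two observers who read each other through their own fluctuating states and then adjust to the readings they think they received. Impressions are reinforced, not corrected.

```python
import random

# Toy sketch (not from the post): two observers judge each other through their
# own fluctuating internal states, then adjust their behaviour to the judgment
# they think they received. Every weight and distribution here is invented.

def judge(prior, behaviour, internal_state, weight_prior=0.6):
    """Blend a prior impression with a mood-coloured reading of behaviour."""
    perceived = behaviour + internal_state      # the reading, filtered by state
    return weight_prior * prior + (1 - weight_prior) * perceived

def simulate(rounds=10, seed=1):
    random.seed(seed)
    a_about_b = b_about_a = 0.0                 # initial impressions
    behaviour_a = behaviour_b = 0.0             # neutral starting behaviour
    for _ in range(rounds):
        mood_a = random.gauss(0, 0.5)           # the weather inside A's skull
        mood_b = random.gauss(0, 0.5)
        a_about_b = judge(a_about_b, behaviour_b, mood_a)
        b_about_a = judge(b_about_a, behaviour_a, mood_b)
        # Each then adjusts toward how they believe they are being read.
        behaviour_a = 0.5 * behaviour_a + 0.5 * b_about_a
        behaviour_b = 0.5 * behaviour_b + 0.5 * a_about_b
    return a_about_b, b_about_a

print(simulate())  # two "characters", conjured from noise plus reinforcement
```

Run it with different seeds and you get different "yous"; nothing in the loop ever consults the ground truth.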

The Myth of the Moral Evaluator

This has massive implications:

  • Justice assumes objectivity.
  • Culture assumes shared moral standards.
  • Relationships assume “knowing” someone.

But all of these are built on the fantasy that moral evaluation is accurate, stable, and earned.

It is not.

It is probabilistic, state-sensitive, and mutually confabulatory.

You are being judged by the weather inside someone else’s skull.

TL;DR: Everyone’s Lying to Themselves About You

  • You behave according to contingent states.
  • Others judge you based on their own contingent states.
  • Both of you invent reasons to justify your interpretations.
  • Neither of you has access to the full picture.
  • The result is a hall of mirrors with no ground floor.

So no — you’re not “being seen.”
You’re being misread, reinterpreted, and categorised
— by people who are also misreading themselves.

đź“… Coming Tomorrow

You Cannot Originate Yourself

The causa sui argument, and the final collapse of moral responsibility.

The Emperor’s New Models: Box, Lawson, and the Death of Truth

We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.

Box: The Gentle Assassin of Certainty

George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” However, Box himself never uttered this precise phrase. What he did say, in his 1976 paper Science and Statistics, was:

“Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration.”

The “some are useful” flourish was added later by a public desperate to sweeten the bitter pill. Nevertheless, Box deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. Models are provisional guesses, finger-paintings smeared across the rough surface of the unknown.


Lawson: The Arsonist Who Burned the Map

Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. In a recent lecture, Lawson declared:

Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.

Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.

The Case of the Celestial Spheres

Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.

Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.
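
A minimal numerical sketch of “wrong but operational” (the example and values are mine, not Box’s or Lawson’s): a straight line masquerading as the sine curve navigates perfectly well near zero and fails quietly further out.

```python
import math

# Toy numbers (mine, not Box's or Lawson's): a knowingly wrong model, a straight
# line standing in for sin(x), is perfectly operational near x = 0 and quietly
# useless further out.

def true_process(x):
    return math.sin(x)            # stands in for "the terrain"

def wrong_but_useful_model(x):
    return x                      # small-angle approximation: sin(x) ≈ x

for x in [0.1, 0.3, 0.6, 1.5, 3.0]:
    truth = true_process(x)
    model = wrong_but_useful_model(x)
    print(f"x={x:<4} truth={truth:+.3f} model={model:+.3f} error={model - truth:+.3f}")
```

Within its little domain the lie works; push it to x = 3 and the error dwarfs the truth. Operational, then obsolete.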

Summary Table: Lawson’s View on Models and Truth

Conclusion

Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.

We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.

Unwilling: The Neuroscience Against Free Will

Why the cherished myth of human autonomy dissolves under the weight of our own biology

We cling to free will like a comfort blanket—the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.

But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?


I. The Myth of Autonomy: A Beautiful Delusion

Free will requires that we—some essential, decision-making “self”—stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?

Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:

The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.

The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers—testosterone, oxytocin, and cortisol—don’t ask permission before altering your perceptions and priorities.

Environmental exposures—from lead in your childhood drinking water to the specific traumas of your upbringing—have sculpted neural pathways you didn’t design and can’t easily rewire.

Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.

Your prized ability to weigh options, inhibit impulses, and make “rational” choices depends entirely on specific brain structures—particularly the dorsolateral prefrontal cortex (DLPFC)—operating within a neurochemical environment you inherited rather than created.

You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.

II. The DLPFC: Puppet Master of Moral Choice

The dorsolateral prefrontal cortex serves as command central for what we proudly call executive function—our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.

And yet this supposed seat of autonomy can be manipulated with disturbing ease.

When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.

Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.

If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?

III. Hormonal Puppetmasters: The Will in Your Bloodstream

Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.

Consider oxytocin, popularly known as the “love hormone.” Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn’t a subtle effect—it’s a direct biological override of what you might otherwise “choose.”

Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.

These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.

IV. The Accident of Will: Uncomfortable Conclusions

If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?

Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.

What we dignify as “will” may be nothing more than a fortunate (or unfortunate) biochemical accident—the particular configuration of neurons and neurochemicals you happened to inherit and develop.

This lens forces unsettling questions:

  • How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
  • How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
  • And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention—rewiring neural pathways or adjusting neurotransmitter levels to ensure “better” choices—should we?
  • Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?

A Compatibilist Rebuttal? Not So Fast.

Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.

If your desires themselves are products of biology and environment—if even your capacity to evaluate those desires depends on inherited neural architecture—then “acting according to your desires” just pushes the problem back a step. You’re still not the ultimate author of those desires or your response to them.

What’s Left?

Perhaps we need not a defence of free will but a new framework for understanding human behaviour—one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.

The evidence doesn’t suggest we are without agency; it suggests our agency operates within biological constraints we’re only beginning to understand. The question isn’t whether biology influences choice—it’s whether anything else does.

For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.

Defying Death

I died in March 2023 — or so the rumour mill would have you believe.

Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.


No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.

And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)

There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.

To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.

As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.

What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.”
In a last act of rational agency, I asked an ER nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”

After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.

I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.

Here’s the reality:
There is no coming back from death.
You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.

Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.

Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.

No angels. No harps. No antechamber to the divine.
Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.

If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.

And thus, reader, I lived.
But only in the most vulgar, anticlimactic, and utterly mortal sense.

There will be no afterlife memoir. No second chance to settle the score. No sequel.
Just this: breath, blood, occasional barbed words — and then silence.

Deal with it.

When Suspension of Disbelief Escapes the Page

Welcome to the Age of Realism Fatigue

Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.


This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.

But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.

The Fictional Contract: Broken but Not Forgotten

Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.

The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.

Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.

But Then Real Life Said, “Hold My Beer.”

At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.

We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.

You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.

The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.

Cognitive Gymnastics: Welcome to the Cirque du Surréalisme

What happens to a psyche caught in this funhouse? Nothing good.

Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.

Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.

The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.

Culture in Crisis: A Symptom or the Disease?

This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.

We’ve become simultaneously more sceptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.

In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.

So what kind of society does this create?

One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.

The Future: A Choose-Your-Own-Delusion Adventure

So where does this all end?

There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.

We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.

It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”

That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.

In Conclusion, Reality Is Optional, But Reason Is Not

In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.

Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.

Lights down. Curtain call.
Time to read the footnotes.

Elites Ruined It For Everyone

David Brooks and the Hollowing Out of Conservatism

David Brooks is the quintessential old-school Conservative—the kind who once upheld a semblance of ideological coherence. He belongs to the pre-Reagan-Thatcher vintage, a time when Conservatism at least had the decency to argue from principles rather than blind tribalism. We could debate these people in good faith. Those days are gone. The current incarnation of Conservatism contains only homoeopathic traces of its Classical™ predecessor—diluted beyond recognition.

The Degeneration of Conservatism

The rot set in with Reagan, who caught it from Thatcher. Greed and selfishness were laundered into virtues, repackaged as “individual responsibility,” and the party’s intellectual ballast began to erode. By the time Bush II’s administration rolled in, Neo-Conservatism had replaced any lingering Burkean ethos, and by Trump’s tenure, even the pretence of ideology was gone. Conservatism-in-Name-Only—whatever Trump’s brand of reactionary nihilism was—swallowed the party whole. Do they even call themselves Conservatives anymore, or has that ship sailed along with basic literacy?


To be fair, this didn’t go unnoticed. Plenty of old-school Republicans recoiled in horror when Trump became their figurehead. Before the 2016 election, conservative pundits could barely contain their disdain for his incompetence, lack of moral compass, and general buffoonery. And yet, once they realised he was the party’s golden goose, they clambered aboard the Trump Train with the enthusiasm of lottery winners at a payday loan office. His staunchest critics became his most obsequious apologists. What does this tell us about their value system? Spoiler: nothing good.

Brooks’ Lament

Which brings us back to Brooks, who now bemoans the death of Conservative values. On this, we agree. Where we part ways is on whether those values were worth saving. Say you’re boarding a train from New York to Los Angeles. Conservatism might argue that a Miami-bound train is still a train, so what’s the problem? It’s the same vehicle, just going somewhere else. Except, of course, Conservatism has always insisted on the slow train over the fast train—because urgency is unseemly, and progress must be rationed.

If I’m an affluent middle-classer, I might prefer Conservatism’s careful incrementalism—it keeps my apple cart stable. Admirable, if you enjoy tunnel vision. Progressives, by contrast, recognise that some people don’t even have apple carts. Some are starving while others hoard orchards. To the Conservative, the poor just aren’t trying hard enough. To the Progressive, the system is broken, and the playing field needs a serious re-levelling. Even when Conservatives acknowledge inequality, their instinct is to tiptoe toward justice rather than risk disrupting their own affluence.

The Fallacy of Objective Reality

Leaving politics for philosophy, Brooks predictably rails against Postmodernism, decrying relativism in favour of good old-fashioned Modernist “reality.” He’s horrified by subjectivism, as though personal interpretation weren’t the foundation of all human experience. Like Jordan Peterson, he believes his subjective truth is the objective truth. And like Peterson, he takes umbrage at anyone pointing out otherwise. It feels so absolute to them that they mistake their own convictions for universal constants.

As a subjectivist, I accept that reality is socially mediated. We interpret truth claims based on cognitive biases, cultural conditioning, and personal experience. Even when we strive for objectivity, we do so through subjective lenses. Brooks’ Modernist nostalgia is touching but delusional—akin to demanding we all agree on a single flavour of ice cream.

The Existential Problem

And so, I find myself in partial agreement with Brooks. Yes, there is an existential crisis. The patient has a broken leg. But our prescriptions differ wildly. I won’t offer a metaphor for that—consider it your homework as a reader.

Brooks is likely a better writer than a public speaker, but you may still find yourself nodding along with some of his arguments. If you’re a “true” Christian Conservative—if you still believe in something beyond crass self-interest—he may well be preaching to the choir. But let’s be honest: how many in that choir are still listening?

Perception and Reality

I love this meme despite its lack of basis in reality – sort of like the ten per cent of the brain myth.

I’m busy writing, but this meme crossed my feed, and I thought, “What better time to share?”

I’ve been reading and re-reading The Sane Society, but reflecting on it here is too much of a commitment, so I’ll delay gratification.

Yesterday and Today

For no apparent reason, I was pondering lost temporal words, so I paid a quick visit to ChatGPT and wanted to share the wealth. Perhaps we can bring some of these back from the grave.


Eldernight: Referring to the night before last, this term has long since faded from common usage.

Erewhile: Meaning “a little while ago” or “previously.”

Ereyesterday: Meaning “the day before yesterday” – a direct counterpart to “overmorrow.”

Erstwhile: Similar to “erewhile,” it denotes something that was formerly the case.

Fortnight: Short for “fourteen nights,” this term refers to a two-week period and is still in use in British English.

Nudiustertian: A wonderfully specific Latin-derived term meaning “of or relating to the day before yesterday.”

Overmorrow: This term was used to indicate the day after tomorrow.

Sennight: Derived from “seven nights,” this word was used to mean a week.

Umwhile: A Scottish term meaning “at times” or “occasionally” in the past.

Whilom: An archaic term meaning “formerly” or “once upon a time.”

Yore: Overlapping in sense with “whilom,” this term specifically means “of long ago” or “of time long past.”

Yestereve: This term referred to the evening of the previous day, essentially what we’d call “last night.”

Yestermorrow: An intriguing word that denoted “yesterday morning.”

Yestreen: A Scottish and Northern English word meaning “yesterday evening” or “last night” – similar to “yestereve” but with different regional origins.

These terms, though largely obsolete, offer a glimpse into the linguistic richness of earlier English, providing nuanced ways to express temporal relationships that have since been lost in modern usage.

Outraged at Evil

I’ve recently picked up Kurt Gray’s Outraged!, and it’s got me thinking about metaphysics—more specifically, how the implausibility of metaphysical constructs like “evil” shapes our understanding of harm and morality. Gray’s central thesis—that everyone wants good outcomes for themselves and their society but focuses on different objects of harm—is intriguing, but it hinges on some deeply problematic assumptions.

Take, for instance, his argument that the vitriol between Democrats and Republicans is less about genuine malice and more about divergent harm perceptions. Democrats, he suggests, see harm in systemic inequalities, while Republicans focus on the erosion of traditional values. Both sides, in their own way, think they’re protecting what matters most. But here’s where it gets murky: how do we square this with the fact that these perceived harms often rest on fantastical and unfounded worldviews?


Gray recounts a childhood experience in Sunday school where the question of what happens to unbaptised people was posed. The answer—Hell, of course—was delivered with the enthusiasm of a child parroting doctrine. This made Gray uncomfortable at the time, but as an adult, he reflects that his step-parents’ insistence on baptism wasn’t malicious. They genuinely believed they were saving him from eternal damnation. He argues their actions were driven by love, not malevolence.

On the surface, this seems like a generous interpretation. But dig deeper, and it’s clear how flawed it is. Hell doesn’t exist. Full stop. Actions based on an entirely imaginary premise—even well-intentioned ones—cannot escape scrutiny simply because the perpetrator’s heart was in the right place. Good intentions do not alchemise irrationality into moral virtue.

This same flawed logic permeates much of the political and moral discourse Gray explores. Consider anti-abortion activists, many of whom frame their cause in terms of protecting unborn lives. To them, abortion is the ultimate harm. But this stance is often rooted in religious metaphysics: a soul enters the body at conception, life begins immediately, and terminating a pregnancy is tantamount to murder. These claims aren’t grounded in observable reality, yet they drive real-world policies and harm. By focusing on “intent” and dismissing “malice,” Gray risks giving too much credit to a worldview that’s fundamentally untethered from evidence.

Which brings me to the notion of evil. Gray invokes it occasionally, but let’s be clear: evil doesn’t exist. At least, not as anything more than a metaphor. The word “evil” is a narrative shortcut—a way to denote something as “very, very, very, very bad,” as a precocious toddler might put it. It’s a relic of religious and metaphysical thinking, and it’s about as useful as Hell in explaining human behaviour.

Take the archetypal “evildoers” of history and society: Adolf Hitler, Jeffrey Dahmer, or (for some) Donald Trump. Are these people “evil”? No. Hitler was a power-hungry demagogue exploiting fear and economic despair. Dahmer was a deeply disturbed individual shaped by trauma and pathology. Trump is a narcissist thriving in a culture that rewards spectacle over substance. Labelling them as “evil” absolves us of the responsibility to understand them. Worse, it obscures the systemic conditions and societal failures that allowed them to act as they did.

Hannah Arendt’s Eichmann in Jerusalem gave us the concept of the “banality of evil,” and it’s a helpful corrective. Arendt’s point wasn’t that Eichmann was secretly a great guy but that his actions weren’t driven by some metaphysical malevolence. He was a cog in the machine, an unremarkable bureaucrat following orders. The atrocities he committed weren’t the result of extraordinary wickedness but of ordinary systems enabling ordinary people to do extraordinarily harmful things.

This insight cuts to the core of the issue. If “evil” is banal—if it’s nothing more than the mundane processes of harm scaled up—then it never really existed to begin with. It’s a construct, a tool of storytelling that obscures far more than it reveals.

So, where does this leave us? For one, we must abandon “evil” as an explanatory framework. It’s analytically lazy and morally dangerous. Instead, let’s focus on precision. Rather than labelling someone “evil,” we can describe their actions: harmful, exploitative, cruel. These words invite inquiry; “evil” slams the door shut.

By rejecting metaphysical constructs like evil, we gain a clearer, more grounded understanding of harm and morality. And perhaps that’s what Outraged! inadvertently teaches us: the real outrage isn’t malice; it’s the stubborn persistence of unexamined beliefs masquerading as moral clarity. If we can let go of those, maybe we can finally move forward.

A Case for Intersectionalism

The Space Between

In the great philosophical tug-of-war between materialism and idealism, where reality is argued to be either wholly independent of perception or entirely a construct of the mind, there lies an underexplored middle ground—a conceptual liminal space that we might call “Intersectionalism.” This framework posits that reality is neither purely objective nor subjective but emerges at the intersection of the two. It is the terrain shaped by the interplay between what exists and how it is perceived, mediated by the limits of human cognition and sensory faculties.


Intersectionalism offers a compelling alternative to the extremes of materialism and idealism. By acknowledging the constraints of perception and interpretation, it embraces the provisionality of knowledge, the inevitability of blind spots, and the productive potential of uncertainty. This essay explores the foundations of Intersectionalism, its implications for knowledge and understanding, and the ethical and practical insights it provides.

Reality as an Intersection

At its core, Intersectionalism asserts that reality exists in the overlapping space between the objective and the subjective. The objective refers to the world as it exists independently of any observer—the “terrain.” The subjective encompasses perception, cognition, and interpretation—the “map.” Reality, then, is not fully contained within either but is co-constituted by their interaction.

Consider the act of seeing a tree. The tree, as an object, exists independently of the observer. Yet, the experience of the tree is entirely mediated by the observer’s sensory and cognitive faculties. Light reflects off the tree, enters the eye, and is translated into electrical signals processed by the brain. This process creates a perception of the tree, but the perception is not the tree itself.

This gap between perception and object highlights the imperfect alignment of subject and object. No observer perceives reality “as it is” but only as it appears through the interpretive lens of their faculties. Reality, then, is a shared but imperfectly understood phenomenon, subject to distortion and variation across individuals and species.

The Limits of Perception and Cognition

Humans, like all organisms, perceive the world through the constraints of their sensory and cognitive systems. These limitations shape not only what we can perceive but also what we can imagine. For example:

  • Sensory Blind Spots: Humans are limited to the visible spectrum of light (~380–750 nm), unable to see ultraviolet or infrared radiation without technological augmentation. Other animals, such as bees or snakes, perceive these spectra as part of their natural sensory worlds. Similarly, humans lack the electroreception of sharks or the magnetoreception of birds.
  • Dimensional Constraints: Our spatial intuition is bounded by three spatial dimensions plus time, making it nearly impossible to conceptualise higher-dimensional spaces without resorting to crude analogies (e.g., imagining a tesseract as a 3D shadow of a 4D object).
  • Cognitive Frameworks: Our brains interpret sensory input through patterns and predictive models. These frameworks are adaptive but often introduce distortions, such as cognitive biases or anthropocentric assumptions.

This constellation of limitations suggests that what we perceive and conceive as reality is only a fragment of a larger, potentially unknowable whole. Even when we extend our senses with instruments, such as infrared cameras or particle detectors, the data must still be interpreted through the lens of human cognition, introducing new layers of abstraction and potential distortion.

The Role of Negative Space

One of the most intriguing aspects of Intersectionalism is its embrace of “negative space” in knowledge—the gaps and absences that shape what we can perceive and understand. A compelling metaphor for this is the concept of dark matter in physics. Dark matter is inferred not through direct observation but through its gravitational effects on visible matter. It exists as a kind of epistemic placeholder, highlighting the limits of our current sensory and conceptual tools.

Similarly, there may be aspects of reality that elude detection altogether because they do not interact with our sensory or instrumental frameworks. These “unknown unknowns” serve as reminders of the provisional nature of our maps and the hubris of assuming completeness. Just as dark matter challenges our understanding of the cosmos, the gaps in our perception challenge our understanding of reality itself.

Practical and Ethical Implications

Intersectionalism’s recognition of perceptual and cognitive limits has profound implications for science, ethics, and philosophy.

Science and Knowledge

In science, Intersectionalism demands humility. Theories and models, however elegant, are maps rather than terrains. They approximate reality within specific domains but are always subject to revision or replacement. String theory, for instance, with its intricate mathematics and reliance on extra dimensions, risks confusing the elegance of the map for the completeness of the terrain. By embracing the provisionality of knowledge, Intersectionalism encourages openness to new paradigms and methods that might better navigate the negative spaces of understanding.

Ethics and Empathy

Ethically, Intersectionalism fosters a sense of humility and openness toward other perspectives. If reality is always interpreted subjectively, then every perspective—human, animal, or artificial—offers a unique and potentially valuable insight into the intersection of subject and object. Recognising this pluralism can promote empathy and cooperation across cultures, species, and disciplines.

Technology and Augmentation

Technological tools extend our sensory reach, revealing previously unseen aspects of reality. However, they also introduce new abstractions and biases. Intersectionalism advocates for cautious optimism: technology can help illuminate the terrain but will never eliminate the gap between map and terrain. Instead, it shifts the boundaries of our blind spots, often revealing new ones in the process.

Conclusion: Navigating the Space Between

Intersectionalism provides a framework for understanding reality as a shared but imperfect intersection of subject and object. It rejects the extremes of materialism and idealism, offering instead a middle path that embraces the limitations of perception and cognition while remaining open to the possibilities of negative space and unknown dimensions. In doing so, it fosters humility, curiosity, and a commitment to provisionality—qualities essential for navigating the ever-expanding terrain of understanding.

By acknowledging the limits of our maps and the complexity of the terrain, Intersectionalism invites us to approach reality not as a fixed and knowable entity but as an unfolding interplay of perception and existence. It is a philosophy not of certainty but of exploration, always probing the space between.