“Trust the Science,” They Said. “It’s Reproducible,” They Lied.

—On Epistemology, Pop Psychology, and the Cult of Empirical Pretence

Science, we’re told, is the beacon in the fog – a gleaming lighthouse of reason guiding us through the turbulent seas of superstition and ignorance. But peer a bit closer, and the lens is cracked, the bulb flickers, and the so-called lighthouse keeper is just some bloke on TikTok shouting about gut flora and intermittent fasting.

Audio: NotebookLM podcast on this topic.

We are creatures of pattern. We impose order. We mistake correlation for causation, narrative for truth, confidence for knowledge. What we have, in polite academic parlance, is an epistemology problem. What we call science is often less Newton and more Nostradamus—albeit wearing a lab coat and wielding a p-hacked dataset.

Let’s start with the low-hanging fruit—the rotting mango of modern inquiry: nutritional science, which is to actual science what alchemy is to chemistry, or vibes are to calculus. We study food the way 13th-century monks studied demons: through superstition, confirmation bias, and deeply committed guesswork. Eat fat, don’t eat fat. Eat eggs, don’t eat eggs. Eat only between the hours of 10:00 and 14:00 under a waxing moon while humming in Lydian mode. It’s a cargo cult with chia seeds.

But why stop there? Let’s put the whole scientific-industrial complex on the slab.

Psychology: The Empirical Astrological Society

Psychology likes to think it’s scientific. Peer-reviewed journals, statistical models, the odd brain scan tossed in for gravitas. But at heart, much of it is pop divination, sugar-dusted for mass consumption. The replication crisis didn’t merely reveal cracks – it bulldozed entire fields. The Stanford Prison Experiment? A theatrical farce. Power poses? Empty gestural theatre. Half of what you read in Psychology Today could be replaced with horoscopes and no one would notice.

Medical Science: Bloodletting, But With Better Branding

Now onto medicine, that other sacred cow. We tend to imagine it as precise, data-driven, evidence-based. In practice? It’s a Byzantine fusion of guesswork, insurance forms, and pharmaceutical lobbying. As Crémieux rightly implies, medicine’s predictive power is deeply compromised by overfitting, statistical fog, and a staggering dependence on non-replicable clinical studies, many funded by those who stand to profit from the result.

And don’t get me started on epidemiology, that modern priesthood that speaks in incantations of “relative risk” and “confidence intervals” while changing the commandments every fortnight. If nutrition is theology, epidemiology is exegesis.

The Reproducibility Farce

Let us not forget the gleaming ideal: reproducibility, that cornerstone of Enlightenment confidence. The trouble is, in field after field—from economics to cancer biology—reproducibility is more aspiration than reality. What we actually get is a cacophony of studies no one bothers to repeat, published to pad CVs, p-hacked into publishable shape, and then cited into canonical status. It’s knowledge by momentum. We don’t understand the world. We just retweet it.
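The p-hacking named above is not mere rhetoric; it is trivially easy to simulate. Here is a toy sketch (my own illustration, not drawn from any study mentioned here) in which each “study” measures twenty unrelated outcomes on pure noise and reports only its best p-value:

```python
import math
import random

random.seed(42)

def p_value(heads: int, flips: int) -> float:
    """Two-sided p-value for 'is this coin fair?' via a normal approximation."""
    z = abs(heads - flips / 2) / math.sqrt(flips / 4)
    return math.erfc(z / math.sqrt(2))

def run_study(n_outcomes: int = 20, flips: int = 100) -> float:
    """One 'study': measure n_outcomes unrelated fair coins, then report
    only the smallest p-value -- the p-hacker's signature move."""
    return min(
        p_value(sum(random.random() < 0.5 for _ in range(flips)), flips)
        for _ in range(n_outcomes)
    )

studies = 1000
false_positives = sum(run_study() < 0.05 for _ in range(studies))
# Well over half of these all-noise studies will report a 'significant' finding.
print(f"{false_positives / studies:.0%} of null studies report p < 0.05")
```

With twenty shots at a 5% threshold, a “significant” result on pure noise is the expected outcome, not the exception.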

What, Then, Is To Be Done?

Should we become mystics? Take up tarot and goat sacrifice? Not necessarily. But we should strip science of its papal robes. We should stop mistaking publication for truth, consensus for accuracy, and method for epistemic sanctity. The scientific method is not the problem. The pretence that it’s constantly being followed is.

Perhaps knowledge doesn’t have a half-life because of progress, but because it was never alive to begin with. We are not disproving truth; we are watching fictions expire.

Closing Jab

Next time someone says “trust the science,” ask them: which bit? The part that told us margarine was manna? The part that thought ulcers were psychosomatic? The part that still can’t explain consciousness, but is confident about your breakfast?

Science is a toolkit. But too often, it’s treated like scripture. And we? We’re just trying to lose weight while clinging to whatever gospel lets us eat more cheese.

What’s Missing? Trust or Influence

Post-COVID, we’re told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.

Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?

Audio: NotebookLM podcast discussing this topic.

Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let’s not be so credulous. This isn’t merely a crisis of trust—it’s a crisis of theatre.

“The Science” as Ventriloquism

Let’s begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It’s not a holy writ passed down by lab-coated Levites. It’s a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.

A Nature Human Behaviour paper (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What the authors fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?

Trust didn’t just erode. It was actively incinerated in a bonfire of institutional vanity.

A Crisis of Influence, Not Integrity

The second Nature commentary (2025) wrings its hands over “why trust in science is declining,” as if the populace has suddenly turned flat-Earth overnight. But the real story isn’t a decline in trust per se; it’s a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes—exhausted citizens armed with Wi-Fi and anxiety—have joined the fray.

Science hasn’t lost truth—it’s lost control. And frankly, perhaps it shouldn’t have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple—especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.

Technocracy’s Fatal Hubris

What we saw wasn’t science failing—it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing happened. Orwell would have a field day.

This wasn’t a failure of scientific method. It was a failure of scientific messaging—an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.

Toward a Post-Scientistic Public Sphere

So where does that leave us? Perhaps we need to kill the idol of “The Science” to resurrect a more mature relationship with scientific discourse—one that tolerates ambiguity, embraces dissent, and admits when the data isn’t in. Science, done properly, is the art of saying “we don’t know… yet.”

The pandemic didn’t erode trust in science. It exposed how fragile our institutional credibility scaffolding really is—how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.

The answer isn’t more science communication—it’s less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.

Conclusion

Trust in science isn’t dead. But trust in those who claim to speak for science? That’s another matter. Perhaps it’s time to separate the two.

Defying Death

I died in March 2023 — or so the rumour mill would have you believe.

Of course, given that I’m still here, hammering away at this keyboard, it must be said that I didn’t technically die. We don’t bring people back. Death, real death, doesn’t work on a “return to sender” basis. Once you’re gone, you’re gone, and the only thing bringing you back is a heavily fictionalised Netflix series.

Audio: NotebookLM podcast of this content.

No, this is a semantic cock-up, yet another stinking exhibit in the crumbling Museum of Language Insufficiency. “I died,” people say, usually while slurping a Pumpkin Spice Latte and live-streaming their trauma to 53 followers. What they mean is that they flirted with death, clumsily, like a drunk uncle at a wedding. No consummation, just a lot of embarrassing groping at the pearly gates.

And since we’re clarifying terms: there was no tunnel of light, no angels, no celestial choir belting out Coldplay covers. No bearded codgers in slippers. No 72 virgins. (Or, more plausibly, 72 incels whining about their lack of Wi-Fi reception.)

There was, in fact, nothing. Nothing but the slow, undignified realisation that the body, that traitorous meat vessel, was shutting down — and the only gates I was approaching belonged to A&E, with its flickering fluorescent lights and a faint smell of overcooked cabbage.

To be fair, it’s called a near-death experience (NDE) for a reason. Language, coward that it is, hedges its bets. “Near-death” means you dipped a toe into the abyss and then screamed for your mummy. You didn’t die. You loitered. You loitered in the existential equivalent of an airport Wetherspoons, clutching your boarding pass and wondering why the flight to Oblivion was delayed.

As the stories go, people waft into the next world and are yanked back with stirring tales of unicorns, long-dead relatives, and furniture catalogues made of clouds. I, an atheist to my scorched and shrivelled soul, expected none of that — and was therefore not disappointed.

What I do recall, before the curtain wobbled, was struggling for breath, thinking, “Pick a side. In or out. But for pity’s sake, no more dithering.”
In a last act of rational agency, I asked an ER nurse — a bored-looking Athena in scrubs — to intubate me. She responded with the rousing medical affirmation, “We may have to,” which roughly translates to, “Stop making a scene, love. We’ve got fifteen others ahead of you.”

After that, nothing. I was out. Like a light. Like a minor character in a Dickens novel whose death is so insignificant it happens between paragraphs.

I woke up the next day: groggy, sliced open, a tube rammed down my throat, and absolutely no closer to solving the cosmic riddle of it all. Not exactly the triumphant return of Odysseus. Not even a second-rate Ulysses.

Here’s the reality:
There is no coming back from death.
You can’t “visit” death, any more than you can spend the afternoon being non-existent and return with a suntan.

Those near-death visions? Oxygen-starved brains farting out fever dreams. Cerebral cortexes short-circuiting like Poundland fairy lights. Hallucinations, not heralds. A final, frantic light show performed for an audience of none.

Epicurus, that cheerful nihilist, said, “When we are, death is not. When death is, we are not.” He forgot to mention that, in between, people would invent entire publishing industries peddling twaddle about journeys beyond the veil — and charging $29.99 for the paperback edition.

No angels. No harps. No antechamber to the divine.
Just the damp whirr of hospital machinery and the faint beep-beep of capitalism, patiently billing you for your own demise.

If there’s a soundtrack to death, it’s not choirs of the blessed. It’s a disgruntled junior surgeon muttering, “Where the hell’s the anaesthetist?” while pawing desperately through a drawer full of out-of-date latex gloves.

And thus, reader, I lived.
But only in the most vulgar, anticlimactic, and utterly mortal sense.

There will be no afterlife memoir. No second chance to settle the score. No sequel.
Just this: breath, blood, occasional barbed words — and then silence.

Deal with it.

The Church of Pareto: How Economics Learned to Love Collapse

—or—How the Invisible Hand Became a Throttling Grip on the Throat of the Biosphere

As many frequent visitors know, I am a recovering economist. I tend to view economics through a philosophical lens. Here, I consider the daft nonsense of Pareto optimality.

Audio: NotebookLM podcast of this content.

There is a priesthood in modern economics—pious in its equations, devout in its dispassion—that gathers daily to prostrate before the altar of Pareto. Here, in this sanctum of spreadsheet mysticism, it is dogma that an outcome is “optimal” so long as no one is worse off. Never mind if half the world begins in a ditch and the other half in a penthouse jacuzzi. So long as no one’s Jacuzzi is repossessed, the system is just. Hallelujah.

This cult of cleanliness, cloaked in the language of “efficiency,” performs a marvellous sleight of hand: it transforms systemic injustice into mathematical neutrality. The child working in the lithium mines of the Congo is not “harmed”—she simply doesn’t exist in the model. Her labour is an externality. Her future, an asterisk. Her biosphere, a rounding error in the grand pursuit of equilibrium.

Let us be clear: this is not science. This is not even ideology. It is theology—an abstract faith-based system garlanded with numbers. And like all good religions, it guards its axioms with fire and brimstone. Question the model? Heretic. Suggest the biosphere might matter? Luddite. Propose redistribution? Marxist. There is no room in this holy order for nuance. Only graphs and gospel.

The rot runs deep. William Stanley Jevons—yes, that Jevons, patron saint of unintended consequences—warned us as early as 1865 that improvements in efficiency could increase, not reduce, resource consumption. But his paradox, like Cassandra’s prophecy, was fated to be ignored. Instead, we built a civilisation on the back of the very logic he warned would destroy it.
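Jevons’s point fits in a back-of-the-envelope model (my illustration, not his 1865 analysis; the elasticity figure is invented for the sketch): when demand for an energy service is sufficiently price-elastic, doubling efficiency increases total fuel burned.

```python
def fuel_use(efficiency: float, elasticity: float, base_demand: float = 100.0) -> float:
    """Toy rebound model: the effective price of the service falls as 1/efficiency,
    demand scales as price**(-elasticity), and fuel = demand / efficiency."""
    demand = base_demand * efficiency ** elasticity  # cheaper service -> more demand
    return demand / efficiency                       # fuel required to supply it

before = fuel_use(efficiency=1.0, elasticity=1.5)
after = fuel_use(efficiency=2.0, elasticity=1.5)    # engines twice as efficient
print(before, after)  # with elasticity > 1, total fuel use *rises*
```

Any elasticity above 1 flips the intuition: the efficiency gain is swallowed, and then some, by induced demand.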

Then came Simon Kuznets, who—bless his empirically addled soul—crafted a curve that seemed to promise that inequality would fix itself if we just waited politely. We called it the Kuznets Curve and waved it about like a talisman against the ravages of industrial capitalism, ignoring the empirical wreckage that piled up beneath it like bones in a trench.

Meanwhile, Pareto himself, that nobleman of social Darwinism, famously calculated that 80% of Italy’s land was owned by 20% of its people—and rather than challenge this grotesque asymmetry, he chose to marvel at its elegance. Economics took this insight and said: “Yes, more of this, please.”
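For the curious, Pareto’s 80/20 split corresponds to a specific tail index of the distribution that now bears his name. A minimal check, using the standard closed form for wealth shares under a Pareto distribution (textbook algebra, not anything derived in this essay):

```python
import math

def top_share(p: float, alpha: float) -> float:
    """Share of total wealth held by the richest fraction p of the population,
    assuming wealth follows a Pareto distribution with tail index alpha."""
    return p ** (1 - 1 / alpha)

# The tail index implied by an exact 80/20 split: alpha = log(5)/log(4) ~ 1.16
alpha = math.log(5) / math.log(4)
print(top_share(0.20, alpha))  # ~0.80: the top 20% hold ~80%
```

The smaller alpha gets, the more grotesque the asymmetry Pareto found so elegant.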

And so the model persisted—narrow, bloodless, and exquisitely ill-suited to the world it presumed to explain. The economy, it turns out, is not a closed system of rational actors optimising utility. It is a planetary-scale thermodynamic engine fuelled by fossil sunlight, pumping entropy into the biosphere faster than it can absorb. But don’t expect to find that on the syllabus.

Mainstream economics has become a tragic farce, mouthing the language of optimisation while presiding over cascading system failure. Climate change? Not in the model. Biodiversity collapse? A regrettable externality. Intergenerational theft? Discounted at 3% annually.
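That throwaway line about discounting is worth making concrete. A sketch of standard exponential discounting (the 3% rate is the figure above; the damage amount and horizon are mine, chosen for illustration):

```python
def present_value(damage: float, years: int, rate: float = 0.03) -> float:
    """Standard exponential discounting: today's value of a cost 'years' out."""
    return damage / (1 + rate) ** years

# A £1 trillion climate damage, booked a century from now:
pv = present_value(1_000_000_000_000, years=100)
print(f"£{pv:,.0f}")  # roughly £52 billion -- the 'intergenerational theft' in one line
```

At 3%, a century of compounding shrinks a trillion-pound harm to about a twentieth of its size, which is how future generations vanish from the ledger.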

We are witnessing a slow-motion suicide cloaked in the rhetoric of balance sheets. The Earth is on fire, and the economists are debating interest rates.

What we need is not reform, but exorcism. Burn the models. Salt the axioms. Replace this ossified pseudoscience with something fit for a living world—ecological economics, systems theory, post-growth thinking, anything with the courage to name what this discipline has long ignored: that there are limits, and we are smashing into them at speed.

History will not be kind to this priesthood of polite annihilation. Nor should it be.

The Hard Problem of Consciousness

If you are reading this, you are likely familiar with David Chalmers’ idea of the Hard Problem of Consciousness—the thorny, maddeningly unsolvable question of why and how subjective experience arises from physical processes. If you’re not, welcome to the rabbit hole. Here, we’ll plunge deeper by examining the perspective of Stuart Hameroff, who, like a philosophical magician, reframes this conundrum as a chicken-and-egg problem: what came first, life or consciousness? His answer? Consciousness. But wait—there’s a slight snag. Neither “life” nor “consciousness” has a universally agreed-upon definition. Oh, the joy of philosophical discourse.

Video: Professor Stuart Hameroff and others promote the idea that consciousness pre-dates life. A fuller version is available at IAI.
Audio: Podcast on this topic.

For the uninitiated, Hameroff’s stance is heavily flavoured with panpsychism—the idea that consciousness is a fundamental feature of the universe, like space or time. In this worldview, consciousness predates life itself. From this vantage, Hameroff’s proposition seems inevitable, a tidy solution that fits neatly into a panpsychistic framework. But let me stop you right there because I’m not signing up for the panpsychism fan club, and I’m certainly not prepared to let Hameroff’s intellectual sleight of hand go unchallenged.

To make his case, Hameroff engages in a curious manoeuvre: he defines both life and consciousness in ways that conveniently serve his argument. Consciousness, for him, is not limited to the complex phenomena of human or even animal experience but is a fundamental property of the universe, embedded in the very fabric of reality. Meanwhile, consciousness eventually orchestrates itself into life—a secondary phenomenon. With these definitions, his argument clicks together like a self-serving jigsaw puzzle. It’s clever, I’ll grant him that. But cleverness isn’t the same as being correct.

This is the philosophical equivalent of marking your own homework. By defining the terms of debate to fit his narrative, Hameroff ensures that his conclusion will satisfy his fellow panpsychists. The faithful will nod along, their priors confirmed. But for those outside this echo chamber, his framework raises more questions than it answers. How does this universal consciousness work? Why should we accept its existence as a given? And—here’s the kicker—doesn’t this just punt the problem one step back? If consciousness is fundamental, what’s the mechanism by which it “pre-exists” life?

Hameroff’s move is bold, certainly. But boldness isn’t enough. Philosophy demands rigour, and redefining terms to suit your argument isn’t rigorous; it’s rhetorical trickery. Sure, it’s provocative. But does it advance our understanding of the Hard Problem, or does it merely reframe it in a way that makes Hameroff’s preferred answer seem inevitable? For my money, it’s the latter.

The real issue is that panpsychism itself is a philosophical Rorschach test. It’s a worldview that can mean just about anything, from the claim that electrons have a rudimentary kind of awareness to the idea that the universe is a giant mind. Hameroff’s take lands somewhere in this spectrum, but like most panpsychist arguments, it’s long on metaphysical speculation and short on empirical grounding. If you already believe that consciousness is a fundamental aspect of reality, Hameroff’s arguments will feel like a revelation. If you don’t, they’ll feel like smoke and mirrors.

In the end, Hameroff’s chicken-and-egg problem might be better framed as a false dichotomy. Perhaps life and consciousness co-evolved in ways we can’t yet fully understand. Or perhaps consciousness, as we understand it, emerges from the complexity of life, a byproduct rather than a prerequisite. What’s clear is that Hameroff’s solution isn’t as tidy as it seems, nor as universally compelling. It’s a clever sleight of hand, but let’s not mistake cleverness for truth.

What is Information?

I question whether reviewing a book chapter by chapter is the best approach. It feels more like a reaction video because I am trying to suss it out as I go. I also question the integrity and allegiance of the author, a point I often make clear. Perhaps ‘integrity’ is too harsh, as he may have integrity relative to his worldview. It just happens to differ from mine.

Chapter 1 of Yuval Noah Harari’s Nexus, ironically titled “What is Information?”, closes not with clarity but with ambiguity. Harari, ever the rhetorician, acknowledges the difficulty of achieving consensus on what ‘information’ truly means. Instead of attempting a rigorous definition, he opts for the commonsense idiomatic approach—a conveniently disingenuous choice, given that information is supposedly the book’s foundational theme. To say this omission is bothersome would be an understatement; it is a glaring oversight in a chapter dedicated to unpacking this very concept.

Audio: Podcast related to this content.

Sidestepping Rigour

Harari’s rationale for leaving ‘information’ undefined appears to rest on its contested nature, yet this does not excuse the absence of his own interpretation. While consensus may indeed be elusive, a book with such grand ambitions demands at least a working definition. Without it, readers are left adrift, navigating a central theme that Harari refuses to anchor. This omission feels particularly egregious when juxtaposed against his argument that information fundamentally underlies everything. How can one build a convincing thesis on such an unstable foundation?

The Map and the Terrain

In typical Harari fashion, the chapter isn’t devoid of compelling ideas. He revisits the map-and-terrain analogy, borrowing from Borges to argue that no map can perfectly represent reality. While this metaphor is apt for exploring the limitations of knowledge, it falters when Harari insists on the existence of an underlying, universal truth. His examples—Israeli versus Palestinian perspectives, Orthodox versus secular vantage points—highlight the relativity of interpretation. Yet he clings to the Modernist belief that events have an objective reality: they occur at specific times, dates, and places, regardless of perspective. This insistence feels like an ontological claim awkwardly shoehorned into an epistemological discussion.

Leveraging Ambiguity

One can’t help but suspect that Harari’s refusal to define ‘information’ serves a rhetorical purpose. By leaving the concept malleable, he gains the flexibility to adapt its meaning to suit his arguments throughout the book. This ambiguity may prove advantageous in bolstering a wide-ranging thesis, but it also risks undermining the book’s intellectual integrity. Readers may find themselves wondering whether Harari is exploring complexity or exploiting it.

Final Thoughts on Chapter 1

The chapter raises more questions than it answers, not least of which is whether Harari intends to address these foundational gaps in later chapters. If the preface hinted at reductionism, Chapter 1 confirms it, with Harari’s Modernist leanings and rhetorical manoeuvres taking centre stage. “What is Information?” may be a provocative title, but its contents suggest that the question is one Harari is not prepared to answer—at least, not yet.

Top 5 Books Read 2024

These are my favourite books I read in 2024. Only one was first published this year, so it seems I was playing catch-up and rereading. Two are about history; two are about the philosophy of science; and one is about biological free will or the lack thereof.

5

Against Method (2010)
Philosophy of Science

Against Method is a re-read for me. It makes the list on the coattails of a higher-ranked book. Feyerabend makes a compelling case against the Scientific Method™. To complete the set, I’d also recommend Bruno Latour’s We Have Never Been Modern.

4

Determined: A Science of Life without Free Will (2023)
Neuroscience, Philosophy

Determined arrives on the heels of Sapolsky’s Behave, another classic that I’d recommend even more, but I read it in 2018, so it doesn’t make the cut. In Determined, Sapolsky makes the case that there is no room or need for free will to explain human behaviour.

3

Guns, Germs, and Steel: The Fates of Human Societies (1998)
History

As with Against Method, Guns, Germs, and Steel makes the list only to complement my next choice. It views history through an environmental lens. To fill out the historical perspective, I recommend David Graeber’s The Dawn of Everything: A New History of Humanity (with David Wengrow). I’d recommend Yuval Noah Harari’s Sapiens: A Brief History of Humankind, but it occupies a different category and is more about a plausible broad narrative than the detail explored in the others listed.

2

How the World Made the West: A 4,000 Year History (2024)
History

Quinn makes history approachable as she questions the uniformity of civilisations pushed by orthodoxy. Read this in context with the aforementioned historical accounts for a fuller perspective.

1

The Structure of Scientific Revolutions: 50th Anniversary Edition (1962/2012)
Philosophy of Science

I was born in 1961. This should have been bedtime reading for me. I’d heard of this work, but one really has to read it. It’s less Modernist than I had presumed—though not to the extent of Feyerabend or Latour mentioned above. Again, reading all three provides a robust perspective on the philosophy of science.

Like Quinn’s, Kuhn’s writing is approachable. I had expected it to be stilted. It is academic, and it may boost your vocabulary, but give it a gander. It also works well in an audiobook format if you are so inclined.

This about closes out 2024. What do you think about these choices? Agree or disagree? What are your top recommendations?

Required Reading: Science

The Structure of Scientific Revolutions was published in 1962. Written by Thomas Kuhn, it introduced the world to the concept of paradigm shifts in science — and, as it turns out, elsewhere. As I mentioned recently, I experienced a mishap, confounding it with Paul Feyerabend’s Against Method, first published in 1975. Both of these should be required reading for Year 10 – or at least taught in summary.

I had read Feyerabend years ago but was only familiar with Kuhn from a distance. I’m glad we’ve become more intimate. These authors take different approaches yet at times arrive in the same place. Kuhn takes a Modernist approach that he critiques and modifies; Feyerabend takes a Postmodernist path that sometimes crosses it.

Ah, the delightful dance of paradigms and anarchism in the hallowed halls of science! Let’s delve deeper into the intellectual pas de deux between Thomas Kuhn and Paul Feyerabend, those audacious thinkers who dared to challenge the sanctity of scientific methodology.

Kuhn’s Paradigm Shifts: The Scientific Waltz

Thomas Kuhn, in his seminal work The Structure of Scientific Revolutions, introduced us to the concept of paradigm shifts—a term now so overused that even corporate PowerPoint presentations aren’t spared. Kuhn posited that science doesn’t progress through a linear accumulation of knowledge but rather through a series of revolutionary upheavals. These upheavals occur when the prevailing scientific framework, or “paradigm,” becomes as outdated as last season’s fashion, unable to account for emerging anomalies. In Kuhn’s view, the scientific community clings to its paradigms with the tenacity of a dog to its bone, until the weight of anomalies forces a collective epiphany, leading to a paradigm shift. This cyclical process propels scientific advancement, albeit in a manner reminiscent of a drunken sailor’s stagger rather than a straight path.

Feyerabend’s Epistemological Anarchism: The Punk Rock of Science

Enter Paul Feyerabend, the enfant terrible of the philosophy of science, with his provocative manifesto Against Method. Feyerabend gleefully dismantled the notion of a universal scientific method, advocating for “epistemological anarchism.” He argued that the rigid adherence to methodological rules is about as useful as a chocolate teapot, stifling creativity and hindering progress. In Feyerabend’s anarchic utopia, “anything goes” in the pursuit of knowledge, and the scientific method is more of a loose suggestion than a strict protocol. His critique was not just a call for methodological diversity but a full-blown rebellion against the tyranny of scientific dogmatism.

A Comparative Analysis: Method to the Madness

While Kuhn and Feyerabend both challenged the orthodox views of scientific progress, their approaches were as different as chalk and cheese. Kuhn’s analysis was rooted in historical case studies, portraying scientific revolutions as communal shifts in perspective, akin to a collective midlife crisis. Feyerabend, on the other hand, took a more radical stance, suggesting that the very idea of a fixed scientific method is as mythical as unicorns. Where Kuhn saw periods of “normal science” punctuated by revolutionary shifts, Feyerabend saw a chaotic free-for-all, where progress is made not by following rules but by breaking them.

Implications for Scientific Practice: Order in Chaos

The implications of their critiques are profound. Kuhn’s work suggests that scientists should remain open to paradigm shifts, lest they become as obsolete as Betamax in a Netflix era. Feyerabend’s anarchism, while controversial, serves as a reminder that innovation often requires the audacity to defy convention. Together, they paint a picture of science not as a monolithic quest for truth but as a dynamic, often tumultuous, human endeavour.

Conclusion: The Legacy of Intellectual Rebellion

In conclusion, the works of Kuhn and Feyerabend invite us to view science through a more sceptical lens, questioning the sanctity of its methods and the rigidity of its paradigms. Their critiques serve as a clarion call for intellectual flexibility, urging us to embrace the chaos and complexity inherent in the pursuit of knowledge. After all, in the grand theatre of science, it’s often the most unconventional performances that leave a lasting impact.

A Case for Intersectionalism

The Space Between

In the great philosophical tug-of-war between materialism and idealism, where reality is argued to be either wholly independent of perception or entirely a construct of the mind, there lies an underexplored middle ground—a conceptual liminal space that we might call “Intersectionalism.” This framework posits that reality is neither purely objective nor subjective but emerges at the intersection of the two. It is the terrain shaped by the interplay between what exists and how it is perceived, mediated by the limits of human cognition and sensory faculties.

Audio: Podcast conversation on this topic.

Intersectionalism offers a compelling alternative to the extremes of materialism and idealism. By acknowledging the constraints of perception and interpretation, it embraces the provisionality of knowledge, the inevitability of blind spots, and the productive potential of uncertainty. This essay explores the foundations of Intersectionalism, its implications for knowledge and understanding, and the ethical and practical insights it provides.

Reality as an Intersection

At its core, Intersectionalism asserts that reality exists in the overlapping space between the objective and the subjective. The objective refers to the world as it exists independently of any observer—the “terrain.” The subjective encompasses perception, cognition, and interpretation—the “map.” Reality, then, is not fully contained within either but is co-constituted by their interaction.

Consider the act of seeing a tree. The tree, as an object, exists independently of the observer. Yet, the experience of the tree is entirely mediated by the observer’s sensory and cognitive faculties. Light reflects off the tree, enters the eye, and is translated into electrical signals processed by the brain. This process creates a perception of the tree, but the perception is not the tree itself.

This gap between perception and object highlights the imperfect alignment of subject and object. No observer perceives reality “as it is” but only as it appears through the interpretive lens of their faculties. Reality, then, is a shared but imperfectly understood phenomenon, subject to distortion and variation across individuals and species.

The Limits of Perception and Cognition

Humans, like all organisms, perceive the world through the constraints of their sensory and cognitive systems. These limitations shape not only what we can perceive but also what we can imagine. For example:

  • Sensory Blind Spots: Humans are limited to the visible spectrum of light (~380–750 nm), unable to see ultraviolet or infrared radiation without technological augmentation. Other animals, such as bees or snakes, perceive these spectra as part of their natural sensory worlds. Similarly, humans lack the electroreception of sharks or the magnetoreception of birds.
  • Dimensional Constraints: Our spatial intuition is bounded by three spatial dimensions plus time, making it nearly impossible to conceptualise higher-dimensional spaces without resorting to crude analogies (e.g., imagining a tesseract as a 3D shadow of a 4D object).
  • Cognitive Frameworks: Our brains interpret sensory input through patterns and predictive models. These frameworks are adaptive but often introduce distortions, such as cognitive biases or anthropocentric assumptions.
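The tesseract analogy above can be made concrete. The sketch below (illustrative only; the distance value and vertex scale are arbitrary choices, not anything from the essay) generates the sixteen vertices of a 4D hypercube and perspective-projects them into 3D, treating the fourth axis as depth, which yields the familiar “cube within a cube” shadow:

```python
from itertools import product

def tesseract_vertices():
    """All 16 vertices of a unit 4-cube: every combination of +/-1 in 4D."""
    return list(product((-1.0, 1.0), repeat=4))

def project_to_3d(vertex, distance=3.0):
    """Perspective-project a 4D point into 3D, treating the 4th axis (w)
    like depth: points farther along w are scaled toward the origin."""
    x, y, z, w = vertex
    scale = distance / (distance - w)  # simple perspective divide
    return (x * scale, y * scale, z * scale)

shadow = [project_to_3d(v) for v in tesseract_vertices()]
# The 16 projected vertices split into two nested cubes (8 inner, 8 outer):
# the standard 3D "shadow" picture of a tesseract.
```

The point of the exercise is epistemic rather than geometric: the 3D shadow is all our spatial intuition can hold, and it discards the very structure that makes the object four-dimensional.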

This constellation of limitations suggests that what we perceive and conceive as reality is only a fragment of a larger, potentially unknowable whole. Even when we extend our senses with instruments, such as infrared cameras or particle detectors, the data must still be interpreted through the lens of human cognition, introducing new layers of abstraction and potential distortion.

The Role of Negative Space

One of the most intriguing aspects of Intersectionalism is its embrace of “negative space” in knowledge—the gaps and absences that shape what we can perceive and understand. A compelling metaphor for this is the concept of dark matter in physics. Dark matter is inferred not through direct observation but through its gravitational effects on visible matter. It exists as a kind of epistemic placeholder, highlighting the limits of our current sensory and conceptual tools.

Similarly, there may be aspects of reality that elude detection altogether because they do not interact with our sensory or instrumental frameworks. These “unknown unknowns” serve as reminders of the provisional nature of our maps and the hubris of assuming completeness. Just as dark matter challenges our understanding of the cosmos, the gaps in our perception challenge our understanding of reality itself.
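The dark-matter inference pattern can be sketched numerically. In this toy example (the flat rotation speed and radii are assumed, Milky-Way-scale round numbers, not survey data), Newtonian gravity says a circular orbital speed v at radius r requires an enclosed mass M = v²r/G; if the observed speed stays flat far beyond where visible matter thins out, the implied mass keeps growing, and the surplus is the epistemic placeholder the essay describes:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def implied_mass(v_m_per_s, r_m):
    """Mass required inside radius r to sustain circular orbital speed v."""
    return v_m_per_s**2 * r_m / G

KPC = 3.086e19      # metres per kiloparsec
v_flat = 220e3      # an assumed flat ~220 km/s rotation speed

for r_kpc in (10, 30, 60):
    m = implied_mass(v_flat, r_kpc * KPC)
    print(f"r = {r_kpc:>2} kpc  ->  implied enclosed mass ~ {m:.2e} kg")
# A flat curve makes implied mass grow linearly with radius, even where
# the visible (luminous) matter has already run out -- the gap is what
# gets labelled "dark".
```

The inference never observes the missing mass directly; it observes a mismatch between the map (visible matter) and the terrain's gravitational behaviour.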

Practical and Ethical Implications

Intersectionalism’s recognition of perceptual and cognitive limits has profound implications for science, ethics, and philosophy.

Science and Knowledge

In science, Intersectionalism demands humility. Theories and models, however elegant, are maps rather than terrains. They approximate reality within specific domains but are always subject to revision or replacement. String theory, for instance, with its intricate mathematics and reliance on extra dimensions, risks mistaking the elegance of the map for the completeness of the terrain. By embracing the provisionality of knowledge, Intersectionalism encourages openness to new paradigms and methods that might better navigate the negative spaces of understanding.

Ethics and Empathy

Ethically, Intersectionalism fosters a sense of humility and openness towards other perspectives. If reality is always interpreted subjectively, then every perspective—human, animal, or artificial—offers a unique and potentially valuable insight into the intersection of subject and object. Recognising this pluralism can promote empathy and cooperation across cultures, species, and disciplines.

Technology and Augmentation

Technological tools extend our sensory reach, revealing previously unseen aspects of reality. However, they also introduce new abstractions and biases. Intersectionalism advocates for cautious optimism: technology can help illuminate the terrain but will never eliminate the gap between map and terrain. Instead, it shifts the boundaries of our blind spots, often revealing new ones in the process.

Conclusion: Navigating the Space Between

Intersectionalism provides a framework for understanding reality as a shared but imperfect intersection of subject and object. It rejects the extremes of materialism and idealism, offering instead a middle path that embraces the limitations of perception and cognition while remaining open to the possibilities of negative space and unknown dimensions. In doing so, it fosters humility, curiosity, and a commitment to provisionality—qualities essential for navigating the ever-expanding terrain of understanding.

By acknowledging the limits of our maps and the complexity of the terrain, Intersectionalism invites us to approach reality not as a fixed and knowable entity but as an unfolding interplay of perception and existence. It is a philosophy not of certainty but of exploration, always probing the space between.