We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.
Box: The Gentle Assassin of Certainty
George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” That exact phrase, however, does not appear in his 1976 paper Science and Statistics, where the idea was first set down. What he wrote there was:
“Since all models are wrong, the scientist must be alert to what is importantly wrong.”
George Box
The “some are useful” flourish came later (Box only spelled it out in subsequent papers) and has since been brandished by a public desperate to sweeten the bitter pill. Nevertheless, Box deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. Models are provisional guesses, finger-paintings smeared across the rough surface of the unknown.
Audio: NotebookLM podcast on this topic.
Lawson: The Arsonist Who Burned the Map
Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. In a recent lecture, Lawson declared:
“You don’t need truth to have a usable model.”
Hilary Lawson
Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.
Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.
The Case of the Celestial Spheres
Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.
Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.
Summary Table: Lawson’s View on Models and Truth
| Aspect | Lawson’s Position |
| --- | --- |
| Role of Models | Tools/metaphors for intervention, not truth |
| Truth | Not required for usefulness |
| Refinement | Models are improved for practical effectiveness |
| Reality | Fundamentally open, never fully captured by models |
| Implication | Focus on utility and adaptability, not final truth |
Conclusion
Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.
We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.
The Illusion of Clarity in a World of Cognitive Fog
Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.
Video: This post was prompted by this short by MiniPhilosophy.
Audio: NotebookLM podcast on this topic.
Deduction: The Tyrant of Certainty
Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.
Example:
All humans are mortal.
Socrates is human.
Therefore, Socrates is mortal.
Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.
Strength: True conclusions guaranteed from true premises, provided the argument is valid. Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and the argument remains perfectly valid.
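To make the garbage-in, garbage-out point concrete, here is a minimal Python sketch of my own (not from any logic textbook): a toy engine that chains “all X are Y” premises. It derives Socrates’ mortality flawlessly, and derives his immortality just as flawlessly once fed a false premise.

```python
# A minimal sketch (my illustration): deduction over toy "all X are Y" premises,
# showing that the machinery is indifferent to whether the premises are true.

def deduce(premises: dict[str, set[str]], subject: str, category: str) -> bool:
    """Return True if `subject` can be placed in `category` by chaining premises.
    `premises` maps a class to the classes it is contained in,
    e.g. {"human": {"mortal"}} encodes "All humans are mortal"."""
    memberships = {subject}
    frontier = {subject}
    while frontier:
        nxt = set()
        for item in frontier:
            for parent in premises.get(item, set()):
                if parent not in memberships:
                    memberships.add(parent)
                    nxt.add(parent)
        frontier = nxt
    return category in memberships

# Sound: true premises, true conclusion.
print(deduce({"human": {"mortal"}, "socrates": {"human"}}, "socrates", "mortal"))     # True

# Valid but unsound: same machinery, false premise, confidently absurd conclusion.
print(deduce({"human": {"immortal"}, "socrates": {"human"}}, "socrates", "immortal"))  # True
```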
Induction: The Gambler’s Gospel
Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.
Example:
The sun rose today.
It rose yesterday.
It has risen every day I’ve been alive.
Therefore, the sun will rise tomorrow.
Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.
Strength: Empirically rich and adaptive. Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.
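Here is a small, self-contained Python sketch of enumerative induction (the swan counts and colours are invented for illustration): the generalisation looks bulletproof right up until a single Australian counterexample arrives.

```python
# A minimal sketch (my own toy data): naive enumerative induction.
# Generalise from a sample, then watch one counterexample detonate the rule.

from collections import Counter

def induce_colour(observed_swans: list[str]) -> str:
    """Return the universal colour 'all swans' are claimed to have,
    based purely on the colours seen so far."""
    counts = Counter(observed_swans)
    if len(counts) == 1:
        return next(iter(counts))   # every observation agrees, so generalise
    raise ValueError(f"No clean generalisation: {dict(counts)}")

european_sample = ["white"] * 10_000          # centuries of Northern Hemisphere birdwatching
print(induce_colour(european_sample))         # 'white', looks unassailable

australian_visit = european_sample + ["black"]   # Cygnus atratus turns up
try:
    print(induce_colour(australian_visit))
except ValueError as exc:
    print("Generalisation destroyed:", exc)
```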
Abduction: Sherlock Holmes’ Drug of Choice
Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.
Example:
The lawn is wet.
It probably rained.
It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.
Strength: Useful in messy, real-world contexts. Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”
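A rough Python sketch of abduction as inference to the best explanation, with entirely made-up plausibility numbers: the function simply picks the highest-scoring hypothesis, which is exactly where the subjective notion of “best” sneaks in.

```python
# A minimal sketch (my own toy numbers): abduction as picking the explanation
# that best accounts for the evidence, where 'best' is whatever we encode it to be.

def best_explanation(evidence: str, hypotheses: dict[str, dict]) -> str:
    """Score each hypothesis as prior plausibility times how well it explains
    the evidence, and return the highest scorer."""
    scores = {
        name: h["prior"] * h["explains"].get(evidence, 0.0)
        for name, h in hypotheses.items()
    }
    return max(scores, key=scores.get)

hypotheses = {
    "it rained":           {"prior": 0.6,   "explains": {"wet lawn": 0.9}},
    "sprinkler ran":       {"prior": 0.3,   "explains": {"wet lawn": 0.9}},
    "divine intervention": {"prior": 0.001, "explains": {"wet lawn": 1.0}},
}

print(best_explanation("wet lawn", hypotheses))   # 'it rained': plausible, not proven
```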
The Modern Reasoning Crisis: Why We’re All Probably Wrong
Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.
Consider how we navigate information today:
We encounter a headline that confirms our worldview
We accept it without scrutiny (deductive failure)
We see similar headlines repeatedly (inductive trap)
We conclude our worldview is objectively correct (abductive collapse)
And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.
Weakness by Analogy: The Reasoning Café
Imagine a café.
Deduction is the customer who checks the menu and confidently orders “Soup of the Day,” because the chalkboard says “Today’s Soup is Tomato,” and she trusts chalkboards.
Induction is the one who has had tomato soup every Wednesday for months and assumes it’ll be tomato today again—until it isn’t, and now he’s wearing bisque.
Abduction sees the waiter carrying bowls of red liquid to every table and infers it’s probably tomato soup, orders it, and gets… gazpacho. Ice-cold disappointment.
All three are trying to reason. Only one might get lunch.
The Meta-Problem: Reasoning About Reasoning
The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.
This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.
Conclusion: Reason Is Not a Guarantee, It’s a Wager
None of these modalities offer omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s skepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.
Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.
The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.
Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility, and who recognise that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.
So reason on, fellow thinkers. Just don’t be too smug about it.
Post-COVID, we’re told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.
Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?
Audio: NotebookLM podcast discussing this topic.
Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let’s not be so credulous. This isn’t merely a crisis of trust—it’s a crisis of theatre.
“The Science” as Ventriloquism
Let’s begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It’s not a holy writ passed down by lab-coated Levites. It’s a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.
A Nature Human Behaviour paper (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What the authors fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?
Trust didn’t just erode. It was actively incinerated in a bonfire of institutional vanity.
A Crisis of Influence, Not Integrity
The second Nature commentary (2025) wrings its hands over “why trust in science is declining,” as if the populace has suddenly turned flat-Earth overnight. But the real story isn’t a decline in trust per se; it’s a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes—exhausted citizens armed with Wi-Fi and anxiety—have joined the fray.
Science hasn’t lost truth—it’s lost control. And frankly, perhaps it shouldn’t have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple—especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.
Technocracy’s Fatal Hubris
What we saw wasn’t science failing—it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing happened. Orwell would have a field day.
This wasn’t a failure of scientific method. It was a failure of scientific messaging—an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.
Toward a Post-Scientistic Public Sphere
So where does that leave us? Perhaps we need to kill the idol of “The Science” to resurrect a more mature relationship with scientific discourse—one that tolerates ambiguity, embraces dissent, and admits when the data isn’t in. Science, done properly, is the art of saying “we don’t know… yet.”
The pandemic didn’t erode trust in science. It exposed how fragile our institutional credibility scaffolding really is—how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.
The answer isn’t more science communication—it’s less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.
Conclusion
Trust in science isn’t dead. But trust in those who claim to speak for science? That’s another matter. Perhaps it’s time to separate the two.
The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.
Audio: NotebookLM podcast on this topic.
Let us survey the battlefield:
1. Classical Probability (Laplacean Determinism in a Tuxedo)
This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
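In symbols, the classical definition with a worked die-roll example (standard textbook material, nothing novel on my part):

```latex
% Classical probability: favourable outcomes over possible outcomes,
% assuming every outcome in the sample space \Omega is equally likely.
P(A) = \frac{|A|}{|\Omega|}, \qquad
P(\text{even die roll}) = \frac{|\{2,4,6\}|}{|\{1,2,3,4,5,6\}|} = \frac{1}{2}.
```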
2. Frequentist Probability (The Empiricist’s Fantasy)
Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
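A quick simulation sketch (mine, with an arbitrary seed) of the frequentist idea: estimate the probability of heads as the relative frequency over ever more tosses. The estimate creeps toward 0.5, but every finite run remains an approximation, and no run answers the single-case question.

```python
# A minimal sketch (my own simulation): probability as limiting relative frequency,
# minus the inconvenient requirement of infinitely many trials.

import random

random.seed(42)

def relative_frequency_of_heads(n_trials: int) -> float:
    """Toss a fair coin n_trials times and return the observed share of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

for n in (10, 1_000, 100_000):
    print(f"{n:>7} tosses: relative frequency of heads = {relative_frequency_of_heads(n):.4f}")
# The numbers drift toward 0.5, but the definition stays silent about
# the single toss (or the single bridge) you actually care about.
```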
3. Bayesian Probability (Subjectivity Dressed as Rigor)
Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
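A minimal Bayesian sketch using the standard Beta-Bernoulli update (the priors and data are invented for illustration): two reasoners see the same ten tosses and walk away with different degrees of belief, because with so little data the prior is still doing most of the talking.

```python
# A minimal sketch (my illustration): Bayesian updating as degree of belief,
# using a Beta prior over a coin's bias toward heads.

def posterior_mean(prior_alpha: float, prior_beta: float,
                   heads: int, tails: int) -> float:
    """Beta(prior_alpha, prior_beta) prior plus Bernoulli data gives a
    Beta posterior; return its mean as the updated degree of belief."""
    return (prior_alpha + heads) / (prior_alpha + prior_beta + heads + tails)

data = (7, 3)   # 7 heads, 3 tails observed

optimist = posterior_mean(50, 10, *data)   # strong prior that the coin favours heads
sceptic  = posterior_mean(1, 1, *data)     # flat, know-nothing prior

print(f"Optimist's belief the coin favours heads: {optimist:.3f}")
print(f"Sceptic's belief:                         {sceptic:.3f}")
# Same evidence, different posteriors. Pile on enough data and the two
# drift together; until then, the priors keep them in separate realities.
```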
4. Propensity Interpretation (The Ontology of Maybes)
Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.
5. Logical Probability (A Sober Attempt at Rationality)
Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.
6. Quantum Probability (Schrödinger’s Definition)
In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
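For reference, the “squared modulus” is the Born rule in its standard textbook form, for a normalised wave function:

```latex
% Born rule: the probability density of observing outcome x from state \psi.
P(x) = |\psi(x)|^{2}, \qquad \int |\psi(x)|^{2}\,dx = 1.
```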
Current Status: War of Attrition
There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.
In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.
And so the probability of this contest being resolved any time soon? About zero. Or one. Depending on your interpretation.
I’m not a fan of IQ as a general metric. Let us be done with the cult of the clever. Let us drag the IQ score from its pedestal, strip it of its statistical robes, and parade it through the streets of history where it belongs—next to phrenology, eugenics, and other well-meaning pseudosciences once weaponised by men in waistcoats.
The so-called Intelligence Industrial Complex—an infernal alliance of psychologists, bureaucrats, and HR departments—has for too long dictated the terms of thought. It has pretended to measure the immeasurable. It has sold us a fiction in numerical drag: that human intelligence can be distilled, packaged, and ranked.
Audio: NotebookLM podcast on this topic.
What it measures, it defines. What it defines, it controls.
IQ is not intelligence. It is cognitive GDP: a snapshot of what your brain can do under fluorescent lights with a timer running. It rewards abstraction, not understanding; speed, not depth; pattern recognition, not wisdom. It’s a test of how well you’ve been conditioned to think like the test-makers.
This is not to say IQ has no value. Of course it does—within its own ecosystem of schools, bureaucracies, and technocracies. But let us not mistake the ruler for the terrain. Let us not map the entire landscape of human potential using a single colonial compass.
True intelligence is not a number. It is a spectrum of situated knowings, a polyphony of minds tuned to different frequencies. The Inuit hunter tracking a seal through silence. The griot remembering centuries of lineage. The autistic coder intuiting an algorithm in dreamtime. The grandmother sensing a lie with her bones. IQ cannot touch these.
To speak of intelligence as if it belonged to a single theory is to mistake a monoculture for a forest. Let us burn the monoculture. Let us plant a thousand new seeds.
A Comparative Vivisection of Intelligence Theories
| Theory / Model | Core Premise | Strengths | Blind Spots / Critiques | Cultural Framing |
| --- | --- | --- | --- | --- |
| IQ (Psychometric g) | Intelligence is a single, general cognitive ability measurable via testing | Predicts academic & job performance; standardised | Skewed toward Western logic, ignores context, devalues non-abstract intelligences | Western, industrial, meritocratic |
| Multiple Intelligences (Gardner) | Intelligence is plural: linguistic, spatial, musical, bodily, etc. | Recognises diversity; challenges IQ monopoly | Still individualistic; categories often vague; Western in formulation | Liberal Western pluralism |
| Triarchic Theory (Sternberg) | Intelligence = analytical + creative + practical | Includes adaptability, real-world success | Still performance-focused; weak empirical grounding | Western managerial |
| Emotional Intelligence (Goleman) | Intelligence includes emotion regulation and interpersonal skill | Useful in leadership & education contexts | Commodified into corporate toolkits; leans self-help | Western therapeutic |
| Socio-Cultural (Vygotsky) | Intelligence develops through social interaction and cultural mediation | Recognises developmental context and culture | Less attention to adult or cross-cultural intelligence | Soviet / constructivist |
| Distributed Cognition / Extended Mind | Intelligence is distributed across people, tools, systems | Breaks skull-bound model; real-world cognition | Hard to measure; difficult to institutionalise | Post-cognitive, systems-based |
| Indigenous Epistemologies | Intelligence is relational, ecological, spiritual, embodied, ancestral | Holistic; grounded in lived experience | Marginalised by academia; often untranslatable into standard metrics | Global South / decolonial |
Conclusion: Beyond the Monoculture of Mind
If we want a more encompassing theory of intelligence, we must stop looking for a single theory. We must accept plurality—not as a nod to diversity, but as an ontological truth.
Intelligence is not a fixed entity to be bottled and graded. It is a living, breathing phenomenon: relational, situated, contextual, historical, ecological, and cultural.
And no test devised in a Princeton psych lab will ever tell you how to walk through a forest without being seen, how to tell when rain is coming by smell alone, or how to speak across generations through story.
It’s time we told the Intelligence Industrial Complex: your number’s up.
Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.
Audio: NotebookLM podcast of this page content.
This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.
But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.
The Fictional Contract: Broken but Not Forgotten
Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.
The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.
Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.
But Then Real Life Said, “Hold My Beer.”
At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.
We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.
You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.
The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.
Cognitive Gymnastics: Welcome to the Cirque du Surréalisme
What happens to a psyche caught in this funhouse? Nothing good.
Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.
Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.
The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.
Culture in Crisis: A Symptom or the Disease?
This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.
We’ve become simultaneously more skeptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.
In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.
So what kind of society does this create?
One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.
The Future: A Choose-Your-Own-Delusion Adventure
So where does this all end?
There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.
We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.
It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”
That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.
In Conclusion, Reality Is Optional, But Reason Is Not
In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.
Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.
Lights down. Curtain call. Time to read the footnotes.
A Brief, Brutal Experiment in Categorising Your Worldview
This month, I’ve been tinkering with a little project—an elegant, six-question survey designed to assess where you land in the great intellectual mess that is modernity.
Audio: Podcast discussion about this post.
This isn’t some spur-of-the-moment quiz cooked up in a caffeine-fueled haze. No, this project has been simmering for years, and after much consideration (and occasional disdain), I’ve crafted a set of questions and response options that, I believe, encapsulate the prevailing worldviews of our time.
It all began with Metamodernism, a term that, at first, seemed promising—a bold synthesis of Modernism and Postmodernism, a grand dialectic of the ages. But as I mapped it out, it collapsed under scrutiny. A footnote in the margins of intellectual history, at best. I’ll expand on that in due course.
The Setup: A Simple, Slightly Sadistic Ternary Plot
For the visually inclined (or the masochistically curious), I initially imagined a timeline, then a branching decision tree, then a Cartesian plane before landing on a ternary plot—a three-way visual that captures ideological leanings in a way a boring old bar chart never could.
The survey itself is brief: six questions, each with five possible answers. Submit your responses, and voilà—you get a tidy little ternary chart plotting your intellectual essence, along with a breakdown of what your answers signify.
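For the curious, here is a rough Python sketch of how such a mapping could work. It is not the survey’s actual code, and the pole labels below are placeholders rather than the survey’s published axes: each answer contributes a weight toward three poles, the totals are normalised, and the shares are mapped to barycentric coordinates inside a triangle.

```python
# A minimal sketch (my own, not the survey's implementation) of turning six answers
# into a single point on a ternary plot. Pole names are hypothetical placeholders.

from typing import Sequence

POLES = ("Premodern", "Modern", "Postmodern")   # placeholder corner labels

def ternary_point(weights: Sequence[float]) -> tuple[float, float]:
    """Map three non-negative weights to (x, y) inside an equilateral triangle
    with corners at (0, 0), (1, 0) and (0.5, sqrt(3)/2)."""
    a, b, c = weights
    total = a + b + c
    a, b, c = a / total, b / total, c / total   # shares now sum to 1
    x = b + 0.5 * c
    y = (3 ** 0.5 / 2) * c
    return x, y

# Six answers, each scored against the three poles (toy numbers):
answers = [(0.2, 0.6, 0.2), (0.0, 0.3, 0.7), (0.1, 0.8, 0.1),
           (0.3, 0.3, 0.4), (0.0, 0.5, 0.5), (0.2, 0.4, 0.4)]
totals = [sum(ans[i] for ans in answers) for i in range(3)]

print(dict(zip(POLES, totals)))      # raw leanings per pole
print("plot at:", ternary_point(totals))
```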
Methodology: Half-Rigorous, Half-Reckless
I am, after all, a (recovering) statistician, so I’ve tried to uphold proper methodology while also fast-tracking certain safeguards for the sake of efficiency. If there’s enough interest, I may expand the survey, adding more questions or increasing response flexibility (tick boxes instead of radio buttons—revolutionary, I know).
Privacy Concerns? Relax. I’m not harvesting your data for some nefarious scheme. No personally identifiable information is collected—just a timestamp, session ID, and your browser’s language setting. I did consider tracking IP addresses to analyze regional trends but ultimately scrapped that idea.
In the future, I may add an optional email feature for those who wish to save and track their responses over time (assuming anyone is unhinged enough to take this more than once).
The Rest of the Story: Your Feedback, My Amusement
Since this is a personal project crafted in splendid isolation, I’d love to hear your thoughts. Are the questions reasonable? Do the response options make sense? Does the summary feel accurate? Is the ternary chart decipherable, or have I constructed a glorified inkblot test?
As an academic, economist, and statistician, I had never encountered a ternary chart before embarking on this, and now I rather enjoy it. That said, I also find Nietzsche “intuitive,” so take that as you will.
If this gains traction, expect follow-up content—perhaps videos, podcasts, or further written explorations.
Your Move
Take the survey. It’s painless, requiring mere minutes of your life (which is, let’s be honest, already wasted online). And because I’m feeling generous, you can even generate a PDF to stick on your fridge, next to your collection of expired coupons and disappointing takeout menus.
Let’s see where you stand in the grand, chaotic landscape of modernity. Or at least, let’s have a laugh trying to make sense of it.
DISCLAIMER: The Modernity Worldview Survey is not scientific. It is designed as an experiment to provide directional insights. It is hosted on Google Cloud and subject to its availability and performance limitations.
I question whether reviewing a book chapter by chapter is the best approach. It feels more like a reaction video, because I am trying to suss things out as I go. I also question the integrity and allegiance of the author, a point I often make clear. Perhaps ‘integrity’ is too harsh, as he may well have integrity relative to his own worldview; it just happens to differ from mine.
Chapter 1 of Yuval Noah Harari’s Nexus, ironically titled “What is Information?” closes not with clarity but with ambiguity. Harari, ever the rhetorician, acknowledges the difficulty of achieving consensus on what ‘information’ truly means. Instead of attempting a rigorous definition, he opts for the commonsense idiomatic approach—a conveniently disingenuous choice, given that information is supposedly the book’s foundational theme. To say this omission is bothersome would be an understatement; it is a glaring oversight in a chapter dedicated to unpacking this very concept.
Audio: Podcast related to this content.
Sidestepping Rigour
Harari’s rationale for leaving ‘information’ undefined appears to rest on its contested nature, yet this does not excuse the absence of his own interpretation. While consensus may indeed be elusive, a book with such grand ambitions demands at least a working definition. Without it, readers are left adrift, navigating a central theme that Harari refuses to anchor. This omission feels particularly egregious when juxtaposed against his argument that information fundamentally underlies everything. How can one build a convincing thesis on such an unstable foundation?
The Map and the Terrain
In typical Harari fashion, the chapter isn’t devoid of compelling ideas. He revisits the map-and-terrain analogy, borrowing from Borges to argue that no map can perfectly represent reality. While this metaphor is apt for exploring the limitations of knowledge, it falters when Harari insists on the existence of an underlying, universal truth. His examples—Israeli versus Palestinian perspectives, Orthodox versus secular vantage points—highlight the relativity of interpretation. Yet he clings to the Modernist belief that events have an objective reality: they occur at specific times, dates, and places, regardless of perspective. This insistence feels like an ontological claim awkwardly shoehorned into an epistemological discussion.
Leveraging Ambiguity
One can’t help but suspect that Harari’s refusal to define ‘information’ serves a rhetorical purpose. By leaving the concept malleable, he gains the flexibility to adapt its meaning to suit his arguments throughout the book. This ambiguity may prove advantageous in bolstering a wide-ranging thesis, but it also risks undermining the book’s intellectual integrity. Readers may find themselves wondering whether Harari is exploring complexity or exploiting it.
Final Thoughts on Chapter 1
The chapter raises more questions than it answers, not least of which is whether Harari intends to address these foundational gaps in later chapters. If the preface hinted at reductionism, Chapter 1 confirms it, with Harari’s Modernist leanings and rhetorical manoeuvres taking centre stage. “What is Information?” may be a provocative title, but its contents suggest that the question is one Harari is not prepared to answer—at least, not yet.
The Structure of Scientific Revolutions was published in 1962. Written by Thomas Kuhn, it introduced the world to the concept of paradigm shifts in science and, as it turns out, elsewhere. As I mentioned recently, I experienced a mishap, confounding it with Paul Feyerabend’s Against Method, first published in 1975. Both should be required reading for year 10, or at least taught in summary.
I had read Feyerabend years ago but was only familiar with Kuhn from a distance. I’m glad we’ve become more intimate. These authors take different approaches yet at times arrive in the same place. Kuhn takes a Modernist approach that he critiques and modifies; Feyerabend takes a Postmodernist path that sometimes crosses Kuhn’s.
Ah, the delightful dance of paradigms and anarchism in the hallowed halls of science! Let’s delve deeper into the intellectual pas de deux between Thomas Kuhn and Paul Feyerabend, those audacious thinkers who dared to challenge the sanctity of scientific methodology.
Kuhn’s Paradigm Shifts: The Scientific Waltz
Thomas Kuhn, in his seminal work The Structure of Scientific Revolutions, introduced us to the concept of paradigm shifts—a term now so overused that even corporate PowerPoint presentations aren’t spared. Kuhn posited that science doesn’t progress through a linear accumulation of knowledge but rather through a series of revolutionary upheavals. These upheavals occur when the prevailing scientific framework, or “paradigm,” becomes as outdated as last season’s fashion, unable to account for emerging anomalies. In Kuhn’s view, the scientific community clings to its paradigms with the tenacity of a dog to its bone, until the weight of anomalies forces a collective epiphany, leading to a paradigm shift. This cyclical process propels scientific advancement, albeit in a manner reminiscent of a drunken sailor’s stagger rather than a straight path.
Feyerabend’s Epistemological Anarchism: The Punk Rock of Science
Enter Paul Feyerabend, the enfant terrible of the philosophy of science, with his provocative manifesto Against Method. Feyerabend gleefully dismantled the notion of a universal scientific method, advocating for “epistemological anarchism.” He argued that the rigid adherence to methodological rules is about as useful as a chocolate teapot, stifling creativity and hindering progress. In Feyerabend’s anarchic utopia, “anything goes” in the pursuit of knowledge, and the scientific method is more of a loose suggestion than a strict protocol. His critique was not just a call for methodological diversity but a full-blown rebellion against the tyranny of scientific dogmatism.
A Comparative Analysis: Method to the Madness
While Kuhn and Feyerabend both challenged the orthodox views of scientific progress, their approaches were as different as chalk and cheese. Kuhn’s analysis was rooted in historical case studies, portraying scientific revolutions as communal shifts in perspective, akin to a collective midlife crisis. Feyerabend, on the other hand, took a more radical stance, suggesting that the very idea of a fixed scientific method is as mythical as unicorns. Where Kuhn saw periods of “normal science” punctuated by revolutionary shifts, Feyerabend saw a chaotic free-for-all, where progress is made not by following rules but by breaking them.
Implications for Scientific Practice: Order in Chaos
The implications of their critiques are profound. Kuhn’s work suggests that scientists should remain open to paradigm shifts, lest they become as obsolete as Betamax in a Netflix era. Feyerabend’s anarchism, while controversial, serves as a reminder that innovation often requires the audacity to defy convention. Together, they paint a picture of science not as a monolithic quest for truth but as a dynamic, often tumultuous, human endeavour.
Conclusion: The Legacy of Intellectual Rebellion
In conclusion, the works of Kuhn and Feyerabend invite us to view science through a more sceptical lens, questioning the sanctity of its methods and the rigidity of its paradigms. Their critiques serve as a clarion call for intellectual flexibility, urging us to embrace the chaos and complexity inherent in the pursuit of knowledge. After all, in the grand theatre of science, it’s often the most unconventional performances that leave a lasting impact.
In the great philosophical tug-of-war between materialism and idealism, where reality is argued to be either wholly independent of perception or entirely a construct of the mind, there lies an underexplored middle ground—a conceptual liminal space that we might call “Intersectionalism.” This framework posits that reality is neither purely objective nor subjective but emerges at the intersection of the two. It is the terrain shaped by the interplay between what exists and how it is perceived, mediated by the limits of human cognition and sensory faculties.
Audio: Podcast conversation on this topic.
Intersectionalism offers a compelling alternative to the extremes of materialism and idealism. By acknowledging the constraints of perception and interpretation, it embraces the provisionality of knowledge, the inevitability of blind spots, and the productive potential of uncertainty. This essay explores the foundations of Intersectionalism, its implications for knowledge and understanding, and the ethical and practical insights it provides.
Reality as an Intersection
At its core, Intersectionalism asserts that reality exists in the overlapping space between the objective and the subjective. The objective refers to the world as it exists independently of any observer—the “terrain.” The subjective encompasses perception, cognition, and interpretation—the “map.” Reality, then, is not fully contained within either but is co-constituted by their interaction.
Consider the act of seeing a tree. The tree, as an object, exists independently of the observer. Yet, the experience of the tree is entirely mediated by the observer’s sensory and cognitive faculties. Light reflects off the tree, enters the eye, and is translated into electrical signals processed by the brain. This process creates a perception of the tree, but the perception is not the tree itself.
This gap between perception and object highlights the imperfect alignment of subject and object. No observer perceives reality “as it is” but only as it appears through the interpretive lens of their faculties. Reality, then, is a shared but imperfectly understood phenomenon, subject to distortion and variation across individuals and species.
The Limits of Perception and Cognition
Humans, like all organisms, perceive the world through the constraints of their sensory and cognitive systems. These limitations shape not only what we can perceive but also what we can imagine. For example:
Sensory Blind Spots: Humans are limited to the visible spectrum of light (~380–750 nm), unable to see ultraviolet or infrared radiation without technological augmentation. Other animals, such as bees or snakes, perceive these spectra as part of their natural sensory worlds. Similarly, humans lack the electroreception of sharks or the magnetoreception of birds.
Dimensional Constraints: Our spatial intuition is bounded by three spatial dimensions plus time, making it nearly impossible to conceptualise higher-dimensional spaces without resorting to crude analogies (e.g., imagining a tesseract as a 3D shadow of a 4D object).
Cognitive Frameworks: Our brains interpret sensory input through patterns and predictive models. These frameworks are adaptive but often introduce distortions, such as cognitive biases or anthropocentric assumptions.
This constellation of limitations suggests that what we perceive and conceive as reality is only a fragment of a larger, potentially unknowable whole. Even when we extend our senses with instruments, such as infrared cameras or particle detectors, the data must still be interpreted through the lens of human cognition, introducing new layers of abstraction and potential distortion.
The Role of Negative Space
One of the most intriguing aspects of Intersectionalism is its embrace of “negative space” in knowledge—the gaps and absences that shape what we can perceive and understand. A compelling metaphor for this is the concept of dark matter in physics. Dark matter is inferred not through direct observation but through its gravitational effects on visible matter. It exists as a kind of epistemic placeholder, highlighting the limits of our current sensory and conceptual tools.
Similarly, there may be aspects of reality that elude detection altogether because they do not interact with our sensory or instrumental frameworks. These “unknown unknowns” serve as reminders of the provisional nature of our maps and the hubris of assuming completeness. Just as dark matter challenges our understanding of the cosmos, the gaps in our perception challenge our understanding of reality itself.
Practical and Ethical Implications
Intersectionalism’s recognition of perceptual and cognitive limits has profound implications for science, ethics, and philosophy.
Science and Knowledge
In science, Intersectionalism demands humility. Theories and models, however elegant, are maps rather than terrains. They approximate reality within specific domains but are always subject to revision or replacement. String theory, for instance, with its intricate mathematics and reliance on extra dimensions, risks confusing the elegance of the map for the completeness of the terrain. By embracing the provisionality of knowledge, Intersectionalism encourages openness to new paradigms and methods that might better navigate the negative spaces of understanding.
Ethics and Empathy
Ethically, Intersectionalism fosters a sense of humility and openness toward other perspectives. If reality is always interpreted subjectively, then every perspective—human, animal, or artificial—offers a unique and potentially valuable insight into the intersection of subject and object. Recognising this pluralism can promote empathy and cooperation across cultures, species, and disciplines.
Technology and Augmentation
Technological tools extend our sensory reach, revealing previously unseen aspects of reality. However, they also introduce new abstractions and biases. Intersectionalism advocates for cautious optimism: technology can help illuminate the terrain but will never eliminate the gap between map and terrain. Instead, it shifts the boundaries of our blind spots, often revealing new ones in the process.
Conclusion: Navigating the Space Between
Intersectionalism provides a framework for understanding reality as a shared but imperfect intersection of subject and object. It rejects the extremes of materialism and idealism, offering instead a middle path that embraces the limitations of perception and cognition while remaining open to the possibilities of negative space and unknown dimensions. In doing so, it fosters humility, curiosity, and a commitment to provisionality—qualities essential for navigating the ever-expanding terrain of understanding.
By acknowledging the limits of our maps and the complexity of the terrain, Intersectionalism invites us to approach reality not as a fixed and knowable entity but as an unfolding interplay of perception and existence. It is a philosophy not of certainty but of exploration, always probing the space between.