The Rhetoric of Realism: When Language Pretends to Know

Let us begin with the heresy: Truth is a rhetorical artefact. Not a revelation. Not a metaphysical essence glimmering behind the veil. Just language — persuasive, repeatable, institutionally ratified language. In other words: branding.

This is not merely a postmodern tantrum thrown at the altar of Enlightenment rationalism. It is a sober, if impolite, reminder that nearly everything we call “knowledge” is stitched together with narrative glue and semantic spit. Psychology. Neuroscience. Ethics. Economics. Each presents itself as a science — or worse, a moral imperative — but their foundations are built atop a linguistic faultline. They are, at best, elegant approximations; at worst, dogma in drag.

Let’s take psychology. Here is a field that diagnoses your soul via consensus. A committee of credentialed clerics sits down and declares a cluster of behaviours to be a disorder, assigns it a code, and hands you a script. It is then canonised in the DSM, the Diagnostic Scripture Manual. Doubt its legitimacy and you are either naïve or ill — which is to say, you’ve just confirmed the diagnosis. It’s a theological trap dressed in the language of care.

Or neuroscience — the church of the glowing blob. An fMRI shows a region “lighting up” and we are meant to believe we’ve located the seat of love, the anchor of morality, or the birthplace of free will. Never mind that we’re interpreting blood-oxygen fluctuations in composite images smoothed by statistical witchcraft. It looks scientific, therefore it must be real. The map is not the territory, but in neuroscience, it’s often a mood board.

And then there is language itself, the medium through which all these illusions are transmitted. It is the stage, the scenery, and the unreliable narrator. My Language Insufficiency Hypothesis proposes that language is not simply a flawed tool — it is fundamentally unfit for the task it pretends to perform. It was forged in the furnace of survival, not truth. We are asking a fork to play the violin.

This insufficiency is not an error to be corrected by better definitions or clever metaphors. It is the architecture of the system. To speak is to abstract. To abstract is to exclude. To exclude is to falsify. Every time we speak of a thing, we lose the thing itself. Language functions best not as a window to the real but as a veil — translucent, patterned, and perpetually in the way.

So what, then, are our Truths™? They are narratives that have won. Stories that survived the epistemic hunger games. They are rendered authoritative not by accuracy, but by resonance — psychological, cultural, institutional. A “truth” is what is widely accepted, not because it is right, but because it is rhetorically unassailable — for now.

This is the dirty secret of epistemology: coherence masquerades as correspondence. If enough concepts link arms convincingly, we grant them status. Not because they touch reality, but because they echo one another in our linguistic theatre.

Libet’s experiment, Foucault’s genealogies, McGilchrist’s hemispheric metaphors — each peels back the curtain in its own way. Libet shows that agency might be a post-hoc illusion. Foucault reveals that disciplines don’t describe the subject; they produce it. McGilchrist laments that the Emissary now rules the Master, and the world is flatter for it.

But all of them — and all of us — are trapped in the same game: the tyranny of the signifier. We speak not to uncover truth, but to make truth-sounding noises. And the tragedy is, we often convince ourselves.

So no, we cannot escape the prison of language. But we can acknowledge its bars. And maybe, just maybe, we can rattle them loudly enough that others hear the clank.

Until then, we continue — philosophers, scientists, diagnosticians, rhetoricians — playing epistemology like a parlour game with rigged dice, congratulating each other on how well the rules make sense.

And why wouldn’t they? We wrote them.

Sustenance: A Book About Aliens, Language, and Everything You’re Getting Wrong

[Cover image: violet aliens on a farm]

So, I wrote a book and published it under Ridley Park, the pseudonym I use for fiction.

It has aliens. But don’t get excited—they’re not here to save us, probe us, or blow up the White House. They’re not even here for us.

Which is, frankly, the point.

The book’s called Sustenance, and while it’s technically speculative fiction, it’s more about us than them. Or rather, it’s about how we can’t stop making everything about us—even when it shouldn’t be. Especially when it shouldn’t be.

Let’s talk themes. And yes, we’re using that word like academics do: as a smokescreen for saying uncomfortable things abstractly.

Language: The Original Scam

Language is the ultimate colonial tool. We call it communication, but it’s mostly projection. You speak. You hope. You assume. You superimpose meaning on other people like cling film made of your own ego.

Sustenance leans into this—not by showing a breakdown of communication, but by showing what happens when communication was never mutual in the first place. When the very idea of “meaning” has no purchase. It’s not about mishearing—it’s about misbeing.

Culture: A Meme You Were Born Into

Culture is the software you didn’t choose to install, and probably can’t uninstall. Most people treat it like a universal law—until they meet someone running a different OS. Cue confusion, arrogance, or violence.

The book explores what happens when cultural norms aren’t shared, and worse, aren’t even legible. Imagine trying to enforce property rights on beings who don’t understand “ownership.” It’s like trying to baptise a toaster.

Sex/Gender: You Keep Using Those Words…

One of the quiet joys of writing non-human characters is discarding human assumptions about sex and gender—and watching readers squirm.

What if sex wasn’t about power, pleasure, or identity? What if it was just a biological procedure, like cell division or pruning roses? Would you still be interested? Would you still moralise about it?

We love to believe our sex/gender constructs are inevitable. They’re not. They’re habits—often bad ones.

Consent: Your Framework Is Showing

Consent, as we use it, assumes mutual understanding, shared stakes, and equivalent agency. Remove any one of those and what’s left?

Sustenance doesn’t try to solve this—it just shows what happens when those assumptions fall apart. Spoiler: it’s not pretty, but it is honest.

Projection: The Mirror That Lies

Humans are deeply committed to anthropocentrism. If it walks like us, or flinches like us, it must be us. This is why we get so disoriented when faced with the truly alien: it won’t dance to our tune, and we’re left staring at ourselves in the funhouse mirror.

This isn’t a book about aliens.

It’s a book about the ways we refuse to see what’s not us.

Memory: The Autobiography of Your Justifications

Memory is not a record. It’s a defence attorney with a narrative licence. We rewrite the past to make ourselves look consistent, or innocent, or right.

In Sustenance, memory acts less as a tether to truth and more as a sculpting tool—a way to carve guilt into something manageable. Something you can live with. Until you can’t.

In Summary: It’s Not About Them. It’s About You.

If that sounds bleak, good. It’s meant to.

But it’s also a warning: don’t get too comfortable in your own categories. They’re only universal until you meet someone who doesn’t share them.

Like I said, it’s not really about the aliens.

It’s about us.


If you enjoy fiction that’s more unsettling than escapist, more question than answer, you might be interested in Sustenance. It’s live on Kindle now for the cost of a regrettable coffee:

📘 Sustenance on Amazon US
Also available in the UK, DE, FR, ES, IT, NL, JP, BR, CA, MX, AU, and IN—because alienation is a universal language.

“Trust the Science,” They Said. “It’s Reproducible,” They Lied.

—On Epistemology, Pop Psychology, and the Cult of Empirical Pretence

Science, we’re told, is the beacon in the fog – a gleaming lighthouse of reason guiding us through the turbulent seas of superstition and ignorance. But peer a bit closer, and the lens is cracked, the bulb flickers, and the so-called lighthouse keeper is just some bloke on TikTok shouting about gut flora and intermittent fasting.

We are creatures of pattern. We impose order. We mistake correlation for causation, narrative for truth, confidence for knowledge. What we have, in polite academic parlance, is an epistemology problem. What we call science is often less Newton and more Nostradamus—albeit wearing a lab coat and wielding a p-hacked dataset.

Let’s start with the low-hanging fruit—the rotting mango of modern inquiry: nutritional science, which is to actual science what alchemy is to chemistry, or vibes are to calculus. We study food the way 13th-century monks studied demons: through superstition, confirmation bias, and deeply committed guesswork. Eat fat, don’t eat fat. Eat eggs, don’t eat eggs. Eat only between the hours of 10:00 and 14:00 under a waxing moon while humming in Lydian mode. It’s a cargo cult with chia seeds.

But why stop there? Let’s put the whole scientific-industrial complex on the slab.

Psychology: The Empirical Astrological Society

Psychology likes to think it’s scientific. Peer-reviewed journals, statistical models, the odd brain scan tossed in for gravitas. But at heart, much of it is pop divination, sugar-dusted for mass consumption. The replication crisis didn’t merely reveal cracks – it bulldozed entire fields. The Stanford Prison Experiment? A theatrical farce. Power poses? Empty gestural theatre. Half of what you read in Psychology Today could be replaced with horoscopes and no one would notice.

Medical Science: Bloodletting, But With Better Branding

Now onto medicine, that other sacred cow. We tend to imagine it as precise, data-driven, evidence-based. In practice? It’s a Byzantine fusion of guesswork, insurance forms, and pharmaceutical lobbying. As Crémieux rightly implies, medicine’s predictive power is deeply compromised by overfitting, statistical fog, and a staggering dependence on non-replicable clinical studies, many funded by those who stand to profit from the result.

And don’t get me started on epidemiology, that modern priesthood that speaks in incantations of “relative risk” and “confidence intervals” while changing the commandments every fortnight. If nutrition is theology, epidemiology is exegesis.

The Reproducibility Farce

Let us not forget the gleaming ideal: reproducibility, that cornerstone of Enlightenment confidence. The trouble is, in field after field—from economics to cancer biology—reproducibility is more aspiration than reality. What we actually get is a cacophony of studies no one bothers to repeat, published to pad CVs, p-hacked into publishable shape, and then cited into canonical status. It’s knowledge by momentum. We don’t understand the world. We just retweet it.
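
To make the p-hacking arithmetic concrete, here is a toy simulation, mine rather than anything drawn from the studies above: hand a researcher one genuinely null effect and twenty arbitrary subgroup tests at the customary α = 0.05, and most studies will yield at least one publishable “finding”.

```python
import random

# Under a true null hypothesis, a (continuous) p-value is uniform on (0, 1).
# A researcher who slices the data into 20 subgroups and reports whatever
# clears p < 0.05 will strike "significance" somewhere in most studies.

random.seed(42)
STUDIES, TESTS, ALPHA = 10_000, 20, 0.05

lucky = sum(
    any(random.random() < ALPHA for _ in range(TESTS))
    for _ in range(STUDIES)
)

print(f"Simulated studies with a 'finding': {lucky / STUDIES:.1%}")
print(f"Theoretical rate: {1 - (1 - ALPHA) ** TESTS:.1%}")  # about 64%
```

No fraud required, note. Just enough flexibility, and the null hypothesis volunteers its own refutation.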

What, Then, Is To Be Done?

Should we become mystics? Take up tarot and goat sacrifice? Not necessarily. But we should strip science of its papal robes. We should stop mistaking publication for truth, consensus for accuracy, and method for epistemic sanctity. The scientific method is not the problem. The pretence that it’s constantly being followed is.

Perhaps knowledge doesn’t have a half-life because of progress, but because it was never alive to begin with. We are not disproving truth; we are watching fictions expire.

Closing Jab

Next time someone says “trust the science,” ask them: which bit? The part that told us margarine was manna? The part that thought ulcers were psychosomatic? The part that still can’t explain consciousness, but is confident about your breakfast?

Science is a toolkit. But too often, it’s treated like scripture. And we? We’re just trying to lose weight while clinging to whatever gospel lets us eat more cheese.

The Emperor’s New Models: Box, Lawson, and the Death of Truth

We live in an age intoxicated by models: climate models, economic models, epidemiological models, cosmological models—each one an exquisite confection of assumptions draped in a lab coat and paraded as gospel. Yet if you trace the bloodline of model-building back through the annals of intellectual history, you encounter two figures who coldly remind us of the scam: George Box and Hilary Lawson.

Box: The Gentle Assassin of Certainty

George Box, the celebrated statistician, is often credited with the aphorism: “All models are wrong, but some are useful.” However, that polished phrase does not appear in his most-cited source. What he did write, in his 1976 paper Science and Statistics, was:

“Since all models are wrong the scientist cannot obtain a ‘correct’ one by excessive elaboration.”

The “some are useful” flourish only arrived in Box’s later writing on model building, a sweetener for the bitter pill. Nevertheless, he deserves credit for the lethal insight: no model, however elegant, perfectly captures reality. They are provisional guesses, finger-paintings smeared across the rough surface of the unknown.

Lawson: The Arsonist Who Burned the Map

Hilary Lawson, contemporary philosopher and author of Closure: A Story of Everything, drags Box’s modest scepticism into full-blown philosophical insurrection. In a recent lecture he pressed the point to its limit: models do not approximate a hidden truth, because there is no finished reality there to approximate.

Where Box warns us the emperor’s clothes don’t fit, Lawson points out that the emperor himself is a paper doll. Either way, we dress our ignorance in equations and hope no one notices the draft.

Lawson’s view is grim but clarifying: models are not mere approximations of some Platonic truth. They are closures—temporary, pragmatic structures we erect to intervene effectively in a world we will never fully comprehend. Reality, in Lawson’s framing, is an “openness”: endlessly unfolding, resistant to total capture.

The Case of the Celestial Spheres

Take Aristotle’s model of celestial spheres. Ludicrous? Yes. Obsolete? Absolutely. Yet for centuries, it allowed navigators to chart courses, astrologers to cast horoscopes, and priests to intimidate peasants—all without the slightest whiff of heliocentrism. A model does not need to be right; it merely needs to be operational.

Our modern theories—Big Bang cosmology, dark matter, and quantum gravity—may well be tomorrow’s celestial spheres: charming relics of ignorance that nonetheless built bridges, cured diseases, and sold mobile phones.

Summary Table: Lawson’s View on Models and Truth

| Question | Lawson’s position |
|---|---|
| What is a model? | A closure: a temporary, pragmatic structure for intervening in the world |
| What is reality? | An openness: endlessly unfolding, resistant to total capture |
| Do models approximate truth? | No; there is no Platonic truth beneath them to approximate |
| Why keep building them? | Because they work: refinable “lies that work” |

Conclusion

Box taught us to distrust the fit of our models; Lawson reminds us there is no true body underneath them. If truth is a ghost, then our models are ghost stories—and some ghost stories, it turns out, are very good at getting us through the night.

We are left not with certainty, but with craftsmanship: the endless, imperfect art of refining our closures, knowing full well they are lies that work. Better lies. Usable lies. And perhaps, in a world without final answers, that is the most honest position of all.

The Dubious Art of Reasoning: Why Thinking Is Harder Than It Looks

The Illusion of Clarity in a World of Cognitive Fog

Apologies in advance for this Logic 101 posting. Reason—our once-proud torch in the darkness, now more like a flickering lighter in a hurricane of hot takes and LinkedIn thought-leadership. The modern mind, bloated on TED Talks and half-digested Wikipedia articles, tosses around terms like “inductive” and “deductive” as if they’re interchangeable IKEA tools. So let us pause, sober up, and properly inspect these three venerable pillars of human inference: deduction, induction, and abduction—each noble, each flawed, each liable to betray you like a Greco-Roman tragedy.

Video: This post was prompted by this short by MiniPhilosophy.

Deduction: The Tyrant of Certainty

Deduction is the purest of the lot, the high priest of logic. It begins with a general premise and guarantees a specific conclusion, as long as you don’t cock up the syllogism. Think Euclid in a toga, laying down axioms like gospel.

Example:
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.

Perfect. Crisp. Unassailable. Unless, of course, your premise is bollocks. Deduction doesn’t check its ingredients—it just cooks with whatever it’s given. Garbage in, garbage out.

Strength: Conclusions follow with certainty from true premises.
Weakness: Blind to empirical falsity. You can deduce nonsense from nonsense and still be logically valid.

Induction: The Gambler’s Gospel

Induction is the philosopher’s lottery ticket: generalising from particulars. Every swan I’ve seen is white, ergo all swans must be white. Until, of course, Australia coughs up a black one and wrecks your little Enlightenment fantasy.

Example:
The sun has risen every morning of my life.
Therefore, the sun will rise tomorrow.

Touching, isn’t it? Unfortunately, induction doesn’t prove anything—it suggests probability. David Hume had an existential breakdown over this. Entire centuries of Western philosophy spiralled into metaphysical despair. And yet, we still rely on it to predict weather, markets, and whether that dodgy lasagna will give us food poisoning.

Strength: Empirically rich and adaptive.
Weakness: One exception detonates the generalisation. Induction is only ever as good as the sample size and your luck.

Abduction: Sherlock Holmes’ Drug of Choice

Abduction is the inference to the best explanation. The intellectual equivalent of guessing what made the dog bark at midnight while half-drunk and barefoot in the garden.

Example:
The lawn is wet this morning.
Therefore, it probably rained overnight.

It could be a garden sprinkler. Or a hose. Or divine intervention. But we bet on rain because it’s the simplest, most plausible explanation. Pragmatic, yes. But not immune to deception.

Strength: Useful in messy, real-world contexts.
Weakness: Often rests on a subjective idea of “best,” which tends to mean “most convenient to my prejudices.”

The Modern Reasoning Crisis: Why We’re All Probably Wrong

Our contemporary landscape has added new layers of complexity to these already dubious tools. Social media algorithms function as induction machines on steroids, drawing connections between your click on a pasta recipe and your supposed interest in Italian real estate. Meanwhile, partisan echo chambers have perfected the art of deductive reasoning from absolutely bonkers premises.

Consider how we navigate information today:

We abduce whichever explanation best flatters our existing commitments.
The algorithm, ever the inductivist, serves us more of whatever we just clicked.
We then deduce, with impeccable rigour, from premises we never thought to examine.

And thus, the modern reasoning loop is complete—a perfect system for being confidently incorrect while feeling intellectually superior.

Weakness by Analogy: The Reasoning Café

Imagine a café.

At one table, Deduction declares: all cafés serve food; this is a café; therefore it serves food. Impeccable, except the kitchen closed an hour ago.
At the next, Induction orders the soup, because the soup has been excellent every day this month. Today there is a new chef.
By the window, Abduction smells burning and concludes the special is ruined, that being the tidiest explanation. It is, in fact, the toaster.

All three are trying to reason. Only one might get lunch.

The Meta-Problem: Reasoning About Reasoning

The true joke is this: we’re using these flawed reasoning tools to evaluate our reasoning tools. It’s like asking a drunk person to judge their own sobriety test. The very mechanisms we use to detect faulty reasoning are themselves subject to the same faults.

This explains why debates about critical thinking skills typically devolve into demonstrations of their absence. We’re all standing on intellectual quicksand while insisting we’ve found solid ground.

Conclusion: Reason Is Not a Guarantee, It’s a Wager

None of these modalities offers omniscience. Deduction only shines when your axioms aren’t ridiculous. Induction is forever haunted by Hume’s scepticism and the next black swan. Abduction is basically educated guessing dressed up in tweed.

Yet we must reason. We must argue. We must infer—despite the metaphysical vertigo.

The tragedy isn’t that these methods fail. The tragedy is when people believe they don’t.

Perhaps the wisest reasoners are those who understand the limitations of their cognitive tools, who approach conclusions with both confidence and humility, and who recognise that even our most cherished beliefs are, at best, sophisticated approximations of a reality we can never fully grasp.

So reason on, fellow thinkers. Just don’t be too smug about it.

What’s Missing? Trust or Influence

Post-COVID, we’re told trust in science is eroding. But perhaps the real autopsy should be performed on the institution of public discourse itself.

Since the COVID-19 crisis detonated across our global stage—part plague, part PR disaster—the phrase “trust in science” has become the most abused slogan since “thoughts and prayers.” Every public official with a podium and a pulse declared they were “following the science,” as if “science” were a kindly oracle whispering unambiguous truths into the ears of the righteous. But what happened when those pronouncements proved contradictory, politically convenient, or flat-out wrong? Was it science that failed, or was it simply a hostage to an incoherent performance of authority?

Two recent Nature pieces dig into the supposed “decline” of scientific credibility in the post-pandemic world, offering the expected hand-wringing about public opinion and populist mistrust. But let’s not be so credulous. This isn’t merely a crisis of trust—it’s a crisis of theatre.

“The Science” as Ventriloquism

Let’s begin by skewering the central absurdity: there is no such thing as “The Science.” Science is not a monolith. It’s not a holy writ passed down by lab-coated Levites. It’s a process—a messy, iterative, and perpetually provisional mode of inquiry. But during the pandemic, politicians, pundits, and even some scientists began to weaponise the term, turning it into a rhetorical cudgel. “The Science says” became code for “shut up and comply.” Any dissent—even from within the scientific community—was cast as heresy. Galileo would be proud.

A Nature Human Behaviour paper (van der Linden et al., 2025) identifies four archetypes of distrust: distrust in the message, the messenger, the medium, and the motivation. What they fail to ask is: what if all four were compromised simultaneously? What if the medium (mainstream media) served more as a stenographer to power than a check upon it? What if the message was oversimplified into PR slogans, the messengers were party apparatchiks in lab coats, and the motivations were opaque at best?

Trust didn’t just erode. It was actively incinerated in a bonfire of institutional vanity.

A Crisis of Influence, Not Integrity

The second Nature commentary (2025) wrings its hands over “why trust in science is declining,” as if the populace has suddenly turned flat-Earth overnight. But the real story isn’t a decline in trust per se; it’s a redistribution of epistemic authority. Scientists no longer have the stage to themselves. Influencers, conspiracy theorists, rogue PhDs, and yes—exhausted citizens armed with Wi-Fi and anxiety—have joined the fray.

Science hasn’t lost truth—it’s lost control. And frankly, perhaps it shouldn’t have had that control in the first place. Democracy is messy. Information democracies doubly so. And in that mess, the epistemic pedestal of elite scientific consensus was bound to topple—especially when its public face was filtered through press conferences, inconsistent policies, and authoritarian instincts.

Technocracy’s Fatal Hubris

What we saw wasn’t science failing—it was technocracy failing in real time, trying to manage public behaviour with a veneer of empirical certainty. But when predictions shifted, guidelines reversed, and public health policy began to resemble a mood ring, the lay public was expected to pretend nothing happened. Orwell would have a field day.

This wasn’t a failure of scientific method. It was a failure of scientific messaging—an inability (or unwillingness) to communicate uncertainty, probability, and risk in adult terms. Instead, the public was infantilised. And then pathologised for rebelling.

Toward a Post-Scientistic Public Sphere

So where does that leave us? Perhaps we need to kill the idol of “The Science” to resurrect a more mature relationship with scientific discourse—one that tolerates ambiguity, embraces dissent, and admits when the data isn’t in. Science, done properly, is the art of saying “we don’t know… yet.”

The pandemic didn’t erode trust in science. It exposed how fragile our institutional credibility scaffolding really is—how easily truth is blurred when science is fed through the meat grinder of media, politics, and fear.

The answer isn’t more science communication—it’s less scientism, more honesty, and above all, fewer bureaucrats playing ventriloquist with the language of discovery.

Conclusion

Trust in science isn’t dead. But trust in those who claim to speak for science? That’s another matter. Perhaps it’s time to separate the two.

What’s Probability?

The contestation over the definition of probability is alive and well—like a philosophical zombie that refuses to lie down and accept the tranquilliser of consensus. Despite over three centuries of intense mathematical, philosophical, and even theological wrangling, no single, universally accepted definition reigns supreme. Instead, we have a constellation of rival interpretations, each staking its claim on the epistemological turf, each clutching its own metaphysical baggage.

Let us survey the battlefield:

1. Classical Probability (Laplacean Determinism in a Tuxedo)

This old warhorse defines probability as the ratio of favourable outcomes to possible outcomes, assuming all outcomes are equally likely. The problem? That assumption is doing all the heavy lifting, like a butler carrying a grand piano up five flights of stairs. It’s circular: we define probability using equiprobability, which itself presumes a notion of probability. Charming, but logically suspect.
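
For the record, the classical definition in symbols, with $\Omega$ a finite set of outcomes assumed equally likely and $A \subseteq \Omega$ the event of interest:

$$P(A) = \frac{|A|}{|\Omega|}$$

All the philosophical weight rests on that innocent-looking “assumed equally likely”.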

2. Frequentist Probability (The Empiricist’s Fantasy)

Here, probability is the limit of relative frequencies as the number of trials tends to infinity. This gives us the illusion of objectivity—but only in a Platonic realm where we can conduct infinite coin tosses without the coin disintegrating or the heat death of the universe intervening. Also, it tells us nothing about singular cases. What’s the probability this specific bridge will collapse? Undefined, says the frequentist, helpfully.
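
Stated formally, the frequentist identifies probability with a limiting relative frequency, where $n_A$ counts the occurrences of $A$ in $n$ trials:

$$P(A) = \lim_{n \to \infty} \frac{n_A}{n}$$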

3. Bayesian Probability (Subjectivity Dressed as Rigor)

Bayesians treat probability as a degree of belief—quantified plausibility updated with evidence. This is useful, flexible, and epistemically honest, but also deeply subjective. Two Bayesians can start with wildly different priors and, unless carefully constrained, remain in separate probabilistic realities. It’s like epistemology for solipsists with calculators.
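
Bayes’ theorem itself is uncontroversial, $P(H \mid E) = P(E \mid H)\,P(H)/P(E)$; the quarrel is over where the prior $P(H)$ comes from. Here is a minimal sketch of that quarrel, using the standard conjugate Beta-Binomial update (the two characters and the numbers are mine, purely for illustration):

```python
# Two Bayesians watch the same 20 coin flips (12 heads, 8 tails) but start
# from different priors. Conjugate update: Beta(a, b) -> Beta(a + heads, b + tails).

def posterior_mean(a: float, b: float, heads: int, tails: int) -> float:
    """Posterior mean of P(heads) for a Beta(a, b) prior, after the data."""
    return (a + heads) / (a + b + heads + tails)

heads, tails = 12, 8

agnostic = posterior_mean(1, 1, heads, tails)    # flat Beta(1, 1) prior
devotee = posterior_mean(50, 10, heads, tails)   # stubborn Beta(50, 10) prior

print(f"Agnostic's P(heads): {agnostic:.2f}")    # ~0.59
print(f"Devotee's  P(heads): {devotee:.2f}")     # ~0.78: same data, different reality
```

Same evidence, same update rule, two defensible answers. Only more data, or an argument about priors, narrows the gap.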

4. Propensity Interpretation (The Ontology of Maybes)

Karl Popper and his ilk proposed that probability is a tendency or disposition of a physical system to produce certain outcomes. Sounds scientific, but try locating a “propensity” in a particle collider—it’s a metaphysical ghost, not a measurable entity. Worse, it struggles with repeatability and relevance outside of controlled environments.

5. Logical Probability (A Sober Attempt at Rationality)

Think of this as probability based on logical relations between propositions—à la Keynes or Carnap. It aims to be objective without being empirical. The problem? Assigning these logical relations is no easier than choosing priors in Bayesianism, and just as subjective when it comes to anything meaty.

6. Quantum Probability (Schrödinger’s Definition)

In quantum mechanics, probability emerges from the squared modulus of a wave function—so this is where physics says, “Shut up and calculate.” But this doesn’t solve the philosophical issue—it just kicks the can into Hilbert space. Interpretations of quantum theory (Copenhagen? Many Worlds?) embed different philosophies of probability, so the contestation merely changes battlegrounds.
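
The Born rule in question, for outcome $x$ of a measurement on a system in state $\psi$:

$$P(x) = |\langle x \mid \psi \rangle|^2$$

The formula is undisputed; what that probability is a probability of (branching worlds, observer knowledge, physical collapse) is exactly where the interpretations part ways.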

Current Status: War of Attrition

There is no universal agreement, and likely never will be. Probability is used successfully across the sciences, economics, AI, and everyday reasoning—but the fact that these wildly different interpretations all “work” suggests that the concept is operationally robust yet philosophically slippery. Like money, love, or art, we use it constantly but define it poorly.

In short: the contestation endures because probability is not one thing—it is a shape-shifting chimera that serves multiple masters. Each interpretation captures part of the truth, but none hold it entire. Philosophers continue to argue, mathematicians continue to formalise, and practitioners continue to deploy it as if there were no disagreement at all.

And so the probability of this contest being resolved any time soon?
About zero.
Or one.
Depending on your interpretation.

Against the Intelligence Industrial Complex

Why IQ is Not Enough – and Never Was

I’m not a fan of IQ as a general metric. Let us be done with the cult of the clever. Let us drag the IQ score from its pedestal, strip it of its statistical robes, and parade it through the streets of history where it belongs—next to phrenology, eugenics, and other well-meaning pseudosciences once weaponised by men in waistcoats.

The so-called Intelligence Industrial Complex—an infernal alliance of psychologists, bureaucrats, and HR departments—has for too long dictated the terms of thought. It has pretended to measure the immeasurable. It has sold us a fiction in numerical drag: that human intelligence can be distilled, packaged, and ranked.

What it measures, it defines. What it defines, it controls.

IQ is not intelligence. It is cognitive GDP: a snapshot of what your brain can do under fluorescent lights with a timer running. It rewards abstraction, not understanding; speed, not depth; pattern recognition, not wisdom. It’s a test of how well you’ve been conditioned to think like the test-makers.

This is not to say IQ has no value. Of course it does—within its own ecosystem of schools, bureaucracies, and technocracies. But let us not mistake the ruler for the terrain. Let us not map the entire landscape of human potential using a single colonial compass.

True intelligence is not a number. It is a spectrum of situated knowings, a polyphony of minds tuned to different frequencies. The Inuit hunter tracking a seal through silence. The griot remembering centuries of lineage. The autistic coder intuiting an algorithm in dreamtime. The grandmother sensing a lie with her bones. IQ cannot touch these.

To speak of intelligence as if it belonged to a single theory is to mistake a monoculture for a forest. Let us burn the monoculture. Let us plant a thousand new seeds.

A Comparative Vivisection of Intelligence Theories

| Theory / Model | Core Premise | Strengths | Blind Spots / Critiques | Cultural Framing |
|---|---|---|---|---|
| IQ (Psychometric g) | Intelligence is a single, general cognitive ability measurable via testing | Predicts academic & job performance; standardised | Skewed toward Western logic; ignores context; devalues non-abstract intelligences | Western, industrial, meritocratic |
| Multiple Intelligences (Gardner) | Intelligence is plural: linguistic, spatial, musical, bodily, etc. | Recognises diversity; challenges IQ monopoly | Still individualistic; categories often vague; Western in formulation | Liberal Western pluralism |
| Triarchic Theory (Sternberg) | Intelligence = analytical + creative + practical | Includes adaptability, real-world success | Still performance-focused; weak empirical grounding | Western managerial |
| Emotional Intelligence (Goleman) | Intelligence includes emotion regulation and interpersonal skill | Useful in leadership & education contexts | Commodified into corporate toolkits; leans self-help | Western therapeutic |
| Socio-Cultural (Vygotsky) | Intelligence develops through social interaction and cultural mediation | Recognises developmental context and culture | Less attention to adult or cross-cultural intelligence | Soviet / constructivist |
| Distributed Cognition / Extended Mind | Intelligence is distributed across people, tools, and systems | Breaks the skull-bound model; real-world cognition | Hard to measure; difficult to institutionalise | Post-cognitive, systems-based |
| Indigenous Epistemologies | Intelligence is relational, ecological, spiritual, embodied, ancestral | Holistic; grounded in lived experience | Marginalised by academia; often untranslatable into standard metrics | Global South / decolonial |

Conclusion: Beyond the Monoculture of Mind

If we want a more encompassing theory of intelligence, we must stop looking for a single theory. We must accept plurality—not as a nod to diversity, but as an ontological truth.

Intelligence is not a fixed entity to be bottled and graded. It is a living, breathing phenomenon: relational, situated, contextual, historical, ecological, and cultural.

And no test devised in a Princeton psych lab will ever tell you how to walk through a forest without being seen, how to tell when rain is coming by smell alone, or how to speak across generations through story.

It’s time we told the Intelligence Industrial Complex: your number’s up.

When Suspension of Disbelief Escapes the Page

Welcome to the Age of Realism Fatigue

Once upon a time — which is how all good fairy tales begin — suspension of disbelief was a tidy little tool we used to indulge in dragons, space travel, talking animals, and the idea that people in rom-coms have apartments that match their personalities and incomes. It was a temporary transaction, a gentleman’s agreement, a pact signed between audience and creator with metaphorical ink: I know this is nonsense, but I’ll play along if you don’t insult my intelligence.

This idea, famously coined by Samuel Taylor Coleridge as the “willing suspension of disbelief,” was meant to give art its necessary air to breathe. Coleridge’s hope was that audiences would momentarily silence their rational faculties in favour of emotional truth. The dragons weren’t real, but the heartbreak was. The ghosts were fabrications, but the guilt was palpable.

But that was then. Before the world itself began auditioning for the role of absurdist theatre. Before reality TV became neither reality nor television. Before politicians quoted memes, tech CEOs roleplayed as gods, and conspiracy theorists became bestsellers on Amazon. These days, suspension of disbelief is no longer a leisure activity — it’s a survival strategy.

The Fictional Contract: Broken but Not Forgotten

Traditionally, suspension of disbelief was deployed like a visitor’s badge. You wore it when entering the imagined world and returned it at the door on your way out. Fiction, fantasy, speculative fiction — they all relied on that badge. You accepted the implausible if it served the probable. Gandalf could fall into shadow and return whiter than before because he was, after all, a wizard. We were fine with warp speed as long as the emotional logic of Spock’s sacrifice made sense. There were rules — even in rule-breaking.

The genres varied. Hard sci-fi asked you to believe in quantum wormholes but not in lazy plotting. Magical realism got away with absurdities wrapped in metaphor. Superhero films? Well, their disbelief threshold collapsed somewhere between the multiverse and the Bat-credit card.

Still, we always knew we were pretending. We had a tether to the real, even when we floated in the surreal.

But Then Real Life Said, “Hold My Beer.”

At some point — let’s call it the twenty-first century — the need to suspend disbelief seeped off the screen and into the bloodstream of everyday life. News cycles became indistinguishable from satire (except that satire still had editors). Headlines read like rejected Black Mirror scripts. A reality TV star became president, and nobody even blinked. Billionaires declared plans to colonise Mars whilst democracy quietly lost its pulse.

We began to live inside a fiction that demanded that our disbelief be suspended daily. Except now, it wasn’t voluntary. It was mandatory. If you wanted to participate in public life — or just maintain your sanity — you had to turn off some corner of your rational mind.

You had to believe, or pretend to, that the same people calling for “freedom” were banning books. That artificial intelligence would definitely save us, just as soon as it was done replacing us. That social media was both the great democratiser and the sewer mainline of civilisation.

The boundary between fiction and reality? Eroded. Fact-checking? Optional. Satire? Redundant. We’re all characters now, improvising in a genreless world that refuses to pick a lane.

Cognitive Gymnastics: Welcome to the Cirque du Surréalisme

What happens to a psyche caught in this funhouse? Nothing good.

Our brains, bless them, were designed for some contradiction — religion’s been pulling that trick for millennia — but the constant toggling between belief and disbelief, trust and cynicism, is another matter. We’re gaslit by the world itself. Each day, a parade of facts and fabrications marches past, and we’re told to clap for both.

Cognitive dissonance becomes the default. We scroll through doom and memes in the same breath. We read a fact, then three rebuttals, then a conspiracy theory, then a joke about the conspiracy, then a counter-conspiracy about why the joke is state-sponsored. Rinse. Repeat. Sleep if you can.

The result? Mental fatigue. Not just garden-variety exhaustion, but a creeping sense that nothing means anything unless it’s viral. Critical thinking atrophies not because we lack the will but because the floodwaters never recede. You cannot analyse the firehose. You can only drink — or drown.

Culture in Crisis: A Symptom or the Disease?

This isn’t just a media problem. It’s cultural, epistemological, and possibly even metaphysical.

We’ve become simultaneously more sceptical — distrusting institutions, doubting authorities — and more gullible, accepting the wildly implausible so long as it’s entertaining. It’s the postmodern paradox in fast-forward: we know everything is a construct, but we still can’t look away. The magician shows us the trick, and we cheer harder.

In a world where everything is performance, authenticity becomes the ultimate fiction. And with that, the line between narrative and news, between aesthetic and actuality, collapses.

So what kind of society does this create?

One where engagement replaces understanding. Where identity is a curated feed. Where politics is cosplay, religion is algorithm, and truth is whatever gets the most shares. We aren’t suspending disbelief anymore. We’re embalming it.

The Future: A Choose-Your-Own-Delusion Adventure

So where does this all end?

There’s a dark path, of course: total epistemic breakdown. Truth becomes just another fandom and reality a subscription model. But there’s another route — one with a sliver of hope — where we become literate in illusion.

We can learn to hold disbelief like a scalpel, not a blindfold. To engage the implausible with curiosity, not capitulation. To distinguish between narratives that serve power and those that serve understanding.

It will require a new kind of literacy. One part media scepticism, one part philosophical rigour, and one part good old-fashioned bullshit detection. We’ll have to train ourselves not just to ask “Is this true?” but “Who benefits if I believe it?”

That doesn’t mean closing our minds. It means opening them with caution. Curiosity without credulity. Wonder without worship. A willingness to imagine the impossible whilst keeping a firm grip on the probable.

In Conclusion, Reality Is Optional, But Reason Is Not

In the age of AI, deepfakes, alt-facts, and hyperreality, we don’t need less imagination. We need more discernment. The world may demand our suspension of disbelief, but we must demand our belief back. In truth, in sense, in each other.

Because if everything becomes fiction, then fiction itself loses its magic. And we, the audience, are left applauding an empty stage.

Lights down. Curtain call.
Time to read the footnotes.

Surveying Modernity

A Brief, Brutal Experiment in Categorising Your Worldview

This month, I’ve been tinkering with a little project—an elegant, six-question survey designed to assess where you land in the great intellectual mess that is modernity.

This isn’t some spur-of-the-moment quiz cooked up in a caffeine-fuelled haze. No, this project has been simmering for years, and after much consideration (and occasional disdain), I’ve crafted a set of questions and response options that, I believe, encapsulate the prevailing worldviews of our time.

It all began with Metamodernism, a term that, at first, seemed promising—a bold synthesis of Modernism and Postmodernism, a grand dialectic of the ages. But as I mapped it out, it collapsed under scrutiny. A footnote in the margins of intellectual history, at best. I’ll expand on that in due course.

The Setup: A Simple, Slightly Sadistic Ternary Plot

For the visually inclined (or the masochistically curious), I initially imagined a timeline, then a branching decision tree, then a Cartesian plane before landing on a ternary plot—a three-way visual that captures ideological leanings in a way a boring old bar chart never could.

The survey itself is brief: six questions, each with five possible answers. Submit your responses, and voilà—you get a tidy little ternary chart plotting your intellectual essence, along with a breakdown of what your answers signify.
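
For the technically curious, the mechanics are roughly as follows. The three poles and the weights in this sketch are my own placeholders for illustration, not the survey’s actual scoring:

```python
# Hypothetical scoring sketch: collapsing six answers (each coded 0-4) into a
# single point on a ternary plot. Pole names and weights are invented here;
# the live survey's methodology may differ.

POLES = ("Modern", "Postmodern", "Metamodern")

def to_ternary(answers):
    """Map answers to three pole scores, normalised to sum to 1."""
    totals = [0.0, 0.0, 0.0]
    for a in answers:
        totals[0] += (4 - a) / 4           # low answers pull toward Modern
        totals[1] += a / 4                 # high answers pull toward Postmodern
        totals[2] += 1 - abs(a - 2) / 2    # mid-scale answers pull Metamodern
    s = sum(totals)
    return [t / s for t in totals]         # a valid ternary coordinate

print(dict(zip(POLES, to_ternary([0, 1, 2, 3, 4, 2]))))
```

The normalisation is the whole trick: three scores constrained to sum to one define exactly one point inside the triangle.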

Methodology: Half-Rigorous, Half-Reckless

I am, after all, a (recovering) statistician, so I’ve tried to uphold proper methodology while also fast-tracking certain safeguards for the sake of efficiency. If there’s enough interest, I may expand the survey, adding more questions or increasing response flexibility (tick boxes instead of radio buttons—revolutionary, I know).

Privacy Concerns? Relax. I’m not harvesting your data for some nefarious scheme. No personally identifiable information is collected—just a timestamp, session ID, and your browser’s language setting. I did consider tracking IP addresses to analyse regional trends but ultimately scrapped that idea.

In the future, I may add an optional email feature for those who wish to save and track their responses over time (assuming anyone is unhinged enough to take this more than once).

The Rest of the Story: Your Feedback, My Amusement

Since this is a personal project crafted in splendid isolation, I’d love to hear your thoughts. Are the questions reasonable? Do the response options make sense? Does the summary feel accurate? Is the ternary chart decipherable, or have I constructed a glorified inkblot test?

As an academic, economist, and statistician, I had never encountered a ternary chart before embarking on this, and now I rather enjoy it. That said, I also find Nietzsche “intuitive,” so take that as you will.

If this gains traction, expect follow-up content—perhaps videos, podcasts, or further written explorations.

Your Move

Take the survey. It’s painless, requiring mere minutes of your life (which is, let’s be honest, already wasted online). And because I’m feeling generous, you can even generate a PDF to stick on your fridge, next to your collection of expired coupons and disappointing takeout menus.

Click here to take the survey.

Let’s see where you stand in the grand, chaotic landscape of modernity. Or at least, let’s have a laugh trying to make sense of it.

DISCLAIMER: The Modernity Worldview Survey is not scientific. It is designed as an experiment to provide directional insights. It is hosted on Google Cloud and subject to its availability and performance limitations.