A Case for Intersectionalism

The Space Between

In the great philosophical tug-of-war between materialism and idealism, where reality is argued to be either wholly independent of perception or entirely a construct of the mind, there lies an underexplored middle ground—a conceptual liminal space that we might call “Intersectionalism.” This framework posits that reality is neither purely objective nor subjective but emerges at the intersection of the two. It is the terrain shaped by the interplay between what exists and how it is perceived, mediated by the limits of human cognition and sensory faculties.

Audio: Podcast conversation on this topic.

Intersectionalism offers a compelling alternative to the extremes of materialism and idealism. By acknowledging the constraints of perception and interpretation, it embraces the provisionality of knowledge, the inevitability of blind spots, and the productive potential of uncertainty. This essay explores the foundations of Intersectionalism, its implications for knowledge and understanding, and the ethical and practical insights it provides.

Reality as an Intersection

At its core, Intersectionalism asserts that reality exists in the overlapping space between the objective and the subjective. The objective refers to the world as it exists independently of any observer—the “terrain.” The subjective encompasses perception, cognition, and interpretation—the “map.” Reality, then, is not fully contained within either but is co-constituted by their interaction.

Consider the act of seeing a tree. The tree, as an object, exists independently of the observer. Yet, the experience of the tree is entirely mediated by the observer’s sensory and cognitive faculties. Light reflects off the tree, enters the eye, and is translated into electrical signals processed by the brain. This process creates a perception of the tree, but the perception is not the tree itself.

This gap between perception and object highlights the imperfect alignment of subject and object. No observer perceives reality “as it is” but only as it appears through the interpretive lens of their faculties. Reality, then, is a shared but imperfectly understood phenomenon, subject to distortion and variation across individuals and species.

The Limits of Perception and Cognition

Humans, like all organisms, perceive the world through the constraints of their sensory and cognitive systems. These limitations shape not only what we can perceive but also what we can imagine. For example:

  • Sensory Blind Spots: Humans are limited to the visible spectrum of light (~380–750 nm), unable to see ultraviolet or infrared radiation without technological augmentation. Other animals, such as bees or snakes, perceive these spectra as part of their natural sensory worlds. Similarly, humans lack the electroreception of sharks or the magnetoreception of birds.
  • Dimensional Constraints: Our spatial intuition is bounded by three spatial dimensions plus time, making it nearly impossible to conceptualise higher-dimensional spaces without resorting to crude analogies (e.g., imagining a tesseract as a 3D shadow of a 4D object).
  • Cognitive Frameworks: Our brains interpret sensory input through patterns and predictive models. These frameworks are adaptive but often introduce distortions, such as cognitive biases or anthropocentric assumptions.
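The tesseract analogy above can actually be computed. The sketch below — a hypothetical illustration, not anything from the manuscript — generates the sixteen vertices of a 4D hypercube and casts their “3D shadow” by perspective division along the fourth axis, the same trick by which a photograph flattens depth. The `viewer_distance` parameter is an assumed value chosen only to keep the projection well behaved.

```python
# A minimal sketch of the "3D shadow of a 4D object" analogy:
# enumerate a tesseract's vertices and perspective-project them into 3D.
from itertools import product

def tesseract_vertices():
    """All 16 corners of a 4D hypercube with coordinates in {-1, 1}."""
    return list(product((-1.0, 1.0), repeat=4))

def project_to_3d(vertex, viewer_distance=3.0):
    """Perspective-project a 4D point (x, y, z, w) into 3D space.

    Points farther along the w-axis shrink toward the origin,
    just as the far side of a 3D object shrinks in a 2D photograph.
    """
    x, y, z, w = vertex
    scale = viewer_distance / (viewer_distance - w)
    return (x * scale, y * scale, z * scale)

shadow = [project_to_3d(v) for v in tesseract_vertices()]
print(len(shadow))  # 16 projected points
```

The point of the exercise is epistemic rather than geometric: the shadow is all a three-dimensional observer ever gets, and two distinct 4D vertices can land arbitrarily close together in it — a small, concrete instance of the map failing to exhaust the terrain.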

This constellation of limitations suggests that what we perceive and conceive as reality is only a fragment of a larger, potentially unknowable whole. Even when we extend our senses with instruments, such as infrared cameras or particle detectors, the data must still be interpreted through the lens of human cognition, introducing new layers of abstraction and potential distortion.

The Role of Negative Space

One of the most intriguing aspects of Intersectionalism is its embrace of “negative space” in knowledge—the gaps and absences that shape what we can perceive and understand. A compelling metaphor for this is the concept of dark matter in physics. Dark matter is inferred not through direct observation but through its gravitational effects on visible matter. It exists as a kind of epistemic placeholder, highlighting the limits of our current sensory and conceptual tools.

Similarly, there may be aspects of reality that elude detection altogether because they do not interact with our sensory or instrumental frameworks. These “unknown unknowns” serve as reminders of the provisional nature of our maps and the hubris of assuming completeness. Just as dark matter challenges our understanding of the cosmos, the gaps in our perception challenge our understanding of reality itself.

Practical and Ethical Implications

Intersectionalism’s recognition of perceptual and cognitive limits has profound implications for science, ethics, and philosophy.

Science and Knowledge

In science, Intersectionalism demands humility. Theories and models, however elegant, are maps rather than terrains. They approximate reality within specific domains but are always subject to revision or replacement. String theory, for instance, with its intricate mathematics and reliance on extra dimensions, risks confusing the elegance of the map for the completeness of the terrain. By embracing the provisionality of knowledge, Intersectionalism encourages openness to new paradigms and methods that might better navigate the negative spaces of understanding.

Ethics and Empathy

Ethically, Intersectionalism fosters a sense of humility and openness toward other perspectives. If reality is always interpreted subjectively, then every perspective—human, animal, or artificial—offers a unique and potentially valuable insight into the intersection of subject and object. Recognising this pluralism can promote empathy and cooperation across cultures, species, and disciplines.

Technology and Augmentation

Technological tools extend our sensory reach, revealing previously unseen aspects of reality. However, they also introduce new abstractions and biases. Intersectionalism advocates for cautious optimism: technology can help illuminate the terrain but will never eliminate the gap between map and terrain. Instead, it shifts the boundaries of our blind spots, often revealing new ones in the process.

Conclusion: Navigating the Space Between

Intersectionalism provides a framework for understanding reality as a shared but imperfect intersection of subject and object. It rejects the extremes of materialism and idealism, offering instead a middle path that embraces the limitations of perception and cognition while remaining open to the possibilities of negative space and unknown dimensions. In doing so, it fosters humility, curiosity, and a commitment to provisionality—qualities essential for navigating the ever-expanding terrain of understanding.

By acknowledging the limits of our maps and the complexity of the terrain, Intersectionalism invites us to approach reality not as a fixed and knowable entity but as an unfolding interplay of perception and existence. It is a philosophy not of certainty but of exploration, always probing the space between.

What’s in a word?

The winding down seems to take longer than the winding up. I’m trying to finish my Language Insufficiency Hypothesis manuscript, and I’m hung up on word choices. I’ve shared this chart before. My conundrum is the name of the Y-axis.

I’ve been calling it Language Effectiveness, but I’ve been losing sleep. Rather than effectiveness, should it be efficiency or efficacy? These are all near-synonyms, but they have nuanced differences, so I turned to ChatGPT.

The terms efficiency, efficacy, and effectiveness are often used interchangeably but have distinct meanings, especially in professional, scientific, and practical contexts. Here is a breakdown of their differences:


Efficiency

  • Definition: The ability to achieve a desired result with minimal waste of resources (time, money, energy, materials).
  • Focus: Resource utilisation.
  • Key Question: How well are resources being used to achieve the goal?
  • Example: A fuel-efficient car travels further on less petrol, minimising waste.

In short, efficiency measures how economically resources are used to accomplish a task.


Efficacy

  • Definition: The capacity to produce the desired result under ideal conditions.
  • Focus: Capability and potential.
  • Key Question: Does it work as intended when tested in controlled settings?
  • Example: A new medication shows high efficacy in clinical trials, curing 95% of patients in controlled environments.

In short, efficacy refers to the theoretical or tested success of an intervention under optimal conditions.


Effectiveness

  • Definition: The degree to which something produces the desired result in real-world conditions.
  • Focus: Practical success.
  • Key Question: Does it work in everyday use or the real world?
  • Example: A medication might be effective in treating 80% of patients in a hospital, accounting for factors like adherence and side effects.

In short, effectiveness measures how well something performs when applied practically, outside idealised settings.


Comparison

  • Efficiency concerns how something is done (minimal resource use).
  • Efficacy assesses if it can be done successfully (ideal conditions).
  • Effectiveness evaluates how well it works in real situations (practical application).

Think of a machine:

  • Its efficacy is its ability to perform a task perfectly when running in optimal conditions.
  • Its efficiency is how little energy or resources it uses while running.
  • Its effectiveness is how well it performs the task in a chaotic, real-world factory.

Interestingly, this was a helpful inquiry. Notice that the chart has three curves. My first choice seems to capture the entire framework, but the other two better capture the curves themselves.

The lower curve reflects efficiency, or more precisely effectiveness, given real-world circumstances. The middle curve represents efficacy, because conditions are ideal when the speaker and listener are on the same page and the speaker is a skilled communicator. The chart illustrates that even in this perfect situation, language communication cannot reach one hundred per cent for abstract concepts. The top curve illustrates the cognitive phenomenon that makes some people believe they communicate at a higher level than they do. This is akin to the illusory superiority cognitive bias, where people tend to believe they are above average.

I’m leaning towards naming the bottom curve language effectiveness and the middle curve the language efficacy horizon. Please stand by.

NB: If the cover image makes no sense, it’s because I entered ‘efficiency effectiveness efficacy’ into Midjourney, and this was one of the images it spat out.

Metamodernism: A Retrograde Synthesis Disguised as Progress

I’ve written about this topic before. Metamodernism has been heralded as the great reconciler of Modernism and Postmodernism, a dialectical triumph that purports to synthesise these two oppositional paradigms. On the one hand, Modernism clings to its belief in objective truths, rationality, and universal principles. On the other, Postmodernism dismantles those certainties, exposing them as fragile constructs, rooted as much in ideology as in reason. The promise of metamodernism is to bridge this divide, to create a space where the objectivity of Modernism and the relativism of Postmodernism can coexist. But can it?

Audio: NotebookLM Podcast about this topic.

Spoiler alert: it cannot. In fact, metamodernism doesn’t even attempt to fulfil its stated goal. Instead, what it really does—intentionally or not—is meld Modernism’s objective framework with Pre-Enlightenment mysticism, offering a regressive concoction that romanticises the past while pretending to chart a bold new future. This isn’t synthesis; it’s nostalgia masquerading as innovation.

The Unbridgeable Divide: Objective vs. Relative

To understand why metamodernism’s claimed synthesis is untenable, we need to examine the fundamental incompatibility of its supposed components. Modernism rests on the firm foundation of objectivity: truth is universal, reason is supreme, and progress is inevitable. Postmodernism, however, thrives in the cracks of that foundation, pointing out that these so-called universal truths are culturally and historically contingent, and that “progress” often serves as a euphemism for domination or erasure.

Reconciling these two positions is like trying to mix oil and water. Modernism’s faith in absolutes cannot coexist with Postmodernism’s celebration of ambiguity and multiplicity without reducing one to a mere aesthetic flourish for the other. The result is not a synthesis but a superficial oscillation, an endless back-and-forth that achieves neither clarity nor coherence.

The Real Agenda: A Fusion of Objectivities

What metamodernism actually achieves is something quite different. Instead of bridging the gap between Modernism and Postmodernism, it fuses Modernism’s objective certainties with the equally objective but pre-rational framework of Pre-Enlightenment mysticism. In doing so, it abandons the critical lens of Postmodernism altogether, retreating to a worldview that is comfortingly familiar but intellectually regressive.

Consider the resurgence of myth, spirituality, and transcendence in metamodernist discourse. These elements hark back to a time when objective truths were dictated by divine authority or cosmological narratives rather than scientific inquiry. By incorporating these pre-modern ideas into its framework, metamodernism sidesteps the hard questions posed by Postmodernism, offering a fusion that is plausible only because both Modernism and Pre-Enlightenment mysticism share a common belief in absolute truths.

Plausible but Retrograde

This melding of Modernist and Pre-Enlightenment frameworks might seem plausible because, in truth, many Moderns never fully abandoned their mystical roots. The Enlightenment’s project of replacing religious dogma with reason was always incomplete; its foundational assumptions about universality and objectivity often carried an unspoken theological residue. Metamodernism taps into this latent nostalgia, offering a vision of the world that feels grounded and comforting, but at the cost of intellectual progress.

The problem is that this vision is fundamentally retrograde. By retreating to the certainties of the past, metamodernism ignores the most valuable insight of Postmodernism: that all frameworks, whether Modern or mystical, are ultimately constructed and contingent. To move forward, we need to grapple with this contingency, not escape from it.

Conclusion: Nostalgia in Disguise

Far from being a dialectical synthesis, metamodernism is a retreat. It cloaks itself in the language of progress while recycling old patterns of thought. Its attempt to reconcile Modernism and Postmodernism collapses into a fusion of Modernist objectivity and Pre-Enlightenment mysticism, leaving the critical insights of Postmodernism by the wayside.

If we are to truly progress, we must resist the siren song of metamodernism’s nostalgia. Instead, we should embrace the challenge of living without absolutes, grappling with the ambiguity and multiplicity that define our postmodern condition. Anything less is not synthesis but surrender.

“Your Triggers Aren’t My Problem!”

…except, sometimes they are.

This came across my feed, the laminated wisdom of our times: Your triggers are your responsibility. It isn’t the world’s obligation to tiptoe around you. A phrase so crisp, so confident, it practically struts. You can imagine it on a mug, alongside slogans like Live, Laugh, Gaslight. These are the language games I love to hate.

Now, there’s a certain truth here. Life is hard, and people aren’t psychic. We can’t reasonably expect the world to read our mental weather reports—50% chance of anxiety, rising storms of existential dread. In an adult society, we are responsible for understanding our own emotional terrain, building the bridges and detours that allow us to navigate it. That’s called resilience, and it’s a good thing.

Audio: NotebookLM Podcast on this topic.

But (and it’s a big but) this maxim becomes far less admirable when you scratch at its glossy surface. What does triggers even mean here? Because trigger is a shape-shifter, one of what I term Schrödinger’s Weasels. For someone with PTSD, a trigger is not a metaphor; it’s a live wire. It’s a flashback to trauma, a visceral hijacking of the nervous system. That’s not just “feeling sensitive” or “taking offence”—it’s a different universe entirely.

Yet, the word has been kidnapped by the cultural peanut gallery, drained of precision and applied to everything from discomfort to mild irritation. Didn’t like that movie? Triggered. Uncomfortable hearing about your privilege? Triggered. This semantic dilution lets people dodge accountability. Now, when someone names harm—racism, misogyny, homophobia, you name it—the accused can throw up their hands and say, Well, that’s your problem, not mine.

And there’s the rub. The neat simplicity of Your triggers are your responsibility allows individuals to dress their cruelty as stoic rationality. It’s not their job, you see, to worry about your “feelings.” They’re just being honest. Real.

Except, honesty without compassion isn’t noble; it’s lazy. Cruelty without self-reflection isn’t courage; it’s cowardice. And rejecting someone’s very real pain because you’re too inconvenienced to care? Well, that’s not toughness—it’s emotional illiteracy.

Let’s be clear: the world shouldn’t have to tiptoe. But that doesn’t mean we’re free to stomp. If someone’s discomfort stems from bigotry, prejudice, or harm, then dismissing them as “too sensitive” is gaslighting, plain and simple. The right to swing your fist, as the old adage goes, ends at someone else’s nose. Likewise, the right to be “brutally honest” ends when your honesty is just brutality.

The truth is messy, as most truths are. Some triggers are absolutely our responsibility—old wounds, minor slights, bruised egos—and expecting the world to cushion us is neither reasonable nor fair. But if someone names harm that points to a broader problem? That’s not a trigger. That’s a mirror.

So yes, let’s all take responsibility for ourselves—our pain, our growth, our reactions. But let’s also remember that real strength is found in the space where resilience meets accountability. Life isn’t about tiptoeing or stomping; it’s about walking together, with enough care to watch where we step.

Slice of Life

This is a timeline of foundational ideas on which I’ve built my Language Insufficiency Hypothesis. I spent a day compositing this timeline in Adobe Illustrator. I hadn’t used Illustrator in decades. It’s got a lot of options, so I’ve been leveraging ChatGPT as a help guide. It seems the UI/UX could be improved, but I’m sure I’ll get used to it. I’ve got another couple dozen to go. I’m hoping a learning/efficiency curve kicks in.

WordPress wouldn’t accept or render my first few file types, even though they are listed as acceptable – SVG, PNG, TIF, WEBP – so I opted for good, old-fashioned BMP, which makes it pretty hefty for inline rendering on a blog. I want to share it, and so here is a late draft.

I’m no graphic artist, so it’s relatively primitive. I’ve been experimenting with colours, but the book is black and white, so I’ll probably just keep it the same.

There are a lot of data points to fit on this timeline, and I’m limited to a 6″ x 9″ form factor. Except for the first three entries, the items are to scale by year. I have more information, but I can’t really fit it in. In fact, this started as two timelines. There was so much crossover that I merged them into one, forcing me to remove some meta information and lose some detail. Design is hard.

It looks fine on the page. I don’t want to reduce the font sizes to cram in more info. I’ll probably put the more descriptive copy in an appendix, if not on a facing page.

The Morality of Ants

Taking Moral Cues from Ants: Because Humans are Too Busy Defending the Indefensible

Ah, ants. Tiny, unassuming, and quite literally beneath us — unless you’re sprawled out on a picnic blanket fighting off a colony swarming your questionable sandwich. Yet, while humanity busies itself polluting oceans, strip-mining rainforests, and justifying corporate bloodsucking as “necessary for the economy,” ants are out here performing life-saving surgeries on their comrades.

You heard that correctly.

Researchers have now observed certain ant species (yes, ants) performing amputations on their injured nestmates to prevent infections from spreading. Picture it: a worker ant limping home, leg shredded by some territorial skirmish, and the squad rolls up like a triage team, deciding whether to (a) gently clean the wound or (b) lop the limb off entirely. Amputation is precise and deliberate — snip at the hip joint if the upper leg is toast. Lower leg injuries? Too risky. Infection spreads faster there, so it’s all hands (or mandibles) on deck for some industrial-strength licking.

It’s a brutal but effective social health system. The results? Injured ants survive. They get patched up, return to work, and contribute to the collective. The colony benefits, everyone thrives, and not a single ant launches into a fevered tirade about how “it’s their individual right to rot from gangrene in peace.”

Contrast this with humanity, where the very notion of collective good seems to spark mass hysteria in certain corners. Here, defending dubious practices — say, unfettered pollution, exploitative labour conditions, or the kind of wealth-hoarding that would make a dragon blush — has become a full-time hobby for some. “Personal responsibility!” they scream whilst someone chokes on smog or shivers in a warehouse set to Arctic temperatures. Heaven forbid we intervene.

Imagine explaining to ants that humans argue about whether everyone deserves basic healthcare. That we let industries poison rivers because regulations might “hurt innovation.” Some believe that letting people suffer and die without help is somehow noble.

Ants would stare at us — or they would if they had discernible faces. Then they’d probably do what they always do: get back to work ensuring their colony survives and thrives, as any halfway intelligent species might.

A Case for the Collective

What makes this ant behaviour so fascinating isn’t just that it exists, but that it demonstrates something humanity supposedly prides itself on: adaptability. Faced with an existential threat to one of their own, ants don’t moralise. They don’t argue about the costs or logistics of care. They don’t abandon the injured because helping them isn’t “profitable.” They just act. Quickly, efficiently, and for the collective good.

Meanwhile, humans act like the collective good is some leftist fever dream. Suggest tax-funded healthcare or basic environmental protections, and someone inevitably starts shrieking about “slippery slopes” toward tyranny, as though being able to breathe clean air or avoid bankruptcy after surgery is the thin edge of some Orwellian wedge.

We have entire systems built on the premise that it’s fine for some to suffer if others can profit. Does that sound hyperbolic? I’ll wait while you Google “externalised costs.” Spoiler alert: your cheap burger came at the expense of rainforest ecosystems and underpaid workers. But hey, as long as we’re prioritising shareholder value, all’s fair, right?

The Ants Would Like a Word

Here’s the thing: ants don’t amputate limbs because they’re altruistic softies. They do it because it makes sense. An injured worker can still contribute to the colony, and the colony’s survival depends on its members pulling together. It’s cold, pragmatic, and effective.

Now consider our own global “colony.” Why do we resist solutions that would make all of us more resilient? Healthcare, environmental protections, workers’ rights — these aren’t radical. They’re practical. Just like amputating a leg to save an ant, safeguarding the vulnerable helps everyone. Yet here we are, letting metaphorical infections spread because someone’s feelings about rugged individualism got in the way.

If Ants Can Do It, So Can We

At this point, humanity doesn’t need a lofty moral awakening. We just need to be marginally smarter than ants. Think about it: they’re tiny-brained insects who figured out that collective care improves survival rates. What’s our excuse?

Perhaps it’s time we take a page out of the ants’ playbook: diagnose the problem, take decisive action, and prioritise the common good. Amputate the rot. Treat the infection. And for the love of whatever deity or science you hold dear, stop defending systems that sacrifice the many for the few.

If ants can do it, we have no excuse.

In Conclusion:

When ants are more socially responsible than we are, it’s time to ask some tough questions. Now get it together, or the ants are going to outlive us all.

Banality of Evil

I thought I was done wittering on about Brian Thompson, the late CEO of United Healthcare, but here we are. His name lingers like the corporate perfume of systemic rot—an enduring testament to how we’ve elevated unethical behaviour into performance art. It got me thinking: what if we brought back a bit of old-school accountability? In Ancient Rome, outlaws lost their citizenship, legal protections, and status as people. That’s right—booted out of polite society. Meanwhile, we’ve done the opposite: we hand out golden parachutes and slap their faces on business magazine covers.

To some, Brian Thompson was a good man – apart from the insider trading, of course. He was successful, a nice guy, funny, and had a good family and a few million-dollar homes. What else could you ask for? But his success came by way of blood money. It seems we need fewer people who think like this, not more.

Then I recalled The Purge franchise. And sure, The Purge is a dystopian fantasy, but let’s up the stakes. Picture this: bounties on corporate villains. Not literal carnage, of course—let’s leave that for the big screen—but the return of real consequences. Instead of allowing their PR teams to smooth it all over with buzzwords and philanthropy crumbs, what if we made it socially unacceptable to be a snake in a suit? What if moral suasion—the lost art of persuading someone to do right because it’s, you know, right—actually came back into fashion?

Nietzsche nailed it ages ago. We’ve got two moral codes: one for people and one for money. And guess which one wins every time? All it takes is enough cash and the right rhetoric, and suddenly, everyone forgets who’s really getting fleeced. This is the banality of evil in its purest form: not grand acts of villainy but a shrugging normalisation of corruption. We don’t even consider it corruption. We see it as business as usual. We support and work for these businesses.

The tragedy is that we’ve become so desensitised to it, so adept at ignoring the stench of moral failure, that even calling it out feels quaint. But it’s not hopeless. Some of us still notice. Some of us still care. The real question is, how long can we keep tolerating this farce before we remember that morality isn’t just for the powerless?

Meantime, I just imagine these grubbers being stripped of power and protection, running scared from the likes of Luigi Mangione.

From Homo Sacer to Wolf’s Head

A Stroll Through the Bloodstained Woods of Legal History

Ah, the Royal Forests of medieval England – a term so delightfully misleading that it could teach modern PR firms a thing or two. Far from evoking pastoral woodlands teeming with squirrels and picnic spots, these ‘forests’ were not defined by trees but by legal tyranny. Thanks to our favourite Norman conqueror, William the First (or William the Worst, if you were an unlucky peasant), these exclusive playgrounds for kings became the ultimate no-go zones for the hoi polloi.

Of Forests and Fictions

Contrary to what your Instagram influencer friends might think, a ‘forest’ back then didn’t need a single tree. It was the law, darling, not the foliage, that counted. These Royal Forests were terra sacra for the crown’s hunting pleasures, with laws so draconian they’d make Draco himself blush. Need firewood? Tough luck. Want to graze your sheep? Not unless you fancy forfeiting your flock – or perhaps a hand.

Speaking of hands, the forest laws weren’t just about controlling land; they were a petri dish for class warfare. Hunting deer without royal permission? You might not be ‘caught red-handed’ (hold that thought for later), but the penalties ensured your dignity – and possibly your anatomy – were left in the woods.

Enter the Outlaw: Homo Sacer in Doublet and Hose

Which brings us to that delightful medieval innovation: outlawry. To be declared an outlaw wasn’t just to be slapped with a fine or given a metaphorical wag of the finger. Oh no, you became a walking target, stripped of all legal protections. A medieval outlaw wasn’t just a criminal; they were legally dead – a status once reserved for the Roman homo sacer, the accursed man outside the pale of law and civilisation.

Declared an outlaw? Congratulations, you’re now a ‘wolf’s head.’ A charming term, really – essentially a poetic way of saying ‘fair game.’ Anyone could hunt you down without consequence. Add in a bit of medieval flair, and voilà: outlawry became less about justice and more about population control via recreational murder.

Caught Red-Handed: Scotland’s Contribution to the Blood-soaked Lexicon

Speaking of blood, let’s dissect that juicy phrase, ‘caught red-handed.’ Many would love to connect this idiom to poaching in Royal Forests, but alas, its origins are as Scottish as whisky and poor weather. The term ‘red hand’ first appeared in the Acts of Parliament of James I in 1432, long after the Normans had finished turning England into one giant gated community for deer.

Back then, being ‘caught reid hand’ wasn’t just a metaphor. It meant literally being caught with blood on your hands, usually from slaughtering someone else’s sheep – or worse, their lord’s. Fast-forward to Sir Walter Scott’s Ivanhoe in 1819, and the phrase gets a literary boost, morphing into ‘red-handed.’ By the Victorian era, it had become the darling of pulp crime writers everywhere.

Robin Hood: Outlaw Extraordinaire or Tudor PR Ploy?

And what’s a medieval blog post without a nod to Robin Hood, England’s most famous outlaw? Let’s be honest: Robin Hood probably didn’t exist, and if he did, he was less about redistributing wealth and more about ensuring his band of merry men didn’t starve. But Sherwood Forest’s association with this legendary thief cements the notion that outlaws weren’t always villains. Some were folk heroes – or at least, they were heroes to anyone who wasn’t a sheriff or a Norman noble.

Forests, Outlaws, and Bloodied Hands: A Legacy Worth Remembering

The legal forests of medieval England weren’t just about game preservation; they were a microcosm of royal power, social exclusion, and judicial brutality. The outlaw, stripped of all rights, was both a product and a victim of this system – a ‘wolf’s head’ wandering the wilderness, neither man nor beast in the eyes of the law.

And what of ‘caught red-handed’? A phrase born in blood-soaked Scottish pastures, far removed from the Royal Forests of England but just as evocative of humanity’s fixation on crime, punishment, and evidence that sticks – quite literally.

So next time you hear about forests, think less ‘enchanted woods’ and more ‘legal hellscape.’ And if you’re ever ‘caught red-handed,’ remember: at least you’re not a wolf’s head.

Thomas Sowell on Artificial Stupidity

A Masterpiece of Dog Whistle Rhetoric

Thomas Sowell once opined:

Image: Thomas Sowell with superimposed quotation cited above.

What a delightfully loaded statement. Sowell – a man whose intellectual credentials are as impeccable as his sweeping generalisations – manages, in a single breath, to malign teachers, dismiss contemporary education, and suggest that we’re hurtling towards some dystopian abyss because children today aren’t being taught…what, exactly? Latin declensions? The works of Burke? Perhaps the art of deference to authority? He never specifies. And why should he? Specifics would ruin the vibe.

This statement is a masterpiece of rhetorical dog-whistling. To those predisposed to Sowell’s worldview, it’s just common sense. Teachers are ignorant, modern education is a farce, and our children are doomed to a future of robotic ineptitude. It sounds plausible enough, provided you don’t stop to ask pesky questions like, “Which teachers? What nonsense? How, exactly, does one create artificial stupidity?”

The Cult of Common Sense

Let’s take a moment to examine the talismanic invocation of “common sense,” a concept as revered as it is elusive. Voltaire’s quip that “common sense is not so common” seems particularly apt here. What Sowell calls common sense is really shorthand for a monolithic worldview where civilisation is a neatly defined entity under siege by radical educators and their progressive agendas.

The problem? This worldview collapses under even cursory scrutiny. Civilisation is not a singular, static entity but an ever-evolving tapestry of conflicting ideas, cultures, and innovations. Teachers are not a homogenous cabal conspiring to dismantle society but an underpaid, overworked group trying their best to navigate a minefield of bureaucracy and societal expectations. And as for the “dangerous nonsense” being taught? Well, your guess is as good as mine. Critical thinking? Equity? Heaven forbid, empathy?

Ignorance as a Natural State

Sowell’s fans bristle at any suggestion that their intellectual idol might be guilty of hyperbole. But let’s consider the claim that teachers are “creating” stupidity. This presupposes that stupidity is an artificial construct rather than the natural baseline of humanity. The average IQ is, after all, 100 by design. For Americans, it hovers slightly below that at 97. This isn’t new. Stupidity doesn’t need to be created; it’s the default. Education’s job is to chip away at this deficit, not conjure intelligence ex nihilo.

To cast educators as villains in this endeavour is a disingenuous sleight of hand. Are there systemic issues in education? Of course. But to claim that teachers are actively fostering stupidity is akin to blaming firefighters for the existence of fires.

The Paradox of Intellectual Elitism

Here’s the kicker: Sowell himself is an intellectual. An elitist, no less, if we’re using his own fans’ definition. Yet his critique of intellectuals resonates with his audience precisely because they perceive him as an exception to the rule. “He’s one of us,” they say, failing to notice the irony. It’s the classic populist manoeuvre: position yourself as the voice of the people while enjoying all the privileges of the elite.

This paradox is not unique to Sowell. It’s the same dynamic that fuels the cult of Jordan Peterson, another intellectual who rails against intellectualism while wielding its tools. The result is a rhetorical echo chamber where dissent is dismissed as ignorance and agreement is lauded as truth.

The Dog Whistle Symphony

Sowell’s statement is, at its core, a symphony of dog whistles. It’s designed to resonate with those who already believe that modern education is a hotbed of progressive indoctrination. To this audience, it’s not a call to debate but a rallying cry. The terms – civilisation, ignorance, nonsense, artificial stupidity – are intentionally vague, allowing listeners to project their own fears and grievances onto them.

This vagueness is both the strength and the weakness of the argument. It’s compelling to those who share Sowell’s worldview but collapses under scrutiny. What is civilisation? What constitutes dangerous nonsense? Without definitions, these are just buzzwords masquerading as profundity.

Reframing the Conversation

So, how do we engage with such rhetoric? First, by refusing to accept its premises without question. Who are these teachers, and what are they allegedly teaching? What does Sowell mean by “civilisation”? Without specifics, his statement is not an argument but an incantation.

Second, by exposing the contradictions. If intelligence is the antidote to societal decline, as Sowell implies, then dismissing intellectuals wholesale is self-defeating. If education is the solution, then scapegoating teachers undermines the very people tasked with implementing it.

Finally, by recognising the emotional appeal at play. Sowell’s rhetoric taps into a deep-seated fear of change and loss. Addressing this fear requires empathy and nuance – qualities absent from his statement but essential for meaningful dialogue.

Conclusion

Thomas Sowell’s warning about “artificial stupidity” is less a diagnosis of societal decline than a litmus test for ideological allegiance. It’s a brilliant piece of rhetoric but a poor substitute for critical analysis. By unpacking its assumptions and exposing its contradictions, we can move beyond the echo chamber of dog whistles and engage in the kind of nuanced, constructive debate that Sowell’s own critique ostensibly calls for.

But then, nuance has never been common sense, has it?

A Buddhist Critique of Modern Livelihoods

It’s interesting to me that, as an atheist and non-cognitivist, I can still take the moral high ground on health insurance in the United States. So I write about it.

Blood Money and Broken Principles

In the aftermath of the tragic killing of Brian Thompson, the CEO of a health insurance conglomerate, a striking narrative has emerged. Many Americans view this act—shocking though it is—as emblematic of the anger and despair born of a system that profits by exploiting human vulnerability. Such reactions compel us to examine the ethics of industries that flourish on what can only be described as blood money. From health insurance to tobacco, alcohol, and the arms trade, these livelihoods raise profound ethical questions when viewed through the lens of the Buddhist Noble Eightfold Path, specifically Right Livelihood and Right Action.

The Moral Framework: Buddhism’s Path to Ethical Livelihood

Buddhism’s Eightfold Path provides a blueprint for ethical living, with Right Livelihood and Right Action serving as its ethical cornerstones. These principles demand that one’s work and deeds contribute to the welfare of others, avoid harm, and align with compassion and integrity. In short, they urge us to earn a living in a manner that uplifts rather than exploits. The health insurance industry’s business model—which often prioritises profits over the preservation of life—challenges these tenets in ways that are difficult to overlook.

Consider the denial of coverage for life-saving treatments, the exploitation of legal loopholes to reduce payouts, or the systemic perpetuation of healthcare inequality. These actions, while legally sanctioned, conflict sharply with the Buddhist ideal of avoiding harm and promoting well-being. Yet, this industry is not alone in its ethical failings. Many others—both legal and illegal—fall similarly short.

Industries of Exploitation: Tobacco, Alcohol, and Arms

The tobacco and alcohol industries provide stark examples of livelihoods that thrive on human suffering. Their products, despite their legality, are designed to foster dependency and harm. They exact a heavy toll on both individual lives and public health systems, a reality that makes them incompatible with Right Livelihood. The arms trade—arguably the most egregious example—profits directly from conflict and human misery. How can such industries possibly align with the Buddhist ideal of ahimsa (non-violence) or the compassionate aspiration to alleviate suffering?

In these cases, the harm caused is not incidental; it is fundamental to their business models. Whether one manufactures cigarettes, brews alcohol, or sells weapons, the destruction wrought by these activities is integral to their profitability. The contradiction is stark: the greater the harm, the greater the profit. This stands in direct opposition to the Buddhist call for livelihoods that sustain and support life.

Organised Crime: The Dark Mirror

When we turn to organised crime, the parallels become even more unsettling. Whether it’s the drug trade, human trafficking, or financial fraud, these activities epitomise unethical livelihoods. They exploit the vulnerable, foster violence, and undermine social cohesion. Yet, when viewed alongside certain legal industries, the line between “organised crime” and “corporate enterprise” begins to blur. Is the denial of life-saving healthcare less egregious than a gang’s extortion racket? Both profit by preying on human suffering. Both thrive in systems that prioritise gain over humanity.

The Buddhist Response: From Outrage to Action

Buddhism does not condone violence, no matter how symbolic or righteous it may appear. Right Action demands non-violence not only in deeds but also in thoughts and intentions. The killing of Brian Thompson, though perhaps an act of desperation or symbolism, cannot align with Buddhist ethics. Yet this tragedy should not eclipse the broader systemic critique. The true challenge is not to exact retribution but to transform the systems that perpetuate harm.

To move forward, we must ask how our societies can pivot toward livelihoods that align with compassion and justice. This entails holding exploitative industries to account and fostering economic systems that prioritise well-being over profit. The Buddhist path offers not only a critique of harmful practices but also a vision for ethical living—a vision that demands courage, compassion, and unwavering commitment to the common good.

Conclusion: Choosing a Better Path

The case of Brian Thompson’s killing is a symptom of a much larger ethical crisis. It forces us to confront uncomfortable truths about the industries that shape our world. Whether we scrutinise health insurance, tobacco, alcohol, the arms trade, or organised crime, the moral calculus remains the same: livelihoods that thrive on harm cannot be reconciled with the principles of Right Livelihood and Right Action.

As individuals and societies, we face a choice. We can continue to turn a blind eye to the suffering embedded in these industries, or we can commit to transforming them. The Buddhist path challenges us to choose the latter, to build systems and livelihoods rooted in compassion and justice. In doing so, we can begin to heal not only the wounds of individual tragedies but also the deeper fractures in our collective soul.