These are my favourite of the books I read in 2024. Only one was first published this year, so it seems I was playing catch-up and rereading. Two are about history; two are about the philosophy of science; and one is about biological free will or the lack thereof.
Against Method is a re-read for me. It makes the list on the coattails of a higher-ranked book. Feyerabend makes a compelling case against the Scientific Method™. To complete the set, I’d also recommend Bruno Latour’s We Have Never Been Modern.
Determined arrives on the heels of Sapolsky’s Behave, another classic that I’d recommend even more, but I read it in 2018, so it doesn’t make the cut. In Determined, Sapolsky makes the case that there is no room or need for free will to explain human behaviour.
As with Against Method, Guns, Germs, and Steel makes the list only to complement my next choice. It views history through an environmental lens. To fill out the historical perspective, I recommend David Graeber’s The Dawn of Everything: A New History of Humanity (with David Wengrow). I’d recommend Yuval Noah Harari’s Sapiens: A Brief History of Humankind, but it occupies a different category and is more about a plausible broad narrative than the detail explored in the others listed.
Quinn makes history approachable as she questions the uniformity of civilisations pushed by orthodoxy. Read it alongside the aforementioned historical accounts for a fuller perspective.
I was born in 1961. This should have been bedtime reading for me. I’d heard of this work, but one really has to read it. It’s less Modernist than I had presumed—though not to the extent of Feyerabend or Latour mentioned above. Again, reading all three provides a robust perspective on the philosophy of science.
As with Quinn, the writing is approachable. I had expected it to be stilted. It is academic, and it may boost your vocabulary, but give it a gander. It also works well in an audiobook format if you are so inclined.
This about closes out 2024. What do you think about these choices? Agree or disagree? What are your top recommendations?
The Structure of Scientific Revolutions was published in 1962. Written by Thomas Kuhn, it introduced the world to the concept of paradigm shifts in science — and, as it turns out, elsewhere. As I mentioned recently, I experienced a mishap, confounding it with Paul Feyerabend’s Against Method, first published in 1975. Both of these should be required reading for year 10 – or at least taught in summary.
I had read Feyerabend years ago but was only familiar with Kuhn from a distance. I’m glad we’ve become more intimate. These authors take different approaches, at times arriving in the same place. Kuhn takes a Modernist approach that he critiques and modifies. Feyerabend takes a Postmodernist path that sometimes crosses it.
Ah, the delightful dance of paradigms and anarchism in the hallowed halls of science! Let’s delve deeper into the intellectual pas de deux between Thomas Kuhn and Paul Feyerabend, those audacious thinkers who dared to challenge the sanctity of scientific methodology.
Kuhn’s Paradigm Shifts: The Scientific Waltz
Thomas Kuhn, in his seminal work The Structure of Scientific Revolutions, introduced us to the concept of paradigm shifts—a term now so overused that even corporate PowerPoint presentations aren’t spared. Kuhn posited that science doesn’t progress through a linear accumulation of knowledge but rather through a series of revolutionary upheavals. These upheavals occur when the prevailing scientific framework, or “paradigm,” becomes as outdated as last season’s fashion, unable to account for emerging anomalies. In Kuhn’s view, the scientific community clings to its paradigms with the tenacity of a dog to its bone, until the weight of anomalies forces a collective epiphany, leading to a paradigm shift. This cyclical process propels scientific advancement, albeit in a manner reminiscent of a drunken sailor’s stagger rather than a straight path.
Feyerabend’s Epistemological Anarchism: The Punk Rock of Science
Enter Paul Feyerabend, the enfant terrible of the philosophy of science, with his provocative manifesto Against Method. Feyerabend gleefully dismantled the notion of a universal scientific method, advocating for “epistemological anarchism.” He argued that the rigid adherence to methodological rules is about as useful as a chocolate teapot, stifling creativity and hindering progress. In Feyerabend’s anarchic utopia, “anything goes” in the pursuit of knowledge, and the scientific method is more of a loose suggestion than a strict protocol. His critique was not just a call for methodological diversity but a full-blown rebellion against the tyranny of scientific dogmatism.
A Comparative Analysis: Method to the Madness
While Kuhn and Feyerabend both challenged the orthodox views of scientific progress, their approaches were as different as chalk and cheese. Kuhn’s analysis was rooted in historical case studies, portraying scientific revolutions as communal shifts in perspective, akin to a collective midlife crisis. Feyerabend, on the other hand, took a more radical stance, suggesting that the very idea of a fixed scientific method is as mythical as unicorns. Where Kuhn saw periods of “normal science” punctuated by revolutionary shifts, Feyerabend saw a chaotic free-for-all, where progress is made not by following rules but by breaking them.
Implications for Scientific Practice: Order in Chaos
The implications of their critiques are profound. Kuhn’s work suggests that scientists should remain open to paradigm shifts, lest they become as obsolete as Betamax in a Netflix era. Feyerabend’s anarchism, while controversial, serves as a reminder that innovation often requires the audacity to defy convention. Together, they paint a picture of science not as a monolithic quest for truth but as a dynamic, often tumultuous, human endeavour.
Conclusion: The Legacy of Intellectual Rebellion
In conclusion, the works of Kuhn and Feyerabend invite us to view science through a more sceptical lens, questioning the sanctity of its methods and the rigidity of its paradigms. Their critiques serve as a clarion call for intellectual flexibility, urging us to embrace the chaos and complexity inherent in the pursuit of knowledge. After all, in the grand theatre of science, it’s often the most unconventional performances that leave a lasting impact.
In the great philosophical tug-of-war between materialism and idealism, where reality is argued to be either wholly independent of perception or entirely a construct of the mind, there lies an underexplored middle ground—a conceptual liminal space that we might call “Intersectionalism.” This framework posits that reality is neither purely objective nor subjective but emerges at the intersection of the two. It is the terrain shaped by the interplay between what exists and how it is perceived, mediated by the limits of human cognition and sensory faculties.
Audio: Podcast conversation on this topic.
Intersectionalism offers a compelling alternative to the extremes of materialism and idealism. By acknowledging the constraints of perception and interpretation, it embraces the provisionality of knowledge, the inevitability of blind spots, and the productive potential of uncertainty. This essay explores the foundations of Intersectionalism, its implications for knowledge and understanding, and the ethical and practical insights it provides.
Reality as an Intersection
At its core, Intersectionalism asserts that reality exists in the overlapping space between the objective and the subjective. The objective refers to the world as it exists independently of any observer—the “terrain.” The subjective encompasses perception, cognition, and interpretation—the “map.” Reality, then, is not fully contained within either but is co-constituted by their interaction.
Consider the act of seeing a tree. The tree, as an object, exists independently of the observer. Yet, the experience of the tree is entirely mediated by the observer’s sensory and cognitive faculties. Light reflects off the tree, enters the eye, and is translated into electrical signals processed by the brain. This process creates a perception of the tree, but the perception is not the tree itself.
This gap between perception and object highlights the imperfect alignment of subject and object. No observer perceives reality “as it is” but only as it appears through the interpretive lens of their faculties. Reality, then, is a shared but imperfectly understood phenomenon, subject to distortion and variation across individuals and species.
The Limits of Perception and Cognition
Humans, like all organisms, perceive the world through the constraints of their sensory and cognitive systems. These limitations shape not only what we can perceive but also what we can imagine. For example:
Sensory Blind Spots: Humans are limited to the visible spectrum of light (~380–750 nm), unable to see ultraviolet or infrared radiation without technological augmentation. Other animals, such as bees or snakes, perceive these spectra as part of their natural sensory worlds. Similarly, humans lack the electroreception of sharks or the magnetoreception of birds.
Dimensional Constraints: Our spatial intuition is bounded by three spatial dimensions plus time, making it nearly impossible to conceptualise higher-dimensional spaces without resorting to crude analogies (e.g., imagining a tesseract, a 4D object, via its 3D shadow).
Cognitive Frameworks: Our brains interpret sensory input through patterns and predictive models. These frameworks are adaptive but often introduce distortions, such as cognitive biases or anthropocentric assumptions.
This constellation of limitations suggests that what we perceive and conceive as reality is only a fragment of a larger, potentially unknowable whole. Even when we extend our senses with instruments, such as infrared cameras or particle detectors, the data must still be interpreted through the lens of human cognition, introducing new layers of abstraction and potential distortion.
The Role of Negative Space
One of the most intriguing aspects of Intersectionalism is its embrace of “negative space” in knowledge—the gaps and absences that shape what we can perceive and understand. A compelling metaphor for this is the concept of dark matter in physics. Dark matter is inferred not through direct observation but through its gravitational effects on visible matter. It exists as a kind of epistemic placeholder, highlighting the limits of our current sensory and conceptual tools.
Similarly, there may be aspects of reality that elude detection altogether because they do not interact with our sensory or instrumental frameworks. These “unknown unknowns” serve as reminders of the provisional nature of our maps and the hubris of assuming completeness. Just as dark matter challenges our understanding of the cosmos, the gaps in our perception challenge our understanding of reality itself.
Practical and Ethical Implications
Intersectionalism’s recognition of perceptual and cognitive limits has profound implications for science, ethics, and philosophy.
Science and Knowledge
In science, Intersectionalism demands humility. Theories and models, however elegant, are maps rather than terrains. They approximate reality within specific domains but are always subject to revision or replacement. String theory, for instance, with its intricate mathematics and reliance on extra dimensions, risks confusing the elegance of the map for the completeness of the terrain. By embracing the provisionality of knowledge, Intersectionalism encourages openness to new paradigms and methods that might better navigate the negative spaces of understanding.
Ethics and Empathy
Ethically, Intersectionalism fosters a sense of humility and openness toward other perspectives. If reality is always interpreted subjectively, then every perspective—human, animal, or artificial—offers a unique and potentially valuable insight into the intersection of subject and object. Recognising this pluralism can promote empathy and cooperation across cultures, species, and disciplines.
Technology and Augmentation
Technological tools extend our sensory reach, revealing previously unseen aspects of reality. However, they also introduce new abstractions and biases. Intersectionalism advocates for cautious optimism: technology can help illuminate the terrain but will never eliminate the gap between map and terrain. Instead, it shifts the boundaries of our blind spots, often revealing new ones in the process.
Conclusion: Navigating the Space Between
Intersectionalism provides a framework for understanding reality as a shared but imperfect intersection of subject and object. It rejects the extremes of materialism and idealism, offering instead a middle path that embraces the limitations of perception and cognition while remaining open to the possibilities of negative space and unknown dimensions. In doing so, it fosters humility, curiosity, and a commitment to provisionality—qualities essential for navigating the ever-expanding terrain of understanding.
By acknowledging the limits of our maps and the complexity of the terrain, Intersectionalism invites us to approach reality not as a fixed and knowable entity but as an unfolding interplay of perception and existence. It is a philosophy not of certainty but of exploration, always probing the space between.
The winding down seems to take longer than the winding up. I’m trying to finish my Language Insufficiency Hypothesis manuscript, and I’m hung up on word choices. I’ve shared this chart before. My conundrum is the name of the Y-axis.
I’ve been calling it Language Effectiveness, but I’ve been losing sleep. Rather than effectiveness, should it be efficiency or efficacy? These are all near-synonyms, but they have nuanced differences, so I turned to ChatGPT.
The terms efficiency, efficacy, and effectiveness are often used interchangeably but have distinct meanings, especially in professional, scientific, and practical contexts. Here is a breakdown of their differences:
Efficiency
Definition: The ability to achieve a desired result with minimal waste of resources (time, money, energy, materials).
Focus: Resource utilisation.
Key Question: How well are resources being used to achieve the goal?
Example: A fuel-efficient car travels further on less petrol, minimising waste.
In short, efficiency measures how economically resources are used to accomplish a task.
Efficacy
Definition: The capacity to produce the desired result under ideal conditions.
Focus: Capability and potential.
Key Question: Does it work as intended when tested in controlled settings?
Example: A new medication shows high efficacy in clinical trials, curing 95% of patients in controlled environments.
In short, efficacy refers to the theoretical or tested success of an intervention under optimal conditions.
Effectiveness
Definition: The degree to which something produces the desired result in real-world conditions.
Focus: Practical success.
Key Question: Does it work in everyday use or the real world?
Example: A medication might be effective in treating 80% of patients in a hospital, accounting for factors like adherence and side effects.
In short, effectiveness measures how well something performs when applied practically, outside idealised settings.
Comparison
Efficiency concerns how something is done (minimal resource use).
Efficacy assesses if it can be done successfully (ideal conditions).
Effectiveness evaluates how well it works in real situations (practical application).
Think of a machine:
Its efficacy is its ability to perform a task perfectly when running in optimal conditions.
Its efficiency is how little energy or resources it uses while running.
Its effectiveness is how well it performs the task in a chaotic, real-world factory.
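Incidentally, the machine analogy is easy to make concrete in a few lines of Python. The numbers below are invented purely for illustration; only the three ratios matter.

```python
# Toy numbers for a hypothetical machine; all values are invented.
ideal_runs, ideal_successes = 100, 98        # bench test under optimal conditions
field_runs, field_successes = 100, 81        # the chaotic, real-world factory
energy_in_kwh, useful_out_kwh = 10.0, 7.5    # resources consumed vs. useful work

efficacy = ideal_successes / ideal_runs           # can it work? (ideal conditions)
effectiveness = field_successes / field_runs      # does it work? (real world)
efficiency = useful_out_kwh / energy_in_kwh       # how economically does it run?

print(f"efficacy={efficacy:.0%}, effectiveness={effectiveness:.0%}, "
      f"efficiency={efficiency:.0%}")
```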
Interestingly, this was a helpful inquiry. Notice that the chart has three curves. My first choice seems to capture the entire framework, but the other two better capture the curves themselves.
The lower curve reflects efficiency — though, given the real-world circumstances, it is better named effectiveness. The middle curve represents efficacy, because conditions are ideal when the speaker and listener are on the same page and the speaker is a skilled communicator. The chart illustrates that even in this perfect situation, language communication cannot reach one hundred per cent for abstract concepts. The top curve illustrates the cognitive phenomenon that leads some people to believe they communicate at a higher level than they do. This is akin to the illusory superiority cognitive bias, where people tend to believe they are above average.
I’m leaning towards naming the bottom curve language effectiveness and the middle curve the language efficacy horizon. Please stand by.
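As an aside, the rough shape of the three curves can be sketched in Python. The functional forms and parameters here are my own placeholder assumptions, not the actual gradient from the manuscript.

```python
import numpy as np
import matplotlib.pyplot as plt

complexity = np.linspace(0, 1, 200)  # concrete concepts -> abstract concepts

# Assumed curve shapes, purely illustrative:
perceived = 0.95 - 0.15 * complexity              # illusory superiority: stays high
efficacy = 0.90 * np.exp(-1.2 * complexity)       # ideal conditions: decays, never 100%
effectiveness = 0.70 * np.exp(-2.0 * complexity)  # real-world: lower and decays faster

plt.plot(complexity, perceived, label="perceived (illusory superiority)")
plt.plot(complexity, efficacy, label="language efficacy horizon (ideal)")
plt.plot(complexity, effectiveness, label="language effectiveness (real world)")
plt.xlabel("conceptual complexity (concrete to abstract)")
plt.ylabel("communication success")
plt.ylim(0, 1)
plt.legend()
plt.show()
```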
NB: If the cover image makes no sense, it’s because I entered ‘efficiency effectiveness efficacy’ into Midjourney, and this was one of the images it spat out.
I’ve written about this topic before. Metamodernism has been heralded as the great reconciler of Modernism and Postmodernism, a dialectical triumph that purports to synthesise these two oppositional paradigms. On the one hand, Modernism clings to its belief in objective truths, rationality, and universal principles. On the other, Postmodernism dismantles those certainties, exposing them as fragile constructs, rooted as much in ideology as in reason. The promise of metamodernism is to bridge this divide, to create a space where the objectivity of Modernism and the relativism of Postmodernism can coexist. But can it?
Audio: NotebookLM Podcast about this topic.
Spoiler alert: it cannot. In fact, metamodernism doesn’t even attempt to fulfil its stated goal. Instead, what it really does—intentionally or not—is meld Modernism’s objective framework with Pre-Enlightenment mysticism, offering a regressive concoction that romanticises the past while pretending to chart a bold new future. This isn’t synthesis; it’s nostalgia masquerading as innovation.
The Unbridgeable Divide: Objective vs. Relative
To understand why metamodernism’s claimed synthesis is untenable, we need to examine the fundamental incompatibility of its supposed components. Modernism rests on the firm foundation of objectivity: truth is universal, reason is supreme, and progress is inevitable. Postmodernism, however, thrives in the cracks of that foundation, pointing out that these so-called universal truths are culturally and historically contingent, and that “progress” often serves as a euphemism for domination or erasure.
Reconciling these two positions is like trying to mix oil and water. Modernism’s faith in absolutes cannot coexist with Postmodernism’s celebration of ambiguity and multiplicity without reducing one to a mere aesthetic flourish for the other. The result is not a synthesis but a superficial oscillation, an endless back-and-forth that achieves neither clarity nor coherence.
The Real Agenda: A Fusion of Objectivities
What metamodernism actually achieves is something quite different. Instead of bridging the gap between Modernism and Postmodernism, it fuses Modernism’s objective certainties with the equally objective but pre-rational framework of Pre-Enlightenment mysticism. In doing so, it abandons the critical lens of Postmodernism altogether, retreating to a worldview that is comfortingly familiar but intellectually regressive.
Consider the resurgence of myth, spirituality, and transcendence in metamodernist discourse. These elements hark back to a time when objective truths were dictated by divine authority or cosmological narratives rather than scientific inquiry. By incorporating these pre-modern ideas into its framework, metamodernism sidesteps the hard questions posed by Postmodernism, offering a fusion that is plausible only because both Modernism and Pre-Enlightenment mysticism share a common belief in absolute truths.
Plausible but Retrograde
This melding of Modernist and Pre-Enlightenment frameworks might seem plausible because, in truth, many Moderns never fully abandoned their mystical roots. The Enlightenment’s project of replacing religious dogma with reason was always incomplete; its foundational assumptions about universality and objectivity often carried an unspoken theological residue. Metamodernism taps into this latent nostalgia, offering a vision of the world that feels grounded and comforting, but at the cost of intellectual progress.
The problem is that this vision is fundamentally retrograde. By retreating to the certainties of the past, metamodernism ignores the most valuable insight of Postmodernism: that all frameworks, whether Modern or mystical, are ultimately constructed and contingent. To move forward, we need to grapple with this contingency, not escape from it.
Conclusion: Nostalgia in Disguise
Far from being a dialectical synthesis, metamodernism is a retreat. It cloaks itself in the language of progress while recycling old patterns of thought. Its attempt to reconcile Modernism and Postmodernism collapses into a fusion of Modernist objectivity and Pre-Enlightenment mysticism, leaving the critical insights of Postmodernism by the wayside.
If we are to truly progress, we must resist the siren song of metamodernism’s nostalgia. Instead, we should embrace the challenge of living without absolutes, grappling with the ambiguity and multiplicity that define our postmodern condition. Anything less is not synthesis but surrender.
This came across my feed, the laminated wisdom of our times: Your triggers are your responsibility. It isn’t the world’s obligation to tiptoe around you. A phrase so crisp, so confident, it practically struts. You can imagine it on a mug, alongside slogans like Live, Laugh, Gaslight. These are the language games I love to hate.
Now, there’s a certain truth here. Life is hard, and people aren’t psychic. We can’t reasonably expect the world to read our mental weather reports—50% chance of anxiety, rising storms of existential dread. In an adult society, we are responsible for understanding our own emotional terrain, building the bridges and detours that allow us to navigate it. That’s called resilience, and it’s a good thing.
Audio: NotebookLM Podcast on this topic.
But (and it’s a big but) this maxim becomes far less admirable when you scratch at its glossy surface. What does triggers even mean here? Because trigger is a shape-shifter, what I term Schrödinger’s Weasels. For someone with PTSD, a trigger is not a metaphor; it’s a live wire. It’s a flashback to trauma, a visceral hijacking of the nervous system. That’s not just “feeling sensitive” or “taking offence”—it’s a different universe entirely.
Yet, the word has been kidnapped by the cultural peanut gallery, drained of precision and applied to everything from discomfort to mild irritation. Didn’t like that movie? Triggered. Uncomfortable hearing about your privilege? Triggered. This semantic dilution lets people dodge accountability. Now, when someone names harm—racism, misogyny, homophobia, you name it—the accused can throw up their hands and say, Well, that’s your problem, not mine.
And there’s the rub. The neat simplicity of Your triggers are your responsibility allows individuals to dress their cruelty as stoic rationality. It’s not their job, you see, to worry about your “feelings.” They’re just being honest. Real.
Except, honesty without compassion isn’t noble; it’s lazy. Cruelty without self-reflection isn’t courage; it’s cowardice. And rejecting someone’s very real pain because you’re too inconvenienced to care? Well, that’s not toughness—it’s emotional illiteracy.
Let’s be clear: the world shouldn’t have to tiptoe. But that doesn’t mean we’re free to stomp. If someone’s discomfort stems from bigotry, prejudice, or harm, then dismissing them as “too sensitive” is gaslighting, plain and simple. The right to swing your fist, as the old adage goes, ends at someone else’s nose. Likewise, the right to be “brutally honest” ends when your honesty is just brutality.
The truth is messy, as most truths are. Some triggers are absolutely our responsibility—old wounds, minor slights, bruised egos—and expecting the world to cushion us is neither reasonable nor fair. But if someone names harm that points to a broader problem? That’s not a trigger. That’s a mirror.
So yes, let’s all take responsibility for ourselves—our pain, our growth, our reactions. But let’s also remember that real strength is found in the space where resilience meets accountability. Life isn’t about tiptoeing or stomping; it’s about walking together, with enough care to watch where we step.
I tend to listen to audiobooks as I fall asleep. For short chapters, I set it to stop at the end of the chapter. For longer pieces, I might choose 15 or 30 minutes. If I’m still awake and hear it end, I’ll restart a timer or continue to try to sleep. If I fall asleep in the middle, when I resume listening, I’ll find the last spot I remember and continue from there. Often, I even roll back too far, but there’s no harm in re-listening. But this is the setup, not the story.
Having just finished Guns, Germs, and Steel by Jared Diamond, I started a new book – The Structure of Scientific Revolutions by Thomas Kuhn. It’s a classic work over half a century old, but I’ve yet to read it. What better time than now? I was tired. I cued the book and set a thirty-minute timer.
As I listened, I couldn’t help but think how comfortable I was with the material. It reminded me so much of Paul Feyerabend. I considered how much the two had in common on the philosophy of science. Thirty minutes passed, and I decided just to fall asleep, keeping the notion in mind and looking forward to hearing more.
As is my habit, when I awoke, I started another chapter or finished the one I had been listening to. What did I find? I had been listening to Against Method by Feyerabend. Mystery solved. Instead, I decided to listen to Kuhn’s opening chapter. Whilst the two are not hand in glove, they do share some insights. I won’t get into them now. Meantime, I recommend these books, each for its own reason.
And Now for Something Completely Different…
I’m still finishing my Language Insufficiency Hypothesis manuscript. I still need to review it at least a final time, but I paused to create charts, graphs, and annotations. This time, as I was documenting the central Language Effectiveness-Complexity Gradient (you had to have been there), I had an epiphany. I won’t delve into details, but how I thought of it conceptually and how it graphs are different. It won’t make sense without a lot of backstory, but it’s related to the way I critique and dis-integrate Gallie’s Essentially Contested Concepts (ECCs).
In a nutshell, I contest ECCs, and then I explode them into three categories. I retain but redefine his core ECCs – I call them Contestables. As Gallie notes, these are value-laden and ideological concepts. I identified another category for boundary-resistant terms, which I named Fluidics. I named a third category Ambigents, as they share aspects of both – being ideological as well as boundary-resistant.
In conception, I viewed it as a Venn diagram – a case where Ambigents are formed by the overlap of Contestables and Fluidics. In fact, that’s how I coined the term Ambigents at the start – think ambidextrous. I constructed a chart with this in mind. These fit between other conceptual complexity categories. This has been the configuration for about a month – until now.
On the Complexity axis of a chart, it’s obvious that a combination of two complex concepts will be more complex than either constituent; it belongs above them on the axis, not between them.
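To make the re-orientation concrete, here’s a toy sketch with invented scores on a notional 0-to-1 complexity axis; only the ordering matters.

```python
# Invented complexity scores, purely for illustration.
contestables = 0.55   # value-laden, ideological (Gallie-style)
fluidics = 0.65       # boundary-resistant

# The Venn-style intuition placed Ambigents between the two:
venn_placement = (contestables + fluidics) / 2    # 0.60

# But Ambigents carry both burdens, so they plot above either constituent
# (the 0.15 increment is an assumed interaction penalty):
ambigents = max(contestables, fluidics) + 0.15    # 0.80

assert ambigents > max(contestables, fluidics)
print(f"old placement: {venn_placement:.2f}, corrected: {ambigents:.2f}")
```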
The good news is that I hadn’t created and placed all of the charts, so I didn’t waste that time. I did have to re-work the charts I had made, but this was trivial. Now, I need to revisit the manuscript and ensure any discussion involving orientation is amended. Meantime, crisis averted.
This is a timeline of foundational ideas on which I’ve built my Language Insufficiency Hypothesis. I spent a day compositing this timeline in Adobe Illustrator. I hadn’t used Illustrator in decades. It’s got a lot of options, so I’ve been leveraging ChatGPT as a help guide. It seems the UI/UX could be improved, but I’m sure I’ll get used to it. I’ve got another couple dozen to go. I’m hoping a learning/efficiency curve kicks in.
WordPress wouldn’t accept or render my first few file types, even though they are listed as acceptable – SVG, PNG, TIF, WEBP – so I opted for good, old-fashioned BMP, which makes it pretty hefty for inline rendering on a blog. I want to share, and so here is a late draft.
I’m no graphic artist, so it’s relatively primitive. I’ve been experimenting with colours, but the book is black and white, so I’ll probably just keep it the same.
There are a lot of data points to fit on this timeline, and I’m limited to a 6″ x 9″ form factor. Except for the first three entries, the items are to scale by year. I have more information, but I can’t really fit it in. In fact, this started as two timelines. There was so much crossover that I merged them into one, forcing me to remove some meta information and lose some detail. Design is hard.
It looks fine on the page. I don’t want to reduce the font sizes to cram in more info. I’ll probably put the more descriptive copy in an appendix, if not on a facing page.
I thought I was done wittering on about Brian Thompson, the late CEO of UnitedHealthcare, but here we are. His name lingers like the corporate perfume of systemic rot—an enduring testament to how we’ve elevated unethical behaviour into performance art. It got me thinking: what if we brought back a bit of old-school accountability? In Ancient Rome, outlaws lost their citizenship, legal protections, and status as people. That’s right—booted out of polite society. Meanwhile, we’ve done the opposite: we hand out golden parachutes and slap their faces on business magazine covers.
To some, Brian Thompson was a good man – apart from the insider trading, of course. He was successful, a nice guy, funny; he had a good family and a few million-dollar homes. What else could you ask for? But his success came by way of blood money. It seems we need fewer people who think like this, not more.
Then I recalled The Purge franchise. And sure, The Purge is a dystopian fantasy, but let’s up the stakes. Picture this: bounties on corporate villains. Not literal carnage, of course—let’s leave that for the big screen—but the return of real consequences. Instead of allowing their PR teams to smooth it all over with buzzwords and philanthropy crumbs, what if we made it socially unacceptable to be a snake in a suit? What if moral suasion—the lost art of persuading someone to do right because it’s, you know, right—actually came back into fashion?
Nietzsche nailed it ages ago. We’ve got two moral codes: one for people and one for money. And guess which one wins every time? All it takes is enough cash and the right rhetoric, and suddenly, everyone forgets who’s really getting fleeced. This is the banality of evil in its purest form: not grand acts of villainy but a shrugging normalisation of corruption. We don’t even consider it corruption. We see it as business as usual. We support and work for these businesses.
The tragedy is that we’ve become so desensitised to it, so adept at ignoring the stench of moral failure, that even calling it out feels quaint. But it’s not hopeless. Some of us still notice. Some of us still care. The real question is, how long can we keep tolerating this farce before we remember that morality isn’t just for the powerless?
Meantime, I just imagine these grubbers being stripped of power and protection, running scared from the likes of Luigi Mangione.
A Stroll Through the Bloodstained Woods of Legal History
Ah, the Royal Forests of medieval England – a term so delightfully misleading that it could teach modern PR firms a thing or two. Far from evoking pastoral woodlands teeming with squirrels and picnic spots, these ‘forests’ were not defined by trees but by legal tyranny. Thanks to our favourite Norman conqueror, William the First (or William the Worst, if you were an unlucky peasant), these exclusive playgrounds for kings became the ultimate no-go zones for the hoi polloi.
Of Forests and Fictions
Contrary to what your Instagram influencer friends might think, a ‘forest’ back then didn’t need a single tree. It was the law, darling, not the foliage, that counted. These Royal Forests were terra sacra for the crown’s hunting pleasures, with laws so draconian they’d make Draco himself blush. Need firewood? Tough luck. Want to graze your sheep? Not unless you fancy forfeiting your flock – or perhaps a hand.
Speaking of hands, the forest laws weren’t just about controlling land; they were a petri dish for class warfare. Hunting deer without royal permission? You might not be ‘caught red-handed’ (hold that thought for later), but the penalties ensured your dignity – and possibly your anatomy – were left in the woods.
Enter the Outlaw: Homo Sacer in Doublet and Hose
Which brings us to that delightful medieval innovation: outlawry. To be declared an outlaw wasn’t just to be slapped with a fine or given a metaphorical wag of the finger. Oh no, you became a walking target, stripped of all legal protections. A medieval outlaw wasn’t just a criminal; they were legally dead – a status once reserved for the Roman homo sacer, the accursed man outside the pale of law and civilisation.
Declared an outlaw? Congratulations, you’re now a ‘wolf’s head.’ A charming term, really – essentially a poetic way of saying ‘fair game.’ Anyone could hunt you down without consequence. Add in a bit of medieval flair, and voilà: outlawry became less about justice and more about population control via recreational murder.
Caught Red-Handed: Scotland’s Contribution to the Blood-soaked Lexicon
Speaking of blood, let’s dissect that juicy phrase, ‘caught red-handed.’ Many would love to connect this idiom to poaching in Royal Forests, but alas, its origins are as Scottish as whisky and poor weather. The term ‘red hand’ first appeared in the Acts of Parliament of James I in 1432, long after the Normans had finished turning England into one giant gated community for deer.
Back then, being ‘caught reid hand’ wasn’t just a metaphor. It meant literally being caught with blood on your hands, usually from slaughtering someone else’s sheep – or worse, their lord’s. Fast-forward to Sir Walter Scott’s Ivanhoe in 1819, and the phrase gets a literary boost, morphing into ‘red-handed.’ By the Victorian era, it had become the darling of pulp crime writers everywhere.
Robin Hood: Outlaw Extraordinaire or Tudor PR Ploy?
And what’s a medieval blog post without a nod to Robin Hood, England’s most famous outlaw? Let’s be honest: Robin Hood probably didn’t exist, and if he did, he was less about redistributing wealth and more about ensuring his band of merry men didn’t starve. But Sherwood Forest’s association with this legendary thief cements the notion that outlaws weren’t always villains. Some were folk heroes – or at least, they were heroes to anyone who wasn’t a sheriff or a Norman noble.
Forests, Outlaws, and Bloodied Hands: A Legacy Worth Remembering
The legal forests of medieval England weren’t just about game preservation; they were a microcosm of royal power, social exclusion, and judicial brutality. The outlaw, stripped of all rights, was both a product and a victim of this system – a ‘wolf’s head’ wandering the wilderness, neither man nor beast in the eyes of the law.
And what of ‘caught red-handed’? A phrase born in blood-soaked Scottish pastures, far removed from the Royal Forests of England but just as evocative of humanity’s fixation on crime, punishment, and evidence that sticks – quite literally.
So next time you hear about forests, think less ‘enchanted woods’ and more ‘legal hellscape.’ And if you’re ever ‘caught red-handed,’ remember: at least you’re not a wolf’s head.