Philosophic Influences

I just finished the writing and editorial parts of my Language Insufficiency Hypothesis. It still needs cover art and some administrative odds and ends, but I’m taking a day for a breather to share something about myself and my worldview. For this, I share my philosophical influences and how they support my core insights. For dramatic effect, I’ll even try to weight them to 100 per cent, leaving an ‘others’ bucket for the unaccounted ones.

Audio: NotebookLM summary podcast of this topic.

Obviously, this highly scientific accounting is about as useful as a Facebook survey or a fortune cookie, but it should reveal something. I have influences outside of philosophy, but I’ll limit this list to philosophers for now. The numbers don’t exactly add to 100% because there is a bit of cross-pollination, say, between Foucault and Nietzsche or Žižek and Hegel – or perhaps I’m just not good at maths. You decide.

Nietzsche (≈18%)

Nietzsche is likely the uranium core. Haters and detractors like to diminish his contribution – and he didn’t play by the old rules – but they are wrong. He contributes value-creation, anti-moralism, perspectivism, the critique of ressentiment, the demolition of universals.

Nietzsche sits at the centre of the blast radius. Everything else is shrapnel. If there’s a thinker who detonated the Enlightenment’s pretensions more elegantly, I’ve not met them. He showed us that values are forged, not found; that morality is a genealogy of grievances; that certainty is the last refuge of the timid. In other words, he cleared the ground so the rest of us could get to work without tripping over Kantian furniture. But after Nietzsche’s uranium core, the next concentric ring becomes murkier.

Foucault (≈20%)

Foucault supplies the schematics. Where Nietzsche swung a hammer at the idols, Foucault identified the building codes. He mapped power as a set of subtle, everyday enchantments. He showed how ‘knowledge’ is simply what a society rewards with credibility. He is the patron saint of anyone who suspects normality is an instrument, not a neutral state of affairs. The world looks different once you see the disciplinary fingerprints on everything.

Derrida (≈10%)

Derrida gives me language as mischief. Meaning wobbles, slides, cracks; binaries betray themselves; every conceptual edifice contains its own trapdoor. Derrida isn’t a system; he’s an escape artist. And frankly, you can’t write anything about the insufficiency of language without genuflecting in his general direction.

Late Wittgenstein (≈15%)

The quiet structural pillar. If Derrida is the saboteur, Wittgenstein is the carpenter who informs you that the house was never stable anyway. Meaning-as-use, language-games, the dissolution of philosophical pseudo-problems: his later work underwrites virtually every modern suspicion about fixed categories and timeless essences. He doesn’t shout; he shrugs – and everything collapses neatly.

Rorty (≈5%)

Rorty replaces metaphysical longing with cultural pragmatism. He teaches you to stop hunting for capital-T Truth and instead track the vocabularies we actually live in. He’s the friendly voice whispering, ‘You don’t need foundations. You need better conversations’. His influence is felt mostly in the tone of my epistemic cynicism: relaxed rather than tragic. Besides, we disagree on the better conversations bit.

Geuss (≈4%)

If Rorty makes you light-footed, Geuss reminds you not to float off into abstraction entirely. He is the critic of moralism par excellence, the man who drags philosophy kicking and screaming back into politics. Geuss is the voice that asks, ‘Yes, but who benefits?’ A worldview without him would be a soufflé.

Heidegger (≈6%)

Selective extraction only. Being-in-the-world, thrownness, worldhood – the existential scaffolding. His political judgment was catastrophic, of course, but the ontological move away from detached subjectivity remains invaluable. He gives the metaphysics a certain grain.

Existentialists: Beauvoir, Sartre, Camus (≈6%)

They provide the atmospheric weather: choice, finitude, absurdity, revolt, the sheer mess of human freedom. They don’t define the system; they give it blood pressure. Besides, I met them before I switched to Team Nietzsche-Foucault.

Žižek, Latour, Baudrillard (≈2% combined)

These three are my licensed provocateurs.

  • Žižek exposes how ideology infiltrates desire.
  • Latour dismantles the Nature/Society binary with glee.
  • Baudrillard whispers that representation ate reality while we were looking at our phones.

They’re trickster figures, not architects.

Hume, Putnam, Dennett, and the Ancillaries (≈1% combined)

These are the seasonings.

  • Hume is the Scottish acid bath under every epistemic claim.
  • Putnam gives internal realism its analytic passport.
  • Dennett offers mechanistic metaphors you can steal even when you disagree.
  • Kant and Hegel hover like compulsory ghosts.
  • Rawls remains decorative parsley: included for completeness, consumed by none.

The Others Bucket (≈5%)

The unallocated mass: writers, anthropologists, theorists, stray thinkers you absorb without noticing. The ‘residuals’ category for the philosophical inventory – the bit fortune cookies never warn you about.
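For anyone inclined to audit the ledger, the rough weights above can be tallied in a few lines of Python. The dictionary below simply restates the figures from each heading; the shortfall is the cross-pollination (or bad maths) already confessed to.

```python
# Rough influence weights, copied from the headings above (percent).
weights = {
    "Nietzsche": 18, "Foucault": 20, "Derrida": 10,
    "Late Wittgenstein": 15, "Rorty": 5, "Geuss": 4,
    "Heidegger": 6, "Existentialists": 6,
    "Zizek/Latour/Baudrillard": 2, "Hume/Putnam/Dennett et al.": 1,
    "Others": 5,
}

total = sum(weights.values())
print(f"Total: {total}%")  # → Total: 92% — an eight-point rounding haze
```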

Enfin

Obviously, these ratios are more for humour than substance, but these are the thinkers I return to — the ones whose fingerprints I keep discovering on my own pages, no matter how many years or detours intervene.

Perhaps more revealing are those who didn’t make the guest list. Socrates, Plato, and Aristotle remain outside, smoking in the cold. The Stoics, Marcus Aurelius and his well-meaning self-help descendants, also failed to RSVP. In truth, I admire the posture but have little patience for the consolations – especially when they become the emotional training wheels of neoliberalism.

And then, of course, the Enlightenment patriarchs: Hobbes, Locke, Montesquieu and the rest of the constitutional furniture. I acknowledge their historical necessity the way one acknowledges plumbing – grateful it exists, uninterested in climbing inside the pipes. Rousseau, admittedly, I tolerate with something approaching affection, but only because he never pretended to be tidy.

I forgot Descartes, Voltaire, and Pascal, but it’s too late to scroll back and adjust the ledger. Consider them rounding errors – casualties of the margins, lost to the tyranny of percentages.

If anyone mentions another one – Spinoza comes to mind – I’ll try to figure out where they fit in my pantheon. Were I to render this tomorrow, the results may vary.

The Tyranny of “Human Nature”

There is a kind of political necromancy afoot in modern discourse—a dreary chant murmured by pundits, CEOs, and power-drunk bureaucrats alike: “It’s just human nature.” As if this incantation explains, excuses, and absolves all manner of violent absurdities. As if, by invoking the mystic forces of evolution or primal instinct, one can justify the grotesque state of things. Income inequality? Human nature. War? Human nature. Corporate psychopathy? Oh, sweetie, it’s just how we’re wired.

What a convenient mythology.

Audio: NotebookLM podcast on this topic.

If “human nature” is inherently brutish and selfish, then resistance is not only futile, it is unnatural. The doctrine of dominance gets sanctified, the lust to rule painted as destiny rather than deviance. Meanwhile, the quiet, unglamorous yearning of most people—to live undisturbed, to coöperate rather than conquer—is dismissed as naïve, childish, and unrealistic. How curious that the preferences of the vast majority are always sacrificed at the altar of some aggressive minority’s ambitions.

Let us dispense with this dogma. The desire to dominate is not a feature of human nature writ large; it is a glitch exploited by systems that reward pathological ambition. Most of us would rather not be ruled, and certainly not managed by glorified algorithms in meat suits. The real human inclination, buried beneath centuries of conquest and control, is to live in peace, tend to our gardens, and perhaps be left the hell alone.

And yet, we are not. Because there exists a virulent cohort—call them oligarchs, executives, generals, kings—whose raison d’être is the acquisition and consolidation of power. Not content to build a life, they must build empires. Not content to share, they must extract. They regard the rest of us as livestock: occasionally troublesome, but ultimately manageable.

To pacify us, they offer the Social Contract™—a sort of ideological bribe that says, “Give us your freedom, and we promise not to let the wolves in.” But what if the wolves are already inside the gates, wearing suits and passing legislation? What if the protection racket is the threat itself?

So no, it is not “human nature” that is the problem. Cancer is natural, too, but we don’t celebrate its tenacity. We treat it, research it, and fight like hell to survive it. Likewise, we must treat pathological power-lust not as an inevitability to be managed but as a disease to be diagnosed and dismantled.

The real scandal isn’t that humans sometimes fail to coöperate. It’s that we’re constantly told we’re incapable of it by those whose power depends on keeping it that way.

Let the ruling classes peddle their myths. The rest of us might just choose to write new ones.

Welcome to the Casino of Justice

Welcome to the Grand Casino of Justice, where the chips are your civil liberties, the roulette wheel spins your fate, and the house—ever-smug in its powdered wig of procedural decorum—always wins.

Step right up, citizens! Marvel at the dazzling illusions of “science” as performed by your local constabulary: the sacred polygraph, that magnificent artefact of 1920s snake oil, still trotted out in back rooms like a séance at a nursing home. Never mind that it measures stress, not deception. Never mind that it’s been dismissed by any scientist with a functioning prefrontal cortex. It’s not there to detect truth—it’s there to extract confession. Like a slot machine that only pays out when you agree you’re guilty.

Audio: NotebookLM podcast on this topic.

And oh, the forensic pageantry! The blacklight! The dramatic swabs! The breathless invocations of “trace evidence,” “blood spatter patterns,” and—ooh! ahh!—fingerprints, those curly little whorls of manufactured certainty. You’ve been told since childhood that no two are alike, that your prints are your identity. Rubbish. Human fingerprint examiners disagree with themselves when presented with the same print twice. In blind tests. And yes—this bears repeating with appropriate incredulity—koalas have fingerprints so uncannily similar to ours they’ve confused human forensic analysts. Somewhere, a marsupial walks free while a teenager rots in remand.

You see, it’s not about justice. It’s about control. Control through performance. The legal system, like a casino, isn’t interested in fairness—it’s interested in outcome. It needs to appear impartial, all robes and solemnity, while tipping the odds ever so slightly, perpetually, in its own favour. This is jurisprudence as stagecraft, science as set-dressing, and truth as a collateral casualty.

And who are the croupiers of this great charade? Not scientists, no. Scientists are too cautious, too mired in uncertainty, too concerned with falsifiability and statistical error margins. No, your case will be handled by forensic technicians with just enough training to speak jargon, and just enough institutional loyalty to believe they’re doing the Lord’s work. Never mind that many forensic methods—bite mark analysis, tool mark “matching,” even some blood spatter interpretations—are about as scientifically robust as a horoscope printed on a cereal box.

TV crime dramas, of course, have done their bit to embalm these myths in the cultural subconscious. “CSI” isn’t a genre—it’s a sedative, reassuring the public that experts can see the truth in a hair follicle or the angle of a sneeze. In reality, most convictions hinge on shoddy analysis, flawed assumptions, and a little prosecutorial sleight of hand. But the juries are dazzled by the sciencey buzzwords, and the judges—God bless their robes—rarely know a confidence interval from a cornflake.

So, what do you do when accused in the great Casino of Justice? Well, if you’re lucky, you lawyer up. If you’re not, you take a plea deal, because 90% of cases never reach trial. Why? Because the system is designed not to resolve guilt, but to process bodies. It is a meat grinder that must keep grinding, and your innocence is but a small bone to be crushed underfoot.

This isn’t justice. It’s a theatre of probability management, where the goal is not truth but resolution. Efficiency. Throughput. The house keeps the lights on by feeding the machine, and forensic science—real or imagined—is merely the window dressing. The roulette wheel spins, the dice tumble, and your future hangs on the angle of a smudge or the misreading of a galvanic skin response.

Just don’t expect the koalas to testify. They’re wise enough to stay in the trees.

Reflections on Chapter 6 of Harari’s Nexus

As I continue reading Chapter 6 of Yuval Noah Harari’s Nexus, I find myself wrestling with the masterful misdirection and rhetorical strategies he employs. A critical reader can discern the writing on the wall, but his choir of loyal readers likely consumes his narrative like red meat, uncritically savouring its surface-level appeal.

Social Media and Misinformation

Harari begins by addressing the role of social media in spreading disinformation and misinformation, particularly singling out Facebook. From there, he pivots to Q-Anon conspiracy theories. While these topics are undeniably relevant, Harari’s framing feels more like an indictment of the masses rather than a nuanced critique of the systemic factors enabling these phenomena.

The Voter Knows Best?

Harari leans heavily on platitudes like “the customer is always right” and “the voters know best.” These truisms may resonate with an indoctrinated audience but fail to hold up under scrutiny. The powers that be—whether governments or corporations—exploit this mentality, much like religious institutions exploit faith. Harari’s concern seems rooted in the fear that AI could outmanoeuvre these same masses, creating competition for global entities like the World Economic Forum (WEF), which, in his view, aims to remain unchallenged.

Taxation, Nexus, and the Future of Nation-States

Harari’s discussion of taxation and the nexus between power and information is intriguing, but it misses a larger point. Nation-states, as I see it, are becoming anachronisms, unable to defend themselves against the rise of technocratic forces. Taxation, once a cornerstone of state power, may soon be irrelevant as the global landscape shifts toward what I call Feudalism 2.0—a hierarchy dominated by transnational actors like the WEF.

Harari poorly frames a Uruguayan taxation dilemma, reducing it to a simplistic trade-off between information and power without addressing the broader implications. This shallow analysis leaves much to be desired.

Determinism and Misdirection

Next, Harari mischaracterises the philosophical concept of determinism, likely to mislead readers who aren’t well-versed in its nuances. He spins a cautionary tale based on this revised definition, which may serve his rhetorical goals but detracts from the intellectual integrity of his argument.

Setting the Stage

Harari ends the chapter with a statement about the importance of time and place in history, using it as a setup to provoke a sense of urgency. While this is a classic rhetorical device, it feels hollow without substantive backing.

Final Reflections

Many Modernists may embrace Harari’s narrative uncritically, but for me, the veneer is thin and riddled with holes. His analysis fails to engage with more profound critiques of power and governance, relying instead on cherry-picked anecdotes and oversimplified arguments. The chapter’s focus on social media, AI, and taxation could have been fertile ground for profound insights, but Harari instead opts for rhetorical flourish over rigorous examination. Still, I’ll press on and see what the next chapter holds.

The Relativity of Morality: A Penguin’s Tale

I recently watched The Penguin on HBO Max, a series set in DC’s Batman universe. Ordinarily, I avoid television – especially the superhero genre – but this one intrigued me. Less spandex, more mob drama. An origin story with a dash of noir. I’ll spare you spoilers, but suffice it to say that it was an enjoyable detour, even for someone like me who prefers philosophy over fistfights.

This post isn’t a review, though. It’s a springboard into a larger idea: morality’s subjectivity – or, more precisely, its relativity.

Audio: Spotify podcast related to this topic.

Morality in a Vacuum

Morality, as I see it, is a social construct. You might carry a private moral compass, but without society, it’s about as useful as a clock on a desert island. A personal code of ethics might guide you in solitary moments, but breaking your own rules – eating that forbidden biscuit after vowing to abstain, for instance – doesn’t carry the weight of a true moral transgression. It’s more akin to reneging on a New Year’s resolution. Who’s harmed? Who’s holding you accountable? The answer is: no one but yourself, and even then, only if you care.

The Social Contract

Introduce a second person, and suddenly, morality gains traction. Agreements form – explicit or tacit – about how to behave. Multiply that to the level of a community or society, and morality becomes a kind of currency, exchanged and enforced by the group. Sometimes, these codes are elevated to laws. And, ironically, the act of adhering to a law – even one devoid of moral content – can itself become the moral thing to do. Not because the act is inherently right, but because it reinforces the structure society depends upon.

But morality is neither universal nor monolithic. It is as fractured and kaleidoscopic as the societies and subcultures that create it. Which brings us back to The Penguin.

Crime’s Moral Code

The Penguin thrives in a criminal underworld where the moral compass points in a different direction. In the dominant society’s eyes, crime is immoral. Robbery, murder, racketeering – all “bad,” all forbidden. But within the subculture of organised crime, a parallel morality exists. Honour among thieves, loyalty to the family, the unspoken rules of the game – these are their ethics, and they matter deeply to those who live by them.

When one criminal praises another – “You done good” – after a successful heist or a precise hit, it’s a moral judgement within their own framework. Outside that framework, society condemns the same actions as abhorrent. Yet even dominant societies carve out their own moral exceptions. Killing, for instance, is broadly considered immoral. Murder is outlawed. But capital punishment? That’s legal, and often deemed not only acceptable but righteous. Kant argued it was a moral imperative. Nietzsche, ever the cynic, saw this duality for what it was: a power dynamic cloaked in self-righteousness.

In The Penguin, we see this dichotomy laid bare. The underworld isn’t without morals; it simply operates on a different axis. And while the larger society might disdain it, the hypocrisy of their own shifting moral codes remains unexamined.

Final Thoughts on the Series

I’ll save other philosophical musings about The Penguin for another time – spoilers would be unavoidable, after all. But here’s a quick review: the series leans into drama, eschewing flashy gimmicks for a grittier, more grounded tone. The writing is generally strong, though there are moments of inconsistency – plot holes and contrivances that mar an otherwise immersive experience. Whether these flaws stem from the writers, director, or editor is anyone’s guess, but the effect is the same: they momentarily yank the viewer out of the world they’ve built.

Still, it’s a worthwhile watch, especially if you’re a fan of mob-style crime dramas. The final episode was, in my estimation, the best of the lot – a satisfying culmination that leaves the door ajar for philosophical ruminations like these.

Have you seen it? What are your thoughts – philosophical or otherwise? Drop a comment below. Let’s discuss.

The Scientist’s Dilemma: Truth-Seeking in an Age of Institutional Constraints

In an idealised vision of science, the laboratory is a hallowed space of discovery and intellectual rigour, where scientists chase insights that reshape the world. Yet, in a reflection as candid as it is disconcerting, Sabine Hossenfelder pulls back the curtain on a reality few outside academia ever glimpse. She reveals an industry often more concerned with securing grants and maintaining institutional structures than with the philosophical ideals of knowledge and truth. In her journey from academic scientist to science communicator, Hossenfelder confronts the limitations imposed on those who dare to challenge the mainstream — a dilemma that raises fundamental questions about the relationship between truth, knowledge, and institutional power.

I’ve created a podcast to discuss Sabine’s topic; Part 2 is also available.

Institutionalised Knowledge: A Double-Edged Sword

The history of science is often framed as a relentless quest for truth, independent of cultural or economic pressures. But as science became more institutionalised, a paradox emerged. On the one hand, large academic structures offer resources, collaboration, and legitimacy, enabling ambitious research to flourish. On the other, they impose constraints, creating an ecosystem where institutional priorities — often financial — can easily overshadow intellectual integrity. The grant-based funding system, which prioritises projects likely to yield quick results or conform to popular trends, inherently discourages research that is too risky or “edgy.” Thus, scientific inquiry can become a compromise, a performance in which scientists must balance their pursuit of truth with the practicalities of securing their positions within the system.

Hossenfelder’s account reveals the philosophical implications of this arrangement: by steering researchers toward commercially viable or “safe” topics, institutions reshape not just what knowledge is pursued but also how knowledge itself is conceptualised. A system prioritising funding over foundational curiosity risks constraining science to shallow waters, where safe, incremental advances take precedence over paradigm-shifting discoveries.

Gender, Equity, and the Paradoxes of Representation

Hossenfelder’s experience with gender-based bias in her early career unveils a further paradox of institutional science. Being advised to apply for scholarships specifically for women, rather than being offered a job outright, reinforced a stereotype that women in science might be less capable or less deserving of direct support. Though well-intentioned, such programs can perpetuate inequality by distinguishing between “real” hires and “funded outsiders.” For Hossenfelder, this distinction created a unique strain on her identity as a scientist, leaving her caught between competing narratives: one of hard-earned expertise and one of institutionalised otherness.

The implications of this dilemma are profound. Philosophically, they touch on questions of identity and value: How does an individual scientist maintain a sense of purpose when confronted with systems that, however subtly, diminish their role or undercut their value? And how might institutional structures evolve to genuinely support underrepresented groups without reinforcing the very prejudices they seek to dismantle?

The Paper Mill and the Pursuit of Legacy

Another powerful critique in Hossenfelder’s reflection is her insight into academia as a “paper production machine.” In this system, academics are pushed to publish continuously, often at the expense of quality or depth, to secure their standing and secure further funding. This structure, which rewards volume over insight, distorts the very foundation of scientific inquiry. A paper may become less a beacon of truth and more a token in an endless cycle of academic currency.

This pursuit of constant output reveals the philosopher’s age-old tension between legacy and ephemerality. In a system driven by constant publication, scientific “advancements” are at risk of being rendered meaningless, subsumed by an industry that prizes short-term gains over enduring impact. For scientists like Hossenfelder, this treadmill of productivity diminishes the romantic notion of a career in science. It highlights a contemporary existential question: Can a career built on constant output yield a genuine legacy, or does it risk becoming mere noise in an endless stream of data?

Leaving the Ivory Tower: Science Communication and the Ethics of Accessibility

Hossenfelder’s decision to leave academia for science communication raises a question central to contemporary philosophy: What is the ethical responsibility of a scientist to the public? When institutional science falters in its pursuit of truth, perhaps scientists have a duty to step beyond its walls and speak directly to the public. In her pivot to YouTube, Hossenfelder finds a new audience, one driven not by academic pressures but by genuine curiosity.

This shift embodies a broader rethinking of what it means to be a scientist today. Rather than publishing in academic journals read by a narrow circle of peers, Hossenfelder now shares her insights with a public eager to understand the cosmos. It’s a move that redefines knowledge dissemination, making science a dialogue rather than an insular monologue. Philosophically, her journey suggests that in an age where institutions may constrain truth, the public sphere might become a more authentic arena for its pursuit.

Conclusion: A New Paradigm for Scientific Integrity

Hossenfelder’s reflections are not merely the story of a disillusioned scientist; they are a call to re-evaluate the structures that define modern science. Her journey underscores the need for institutional reform — not only to allow for freer intellectual exploration but also to foster a science that serves humanity rather than merely serving itself.

Ultimately, the scientist’s dilemma that Hossenfelder presents is a philosophical one: How does one remain true to the quest for knowledge in an age of institutional compromise? As she shares her story, she opens the door to a conversation that transcends science itself, calling us all to consider what it means to seek truth in a world that may have forgotten its value. Her insights remind us that the pursuit of knowledge, while often fraught, is ultimately a deeply personal, ethical journey, one that extends beyond the walls of academia into the broader, often messier realm of human understanding.

Value of Life

Captain Bonespurs now has a flesh wound. Former president Donald J Trump was the target of a not-so-sharpshooter yesterday. Immediately resorting to Godwin’s Law, I wondered if this was like the philosophical hypothetical asking, ‘Would you kill baby Hitler to prevent the eventualities that unfolded?’ Was Hitler the symptom or the disease? What about Donald J? Whatever the cause or motivation, not unlike the Reichstag fire, this event has galvanised his supporters. Let’s hope that the outcome doesn’t follow the same path. There is a fear that he’ll take a path similar to Hitler or Caesar before him in a quest for power.

What is a life worth? The average US-American life is valued at around $7 million, give or take a few million. The number ranges between $1 MM and $10 MM depending on which agency you ask. That they equate lives to dollars is curious enough, but that they can’t agree on a single figure is priceless.

For background, this value is used to determine intervention thresholds. For FEMA (Federal Emergency Management Agency), a human life is worth about $7.5 MM. For the EPA (Environmental Protection Agency), it’s slightly more than $10 MM. Are these cats playing Monopoly? Nah.

The human life calculus considers factors like lifetime earnings potential, discounted to present value. To see it in action, assume there is a disaster. Let’s not use COVID-19. Instead, there is an island with 1,000 inhabitants. Using the $10 MM per person figure to simplify the maths, we would be justified in spending up to $10,000,000,000 – $10 MMM, or 1 × 10¹⁰ – to intervene in some potential disaster.
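As a minimal sketch of that calculus – the salary, discount rate, and working-life figures below are my own hypothetical round numbers, not any agency’s actual inputs:

```python
# Illustrative "value of a statistical life" arithmetic. The salary,
# discount rate, and working years are hypothetical round numbers.

def present_value(annual_amount: float, rate: float, years: int) -> float:
    """Discount a constant annual cash flow back to today's dollars."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

# e.g. $60k per year over 40 working years at a 3% discount rate
pv_earnings = present_value(60_000, 0.03, 40)  # roughly $1.39 MM

# The island example: 1,000 inhabitants at $10 MM apiece.
vsl = 10_000_000
budget = 1_000 * vsl
print(f"PV of lifetime earnings: ${pv_earnings:,.0f}")
print(f"Justified intervention spend: ${budget:,}")  # $10,000,000,000
```

The gap between the discounted-earnings figure and the $10 MM ceiling hints at why the agencies disagree: the answer depends entirely on which inputs you feed the machine.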

Human lifetime value is an average. Mr Trump has already shown himself to be worth more than $10 MM. I suppose this means that not all humans are created equal. No matter. Another logical question might be what is the cost of a person’s detriment to society. This is a question for a Modernist or someone who feels that a given configuration of society is preferred to all others – or at least some others. How much damage might one human do?

Trump enriched himself and his family and entourage in his first term. In Ukraine, Zelenskyy and his lot bilked the country out of billions. It’s nothing new, but do we subtract the costs from the benefits or is this a gross calculation?

Irrespective of the costs, the next four years are expected to be tumultuous no matter which corporate-sponsored party prevails. Heads, they win; tails, the country – if not the world – loses.

Understanding the Historical Context of the Ukraine-Russia Conflict

I’ve made several political posts in this space, and I was recently researching the backstory of the Ukraine–Russia conflict. Yet again, the insufficiency of language is involved. It’s been said that the West promised not to expand NATO ‘one inch eastward’, but it’s not clear whether NATO membership as such was the subject of that promise. Even Gorbachev said that NATO was not a topic of discussion, and that omission fell squarely on him. Even if this is the case, Putin made it clear in 2008 that this was his interpretation. Here’s a brief history for those interested. It’s decidedly not an academic affair, but I try to be neutral.

The Ukraine-Russia conflict is deeply rooted in the complex web of historical tensions and geopolitical dynamics that have shaped Eastern Europe from the Cold War to the present day. This article explores the critical developments and decisions from the end of World War II through to the events leading up to 2014, setting the stage for the current tensions.

From World War II to Cold War End

The geopolitical landscape of post-World War II Europe was significantly shaped at the Yalta Conference, where Allied leaders divided Europe into spheres of influence, leading to the establishment of a Soviet-dominated Eastern bloc. This division set the stage for the Cold War and the creation of NATO in 1949, a collective defence alliance that would come to play a central role in later tensions.

Collapse of the Soviet Union and Early Post-Cold War Hopes

The policies of glasnost and perestroika under Mikhail Gorbachev in the late 1980s, followed by the fall of the Berlin Wall, signalled a shift towards greater openness and potential integration. However, the dissolution of the Soviet Union in 1991 led to new states grappling with independence and the redefining of security and economic relations in the region. Initial hopes for a peaceful Europe were soon challenged by emerging security concerns.

NATO Expansion and Growing Tensions

NATO’s eastward expansion began in earnest in 1999 with the inclusion of the Czech Republic, Hungary, and Poland. This move, perceived by Russia as a strategic threat, reignited long-standing fears of encirclement and influenced Russia’s foreign policy. The expansion was justified by NATO as a way to stabilise Eastern Europe and integrate it into a democratic, peaceful Europe.

The 2008 Bucharest Summit

The 2008 NATO summit in Bucharest was a watershed moment. Although Ukraine and Georgia were not offered immediate membership, NATO’s declaration that they would eventually join the alliance was seen as provocative by Russia. The subsequent Russo-Georgian War in August 2008 underscored Russia’s willingness to use military force in response to perceived encroachments on its sphere of influence.

Deepening Crisis: 2010-2014

Relations continued to deteriorate with the EU’s Eastern Partnership program, which sought to deepen ties with former Soviet states, including Ukraine. The situation escalated dramatically in 2014 following the Euromaidan protests in Ukraine, the ousting of President Viktor Yanukovych, and Russia’s annexation of Crimea, a move widely condemned internationally but justified by Russia as a necessary defensive action.

Putin’s Defensive Stance

Throughout these developments, Vladimir Putin has maintained that NATO expansion represents a direct security threat to Russia. The narrative from the Russian perspective frames the expansion as a continuation of Cold War antagonism, a disregard for Russia’s security concerns, and a betrayal of what Russia interprets as promises made during the 1990s.

Conclusion

This detailed narrative from the end of World War II through 2014 illuminates the complexities of Eastern European security dynamics and the challenges in reconciling the strategic interests of NATO and Russia. The ongoing conflict in Ukraine is deeply intertwined with these historical tensions, reflecting long-standing struggles for influence and security in post-Cold War Europe.

Additional background and context.

If anything, perhaps this will help with SEO.

To provide a clearer picture of the discussions and statements made about NATO expansion during the early 1990s, particularly around the time of German reunification, here are some notable quotes and summaries from key figures involved:

James Baker (U.S. Secretary of State)

James Baker reportedly told Mikhail Gorbachev during a meeting in 1990:

  • “Not one inch eastward” was a phrase used by Baker to assure Gorbachev about NATO’s military posture not moving eastward, in the context of German reunification. This phrase has been widely cited but is subject to interpretation regarding its precise meaning and whether it referred to broader NATO expansion.

Mikhail Gorbachev (Soviet President)

Gorbachev’s response to these discussions has been a source of significant interest:

  • “The topic of ‘NATO expansion’ was not discussed at all, and it wasn’t brought up in those years. I say this with full responsibility. Not a single Eastern European country raised the issue, not even after the Warsaw Pact ceased to exist in 1991.” – Gorbachev, in later interviews, emphasized that the assurances were more about not deploying NATO troops to Eastern Germany than about preventing future NATO expansion.

Hans-Dietrich Genscher (German Foreign Minister)

Genscher’s position was similarly focused on reducing Soviet fears about NATO:

  • “NATO should rule out an ‘expansion of its territory towards the east, i.e. moving it closer to the Soviet borders.’” – Genscher said this in a speech in Tutzing, Germany, in 1990, which was aimed at assuaging Soviet concerns about German reunification and NATO.

Western and Soviet Interpretations

The assurances regarding NATO not expanding “one inch eastward” were primarily discussed in the context of German reunification and the integration of East Germany into NATO without expanding NATO’s military presence further east. The ambiguity lies in whether these assurances were understood to apply only temporarily or permanently, and specifically to Eastern Germany or more broadly to Eastern Europe.

Later Developments

After these discussions, in 1991 and beyond, the situation changed dramatically with the dissolution of the Soviet Union. The security landscape in Europe was fundamentally altered, leading to different priorities and decisions. By the mid-1990s, the question of broader NATO expansion became a topic of much debate, and in 1999, several former Eastern Bloc countries were admitted into NATO.

Conclusion

The quotes and the context they were spoken in reveal the complexities of diplomatic communications and the difficulties in interpreting what was meant and understood by different parties. These discussions were contingent on numerous factors, including the rapidly changing geopolitical landscape following the end of the Cold War.

Video: Blame and Causa Sui

In this segment, I ponder the interplay between blame and causa sui. I’ll discuss the implications for moral responsibility as well as legal responsibility, which are not as in sync as one might imagine.

Video: Blame & Causa Sui

Despite what the uninitiated might assume, Western legal systems have no pretensions of being about morality or justice. Legal systems are designed to maintain power structures and the status quo. They are deontological machines, making them prime targets for automation by the machine learning associated with artificial intelligence. This would also diminish the power of rhetoric over facts to some extent. But I am no legal scholar, and all of this will have to wait for another segment.

I recently shared a video on causa sui and the basics of blame and blameworthiness, so I want to intersect those topics here.

Peter Strawson suggested that for humans, blame is a reactive response—reflexive, like having your knee jerk when tapped. Essentially, his position is that if blame didn’t naturally exist, we’d have to invent it, mirroring Voltaire’s quip, ‘If God did not exist, it would be necessary to invent Him’. Of course, this is because they serve the same power-control purpose.

If God did not exist, it would be necessary to invent Him

Voltaire

To be fair, blame is closer to real than God, but the point remains. Strawson’s point is also that humans are saddled with blame, and it’s not going anywhere, no matter how nebulous it becomes in execution. It’s natural.

To me, this starts to sound suspiciously like a naturalistic fallacy. Humans seem to selectively cherry-pick which so-called natural tendencies they choose to defend. One might use nature to argue that female sexual availability begins at menarche, and yet we have decided to ignore this on the grounds of civility. It’s obvious that we could consider blame to be an animal instinct we want to domesticate away, but because it serves other purposes, per Strawson’s perspective, it’s a useful tool.
But what is the causa sui challenge? Let’s quickly recapitulate.

Causa sui argues that one cannot be the cause of oneself, ex nihilo. Since we are full products of nature and nurture, to adopt the lay parlance, any blameworthiness lies with the sources or creators. Since we are concerned with moral responsibility, we can eliminate nature forthrightly: nature may be responsible—by many estimations approximately 40 per cent responsible—but it possesses no moral agency. And if the individual is not responsible, then we are left with the environment and society, including the social environment. Of course, the environment gets off the hook in the same manner as the genetic and hereditary factors of nature.

Before we consider society, let’s regard the individual.

Although the brain-as-computer analogy is a bit facile, it’s still good enough for illustrative purposes. When you are born, your cognitive hardware is installed, as are your edge peripherals and update protocols. Any of these can become damaged through degenerative processes or external environmental factors, but since my interest is in optimistic rather than pessimistic scenarios, I’ll ignore these instances. Given that blameworthiness is directly related to presumed cognitive processing, factors that diminish these faculties mitigate blameworthiness, and factors that increase them aggravate it.

As a—quote—‘normal’ child becomes an adolescent and then an adult, the probability that it will be deemed blameworthy increases with age, ceteris paribus. For a person with cognitive deficits or conditions such as aphasia or dementia, the probability of blame assignment decreases. Even temporary impairment mitigates judgment—oh, she was drunk.

So, following the brain-as-computer analogy, your brain is a CPU with a self-updating cognitive operating system and instruction set. There is also short- and long-term memory.
In the case of cognitive deficits, one of these components might be effectively broken. The CPU might process too slowly; it might misinterpret what it receives; there may be issues with the sense organs or the nerves that transport signals.
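The brain-as-computer analogy above can be sketched as a toy model. This is purely illustrative—every name, component, threshold, and weight here is a hypothetical assumption of mine, not a claim from cognitive science:

```python
# A toy sketch of the brain-as-computer blame model described above.
# All names and weights are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Agent:
    hardware_intact: bool        # CPU, senses, and nerves all functioning
    development: float           # 0.0 (infant) to 1.0 (fully developed adult)
    moral_code_received: bool    # has society delivered the relevant instruction set?
    temporary_impairment: float  # 0.0 (sober) to 1.0 (fully impaired)


def blameworthiness(agent: Agent) -> float:
    """Return a 0-1 'probability of blame assignment' for the agent.

    Mirrors the text: broken hardware or a missing moral instruction set
    exempts categorically, while immaturity and temporary impairment
    mitigate proportionally.
    """
    if not agent.hardware_intact or not agent.moral_code_received:
        return 0.0  # categorically exempt, per the causa sui framing
    return agent.development * (1.0 - agent.temporary_impairment)


# A 'normal' adult attracts full blame; a drunk adolescent far less.
adult = Agent(True, 1.0, True, 0.0)
drunk_teen = Agent(True, 0.6, True, 0.5)
```

The point of the sketch is only that every input mitigating blame is installed or damaged from outside the agent—none is self-caused.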

I’ve got a mate who, due to medical malpractice at birth, experienced nerve damage. Although his eyes and brain are normal, his optic nerve cannot carry signals very well, effectively leaving him blind. Neither can he taste nor smell. So there’s that.

But assuming that this processing and storage hardware is intact, the causa sui constraint still applies. Let’s spend some time evaluating societal interactions.

All inputs come from society—cultures and subcultures. Misinterpreted-processing scenarios aside, if a person doesn’t receive a particular moral instruction set, that person should surely be exempt from moral blame. It may be difficult to assess whether an instruction has been input. This is a reason why children are categorically exempted: they may not have received all of the expected moral codes, the codes may not have been stored or effectively indexed, and their processing hardware is still in development—alpha code, if you will. Brain plasticity is another attribute I won’t spend much time on, but the current state of science says that the brain is still not fully developed even by age 30, so this is certainly a mitigating factor, even if we allow leeway for the causa sui argument.

I mention subculture explicitly because the predominant culture is not the only signal source. A child raised by, I don’t know, say pirates, would have an amended moral code. I am sure we can all think of subcultures that might undermine or be at odds with the dominant culture—hippies, religious cultists, militia groups, racial-purist groups, and so on.

So, a moral precept commonly held in the subdominant group may counter that of the prevailing one. An example that comes to mind is religious organisations that do not agree with human medical intervention. There have been cases where parents have allowed a child to die from an otherwise curable condition. Although the United States claims freedom of religion—a claim that is spotty at best—parents or guardians in situations like these have been convicted and sentenced for following their own moral codes. Yet these parents are as susceptible to the limitations of causa sui as the rest of us. They are not responsible for creating themselves, but moral responsibility was asserted based on the beliefs of the prevailing culture. Even outside the legal context, persons in the larger society would likely blame the parents for their neglect—though their in-group may praise them for being resolute in their righteousness. This just underscores that morality is a collection of socially constructed conventions rather than something more objective.

Returning to causa sui, let’s say a person commits an act to which society would typically assign blame. Rather than exercise some act of retributive justice—a concept with no foundation in a causa sui universe—suppose the course of action is remediation. In this case, the desired moral instruction would be delivered, thereby seemingly making the moral offender blameworthy going forward. But would they be?

Presumably (for what it’s worth), psychologists would evaluate the subject for competency in maintaining the programming. In the case of the aforementioned religious parents, they may be threatened with retribution for not abiding by the superseding rules of the prevailing power structure.

Although I might personally allow some leeway even with causa sui in full force and effect, I can’t say that I have much faith in the ability of humans to make a correct assessment. My impression is that any assessment would be one of convenience rather than something sounder.

Perhaps I’ll produce a more robust segment on retributive justice, but my feeling is that retributive justice is an area that legal systems should avoid altogether. If necessary, focus on restorative justice, rehabilitation (or ‘habilitation’ as the case might be) and quarantine models to ensure any bad actors are contained away from society. Again, this puts individuals at the mercy of cultures they find themselves a part of. I am not going to delve into this any further save to remind the listener of gang initiation schemes where a person needs to kill a member of a rival gang to become a trusted member. This is their moral code—quite at odds with the mainstream.

So there you have it. Owing to causa sui constraints, a person cannot be ultimately responsible for their actions. My primary thesis is that—apart from metaphorical equipment failures—any moral responsibility falls wholly on the society or culture. Full stop. And this isn’t as foreign as one might first think. Blame may feel natural to most people, but in an individualistic society, people are intent on finding the culprit; in collectivist cultures, any culprit might do. Perhaps I’ll share some stories in a future segment.
Meantime, what are your thoughts on moral responsibility? Can someone be ultimately responsible? Some have said that ‘ultimate responsibility’ is a philosophical red herring and that we can still hold someone responsible, even if not in the ultimate sense, which causa sui disallows. Are you more in this camp? Is this enough to mete out so-called retributive justice? For me, retributive justice is a euphemism for vengeance, and justice is a weasel word. But that’s just me, and perhaps a topic for another segment.

Are there any topics you’d like me to cover? Leave a comment below.

Hierarchies and Meritocracy

Jordan Peterson and Russell Brand chat for about 12 minutes on sex differences and personality, but that’s not where I want to focus commentary. What I will say is that Peterson continually conflates sex and gender, and I find that disconcerting coming from a research psychologist.

I’ve queued this video near the end, where Peterson delineates his conception of the political right and left (as defined by him and the US media-industrial complex).

I feel he does a good job of defining the right, and he may have even captured whatever he means by left—radical left, even—but he doesn’t capture my concerns; hence, I write.

To recap his positions:

Premises

  • We need to pursue things of value
  • Hierarchies are inevitable
  • [One has] to value things in order to move forward in life
  • [One has] to value things in order to have something valuable to produce
  • [One has] to value some things more than others or [they] don’t have anything like beauty or strength or…competence or…whatever…
  • If [one] value[s] [some domain] then [one is] going to value some [things in that domain] more than others because some are better
  • If [one] play[s] out the value in a social landscape, a hierarchy [will result]
  • A small number of people are going to be more successful than the majority
  • A very large number of people aren’t going to be successful at all

Conservative (Right)

  • Hierarchies are justifiable and necessary

Left

  • Hierarchies … stack [people] up at the bottom
  • [Hierarchies] tilt towards tyranny across time

Critique

I feel I’ve captured his position from the video transcript, but feel free to watch the clip to determine if I’ve mischaracterised his position. I have reordered some of his points for readability and for a more ordered response on my part.

To be fair, I feel his delivery is confused and the message becomes ambiguous, so I may end up addressing the ‘wrong’ portion of his ambiguous statement.

We need to pursue things of value

This is sloganeering. The question is how are we defining value? Is it a shared definition? How is this value measured? How are we attributing contribution to value? And do we really need to pursue these things?

Hierarchies are inevitable

Hierarchies may be inevitable, but they are also constructed. They are not natural. They are a taxonomical function of human language. Being constructed, they can be managed. Peterson will suggest meritocracy as an organising principle, so we’ll return to that presently.

[One has] to value things in order to move forward in life

This is a particular worldview predicated on the teleological notion of progress. I’ve discussed elsewhere that all movement is not progress, and perceived progress is not necessarily progress on a global scale.

Moreover, what one values may not conform with what another values. In practice, what one values can be to the detriment of another, so how is this arbitrated or mediated?

[One has] to value things in order to have something valuable to produce

I think he is trying to put this into an economic lens, but I don’t know where he was going with this line. Perhaps it was meant to emphasise the previous point. I’ll just leave it here.

[One has] to value some things more than others or [they] don’t have anything like beauty or strength or…competence or…whatever…

This one is particularly interesting. Ostensibly, I believe he is making the claim that we force-rank individual preferences; then he provides examples of items he values: beauty, strength, competence, and whatever. Telling here is that he chooses aesthetic, unmeasurable items that are not comparable across group members and are not even stable for a particular individual. I won’t fall down the rabbit hole of preference theory, but this is a known limitation of that theory.

If [one] value[s] [some domain] then [one is] going to value some [things in that domain] more than others because some are better

We’ve already touched on most of this concept. The key term here is ‘better’. Better is typically subjective. Even in sports, where output and stats are fairly well dimensionalised, one might have to weigh the contributions of a single athlete against another with lower ‘output’ who serves as a catalyst for others. In my mental model, I am thinking of a person who has higher arbitrary stats than another on all levels versus another with (necessarily) lower stats but who elevates the performance (hence stats) of teammates. This second person would likely be undervalued (hence under-compensated) relative to the ‘star’ performer.

In other domains, such as art, academics, or even accounting, all measurement bets are off.

If [one] play[s] out the value in a social landscape, a hierarchy [will result]

Agreed, but the outcome will be based on rules—written and unwritten.

A small number of people are going to be more successful than the majority

Agreed.

A very large number of people aren’t going to be successful at all

Agreed

Conclusion

The notion of meritocracy is fraught with errors, most notably that merit can be meaningfully assessed in all but the simplest and most controlled circumstances. But societies and cultures are neither simple nor controlled; they are complex organisms. And as Daniel Kahneman notes, most merit can likely be chalked up to luck, so it’s all bullshit at the start.
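Kahneman’s luck point lends itself to a quick toy simulation. Everything here—the 50/50 skill-luck split, the population size, and the ‘top 1 per cent’ cutoff—is an arbitrary assumption for illustration, not a model of any real labour market:

```python
# A toy simulation of the claim that apparent merit is largely luck.
# All numbers are arbitrary, illustrative assumptions.
import random

random.seed(42)

N = 10_000
# Each person gets an unobservable skill and an unobservable luck draw.
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
# Measured 'success' blends the two equally (an arbitrary 50/50 split).
success = [(0.5 * skill + 0.5 * luck, skill, luck) for skill, luck in people]

# The 'stars': the top 1 per cent ranked purely by measured success.
stars = sorted(success, reverse=True)[: N // 100]
avg_skill = sum(s for _, s, _ in stars) / len(stars)
avg_luck = sum(l for _, _, l in stars) / len(stars)
```

Ranking by outcome selects for luck about as strongly as for skill—the two averages come out nearly identical—which is the sense in which measured ‘merit’ is confounded from the start.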

In the end, Peterson and people like him believe that the world works in a way that it doesn’t. They believe that thinking makes it so and that you can get an is from an ought. Almost no amount of argument will convince them otherwise. It reminds me of the time Alan Greenspan finally admitted to the US Congress that his long-held adopted worldview was patently wrong.

Video: CSPAN: Oversight and Government Reform Committee Chairman, Rep. Henry Waxman and Former Fed Chairman Alan Greenspan Testimony

WAXMAN: “You found a flaw…”

GREENSPAN: “In the reality—more in the model—that I perceived is the critical functioning structure that defines how the world works, so to speak.”

WAXMAN: “In other words, you found that your your view of the world—your ideology—was not right. It was not what it had it…”

GREENSPAN: “Precisely. No, I… That’s precisely the reason I was shocked because I have been going for 40 years or more with very considerable evidence that it was working exceptionally well.”

To paraphrase musically:

Video: Social Distortion, I Was Wrong