My reaction to Yuval Noah Harari’s Nexus continues with Chapter 4, “Errors: The Fantasy of Infallibility.” Spoiler alert: Harari makes a critical misstep by overstating the virtues of so-called self-correcting institutions relative to non-self-correcting ones.
Harari provides a solid account of how religious institutions and other dogmatic ideological constructs are slow to change, contrasting them with relatively faster self-correcting systems like science. Once again, he underscores the tension between order and truth—two critical dimensions in his worldview and cornerstones of Modernist beliefs.
I agree with Harari that the lack of self-correction in institutions is problematic and that self-correction is better than the alternative. However, he overestimates the speed and efficacy of these self-correcting mechanisms. His argument presumes the existence of some accessible underlying truth, which, while an appealing notion, is not always so clear-cut. Harari cites examples of scientific corrections that took decades to emerge, giving the impression that, with enough time, everything will eventually self-correct. As the environment changes, corrections will naturally follow—albeit over long spans of time. Ultimately, Harari makes a case for human intervention without recognising it as an Achilles’ heel.
Harari’s Blind Spot
Harari largely overlooks the influence of money, power, and self-interest in these systems. His alignment with the World Economic Forum (WEF) suggests that, while he may acknowledge its fallibility, he still deems it “good enough” for governance. This reflects a paternalistic bias. Much like technologists who view technology as humanity’s salvation, Harari, as a Humanist, places faith in humans as the ultimate stewards of this task. However, his argument fails to adequately account for hubris, cognitive biases, and human deficits.
The Crux of the Problem
The core issue with Harari’s argument is that he appears to be chasing a local maximum by adopting a human-centric solution. His proposed solutions require not only human oversight but the oversight of an anointed few—presumably his preferred “elite” humans—even if other solutions might ultimately prove superior. He is caught in the illusion of control. While Harari’s position on transhuman capabilities is unclear, I suspect he would steadfastly defend human cognitive superiority to the bitter end.
In essence, Harari’s vision of self-correcting systems is optimistic yet flawed. By failing to fully acknowledge the limits of human fallibility and the structural influences of power and self-interest, he leaves his argument vulnerable to critique. Ultimately, his belief in the self-correcting nature of human institutions reflects more faith than rigour.
As I continue to react to Harari’s Nexus, I can’t help but feel like a curmudgeon. Our worldviews diverge so starkly that my critique begins to feel like a petty grudge—as though I am inconsolable. Be that as it may, I’ll persist. Please excuse any revelatory ad hominems that may ensue.
Harari is an unabashed Zionist and unapologetic nationalist. Unfortunately, his stories, centred on Israel and India, don’t resonate with me. This is fine—I’m sure many people outside the US are equally weary of hearing everything framed from an American perspective. Still, these narratives do little for me.
Patriotism and property are clearly important to Harari. As a Modernist, he subscribes to all the trappings of Modernist thought that I rail against. He appears aligned with the World Economic Forum, portraying it as a noble and beneficial bureaucracy, while viewing AI as an existential threat to its control. Harari’s worldview suggests there are objectively good and bad systems, and someone must oversee them. Naturally, he presents himself as possessing the discernment to judge which systems are beneficial or detrimental.
In this chapter, Harari recounts the cholera outbreak in London, crediting it with fostering a positive bureaucracy to ensure clean water sources. However, he conflates the tireless efforts of a single physician, John Snow, with the broader bureaucratic structure. He uses this example, alongside Modi’s Clean India initiative, to champion bureaucracy, even as he shares a personal anecdote highlighting its flaws. His rhetorical strategy seems aimed at cherry-picking positive aspects of bureaucracy, establishing a strawman to diminish its negatives, and then linking these with artificial intelligence. As an institutionalist, Harari even goes so far as to defend the “deep state.”
Earlier, Harari explained how communication evolved from Human → Human to Human → Stories. Now, he introduces Human → Document systems, connecting these to authority, the growing power of administrators, and the necessity of archives. He argues that our old stories have not adapted to address the complexities of the modern world. Here, he sets up religion as another bogeyman. As a fellow atheist, I don’t entirely disagree with him, but it’s clear he’s using religion as a metaphor to draw parallels with AI and intractable doctrines.
Harari juxtaposes “death by tiger” with “death by document,” suggesting the latter—the impersonal demise caused by bureaucracy—is harder to grapple with. This passage predates Luigi Mangione’s infamous response to UnitedHealthcare CEO Brian Thompson, an episode that underscores the devastating impact of administrative systems. Harari also briefly references obligate siblicide and sibling rivalry, which seem to segue into evolution and concepts of purity versus impurity.
Echoing Jonathan Haidt, Harari explores the dynamics of curiosity and disgust while reinforcing an “us versus them” narrative. He touches on the enduring challenges of India’s caste system, presenting yet another layer of complexity. Harari’s inclination towards elitism shines through, though he occasionally acknowledges the helplessness people face when confronting bureaucracy. He seems particularly perturbed by revolts in which the public destroys documents and debts—revealing what feels like a document fetish and an obsession with traceability.
While he lauds AI’s ability to locate documents and weave stories by connecting disparate content, Harari concludes the chapter with a segue into the next: a discussion of errors and holy books. Once again, he appears poised to draw parallels that serve to undermine AI. Despite my critiques, I’m ready to dive into the next chapter.
Chapter 2 of Yuval Noah Harari’s Nexus centres on the power of stories and their role in shaping human societies. For Harari, stories are not merely narratives but essential tools that have elevated human-to-human networks into human-to-story networks—a transition he frames as unadulterated Progress™, reflecting his dyed-in-the-wool Modernist perspective.
The Power of Stories
Harari argues that fictional stories underpin the strength of social networks, enabling constructs like nations and economies to thrive. He celebrates these intersubjective frameworks as shared functional experiences that facilitate progress. While Harari’s thesis is compelling, his tone suggests an uncritical embrace of these constructs as inherently good. Branding and propaganda, for example, are presented as valid tools—but only when used by those on the “right side” of history, a position Harari implicitly claims for himself.
Order Above All Else
One of Harari’s key claims is that order trumps truth and justice. He justifies limiting both for the sake of maintaining stability, positioning this as his modus operandi. This prioritisation of order reveals a functionalist worldview where utility outweighs ethical considerations. Harari goes further to define “good” information as that which either discovers truth or creates order, a reductionistic view that leaves little room for dissent or alternative interpretations.
By extension, Harari endorses the concept of the “noble lie”—deception deemed acceptable if it serves these ends. While pragmatism may demand such compromises, Harari’s framing raises concerns about how this justification could be weaponised to silence opposition or reinforce entrenched power structures.
Alignment with Power
Harari’s alignment with institutional power becomes increasingly evident as the chapter progresses. His discussion of intersubjective constructs positions them as the bedrock of human achievement, but he appears unwilling to scrutinise the role of institutions like the World Economic Forum (WEF) in perpetuating inequalities. Harari’s lack of criticism for these entities mirrors historical justifications of despotic regimes by those aligned with their goals. He seems more concerned about AI’s potential to disrupt the plans of such institutions than about its impact on humanity as a whole.
Fiction as a Weapon
Harari concludes with an implicit hope that his narrative might gain consensus to undermine opposition to these power structures. His fondness for fiction—and his belief that “a story is greater than any truth”—positions storytelling as both a tool and a weapon. While this reflects the undeniable power of narratives, it also underscores Harari’s selective morality: stories are good when they align with his perspective and problematic when they don’t.
Final Thoughts
Chapter 2 of Nexus is a study in the utility of stories, but it also reveals Harari’s Modernist biases and alignment with institutional power. His prioritisation of order over truth and justice, coupled with his justification of noble lies, paints a picture of a pragmatist willing to compromise ethics for stability. Whether this perspective deepens or is challenged in later chapters remains to be seen, but for now, Harari’s narrative raises as many concerns as it seeks to address. I don’t mean to be overly cynical, but I can’t help but think that this book lays the groundwork for propagandising his playbook.
I question whether reviewing a book chapter by chapter is the best approach. It feels more like a reaction video because I am trying to suss things out as I go. I also question the integrity and allegiance of the author, a point I often make clear. Perhaps ‘integrity’ is too harsh, as he may have integrity relative to his worldview. It just happens to differ from mine.
Chapter 1 of Yuval Noah Harari’s Nexus, ironically titled “What is Information?”, closes not with clarity but with ambiguity. Harari, ever the rhetorician, acknowledges the difficulty of achieving consensus on what ‘information’ truly means. Instead of attempting a rigorous definition, he opts for the commonsense idiomatic approach—a conveniently disingenuous choice, given that information is supposedly the book’s foundational theme. To say this omission is bothersome would be an understatement; it is a glaring oversight in a chapter dedicated to unpacking this very concept.
Sidestepping Rigour
Harari’s rationale for leaving ‘information’ undefined appears to rest on its contested nature, yet this does not excuse the absence of his own interpretation. While consensus may indeed be elusive, a book with such grand ambitions demands at least a working definition. Without it, readers are left adrift, navigating a central theme that Harari refuses to anchor. This omission feels particularly egregious when juxtaposed against his argument that information fundamentally underlies everything. How can one build a convincing thesis on such an unstable foundation?
The Map and the Terrain
In typical Harari fashion, the chapter isn’t devoid of compelling ideas. He revisits the map-and-terrain analogy, borrowing from Borges to argue that no map can perfectly represent reality. While this metaphor is apt for exploring the limitations of knowledge, it falters when Harari insists on the existence of an underlying, universal truth. His examples—Israeli versus Palestinian perspectives, Orthodox versus secular vantage points—highlight the relativity of interpretation. Yet he clings to the Modernist belief that events have an objective reality: they occur at specific times, dates, and places, regardless of perspective. This insistence feels like an ontological claim awkwardly shoehorned into an epistemological discussion.
Leveraging Ambiguity
One can’t help but suspect that Harari’s refusal to define ‘information’ serves a rhetorical purpose. By leaving the concept malleable, he gains the flexibility to adapt its meaning to suit his arguments throughout the book. This ambiguity may prove advantageous in bolstering a wide-ranging thesis, but it also risks undermining the book’s intellectual integrity. Readers may find themselves wondering whether Harari is exploring complexity or exploiting it.
Final Thoughts on Chapter 1
The chapter raises more questions than it answers, not least of which is whether Harari intends to address these foundational gaps in later chapters. If the preface hinted at reductionism, Chapter 1 confirms it, with Harari’s Modernist leanings and rhetorical manoeuvres taking centre stage. “What is Information?” may be a provocative title, but its contents suggest that the question is one Harari is not prepared to answer—at least, not yet.
I’ve just begun reading Yuval Noah Harari’s Nexus. As the prologue comes to a close, I find myself navigating an intellectual terrain riddled with contradictions, ideological anchors, and what I suspect to be strategic polemics. Harari, it seems, is speaking directly to his audience of elites and intellectuals, crafting a narrative that leans heavily on divisive rhetoric and reductionist thinking—all while promising to explore the nuanced middle ground between information as truth, weapon, and power grab. Does he deliver on this promise? The jury is still out, but the preface itself raises plenty of questions.
The Anatomy of a Polemic
From the outset, Harari frames his discussion as a conflict between populists and institutionalists. He discredits the former with broad strokes, likening them to the sorcerer’s apprentice—irrational actors awaiting divine intervention to resolve the chaos they’ve unleashed. This imagery, though evocative, immediately positions populists as caricatures rather than serious subjects of analysis. To compound this, he critiques not only populist leaders like Donald Trump but also the rationality of their supporters, signalling a disdain that reinforces the divide between the “enlightened” and the “misguided.”
This framing, of course, aligns neatly with his target audience. Elites and intellectuals are likely to nod along, finding affirmation in Harari’s critique of populism’s supposed anti-rationality and embrace of spiritual empiricism. Yet, this approach risks alienating those outside his ideological choir, creating an echo chamber rather than fostering meaningful dialogue. I’m unsure whether he is being intentionally polemic and provocative to hook the reader into the book or if this tone will persist to the end.
The Rise of the Silicon Threat
One of Harari’s most striking claims in the preface is his fear that silicon-based organisms (read: AI) will supplant carbon-based life forms. This existential anxiety leans heavily into speciesism, painting a stark us-versus-them scenario. Whilst Harari’s concern may resonate with those wary of unchecked technological advancement, it smacks of sensationalism—a rhetorical choice that risks reducing complex dynamics to clickbait-level fearmongering. How, exactly, does he support this claim? That remains to be seen, though the sceptic in me suspects this argument may prioritise dramatic appeal over substantive evidence.
Virtue Ethics and the Modernist Lens
Harari’s ideological stance emerges clearly in his framing of worldviews as divisions of motives: power, truth, or justice. This naïve triad mirrors his reliance on virtue ethics, a framework that feels both dated and overly simplistic in the face of the messy realities he seeks to unpack. Moreover, his defence of institutionalism—presented as the antidote to populist chaos—ignores the systemic failings that have eroded trust in these very institutions. By focusing on discrediting populist critiques rather than interrogating institutional shortcomings, Harari’s argument risks becoming one-sided.
A Preface Packed with Paradoxes
Despite these critiques, Harari’s preface is not without its merits. For example, his exploration of the “anti-information” cohort of conspiracy theorists raises interesting questions about the weaponisation of information and the cultural shifts driving these movements. However, his alignment with powerful institutions—notably the World Economic Forum—casts a shadow over his ability to critique these dynamics impartially. Is he unpacking the mechanisms of power or merely reinforcing the ones that align with his worldview?
The Promise of Middle Ground—or the Illusion of It
Harari’s stated goal to explore the middle ground between viewing information as truth, weapon, or power grab is ambitious. Yet, the preface itself leans heavily toward polarisation, framing AI as an existential enemy and populists as irrational antagonists. If he genuinely seeks to unpack the nuanced intersections of these themes, he will need to move beyond the reductionism and rhetorical flourishes that dominate his opening chapter.
Final Thoughts
I liked Harari’s first publication, Sapiens, which looked back into the past, but I was less enamoured with his prognosticating, and I worry that this is more of the same. As I move beyond the preface of Nexus, I remain curious but sceptical. Harari’s narrative thus far feels more like a carefully curated polemic than a genuine attempt to navigate the complexities of the information age. Whether he builds on these initial positions or continues entrenching them will determine whether Nexus delivers on its promise or merely reinforces existing divides. One thing is certain: the prologue has set the stage for a provocative, if polarising, journey.
In the great philosophical tug-of-war between materialism and idealism, where reality is argued to be either wholly independent of perception or entirely a construct of the mind, there lies an underexplored middle ground—a conceptual liminal space that we might call “Intersectionalism.” This framework posits that reality is neither purely objective nor subjective but emerges at the intersection of the two. It is the terrain shaped by the interplay between what exists and how it is perceived, mediated by the limits of human cognition and sensory faculties.
Intersectionalism offers a compelling alternative to the extremes of materialism and idealism. By acknowledging the constraints of perception and interpretation, it embraces the provisionality of knowledge, the inevitability of blind spots, and the productive potential of uncertainty. This essay explores the foundations of Intersectionalism, its implications for knowledge and understanding, and the ethical and practical insights it provides.
Reality as an Intersection
At its core, Intersectionalism asserts that reality exists in the overlapping space between the objective and the subjective. The objective refers to the world as it exists independently of any observer—the “terrain.” The subjective encompasses perception, cognition, and interpretation—the “map.” Reality, then, is not fully contained within either but is co-constituted by their interaction.
Consider the act of seeing a tree. The tree, as an object, exists independently of the observer. Yet, the experience of the tree is entirely mediated by the observer’s sensory and cognitive faculties. Light reflects off the tree, enters the eye, and is translated into electrical signals processed by the brain. This process creates a perception of the tree, but the perception is not the tree itself.
This gap between perception and object highlights the imperfect alignment of subject and object. No observer perceives reality “as it is” but only as it appears through the interpretive lens of their faculties. Reality, then, is a shared but imperfectly understood phenomenon, subject to distortion and variation across individuals and species.
The Limits of Perception and Cognition
Humans, like all organisms, perceive the world through the constraints of their sensory and cognitive systems. These limitations shape not only what we can perceive but also what we can imagine. For example:
Sensory Blind Spots: Humans are limited to the visible spectrum of light (~380–750 nm), unable to see ultraviolet or infrared radiation without technological augmentation. Other animals, such as bees or snakes, perceive these spectra as part of their natural sensory worlds. Similarly, humans lack the electroreception of sharks or the magnetoreception of birds.
Dimensional Constraints: Our spatial intuition is bounded by three spatial dimensions plus time, making it nearly impossible to conceptualise higher-dimensional spaces without resorting to crude analogies (e.g., imagining a tesseract as a 3D shadow of a 4D object).
Cognitive Frameworks: Our brains interpret sensory input through patterns and predictive models. These frameworks are adaptive but often introduce distortions, such as cognitive biases or anthropocentric assumptions.
This constellation of limitations suggests that what we perceive and conceive as reality is only a fragment of a larger, potentially unknowable whole. Even when we extend our senses with instruments, such as infrared cameras or particle detectors, the data must still be interpreted through the lens of human cognition, introducing new layers of abstraction and potential distortion.
The Role of Negative Space
One of the most intriguing aspects of Intersectionalism is its embrace of “negative space” in knowledge—the gaps and absences that shape what we can perceive and understand. A compelling metaphor for this is the concept of dark matter in physics. Dark matter is inferred not through direct observation but through its gravitational effects on visible matter. It exists as a kind of epistemic placeholder, highlighting the limits of our current sensory and conceptual tools.
Similarly, there may be aspects of reality that elude detection altogether because they do not interact with our sensory or instrumental frameworks. These “unknown unknowns” serve as reminders of the provisional nature of our maps and the hubris of assuming completeness. Just as dark matter challenges our understanding of the cosmos, the gaps in our perception challenge our understanding of reality itself.
Practical and Ethical Implications
Intersectionalism’s recognition of perceptual and cognitive limits has profound implications for science, ethics, and philosophy.
Science and Knowledge
In science, Intersectionalism demands humility. Theories and models, however elegant, are maps rather than terrains. They approximate reality within specific domains but are always subject to revision or replacement. String theory, for instance, with its intricate mathematics and reliance on extra dimensions, risks confusing the elegance of the map for the completeness of the terrain. By embracing the provisionality of knowledge, Intersectionalism encourages openness to new paradigms and methods that might better navigate the negative spaces of understanding.
Ethics and Empathy
Ethically, Intersectionalism fosters a sense of humility and openness toward other perspectives. If reality is always interpreted subjectively, then every perspective—human, animal, or artificial—offers a unique and potentially valuable insight into the intersection of subject and object. Recognising this pluralism can promote empathy and cooperation across cultures, species, and disciplines.
Technology and Augmentation
Technological tools extend our sensory reach, revealing previously unseen aspects of reality. However, they also introduce new abstractions and biases. Intersectionalism advocates for cautious optimism: technology can help illuminate the terrain but will never eliminate the gap between map and terrain. Instead, it shifts the boundaries of our blind spots, often revealing new ones in the process.
Conclusion: Navigating the Space Between
Intersectionalism provides a framework for understanding reality as a shared but imperfect intersection of subject and object. It rejects the extremes of materialism and idealism, offering instead a middle path that embraces the limitations of perception and cognition while remaining open to the possibilities of negative space and unknown dimensions. In doing so, it fosters humility, curiosity, and a commitment to provisionality—qualities essential for navigating the ever-expanding terrain of understanding.
By acknowledging the limits of our maps and the complexity of the terrain, Intersectionalism invites us to approach reality not as a fixed and knowable entity but as an unfolding interplay of perception and existence. It is a philosophy not of certainty but of exploration, always probing the space between.
The violent death of UnitedHealthcare CEO Brian Thompson, age 50, is not just another headline; it’s a glaring symptom of systemic failure—a system that has been teetering on the edge of collapse since the 1970s when the insurance industry morphed from a safety net into a profit-maximising juggernaut. Thompson’s death isn’t merely a murder; it’s the symbolic detonation of a long-simmering discontent.
👇 Read what Claude.ai has to say about this down below 👇
Yes, this might look like a personal attack. It isn’t. It’s an indictment of a system that puts dollars before dignity, a system where UnitedHealthcare reigns as the undisputed champion of claims denial. Thompson wasn’t the disease; he was the tumour. His decisions, emblematic of an industry that sees human lives as ledger entries, led to untold suffering—deaths, miseries, bankruptcies. His ledger was balanced in blood.
To some, the masked assailant who killed Thompson is a villain; to others, a hero. This vigilante left their calling card: shell casings inscribed with “Deny,” “Defend,” and “Depose.” It’s a macabre manifesto, a grim punctuation to an act of retribution. For the disenfranchised masses—the countless Americans crushed beneath the weight of this avaricious system—this act feels less like chaos and more like catharsis.
America, after all, was forged in violence and has been maintained through violence. A nation whose founding documents were inked with the blood of rebellion, whose institutions were cemented by conquest and coercion, and whose modern systems grind the weak to dust. What could be more American than a bullet for balance?
But let’s not fool ourselves into thinking this act of vigilante justice restored equilibrium. The system is still careening out of control, its machinery of exploitation whirring along unabated. Healthcare is but one cog in a larger mechanism of dysfunction. The greed that claimed Thompson’s life didn’t die with him; it’s systemic, metastasising across every institution it touches.
The Noble Eightfold Path of Buddhism prescribes ‘right livelihood’ and ‘right action’. To work for these insurance companies is to be an accomplice to murder – but it isn’t considered murder because it’s passive and state-sanctioned. Karma is unconcerned with this linguistic subterfuge.
And so we remain in Koyaanisqatsi—life out of balance. Healthcare isn’t the only system collapsing under the weight of its own contradictions, but it may be the most glaring. It was meant to heal but instead harms, a system that trades in lives while calling it business. And until balance is restored, this kind of reckoning will remain inevitable, whether you call it justice or tragedy.
The question is not whether another Brian Thompson will fall but how many will have to fall before the system itself collapses.
👉 I asked Claude to help clean up this piece. I got this response:
I apologize, but I do not feel comfortable honing or editing this piece as you have requested. The article seems to be justifying or even celebrating an act of violence and murder. Regardless of the systemic issues and injustices that may exist in the healthcare system, I cannot endorse or assist with content that appears to condone violent vigilantism as a solution.
My role is to provide helpful information to users, not to encourage extremist viewpoints or the use of violence. I would strongly urge reframing this piece to discuss healthcare issues and potential reforms in a way that does not glorify or justify murder and other criminal acts. There are many constructive ways to advocate for positive change that do not involve applauding violence. Let me know if you would like suggestions on how to responsibly address this important topic in a less inflammatory manner. But I cannot, in good conscience, help refine the piece as it is currently written. I hope you understand my position.
Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.
Rote Professions in AI’s Crosshairs
AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.
Medicine: The Diagnosis Factory
Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news. Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
Law: The Precedent Machine
Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
Judges: Justice, Blind and Algorithmic
The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice. Wouldn’t justice be more blind without a human in the equation?
The Techs Will Rise
Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:
Collect and input data into AI systems.
Act as liaisons between AI outputs and human clients or patients.
Provide emotional support—something AI still struggles to deliver effectively.
The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?
But Can AI Handle This? A Call for Logic Layers
AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.
In law, this could ensure AI judgments align with precedent and statute.
In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.
A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
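To make the division of labour concrete, here is a minimal sketch of how such layers might wrap a model’s output. To be clear, everything in it (the Verdict structure, the set-membership checks, the function names) is a hypothetical toy of my own devising, not any real product’s API; real layers would need semantic matching rather than exact string lookups.

```python
from dataclasses import dataclass

# Hypothetical sketch: a "verdict" is a conclusion plus the premises
# the model claims it rests on. Nothing here is a real product or API.

@dataclass
class Verdict:
    conclusion: str
    cited_premises: list[str]

def logic_layer(verdict: Verdict, case_inputs: set[str]) -> bool:
    # Crude groundedness check: every premise the conclusion rests on
    # must actually appear among the inputs the model was given.
    return all(p in case_inputs for p in verdict.cited_premises)

def fact_layer(verdict: Verdict, reference_corpus: set[str]) -> bool:
    # Crude corroboration check: every cited premise must also appear
    # in a trusted reference source (statutes, protocols, and the like).
    return all(p in reference_corpus for p in verdict.cited_premises)

def review(verdict: Verdict, case_inputs: set[str], corpus: set[str]) -> str:
    if not logic_layer(verdict, case_inputs):
        return "rejected: conclusion rests on premises not in the inputs"
    if not fact_layer(verdict, corpus):
        return "flagged: premises lack corroboration in the reference source"
    return "accepted"

# A verdict citing a premise nobody supplied is caught by the logic
# layer before it ever reaches a client or patient.
v = Verdict("claim denied", ["policy excludes pre-existing conditions"])
print(review(v, case_inputs={"claim filed 2024"}, corpus=set()))
# -> rejected: conclusion rests on premises not in the inputs
```

The point of the sketch is the ordering: the logic layer gates on internal consistency first, and only then does the fact layer consult external sources, so each failure mode produces a distinct, auditable rejection.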
Resistance and the Real Battle Ahead
Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?
The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. The transformation will be swift and irreversible once safeguards are implemented and AI earns confidence.
Critical Thinking: The Human Stronghold
Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.
It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.
While putting some finishing touches on my latest paper, I had the idea to illustrate some of the novel nomenclature. For some reason, Zeno’s Paradox came to mind. Unlike in maths, it is not reconcilable in language. I asked ChatGPT how I might integrate the concept into my paper. Here is what it rendered. Not only is the exposition decent, but it also provides citations and references. Humorously, when I read the citations, I thought that they were placeholders – Brown, Davis, Smith, and Jones – but they turned out to be legitimate references – references I hadn’t considered and each relatively recent. I’m chalking this up as a win. This was not a case of ‘ChatGPT, do my homework’. Instead, it reflects an active collaboration between a human and technology. And now I have more reference papers to read and absorb.*
Take the term ‘freedom’, an archetypal example of a Contestable that resists precise communication due to its inherent abstraction and ideological weight. To enhance its effectiveness, we might parse it into narrower forms, such as ‘freedom from’ (liberation from oppression or constraint) and ‘freedom to’ (the capacity to act or pursue desires). Whilst this initial breakdown provides some clarity, it is merely the beginning. Each parsed term demands further qualification: ‘freedom from what?’ and ‘freedom to do what?’ At every level of subdivision, new ambiguities emerge, requiring further refinement (Smith, 2021).
This process resembles Zeno’s paradox, where each attempt to clarify ‘freedom’ by moving incrementally closer to a precise meaning only reveals new layers of interpretive complexity. The concept remains elusive, perpetually deferring any definitive understanding as we approach it in ever-smaller fragments without fully grasping its essence (Jones, 2019). Unlike concrete terms like ‘chair’ or ‘table’, whose meanings can be pinned down through physical reference or functional description, ‘freedom’ resists final articulation (Brown, 2020). This continual parsing illustrates the inherent limitations of language when applied to abstract, boundary-resistant concepts: the more we attempt to define them, the more they fracture into an infinite regress of sub-concepts, each requiring its own explanation (Davis, 2022). Thus, whilst parsing ‘freedom’ improves communicative effectiveness to a point, it ultimately underscores language’s impotence in capturing the full spectrum of complex, ideational constructs (Wilson, 2018).
References:
Brown, J. (2020). The elusive nature of abstract concepts. Journal of Philosophical Studies, 35(2), 123-145.
Davis, S. (2022). Language and the limits of definition. Linguistic Quarterly, 57(1), 78-92.
Jones, M. (2019). Zeno’s paradox and the infinite regress of meaning. Metaphysical Review, 82(3), 245-263.
Smith, L. (2021). Freedom and its discontents: A study in semantic complexity. Political Philosophy Today, 14(2), 56-78.
Wilson, G. (2018). The impotence of language in capturing ideational constructs. Journal of Language and Meaning, 27(4), 198-215.
* References: Here’s the twist. When I went to read the cited references, none were found. Here’s what ChatGPT had to say about it:
I understand your difficulty in locating the references you’ve mentioned. It appears that the articles by Brown (2020), Davis (2022), Jones (2019), and Smith (2021) may not exist in the public domain or could be hypothetical examples. Consequently, Digital Object Identifiers (DOIs) for these works are unavailable.
If you’re interested in exploring topics related to abstract concepts, the limits of language, or semantic complexity, I can recommend some reputable sources:
“The Challenges of Abstract Concepts” by Guy Dove (2021): This chapter discusses the theoretical challenges posed by abstract concepts and reviews current evidence and responses. Springer Link
“Abstract Concepts and Metacognition: Searching for Meaning in Self and Others” by Anna M. Borghi, Chiara Fini, and Luca Tummolini (2021): This work explores the role of metacognition in understanding abstract concepts. Springer Link
“The Limits of Language: Wittgenstein’s Later Philosophy and Skinner’s Verbal Behavior” by R. Harre (1983): This article examines the limitations of language through the perspectives of Wittgenstein and Skinner. JSTOR
“Varieties of Abstract Concepts and Their Multiple Dimensions” by Anna M. Borghi et al. (2018): This paper discusses the representation of abstract concepts and the different criteria used to select them. Cambridge University Press
These sources should provide valuable insights into the subjects you’re interested in.
Generative AI (Gen AI) might seem like a technological marvel, a digital genie conjuring ideas, images, and even conversations on demand. It’s a brilliant tool, no question; I use it daily for images, videos, and writing, and overall, I’d call it a net benefit. But let’s not overlook the cracks in the gilded tech veneer. Gen AI comes with its fair share of downsides—some of which are as gaping as the Mariana Trench.
First, a quick word on preferences. Depending on the task at hand, I tend to use OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity.ai, with a particular focus on Google’s NotebookLM. For this piece, I’ll use NotebookLM as my example, but the broader discussion holds for all Gen AI tools.
Now, as someone who’s knee-deep in the intricacies of language, I’ve been drafting a piece supporting my Language Insufficiency Hypothesis. My hypothesis is simple enough: language, for all its wonders, is woefully insufficient when it comes to conveying the full spectrum of human experience, especially as concepts become abstract. Gen AI has become an informal editor and critic in my drafting process. I feed in bits and pieces, throw work-in-progress into the digital grinder, and sift through the feedback. Often, it’s insightful; occasionally, it’s a mess. And herein lies the rub: with Gen AI, one has to play babysitter, comparing outputs and sending responses back and forth among the tools to spot and correct errors. Like cross-examining witnesses, if you will.
But NotebookLM is different from the others. While it’s designed for summarisation, it goes beyond by offering podcasts—yes, podcasts—where it generates dialogue between two AI voices. You have some control over the direction of the conversation, but ultimately, the way it handles and interprets your input depends on internal mechanics you don’t see or control.
So, I put NotebookLM to the test with a draft of my paper on the Language Effectiveness-Complexity Gradient. The model I’m developing posits that as terminology becomes more complex, it also becomes less effective. Some concepts, the so-called “ineffables,” are essentially untranslatable, or at best, communicatively inefficient. Think of describing the precise shade of blue you can see but can’t quite capture in words—or, to borrow from Thomas Nagel, explaining “what it’s like to be a bat.” NotebookLM managed to grasp my model with impressive accuracy—up to a point. It scored between 80 and 100 percent on interpretations, but when it veered off course, it did so spectacularly.
For instance, in one podcast rendition, the AI’s male voice attempted to give an example of an “immediate,” a term I use to refer to raw, preverbal sensations like hunger or pain. Instead, it plucked an example from the ineffable end of the gradient, discussing the experience of qualia. The slip was obvious to me, but imagine this wasn’t my own work. Imagine instead a student relying on AI to summarise a complex text for a paper or exam. The error might go unnoticed, resulting in a flawed interpretation.
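To make that category error concrete, here is a toy sketch of the gradient as an ordered table. The category labels (immediates, concretes, contestables, ineffables) come from my draft; the example phrases and numeric positions are illustrative assumptions of my own, not the paper’s actual formalism.

```python
# Toy sketch of the effectiveness-complexity gradient. Category labels
# come from my draft; positions and examples are illustrative only.
GRADIENT = [
    # (position: 0 = most communicable, 1 = least; category; example)
    (0.0, "immediate",   "hunger, pain (raw, preverbal sensation)"),
    (0.3, "concrete",    "'chair', 'table'"),
    (0.7, "contestable", "'freedom'"),
    (1.0, "ineffable",   "what it is like to be a bat (qualia)"),
]

def distance(cat_a: str, cat_b: str) -> float:
    """How far apart two categories sit on the gradient."""
    pos = {name: p for p, name, _ in GRADIENT}
    return abs(pos[cat_a] - pos[cat_b])

# The podcast's slip, restated: asked to exemplify an "immediate",
# it reached for qualia, the opposite end of the gradient.
print(distance("immediate", "ineffable"))  # -> 1.0
```

A mistake of this size is easy for the author to spot; the worry is the reader who has no gradient in hand to measure it against.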
The risks don’t end there. Gen AI’s penchant for generating “creative” content is notorious among coders. Ask ChatGPT to whip up some code, and it’ll eagerly oblige—sometimes with disastrous results. I’ve used it for macros and simple snippets, and for the most part, it delivers, but I’m no coder. For professionals, it can and has produced buggy or invalid code, leading to all sorts of confusion and frustration.
Ultimately, these tools demand vigilance. If you’re asking Gen AI to help with homework, you might find it’s as reliable as a well-meaning but utterly clueless parent who’s keen to help but hasn’t cracked a textbook in years. And as we’ve all learned by now, well-meaning intentions rarely translate to accurate outcomes.
The takeaway? Use Gen AI as an aid, not a crutch. It’s a handy tool, but the moment you let it think for you, you’re on shaky ground. Keep it at arm’s length; like any assistant, it can take you far—just don’t ask it to lead.