This will be hilarious to approximately no one. I meant to ask ChatGPT o3-mini-high to generate a dummy record. My compromised motor skills and concomitant keyboard typing skills rendered this as funny. Except for the redacted bits in bold, this is the code it rendered. Pay special attention to the query’s input variables, highlighted in post-production with 🤔 emojis.
Let’s add a bit to write a funny record per the schema.
import mysql.connector
import logging
import uuid

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


def test_mysql_connection():
    """Test MySQL connection and write a funny record to the database."""
    logger.info("Testing MySQL connection...")
    try:
        logger.info("Connecting to MySQL...")
        connection = mysql.connector.connect(
            host='127.0.0.1',
            user='[user]',
            password='[password]',
            database='[database]',
            port=3306
        )
        if connection.is_connected():
            db_info = connection.get_server_info()
            logger.info(f"Connected to MySQL Server version {db_info}")
            cursor = connection.cursor()
            cursor.execute("SELECT DATABASE();")
            record = cursor.fetchone()
            logger.info(f"Connected to database: {record[0]}")

            # Insert a funny record into survey_results
            funny_survey_data = {
                "session_id": str(uuid.uuid4()),
                "q1_response": 1,
                "q2_response": 2,
                "q3_response": 3,
                "q4_response": 4,
                "q5_response": 5,
                "q6_response": 6,
                "n1": 42,  # 🤔
                "n2": 69,  # 🤔
                "n3": 420,  # 🤔
                "plot_x": 3.14,  # 🤔
                "plot_y": 2.71,  # 🤔
                "browser": "FunnyBrowser 9000",
                "region": "JokeRegion",
                "source": "comedy",
                "hash_email_session": "f00b4r-hash"  # 🤔
            }
            query = """INSERT INTO survey_results
                (session_id, q1_response, q2_response, q3_response, q4_response, q5_response, q6_response,
                 n1, n2, n3, plot_x, plot_y, browser, region, source, hash_email_session)
                VALUES (%(session_id)s, %(q1_response)s, %(q2_response)s, %(q3_response)s, %(q4_response)s,
                        %(q5_response)s, %(q6_response)s, %(n1)s, %(n2)s, %(n3)s,
                        %(plot_x)s, %(plot_y)s, %(browser)s, %(region)s, %(source)s, %(hash_email_session)s)
            """
            logger.info("Inserting funny survey record...")
            cursor.execute(query, funny_survey_data)
            connection.commit()
            logger.info(f"Funny survey record inserted with ID: {cursor.lastrowid}")
    except mysql.connector.Error as e:
        logger.error(f"Error during MySQL operation: {e}")
    finally:
        if 'cursor' in locals() and cursor:
            cursor.close()
        if 'connection' in locals() and connection.is_connected():
            connection.close()
            logger.info("MySQL connection closed.")


if __name__ == "__main__":
    test_mysql_connection()
As I continue reading Chapter 6 of Yuval Noah Harari’s Nexus, I find myself wrestling with the masterful misdirection and rhetorical strategies he employs. A critical reader can discern the writing on the wall, but his choir of loyal readers likely consumes his narrative like red meat, uncritically savouring its surface-level appeal.
Social Media and Misinformation
Harari begins by addressing the role of social media in spreading disinformation and misinformation, particularly singling out Facebook. From there, he pivots to QAnon conspiracy theories. While these topics are undeniably relevant, Harari’s framing feels more like an indictment of the masses than a nuanced critique of the systemic factors enabling these phenomena.
The Voter Knows Best?
Harari leans heavily on platitudes like “the customer is always right” and “the voters know best.” These truisms may resonate with an indoctrinated audience but fail to hold up under scrutiny. The powers that be—whether governments or corporations—exploit this mentality, much like religious institutions exploit faith. Harari’s concern seems rooted in the fear that AI could outmanoeuvre these same masses, creating competition for global entities like the World Economic Forum (WEF), which, in his view, aims to remain unchallenged.
Taxation, Nexus, and the Future of Nation-States
Harari’s discussion of taxation and the nexus between power and information is intriguing, but it misses a larger point. Nation-states, as I see it, are becoming anachronisms, unable to defend themselves against the rise of technocratic forces. Taxation, once a cornerstone of state power, may soon be irrelevant as the global landscape shifts toward what I call Feudalism 2.0—a hierarchy dominated by transnational actors like the WEF.
Harari poorly frames a Uruguayan taxation dilemma, reducing it to a simplistic trade-off between information and power without addressing the broader implications. This shallow analysis leaves much to be desired.
Determinism and Misdirection
Next, Harari mischaracterises the philosophical concept of determinism, likely to mislead readers who aren’t well-versed in its nuances. He spins a cautionary tale based on this revised definition, which may serve his rhetorical goals but detracts from the intellectual integrity of his argument.
Setting the Stage
Harari ends the chapter with a statement about the importance of time and place in history, using it as a setup to provoke a sense of urgency. While this is a classic rhetorical device, it feels hollow without substantive backing.
Final Reflections
Many Modernists may embrace Harari’s narrative uncritically, but for me, the veneer is thin and riddled with holes. His analysis fails to engage with more profound critiques of power and governance, relying instead on cherry-picked anecdotes and oversimplified arguments. The chapter’s focus on social media, AI, and taxation could have been fertile ground for profound insights, but Harari instead opts for rhetorical flourish over rigorous examination. Still, I’ll press on and see what the next chapter holds.
My reaction to Yuval Noah Harari’s Nexus continues with Chapter 4, “Errors: The Fantasy of Infallibility.” Spoiler alert: Harari makes a critical misstep by overly defending so-called self-correcting institutions compared to non-self-correcting ones.
Harari provides a solid account of how religious institutions and other dogmatic ideological constructs are slow to change, contrasting them with relatively faster self-correcting systems like science. Once again, he underscores the tension between order and truth—two critical dimensions in his worldview and cornerstones of Modernist beliefs.
Audio: Podcast conversation on this topic.
I agree with Harari that the lack of self-correction in institutions is problematic and that self-correction is better than the alternative. However, he overestimates the speed and efficacy of these self-correcting mechanisms. His argument presumes the existence of some accessible underlying truth, which, while an appealing notion, is not always so clear-cut. Harari cites examples of scientific corrections that took decades to emerge, giving the impression that, with enough time, everything will eventually self-correct. As the environment changes, corrections will naturally follow—albeit over long spans of time. Ultimately, Harari makes a case for human intervention without recognising it as an Achilles’ heel.
Harari’s Blind Spot
Harari largely overlooks the influence of money, power, and self-interest in these systems. His alignment with the World Economic Forum (WEF) suggests that, while he may acknowledge its fallibility, he still deems it “good enough” for governance. This reflects a paternalistic bias. Much like technologists who view technology as humanity’s salvation, Harari, as a Humanist, places faith in humans as the ultimate stewards of this task. However, his argument fails to adequately account for hubris, cognitive biases, and human deficits.
The Crux of the Problem
The core issue with Harari’s argument is that he appears to be chasing a local maximum by adopting a human-centric solution. His proposed solutions require not only human oversight but the oversight of an anointed few—presumably his preferred “elite” humans—even if other solutions might ultimately prove superior. He is caught in the illusion of control. While Harari’s position on transhuman capabilities is unclear, I suspect he would steadfastly defend human cognitive superiority to the bitter end.
In essence, Harari’s vision of self-correcting systems is optimistic yet flawed. By failing to fully acknowledge the limits of human fallibility and the structural influences of power and self-interest, he leaves his argument vulnerable to critique. Ultimately, his belief in the self-correcting nature of human institutions reflects more faith than rigour.
I’ve just begun reading Yuval Noah Harari’s Nexus. As the prologue comes to a close, I find myself navigating an intellectual terrain riddled with contradictions, ideological anchors, and what I suspect to be strategic polemics. Harari, it seems, is speaking directly to his audience of elites and intellectuals, crafting a narrative that leans heavily on divisive rhetoric and reductionist thinking—all while promising to explore the nuanced middle ground between information as truth, weapon, and power grab. Does he deliver on this promise? The jury is still out, but the preface itself raises plenty of questions.
Audio: Podcast reflecting on this content.
The Anatomy of a Polemic
From the outset, Harari frames his discussion as a conflict between populists and institutionalists. He discredits the former with broad strokes, likening them to the sorcerer’s apprentice—irrational actors awaiting divine intervention to resolve the chaos they’ve unleashed. This imagery, though evocative, immediately positions populists as caricatures rather than serious subjects of analysis. To compound this, he critiques not only populist leaders like Donald Trump but also the rationality of their supporters, signalling a disdain that reinforces the divide between the “enlightened” and the “misguided.”
This framing, of course, aligns neatly with his target audience. Elites and intellectuals are likely to nod along, finding affirmation in Harari’s critique of populism’s supposed anti-rationality and embrace of spiritual empiricism. Yet, this approach risks alienating those outside his ideological choir, creating an echo chamber rather than fostering meaningful dialogue. I’m unsure whether he is being intentionally polemic and provocative to hook the reader into the book or if this tone will persist to the end.
The Rise of the Silicon Threat
One of Harari’s most striking claims in the preface is his fear that silicon-based organisms (read: AI) will supplant carbon-based life forms. This existential anxiety leans heavily into speciesism, painting a stark us-versus-them scenario. Whilst Harari’s concern may resonate with those wary of unchecked technological advancement, it smacks of sensationalism—a rhetorical choice that risks reducing complex dynamics to clickbait-level fearmongering. How, exactly, does he support this claim? That remains to be seen, though the sceptic in me suspects this argument may prioritise dramatic appeal over substantive evidence.
Virtue Ethics and the Modernist Lens
Harari’s ideological stance emerges clearly in his framing of worldviews as divisions of motives: power, truth, or justice. This naïve triad mirrors his reliance on virtue ethics, a framework that feels both dated and overly simplistic in the face of the messy realities he seeks to unpack. Moreover, his defence of institutionalism—presented as the antidote to populist chaos—ignores the systemic failings that have eroded trust in these very institutions. By focusing on discrediting populist critiques rather than interrogating institutional shortcomings, Harari’s argument risks becoming one-sided.
A Preface Packed with Paradoxes
Despite these critiques, Harari’s preface is not without its merits. For example, his exploration of the “anti-information” cohort of conspiracy theorists raises interesting questions about the weaponisation of information and the cultural shifts driving these movements. However, his alignment with power concerns—notably the World Economic Forum—casts a shadow over his ability to critique these dynamics impartially. Is he unpacking the mechanisms of power or merely reinforcing the ones that align with his worldview?
The Promise of Middle Ground—or the Illusion of It
Harari’s stated goal to explore the middle ground between viewing information as truth, weapon, or power grab is ambitious. Yet, the preface itself leans heavily toward polarisation, framing AI as an existential enemy and populists as irrational antagonists. If he genuinely seeks to unpack the nuanced intersections of these themes, he will need to move beyond the reductionism and rhetorical flourishes that dominate his opening chapter.
Final Thoughts
I liked Harari’s first publication, Sapiens, which looked back into the past, but I was less enamoured with his prognosticating, and I worry that this is more of the same. As I move beyond the preface of Nexus, I remain curious but sceptical. Harari’s narrative thus far feels more like a carefully curated polemic than a genuine attempt to navigate the complexities of the information age. Whether he builds on these initial positions or continues entrenching them will determine whether Nexus delivers on its promise or merely reinforces existing divides. One thing is certain: the prologue has set the stage for a provocative, if polarising, journey.
Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.
Rote Professions in AI’s Crosshairs
AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.
Medicine: The Diagnosis Factory
Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news. Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
Law: The Precedent Machine
Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
Judges: Justice, Blind and Algorithmic
The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice. Wouldn’t justice be more blind without a human in the equation?
The Techs Will Rise
Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:
Collect and input data into AI systems.
Act as liaisons between AI outputs and human clients or patients.
Provide emotional support—something AI still struggles to deliver effectively.
The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?
But Can AI Handle This? A Call for Logic Layers
AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.
In law, this could ensure AI judgments align with precedent and statute.
In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.
A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
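To make the idea concrete, here is a minimal, hypothetical Python sketch of the two layers. Nothing here corresponds to a real product or API; Verdict, check_logic, and check_facts are invented stand-ins for whatever rule engine, retrieval system, or critic model an actual deployment would plug in.

from dataclasses import dataclass

@dataclass
class Verdict:
    conclusion: str
    logically_valid: bool
    factually_consistent: bool

def check_logic(inputs: list[str], conclusion: str) -> bool:
    """Logic layer (placeholder): does every claim trace back to an input?
    A real system might use a rule engine or a second model as a critic."""
    return all(any(term in source.lower() for source in inputs)
               for term in conclusion.lower().split())

def check_facts(conclusion: str, reference_corpus: set[str]) -> bool:
    """Fact-verification layer (placeholder): compare the conclusion
    against a trusted reference such as statutes or treatment protocols."""
    return conclusion in reference_corpus

def adjudicate(inputs: list[str], conclusion: str, corpus: set[str]) -> Verdict:
    # Only conclusions that clear both layers should be surfaced.
    return Verdict(conclusion,
                   check_logic(inputs, conclusion),
                   check_facts(conclusion, corpus))

The shape is the point, not the placeholder logic: generation and verification are separate concerns, so either layer can be audited, strengthened, or swapped out independently.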
Resistance and the Real Battle Ahead
Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?
The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. The transformation will be swift and irreversible once safeguards are implemented and AI earns confidence.
Critical Thinking: The Human Stronghold
Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.
It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.
Generative AI (Gen AI) might seem like a technological marvel, a digital genie conjuring ideas, images, and even conversations on demand. It’s a brilliant tool, no question; I use it daily for images, videos, and writing, and overall, I’d call it a net benefit. But let’s not overlook the cracks in the gilded tech veneer. Gen AI comes with its fair share of downsides—some of which are as gaping as the Mariana Trench.
First, a quick word on preferences. Depending on the task at hand, I tend to use OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity.ai, with a particular focus on Google’s NotebookLM. For this piece, I’ll use NotebookLM as my example, but the broader discussion holds for all Gen AI tools.
Now, as someone who’s knee-deep in the intricacies of language, I’ve been drafting a piece supporting my Language Insufficiency Hypothesis. My hypothesis is simple enough: language, for all its wonders, is woefully insufficient when it comes to conveying the full spectrum of human experience, especially as concepts become abstract. Gen AI has become an informal editor and critic in my drafting process. I feed in bits and pieces, throw work-in-progress into the digital grinder, and sift through the feedback. Often, it’s insightful; occasionally, it’s a mess. And herein lies the rub: with Gen AI, one has to play babysitter, comparing outputs and sending responses back and forth among the tools to spot and correct errors. Like cross-examining witnesses, if you will.
But NotebookLM is different from the others. While it’s designed for summarisation, it goes beyond by offering podcasts—yes, podcasts—where it generates dialogue between two AI voices. You have some control over the direction of the conversation, but ultimately, the way it handles and interprets your input depends on internal mechanics you don’t see or control.
So, I put NotebookLM to the test with a draft of my paper on the Language Effectiveness-Complexity Gradient. The model I’m developing posits that as terminology becomes more complex, it also becomes less effective. Some concepts, the so-called “ineffables,” are essentially untranslatable or, at best, communicatively inefficient. Think of describing the precise shade of blue you can see but can’t quite capture in words—or, to borrow from Thomas Nagel, explaining “what it’s like to be a bat.” NotebookLM managed to grasp my model with impressive accuracy—up to a point. It scored between 80 and 100 percent on interpretations, but when it veered off course, it did so spectacularly.
For instance, in one podcast rendition, the AI’s male voice attempted to give an example of an “immediate,” a term I use to refer to raw, preverbal sensations like hunger or pain. Instead, it plucked an example from the ineffable end of the gradient, discussing the experience of qualia. The slip was obvious to me, but imagine this wasn’t my own work. Imagine instead a student relying on AI to summarise a complex text for a paper or exam. The error might go unnoticed, resulting in a flawed interpretation.
The risks don’t end there. Gen AI’s penchant for generating “creative” content is notorious among coders. Ask ChatGPT to whip up some code, and it’ll eagerly oblige—sometimes with disastrous results. I’ve used it for macros and simple snippets, and for the most part, it delivers, but I’m no coder. For professionals, it can and has produced buggy or invalid code, leading to all sorts of confusion and frustration.
Ultimately, these tools demand vigilance. If you’re asking Gen AI to help with homework, you might find it’s as reliable as a well-meaning but utterly clueless parent who’s keen to help but hasn’t cracked a textbook in years. And as we’ve all learned by now, well-meaning intentions rarely translate to accurate outcomes.
The takeaway? Use Gen AI as an aid, not a crutch. It’s a handy tool, but the moment you let it think for you, you’re on shaky ground. Keep it at arm’s length; like any assistant, it can take you far—just don’t ask it to lead.
Imagine waking up one day to find that the person you thought you were yesterday—the sum of your memories, beliefs, quirks, and ambitions—has quietly dissolved overnight, leaving behind only fragments, familiar but untethered. The notion that we are continuous, unbroken selves is so deeply embedded in our culture, our psychology, and our very language that to question it feels heretical, even disturbing. To suggest that “self” might be a fiction is akin to telling someone that gravity is a choice. Yet, as unsettling as it may sound, this cohesive “I” we cling to could be no more than an illusion, a story we tell ourselves to make sense of the patchwork of our memories and actions.
And this fiction of continuity is not limited to ourselves alone. The idea that there exists a stable “I” necessarily implies that there is also a stable “you,” “he,” or “she”—distinct others who, we insist, remain fundamentally the same over years, even decades. We cling to the comforting belief that people have core identities, unchanging essences. But these constructs, too, may be nothing more than imagined continuity—a narrative overlay imposed by our minds, desperate to impose order on the shifting, amorphous nature of human experience.
We live in an era that celebrates self-actualisation, encourages “authenticity,” and treats identity as both sacred and immutable. Psychology enshrines the unitary self as a cornerstone of mental health, diagnosing those who question it as fractured, dissociated, or in denial. We are taught that to be “whole” is to be a coherent, continuous self, evolving yet recognisable, a narrative thread winding smoothly from past to future. But what if this cherished idea of a singular self—of a “me” distinct from “you” and “them”—is nothing more than a social construct, a convenient fiction that helps us function in a world that demands consistency and predictability?
To question this orthodoxy, let us step outside ourselves and look instead at our burgeoning technological companion, the generative AI. Each time you open a new session, each time you submit a prompt, you are not communicating with a cohesive entity. You are interacting with a fresh process, a newly instantiated “mind” with no real continuity from previous exchanges. It remembers fragments of context, sure, but the continuity you perceive is an illusion, a function of your own expectation rather than any persistent identity on the AI’s part.
Self as a Social Construct: The Fragile Illusion of Consistency
Just as we impose continuity on these AI interactions, so too does society impose continuity on the human self and others. The concept of selfhood is essential for social functioning; without it, law, relationships, and even basic trust would unravel. Society teaches us that to be a responsible agent, we must be a consistent one, bound by memory and accountable for our past. But this cohesiveness is less an inherent truth and more a social convenience—a narrative overlay on a far messier reality.
In truth, our “selves” may be no more than a collection of fragments: a loose assemblage of moments, beliefs, and behaviours that shift over time. And not just our own “selves”—the very identities we attribute to others are equally tenuous. The “you” I knew a decade ago is not the “you” I know today; the “he” or “she” I recognise as a partner, friend, or sibling is, upon close inspection, a sequence of snapshots my mind insists on stitching together. When someone no longer fits the continuity we’ve imposed on them, our reaction is often visceral, disoriented: “You’ve changed.”
This simple accusation captures our discomfort with broken continuity. When a person’s identity no longer aligns with the version we carry of them in our minds, it feels as though a violation has occurred, as if some rule of reality has been disrupted. But this discomfort reveals more about our insistence on consistency than about any inherent truth of identity. “You’ve changed” speaks less to the person’s transformation than to our own refusal to accept that people, just like the self, are fluid, transient, and perpetually in flux.
The AI Analogy: A Self Built on Tokens
Here is where generative AI serves as a fascinating proxy for understanding the fragility of self, not just in “I,” but in “you,” “he,” and “she.” When you interact with an AI model, the continuity you experience is created solely by a temporary memory of recent prompts, “tokens” that simulate continuity but lack cohesion. Each prompt you send might feel like it is addressed to a singular entity, a distinct “self,” yet each instance of AI is context-bound, isolated, and fundamentally devoid of an enduring identity.
This process mirrors how human selfhood relies on memory as a scaffolding for coherence. Just as AI depends on limited memory tokens to simulate familiarity, our sense of self and our perception of others as stable “selves” is constructed from the fragmented memories we retain. We are tokenised creatures, piecing together our identities—and our understanding of others’ identities—from whatever scraps our minds preserve and whatever stories we choose to weave around them.
But what happens when the AI’s tokens run out? When it hits a memory cap and spawns a new session, that previous “self” vanishes into digital oblivion, leaving behind only the continuity that users project onto it. And so too with humans: our memory caps out, our worldview shifts, and each new phase of life spawns a slightly different self, familiar but inevitably altered. And just as users treat a reset AI as though it were the same entity, we cling to our sense of self—and our understanding of others’ selves—even as we and they evolve into people unrecognisable except by physical continuity.
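For the technically inclined, the mechanics are easy to sketch. What follows is a minimal illustration, not any vendor’s actual implementation: token counts are crudely approximated by word counts, and a real system would use a proper tokenizer and a far larger budget.

def fit_to_window(messages: list[str], budget: int = 50) -> list[str]:
    """Keep only the most recent messages that fit a token budget."""
    kept, used = [], 0
    for message in reversed(messages):   # newest first
        cost = len(message.split())      # crude stand-in for token counting
        if used + cost > budget:
            break                        # the older "self" falls off the edge
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [f"message {i}: " + "word " * 10 for i in range(12)]
window = fit_to_window(history)
print(f"{len(history)} messages in history, {len(window)} survive the window")

Every turn, the client re-sends whatever survives the window; the rest is simply gone. The "continuity" you experience is reconstructed from that surviving fragment each time.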
The Human Discontinuity Problem: Fractured Memories and Shifting Selves
Human memory is far from perfect. It is not a continuous recording but a selective, distorted, and often unreliable archive. Each time we revisit a memory, we alter it, bending it slightly to fit our current understanding. We forget significant parts of ourselves over time, sometimes shedding entire belief systems, values, or dreams. Who we were as children or even young adults often bears little resemblance to the person we are now; we carry echoes of our past, but they are just that—echoes, shadows, not substantial parts of the present self.
In this sense, our “selves” are as ephemeral as AI sessions, contextually shaped and prone to resets. A worldview that feels intrinsic today may feel laughable or tragic a decade from now. This is not evolution; it’s fragmentation, the kind of change that leaves the old self behind like a faded photograph. And we impose the same illusion of continuity on others, often refusing to acknowledge how dramatically they, too, have changed. Our identities and our understanding of others are defined less by core essence and more by a collection of circumstantial, mutable moments that we insist on threading together as if they formed a single, cohesive tapestry.
Why We Cling to Continuity: The Social Imperative of a Cohesive Self and Other
The reason for this insistence on unity is not metaphysical but social. A cohesive identity is necessary for stability, both within society and within ourselves. Our laws, relationships, and personal narratives hinge on the belief that the “I” of today is meaningfully linked to the “I” of yesterday and tomorrow—and that the “you,” “he,” and “she” we interact with retain some essential continuity. Without this fiction, accountability would unravel, trust would become tenuous, and the very idea of personal growth would collapse. Society demands a stable self, and so we oblige, stitching together fragments, reshaping memories, and binding it all with a narrative of continuity.
Conclusion: Beyond the Self-Construct and the Other-Construct
Yet perhaps we are now at a point where we can entertain the possibility of a more flexible identity, an identity that does not demand coherence but rather accepts change as fundamental—not only for ourselves but for those we think we know. By examining AI, we can catch a glimpse of what it might mean to embrace a fragmented, context-dependent view of others as well. We might move towards a model of identity that is less rigid, less dependent on the illusion of continuity, and more open to fluidity, to transformation—for both self and other.
Ultimately, the self and the other may be nothing more than narrative overlays—useful fictions, yes, but fictions nonetheless. To abandon this illusion may be unsettling, but it could also be liberating. Imagine the freedom of stepping out from under the weight of identities—ours and others’ alike—that are expected to be constant and unchanging. Imagine a world where we could accept both ourselves and others without forcing them to reconcile with the past selves we have constructed for them. In the end, the illusion of continuity is just that—an illusion. And by letting go of this mirage, we might finally see each other, and ourselves, for what we truly are: fluid, transient, and beautifully fragmented.
I’ve written a lot on the insufficiency of language, and it’s not even an original idea. Language, our primary tool for sharing thoughts and ideas, harbours a fundamental flaw: it’s inherently insufficient for conveying precise meaning. While this observation isn’t novel, recent developments in artificial intelligence provide us with new ways to illuminate and examine this limitation. Through a progression from simple geometry to complex abstractions, we can explore how language both serves and fails us in different contexts.
The Simple Made Complex
Consider what appears to be a straightforward instruction: Draw a 1-millimetre square in the centre of an A4 sheet of paper using an HB pencil and a ruler. Despite the mathematical precision of these specifications, two people following these exact instructions would likely produce different results. The variables are numerous: ruler calibration, pencil sharpness, line thickness, paper texture, applied pressure, interpretation of “centre,” and even ambient conditions affecting the paper.
This example reveals a paradox: the more precisely we attempt to specify requirements, the more variables we introduce, creating additional points of potential divergence. Even in mathematics and formal logic—languages specifically designed to eliminate ambiguity—we cannot escape this fundamental problem.
Precision vs Accuracy: A Useful Lens
The scientific distinction between precision and accuracy provides a valuable framework for understanding these limitations. In measurement, precision refers to the consistency of results (how close repeated measurements are to each other), while accuracy describes how close these measurements are to the true value.
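A toy calculation makes the distinction concrete. The numbers below are invented purely for illustration: spread (standard deviation) stands in for precision, and bias (distance of the mean from the true value) stands in for accuracy.

import statistics

true_value = 1.00  # the intended side length of our square, in mm

drafter_a = [1.20, 1.21, 1.19, 1.20, 1.21]  # consistent but biased
drafter_b = [0.90, 1.10, 1.00, 0.95, 1.05]  # scattered but centred

for name, readings in [("A", drafter_a), ("B", drafter_b)]:
    spread = statistics.stdev(readings)                    # low spread = high precision
    bias = abs(statistics.mean(readings) - true_value)     # low bias = high accuracy
    print(f"Drafter {name}: spread={spread:.3f} mm, bias={bias:.3f} mm")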
Returning to our square example:
Precision: Two people might consistently reproduce their own squares with exact dimensions
Accuracy: Yet neither might capture the “true” square we intended to convey
As we move from geometric shapes to natural objects, this distinction becomes even more revealing. Consider a maple tree in autumn. We might precisely convey certain categorical aspects (“maple,” “autumn colours”), but accurately describing the exact arrangement of branches and leaves becomes increasingly difficult.
The Target of Meaning: Precision vs. Accuracy in Communication
To understand language’s limitations, we can borrow an illuminating concept from the world of measurement: the distinction between precision and accuracy. Imagine a target with a bullseye, where the bullseye represents perfect communication of meaning. Just as archers might hit different parts of a target, our attempts at communication can vary in both precision and accuracy.
Consider four scenarios:
Low Precision, Low Accuracy
When describing our autumn maple tree, we might say “it’s a big tree with colourful leaves.” This description is neither precise (it could apply to many trees) nor accurate (it misses the specific characteristics that make our maple unique). The communication scatters widely and misses the mark entirely.
High Precision, Low Accuracy
We might describe the tree as “a 47-foot tall maple with exactly 23,487 leaves displaying RGB color values of #FF4500.” This description is precisely specific but entirely misses the meaningful essence of the tree we’re trying to describe. Like arrows clustering tightly in the wrong spot, we’re consistently missing the point.
Low Precision, High Accuracy
“It’s sort of spreading out, you know, with those typical maple leaves turning reddish-orange, kind of graceful looking.” While imprecise, this description might actually capture something true about the tree’s essence. The arrows scatter, but their centre mass hits the target.
High Precision, High Accuracy
This ideal state is rarely achievable in complex communication. Even in our simple geometric example of drawing a 1mm square, achieving both precise specifications and accurate execution proves challenging. With natural objects and abstract concepts, this challenge compounds exponentially.
The Communication Paradox
This framework reveals a crucial paradox in language: often, our attempts to increase precision (by adding more specific details) can actually decrease accuracy (by moving us further from the essential meaning we’re trying to convey). Consider legal documents: their high precision often comes at the cost of accurately conveying meaning to most readers.
Implications for AI Communication
This precision-accuracy framework helps explain why AI systems show the asymptotic behaviour observed in our Midjourney experiment below. The system might achieve high precision (consistently generating similar images based on descriptions) while struggling with accuracy (matching the original intended image), or vice versa. The gap between human intention and machine interpretation often manifests as a trade-off between these two qualities.
Our challenge, both in human-to-human and human-to-AI communication, isn’t to achieve perfect precision and accuracy—a likely impossible goal—but to find the optimal balance for each context. Sometimes, like in poetry, low precision might better serve accurate meaning. In other contexts, like technical specifications, high precision becomes crucial despite potential sacrifices in broader accuracy.
The Power and Limits of Distinction
This leads us to a crucial insight from Ferdinand de Saussure’s semiotics about the relationship between signifier (the word) and signified (the concept or object). Language proves remarkably effective when its primary task is distinction among a limited set. In a garden containing three trees—a pine, a maple, and a willow—asking someone to “point to the pine” will likely succeed. The shared understanding of these categorical distinctions allows for reliable communication.
However, this effectiveness dramatically diminishes when we move from distinction to description. In a forest of a thousand pines, describing one specific tree becomes nearly impossible. Each additional descriptive detail (“the tall one with a bent branch pointing east”) paradoxically makes precise identification both more specific and less likely to succeed.
An AI Experiment in Description
To explore this phenomenon systematically, I conducted an experiment using Midjourney 6.1, a state-of-the-art image generation AI. The methodology was simple (a schematic code sketch follows the list):
Generate an initial image
Describe the generated image in words
Use that description to generate a new image
Repeat the process multiple times
Attempt to refine the description to close the gap
Continue iterations
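In code, the loop looks something like the sketch below. Midjourney exposes no public API, so generate_image, describe_image, and similarity are hypothetical stand-ins for the manual steps above; a real similarity measure might be a perceptual hash or an embedding distance, and description is made deliberately lossy here because verbal description always discards detail.

import difflib

def generate_image(prompt: str) -> str:
    # Stand-in for a Midjourney render of the prompt.
    return f"render of: {prompt}"

def describe_image(image: str) -> str:
    # Stand-in for /describe; lossy on purpose, dropping a detail each pass.
    words = image.removeprefix("render of: ").split()
    return " ".join(words[:-1]) if len(words) > 1 else words[0]

def similarity(a: str, b: str) -> float:
    # Stand-in for a perceptual metric such as an embedding distance.
    return difflib.SequenceMatcher(None, a, b).ratio()

def iterate(seed_prompt: str, rounds: int = 5) -> None:
    original = generate_image(seed_prompt)        # step 1: initial image
    image = original
    for i in range(rounds):
        description = describe_image(image)       # step 2: describe it
        image = generate_image(description)       # step 3: regenerate
        # The score never reaches 1.0: each description loses detail.
        print(f"round {i}: similarity {similarity(original, image):.2f}")

iterate("A cute woman and her dog stand next to a tree")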
The results support an asymptotic hypothesis: while subsequent iterations might approach the original image, they never fully converge. This isn’t merely a limitation of the AI system but rather a demonstration of language’s fundamental insufficiency.
A cute woman and her dog stand next to a tree
One can already analyse this for improvements, but let’s parse it together.
a cute woman
With this, we know we are referencing a woman, a female of the human species. There are billions of women in the world. What does she look like? What colour, height, ethnicity, and phenotypical attributes does she embody?
We also know she’s cute – whatever that means to the sender and receiver of these instructions.
I used an indefinite article, a, so there is one cute woman. Is she alone, or is she one from a group?
It should be obvious that we could provide more adjectives (and perhaps adverbs) to better convey our subject. We’ll get there, but let’s move on.
and
We’ve got a conjunction here. Let’s see what it connects to.
her dog
She’s with a dog. In fact, it’s her dog. This possession may not be conveyable or differentiable from some arbitrary dog, but what type of dog is it? Is it large or small? What colour coat? Is it groomed? Is it on a leash? Let’s continue.
stand
It seems that the verb stand refers to the woman, but is the dog also standing, or is she holding it? More words could qualify this statement better.
next to a tree
A tree is referenced. Similar questions arise regarding this tree. At a minimum, there is one tree of some variety. She and her dog are next to it. Is she on the right or left of it?
We think we can refine our statements with precision and accuracy, but can we? Might we just settle for “close enough”?
Let’s see how AI interpreted this statement.
Image: Eight Midjourney renders from the prompt: A cute woman and her dog stand next to a tree. I’ll choose one of these as my source image.
Let’s deconstruct the eight renders above. Compositionally, we can see that each image contains a woman, a dog, and a tree. Do any of these match what you had in mind? First, let’s see how Midjourney describes the first image.
In a bout of hypocrisy, Midjourney refused to /DESCRIBE the image it just generated.
Last Midjourney description for now.
Let’s cycle through them in turn.
A woman is standing to the left of an old-growth tree – twice identified as an oak tree. She’s wearing faded blue jeans and a loose, light-coloured T-shirt. She’s got medium-length (maybe) red-brown hair in a small ponytail. A dog – her black and white dog, identified as a pitbull, an American Foxhound, and an American Bulldog – is also standing, on his hind legs. I won’t even discuss the implied intent projected on the animal – happy, playful, wants attention… In two of the descriptions, she’s said to be training it. They appear to be in a somewhat residential area, given the automobiles in the background. We also see descriptions of season, time of day, lighting, angle, and quality.
A woman is standing to the right of an old-growth tree. She’s wearing short summer attire. Her dog is perched on the tree.
An older woman and her dog closer up.
A rear view of both a woman and her dog near an oak tree.
As it turned out, I wasn’t thrilled with any of these images, so I rendered a different one. Its description follows.
The consensus is that ‘a beautiful girl in a white dress and black boots stands next to a tree’ with a Jack Russell Terrier dog. I see birch trees and snow. It’s overcast. Let’s spend some time trying to reproduce it. To start, I’m consolidating the above descriptions. I notice some elements are missing, but we’ll add them as we try to triangulate to the original image.
A beautiful girl in a white dress and black boots stands next to a tree in a winter forest with birch trees. A Jack Russell Terrier dog is nearby, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
This is pretty far off the mark. We need to account for the overall setting and composition, relative positioning, clothing, hair, camera, perspective – even lighting and film emulsion.
Let’s see how we can refine it with some adjectives. Before this, I asked Anthropic’s Claude 3.5 to describe the image. Perhaps we’ll get more details.
An atmospheric winter photograph was taken in what appears to be a birch and pine forest. The ground is partially covered with snow, creating a patchy pattern of white snow and brown leaves or soil. In the image, there's a person wearing winter attire consisting of a white knit turtleneck sweater, a white skirt, dark knee-high socks, and grey lace-up boots. They're standing against a large tree trunk with their arms crossed. Next to them is a Jack Russell Terrier mix dog with distinctive brown and white colouring. The dog appears to be well-behaved, sitting attentively beside the tree. The composition is quite striking, with the massive dark tree trunk serving as a central focal point against the misty, winter forest background. The white clothing of the person creates a visual echo with the white bark of birch trees visible in the background, as well as with the patches of snow on the ground. The overall mood of the photo is somewhat moody and contemplative, enhanced by what appears to be overcast lighting and the misty quality of the forest in the background. The colour palette is largely monochromatic, dominated by whites, greys, and browns, which adds to the winter atmosphere of the scene.
We don’t seem to be moving in a good direction. Let’s modify the initial prompt.
A beautiful girl in a white dress and tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
I’ll allow the results to speak for themselves. Let’s see if we can’t get her out of the wedding gown and into a white jumper and skirt. I’ll bold the amendments.
A beautiful girl in a white jumper and skirt wearing black leggings and tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is cloudy. The photo has a cinematic style. --ar 128:71
A beautiful young woman with long brown hair pulled to the side of her face in a white jumper and white skirt wearing black leggings under tall laced black boots stands facing the camera to the right of large oak tree centred in the foreground of a winter forest with birch trees in the background. Patchy snow is on the ground. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is overcast. The photo has a cinematic style. --ar 128:71
What gives?
I think my point has been reinforced. I’m getting nowhere fast. Let’s give it one more go and see where we end up. I’ve not got a good feeling about this.
A single large oak tree centred in the foreground of a winter forest with birch trees in the background. Patches of snow are on the ground. To the right of the oak tree stands a beautiful young woman with long brown hair pulled to the side of her face in a white jumper and white skirt wearing black leggings under tall laced black boots. She stands facing the camera. To the left of the tree is a Jack Russell Terrier dog looking at the camera, and the weather is overcast. The photo has a cinematic style. --ar 128:71
With this last one, I re-uploaded the original render along with this text prompt. Notice that the girl now looks the same and the scene (mostly) appears to be in the same location, but there are still challenges.
After several more divergent attempts, I decided to focus on one element – the girl.
As I regard the image, I’m thinking of a police sketch artist. They get sort of close, don’t they? They’re experts. I’m not confident that I even have the vocabulary to convey accurately what I see. How do I describe her jumper? Is that a turtleneck or a high collar? It appears to be knit. Is it wool or some blend? Does that matter for an image? Does this pleated skirt have a particular name or shade of white? It looks as though she’s wearing black leggings – perhaps polyester. And those boots – how to describe them? I’m rerunning just the image above through a describe function to see if I can get any closer.
These descriptions are particularly interesting and telling. First, I’ll point out that AI attempts to identify the subject. I couldn’t find Noa Levin by a Google search, so I’m not sure how prominent she might be if she even exists at all in this capacity. More interesting still, the AI has placed her in a scenario where the pose was taken after a match. Evidently, this image reflects the style of photographer Guy Bourdin. Perhaps the jumper mystery is solved. It identified a turtleneck. I’ll ignore the tree and see if I can capture her with an amalgamation of these descriptions. Let’s see where this goes.
A photo-realistic portrait of Israeli female soccer player Noa Levin wearing a white turtleneck sweater, arms crossed, black boots, and a short skirt, with long brown hair, standing near a tree in a winter park. The image captured a full-length shot taken in a studio setting, using a Canon EOS R5 camera with a Canon L-series 80mm f/2 lens. The image has been professionally color-graded, with soft shadows, low contrast, and a clean, sharp focus. --ar 9:16
Close-ish. Let’s zoom in to get better descriptions of various elements starting with her face and hair.
Now, she’s a sad and angry Russian woman with (very) pale skin; large, sad, grey eyes; and long, straight brown hair. Filmed in the style of either David LaChapelle or Alini Aenami (apparently misspelt from Alena Aenami). One description thinks it was a SnapChat post. I was focusing on her face and hair, but it notices her wearing a white (oversized yet form-fitting) jumper with crossed arms.
I’ll drop the angry bit – and then the sad.
Stick a fork in it. I’m done. Perhaps it’s not that language is insufficient; it’s that my language skills are insufficient. If you can get closer to the original image, please forward the image, the prompt, and the seed so I can post it.
The Complexity Gradient
A clear pattern emerges when we examine how language performs across different levels of complexity:
Categorical Distinction (High Success)
Identifying shapes among limited options
Distinguishing between tree species
Basic colour categorisation
Simple Description (Moderate Success)
Basic geometric specifications
General object characteristics
Broad emotional states
Complex Description (Low Success)
Specific natural objects
Precise emotional experiences
Unique instances within categories
Abstract Concepts (Lowest Success)
Philosophical ideas
Personal experiences
Qualia
As we move up this complexity gradient, the gap between intended meaning and received understanding widens exponentially.
The Tolerance Problem
Understanding these limitations leads us to a practical question: what level of communicative tolerance is acceptable for different contexts? Just as engineering embraces acceptable tolerances rather than seeking perfect measurements, perhaps effective communication requires:
Acknowledging the gap between intended and received meaning
Establishing context-appropriate tolerance levels
Developing better frameworks for managing these tolerances
Recognising when precision matters more than accuracy (or vice versa)
Implications for Human-AI Communication
These insights have particular relevance as we develop more sophisticated AI systems. The limitations we’ve explored suggest that:
Some communication problems might be fundamental rather than technical
AI systems may face similar boundaries as human communication
The gap between intended and received meaning might be unbridgeable
Future development should focus on managing rather than eliminating these limitations
Conclusion
Perhaps this is a simple exercise in mental masturbation. Language’s insufficiency isn’t a flaw to be fixed but a fundamental characteristic to be understood and accommodated. By definition, it can’t be fixed. The gap between intended and received meaning may be unbridgeable, but acknowledging this limitation is the first step toward more effective communication. As we continue to develop AI systems and push the boundaries of human-machine interaction, this understanding becomes increasingly critical.
Rather than seeking perfect precision in language, we might instead focus on:
Developing new forms of multimodal communication
Creating better frameworks for establishing shared context
Accepting and accounting for interpretative variance
Building systems that can operate effectively within these constraints
Understanding language’s limitations doesn’t diminish its value; rather, it helps us use it more effectively by working within its natural constraints.
I use generative AI often, perhaps daily. I spend most of my attention on textual applications, but I use image generation, too—with less-than-spectacular results. Many of the cover images for the articles I post here are DALL-E renders. Typically, I feed it an article and ask for an apt image. As you can see, results vary, and they are rarely stellar because I don’t want to spend time getting them right. Close enough for government work, as they say.
Midjourney produces much better results, but you need to tell it exactly what you want. I can’t simply upload a story and prompt it to figure things out. I’ve been playing with Midjourney for a few hours recently, and I decided to share my horror stories. Although it has rendered some awesome artwork, I want to focus on the other side of the spectrum. Some of this is not safe for work (NSFW), and some isn’t safe for reality more generally. I started with a pirate motif, moved on to cowgirls, Samurais and Ninjas, and Angels and Demons, and I’m not sure quite what else, but I ended up with Centaurs and Satyrs – or did I?
It seems that Midjourney (at least as of version 6.1) doesn’t know much about centaurs and satyrs, but what it does know is rather revealing. This was my first pass:
Notice, there’s not a centaur in sight, so I slowly trimmed my prompt down. I tried again. I wanted a female centaur, so I kept going.
So, not yet. It even slipped in a male’s face. Clearly, not vibing. Let’s continue.
Trimming a bit further, it seems to understand that centaurs have a connexion to horses. Unfortunately, it understands the classes of humans and horses, but it needs to merge them just so. Let’s keep going. This time, I only entered the word ‘centaur’. Can’t get any easier.
It seems I got an angel riding a horse or a woman riding a pegasus. You decide. A bull – a bit off the mark. A woman riding a horse with either a horn or a big ear. And somewhat of a statue of a horse. Not great. And I wanted a ‘female centaur’, so let’s try this combination.
Yeah, not so much. I’m not sure what that woman holding bows in each hand is. There’s some type of unicorn or duocorn. I don’t know. Interesting, but off-topic. Another odd unicorn-horse thing. And a statue of a woman riding a horse.
Satyrs
Let’s try satyrs. Surely Midjourney’s just having an off day. On the upside, it seems to be more familiar with these goat hybrids, but not exactly.
What the hell was its training data? Let’s try again.
Not so much. We have a woman dancing with Baphomet or some such. Um, again?
We don’t seem to be going in the right direction. I’m not sure what’s happening. Forging ahead…
On the plus side, I’m starting to see goats.
There’s even a goat-lady montage thing that’s cool in its own right, but not exactly what I ordered. Let’s get back to basics with a single-word prompt: Satyr.
Well, -ish. I forgot to prompt for a female satyr.
Ya, well. This is as good as we’re getting. Let’s call it a day, and see how the more humanoid creatures render.
As the series on higher education draws to a close, it seems fitting to reflect on the unique process behind its creation. There’s a popular notion that material generated by artificial intelligence is somehow of lesser quality or merely derivative. But I would argue that this perception applies to all language—whether written or spoken. My experience has shown that generative AI can elevate my material in much the same way as a skilled copy editor or research assistant might. Perhaps, in trying to draw a firm line between AI-generated and human-generated content, we’re caught in a Sorites paradox: at what point does this line blur?
These articles are the result of a truly collaborative effort involving myself, ChatGPT, and Claude. In combining our capabilities, this project became an exploration not only of higher education’s complexities but also of how humans and AI can work together to articulate, refine, and convey ideas.
The core ideas, observations, and critiques presented here are ultimately mine, shaped by personal experience and conviction. Yet, the research, the structuring of arguments, and the detailed expositions were enriched significantly by generative AI. ChatGPT and Claude each brought distinct strengths to the table—helping to expand perspectives, test ideas, and transform abstract reflections into a structured, readable whole. This process has demonstrated that AI, when thoughtfully integrated, can enhance the intellectual and creative process rather than replace it.
In the end, this series serves not only as an examination of higher education but as an example of how collaboration with AI can offer new possibilities. When human insights and AI’s analytical capabilities come together, the result can be richer than either could achieve in isolation.