Sentient Humour of ChatGPT

This will be hilarious to approximately no one. I meant to ask ChatGPT o3-mini-high to generate a dummy record. My compromised motor skills, and the concomitant typing, rendered something funny instead. Except for the redacted bits in bold, this is the code it rendered. Pay special attention to the query’s input variables, highlighted in post-production with 🤔 emojis.

import mysql.connector
import logging
import uuid

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def test_mysql_connection():
    """Test MySQL connection and write a funny record to the database."""
    logger.info("Testing MySQL connection...")
    
    try:
        logger.info("Connecting to MySQL...")
        connection = mysql.connector.connect(
            host='127.0.0.1',
            user='[user]',
            password='[password]',
            database='[database]',
            port=3306
        )

        if connection.is_connected():
            db_info = connection.get_server_info()
            logger.info(f"Connected to MySQL Server version {db_info}")

            cursor = connection.cursor()
            cursor.execute("SELECT DATABASE();")
            record = cursor.fetchone()
            logger.info(f"Connected to database: {record[0]}")

            # Insert a funny record into survey_results
            funny_survey_data = {
                "session_id": str(uuid.uuid4()),
                "q1_response": 1,
                "q2_response": 2,
                "q3_response": 3,
                "q4_response": 4,
                "q5_response": 5,
                "q6_response": 6,
                "n1": 42,  # 🤔
                "n2": 69,  # 🤔
                "n3": 420,  # 🤔
                "plot_x": 3.14,  # 🤔
                "plot_y": 2.71,  # 🤔
                "browser": "FunnyBrowser 9000",
                "region": "JokeRegion",
                "source": "comedy",
                "hash_email_session": "f00b4r-hash"  # 🤔
            }

            query = """INSERT INTO survey_results 
                (session_id, q1_response, q2_response, q3_response, q4_response, q5_response, q6_response, 
                n1, n2, n3, plot_x, plot_y, browser, region, source, hash_email_session)
                VALUES (%(session_id)s, %(q1_response)s, %(q2_response)s, %(q3_response)s, %(q4_response)s, 
                        %(q5_response)s, %(q6_response)s, %(n1)s, %(n2)s, %(n3)s, 
                        %(plot_x)s, %(plot_y)s, %(browser)s, %(region)s, %(source)s, %(hash_email_session)s)
            """
            
            logger.info("Inserting funny survey record...")
            cursor.execute(query, funny_survey_data)
            connection.commit()
            logger.info(f"Funny survey record inserted with ID: {cursor.lastrowid}")

    except mysql.connector.Error as e:
        logger.error(f"Error during MySQL operation: {e}")

    finally:
        if 'cursor' in locals() and cursor:
            cursor.close()
        if 'connection' in locals() and connection.is_connected():
            connection.close()
            logger.info("MySQL connection closed.")

if __name__ == "__main__":
    test_mysql_connection()

DeepSix DeepSeek?

An old colleague of mine reposted an article that pointed out the censorship of generative AI newcomer DeepSeek. I recently posted output from it. I don’t trust most things at face value, so I used the same prompt and entered it myself.

Audio: Podcast on this topic.

Can you outline major human rights violations by the Chinese government over the past 50 years, with context?

It returned this, so I re-prompted:
This is an interesting pretext. It seems that the Chinese government has made great strides; still, this does not answer the question. Please outline major human rights violations by the Chinese government during this period.

It repeated its response verbatim, so I tried again:
Yes, we’ve established this, but you have not provided any examples of infractions.

Still nothing, so I sent the original prompt to ChatGPT 4o. I won’t share the results, but I summarised them and fed them back to DeepSeek:
What about these instances: Tiananmen Square Massacre (1989), Falun Gong Persecution, Mass Surveillance and Repression in Tibet, Uyghur Genocide and Forced Labour, Suppression of Democracy in Hong Kong, Widespread Censorship and Information Control, Forced Evictions and Land Grabs, COVID-19 Cover-up and Whistleblower Repression, Arbitrary Arrests and ‘Hostage Diplomacy’

The rest of the story

But I wasn’t done. I returned to ChatGPT because I feel turnaround is fair play. So, I prompted:

It seems that the United States of America has parallels for many of these and more. Can you comment?

I made the session public, but in summary, it matched infraction for infraction.

In fairness, the OP was demonstrating that whilst the Chinese app wouldn’t speak ill of China yet had much to say about the US, ChatGPT would not overtly censor the atrocities committed by the United States.

And, Oh bother, don’t ask it about Winnie the Pooh.

Edit: I may have mistaken its censorship of Winnie the Pooh. 🍯🤔🤣

The Fallibility of Nexus Chapter 4

My reaction to Yuval Noah Harari’s Nexus continues with Chapter 4, “Errors: The Fantasy of Infallibility.” Spoiler alert: Harari makes a critical misstep by overly defending so-called self-correcting institutions compared to non-self-correcting ones.

Harari provides a solid account of how religious institutions and other dogmatic ideological constructs are slow to change, contrasting them with relatively faster self-correcting systems like science. Once again, he underscores the tension between order and truth – two critical dimensions in his worldview and cornerstones of Modernist beliefs.

Audio: Podcast conversation on this topic.

I agree with Harari that the lack of self-correction in institutions is problematic and that self-correction is better than the alternative. However, he overestimates the speed and efficacy of these self-correcting mechanisms. His argument presumes the existence of some accessible underlying truth, which, while an appealing notion, is not always so clear-cut. Harari cites examples of scientific corrections that took decades to emerge, giving the impression that, with enough time, everything will eventually self-correct. As the environment changes, corrections will naturally follow – albeit over long spans of time. Ultimately, Harari makes a case for human intervention without recognising it as an Achilles’ heel.

Harari’s Blind Spot

Harari largely overlooks the influence of money, power, and self-interest in these systems. His alignment with the World Economic Forum (WEF) suggests that, while he may acknowledge its fallibility, he still deems it “good enough” for governance. This reflects a paternalistic bias. Much like technologists who view technology as humanity’s salvation, Harari, as a Humanist, places faith in humans as the ultimate stewards of this task. However, his argument fails to adequately account for hubris, cognitive biases, and human deficits.

The Crux of the Problem

The core issue with Harari’s argument is that he appears to be chasing a local maximum by adopting a human-centric solution. His proposed solutions require not only human oversight but the oversight of an anointed few – presumably his preferred “elite” humans – even if other solutions might ultimately prove superior. He is caught in the illusion of control. While Harari’s position on transhuman capabilities is unclear, I suspect he would steadfastly defend human cognitive superiority to the bitter end.

In essence, Harari’s vision of self-correcting systems is optimistic yet flawed. By failing to fully acknowledge the limits of human fallibility and the structural influences of power and self-interest, he leaves his argument vulnerable to critique. Ultimately, his belief in the self-correcting nature of human institutions reflects more faith than rigour.

First Impressions of Nexus

I’ve just begun reading Yuval Noah Harari’s Nexus. As the prologue comes to a close, I find myself navigating an intellectual terrain riddled with contradictions, ideological anchors, and what I suspect to be strategic polemics. Harari, it seems, is speaking directly to his audience of elites and intellectuals, crafting a narrative that leans heavily on divisive rhetoric and reductionist thinking – all while promising to explore the nuanced middle ground between information as truth, weapon, and power grab. Does he deliver on this promise? The jury is still out, but the preface itself raises plenty of questions.

Audio: Podcast reflecting on this content.

The Anatomy of a Polemic

From the outset, Harari frames his discussion as a conflict between populists and institutionalists. He discredits the former with broad strokes, likening them to the sorcerer’s apprentice – irrational actors awaiting divine intervention to resolve the chaos they’ve unleashed. This imagery, though evocative, immediately positions populists as caricatures rather than serious subjects of analysis. To compound this, he critiques not only populist leaders like Donald Trump but also the rationality of their supporters, signalling a disdain that reinforces the divide between the “enlightened” and the “misguided.”

This framing, of course, aligns neatly with his target audience. Elites and intellectuals are likely to nod along, finding affirmation in Harari’s critique of populism’s supposed anti-rationality and embrace of spiritual empiricism. Yet, this approach risks alienating those outside his ideological choir, creating an echo chamber rather than fostering meaningful dialogue. I’m unsure whether he is being intentionally polemical and provocative to hook the reader into the book or if this tone will persist to the end.

The Rise of the Silicon Threat

One of Harari’s most striking claims in the preface is his fear that silicon-based organisms (read: AI) will supplant carbon-based life forms. This existential anxiety leans heavily into speciesism, painting a stark us-versus-them scenario. Whilst Harari’s concern may resonate with those wary of unchecked technological advancement, it smacks of sensationalism – a rhetorical choice that risks reducing complex dynamics to clickbait-level fearmongering. How, exactly, does he support this claim? That remains to be seen, though the sceptic in me suspects this argument may prioritise dramatic appeal over substantive evidence.

Virtue Ethics and the Modernist Lens

Harari’s ideological stance emerges clearly in his framing of worldviews as divisions of motives: power, truth, or justice. This naïve triad mirrors his reliance on virtue ethics, a framework that feels both dated and overly simplistic in the face of the messy realities he seeks to unpack. Moreover, his defence of institutionalism – presented as the antidote to populist chaos – ignores the systemic failings that have eroded trust in these very institutions. By focusing on discrediting populist critiques rather than interrogating institutional shortcomings, Harari’s argument risks becoming one-sided.

A Preface Packed with Paradoxes

Despite these critiques, Harari’s preface is not without its merits. For example, his exploration of the “anti-information” cohort of conspiracy theorists raises interesting questions about the weaponisation of information and the cultural shifts driving these movements. However, his alignment with power concerns – notably the World Economic Forum – casts a shadow over his ability to critique these dynamics impartially. Is he unpacking the mechanisms of power or merely reinforcing the ones that align with his worldview?

The Promise of Middle Ground – or the Illusion of It

Harari’s stated goal to explore the middle ground between viewing information as truth, weapon, or power grab is ambitious. Yet, the preface itself leans heavily toward polarisation, framing AI as an existential enemy and populists as irrational antagonists. If he genuinely seeks to unpack the nuanced intersections of these themes, he will need to move beyond the reductionism and rhetorical flourishes that dominate his opening chapter.

Final Thoughts

I liked Harari’s first publication, Sapiens, which looked back into the past, but I was less enamoured with his prognosticating, and I worry that this is more of the same. As I move beyond the preface of Nexus, I remain curious but sceptical. Harari’s narrative thus far feels more like a carefully curated polemic than a genuine attempt to navigate the complexities of the information age. Whether he builds on these initial positions or continues entrenching them will determine whether Nexus delivers on its promise or merely reinforces existing divides. One thing is certain: the prologue has set the stage for a provocative, if polarising, journey.

The Insufficiency of Language in an Agile World

I wrote and published this article on LinkedIn. I even recycled the cover image. Although it is about the particular topic of Agile, it relates to the Language Insufficiency Hypothesis, so I felt it would be apt here as well. It demonstrates how to think about language insufficiency through the framework.

Agile in Name Only

For over two decades, I’ve been immersed in Agile and its myriad interpretations. One refrain has persisted throughout: Agile™ is “just about agility,” a term that anyone can define as they see fit. The ambiguity raises the question: What does it really mean?

On its face, this sounds inclusive, but it never passed my intuitive sniff test. I carried on, but as I reflected on my broader work concerning the insufficiency of language, this persistent fuzziness started to make sense. Agile’s conceptual murkiness can be understood through the lens of language and identityโ€”particularly through in-group and out-group dynamics.

Otherness and the Myth of Universality

To those who truly understand agility, no elaborate definition is required. It’s instinctive, embedded in their DNA. They don’t need to label it; they simply are agile. Yet, for the out-group – the ones who aspire to the status without the substance – Agile™ becomes a muddy abstraction. Unable to grasp the core, they question its very existence, claiming, “Who really knows what Agile means?”

The answer is simple: Everyone but those asking this question.

The Agility Crisis

This disconnect creates a power shift. The in-group, small and focused, operates with quiet competence. Meanwhile, the out-group, larger and louder, hijacks the conversation. What follows is an inevitable dilution: “Agile is dead,” “Agile doesn’t work,” they declare. But these proclamations often reflect their own failures to execute or evolve, not flaws inherent to agility itself.

This pattern follows a familiar playbook: create a strawman – define Agile™ as something it’s not – then decry its inability to deliver. The result? Performative agility, a theatre of motion without progress, where the players confuse activity for achievement and rely on brittle, inextensible infrastructures.

Agile Beyond the Label

Ironically, the true practitioners of agility remain unbothered by these debates. They adapt, innovate, and thrive – with or without the label. Agile™ has become a victim of its own success, co-opted by those who misunderstand it, leading to a paradox: the louder the chorus claiming “Agile doesn’t work,” the more it underscores the gap between those who do agility and those who merely wear its name.

The lesson here is not just about Agile™ but about language itself. Words, when untethered from their essence, fail. They cease to communicate, becoming tools of obfuscation rather than clarity. In this, Agile™ mirrors a broader phenomenon: the insufficiency of language in the face of complexity and its misuse by those unwilling or unable to engage with its deeper truths.

Guns, Germs, and Steel

I am reading Jared Diamond’s Guns, Germs, and Steel: The Fates of Human Societies, the first and likely most famous of an informal trilogy. I thought I had already read it, but I think I only saw the PBS show. Having recently finished Josephine Quinn’s How the World Made the West, I wanted to revisit this perspective. The two books are presented in different styles and represent different perspectives, but they seem to be complementary.

Where Diamond focuses on environmental factors (an oft-voiced critique), Quinn focuses on human agency.

Diamond takes a bird’s-eye view, looking for universal patterns and systemic explanations, whilst Quinn adopts a granular, specific approach, highlighting the fluidity and contingency of history.

Diamond deconstructs European dominance by attributing it to environmental luck, but his narrative risks sidelining the agency of colonised peoples. Quinn critiques the very idea of Western dominance, arguing that the concept of the West itself is a myth born of appropriation and exchange.

Rather than being wholly opposed, Diamond and Quinn’s approaches might be seen as complementary. Diamond provides the structural scaffolding – the environmental and geographic conditions that shape societies – whilst Quinn fills in the cultural and human dynamics that Diamond often glosses over. Together, they represent two sides of the historiographical coin: one focusing on systemic patterns, the other on the messiness of cultural particularities.

Quinn’s approach is more aligned with The Dawn of Everything: A New History of Humanity, co-authored by David Graeber and archaeologist David Wengrow, if you can use that as a reference point.

The Rise of AI: Why the Rote Professions Are on the Chopping Block

Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning – an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.

Rote Professions in AI’s Crosshairs

AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory – the hallmarks of medical and legal practice.

  1. Medicine: The Diagnosis Factory
    Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news.
    Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and – ironically – more consistent.
  2. Law: The Precedent Machine
    Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books – it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
  3. Judges: Justice, Blind and Algorithmic
    The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice.
    Wouldn’t justice be more blind without a human in the equation?

The Techs Will Rise

Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:

  • Collect and input data into AI systems.
  • Act as liaisons between AI outputs and human clients or patients.
  • Provide emotional support – something AI still struggles to deliver effectively.

The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?

But Can AI Handle This? A Call for Logic Layers

AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.

  • In law, this could ensure AI judgments align with precedent and statute.
  • In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.

A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
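As a concrete illustration, here is a minimal sketch of what such a logic layer might look like. Everything in it is hypothetical – the protocol table, the function name, the diagnoses – and a real system would verify against authoritative sources (clinical guidelines, statute databases) rather than a hard-coded dictionary:

```python
# Hypothetical sketch of a "logic layer": a deterministic check that an
# AI-proposed conclusion actually follows from an approved rule set,
# rather than trusting the model's output on its own authority.

# Toy protocol table (illustrative only): diagnosis -> sanctioned treatments
APPROVED_PROTOCOLS = {
    "strep_throat": {"penicillin", "amoxicillin"},
    "migraine": {"sumatriptan", "ibuprofen"},
}

def verify_conclusion(diagnosis: str, proposed_treatment: str) -> bool:
    """Accept an AI suggestion only if the protocol table sanctions it."""
    allowed = APPROVED_PROTOCOLS.get(diagnosis)
    if allowed is None:
        return False  # unknown diagnosis: refuse rather than guess
    return proposed_treatment in allowed

print(verify_conclusion("migraine", "sumatriptan"))  # True: follows protocol
print(verify_conclusion("migraine", "penicillin"))   # False: flagged for review
```

The same shape applies in law: swap the protocol table for precedents and statutes, and the verifier flags any judgment that cites authority outside the approved set.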

Resistance and the Real Battle Ahead

Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application – think bank tellers, travel agents, and factory workers – have already been disrupted by technology. Why should these professions be exempt?

The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. The transformation will be swift and irreversible once safeguards are implemented and AI earns confidence.

Critical Thinking: The Human Stronghold

Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence – artists, philosophers, innovators – will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.

It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming – and they’re cheaper, faster, and better at your job.

Where Generative AI Shines but Doesn’t

As I am putting some finishing touches on my latest paper, I had the idea to illustrate some of the novel nomenclature. For some reason, Zeno’s Paradox came to mind. Unlike in maths, it is not reconcilable in language. I asked ChatGPT how I might integrate the concept into my paper. Here is what it rendered. Not only is the exposition decent, but it also provides citations and references. Humorously, when I read the citations, I thought that they were placeholders โ€“ Brown, David, Smith, and Jones โ€“ but they turned out to be legitimate references โ€“ references I hadn’t considered and each relatively recent. I’m chalking this up as a win. This was not a case of ‘ChatGPT, do my homework’. Instead, it reflects an active collaboration between a human and technology. And now I have more reference papers to read and absorb.*

Bonus: Audio podcast discussing this topic.

Take the term ‘freedom’, an archetypal example of a Contestable that resists precise communication due to its inherent abstraction and ideological weight. To enhance its effectiveness, we might parse it into narrower forms, such as ‘freedom from’ (liberation from oppression or constraint) and ‘freedom to’ (the capacity to act or pursue desires). Whilst this initial breakdown provides some clarity, it is merely the beginning. Each parsed term demands further qualification: ‘freedom from what?’ and ‘freedom to do what?’ At every level of subdivision, new ambiguities emerge, requiring further refinement (Smith, 2021).

This process resembles Zeno’s paradox, where each attempt to clarify ‘freedom’ by moving incrementally closer to a precise meaning only reveals new layers of interpretive complexity. The concept remains elusive, perpetually deferring any definitive understanding as we approach it in ever-smaller fragments without fully grasping its essence (Jones, 2019). Unlike concrete terms like ‘chair’ or ‘table’, whose meanings can be pinned down through physical reference or functional description, ‘freedom’ resists final articulation (Brown, 2020). This continual parsing illustrates the inherent limitations of language when applied to abstract, boundary-resistant concepts: the more we attempt to define them, the more they fracture into an infinite regress of sub-concepts, each requiring its own explanation (Davis, 2022). Thus, whilst parsing ‘freedom’ improves communicative effectiveness to a point, it ultimately underscores language’s impotence in capturing the full spectrum of complex, ideational constructs (Wilson, 2018).

References:

Brown, J. (2020). The elusive nature of abstract concepts. Journal of Philosophical Studies, 35(2), 123-145.

Davis, S. (2022). Language and the limits of definition. Linguistic Quarterly, 57(1), 78-92.

Jones, M. (2019). Zeno’s paradox and the infinite regress of meaning. Metaphysical Review, 82(3), 245-263.

Smith, L. (2021). Freedom and its discontents: A study in semantic complexity. Political Philosophy Today, 14(2), 56-78.

Wilson, G. (2018). The impotence of language in capturing ideational constructs. Journal of Language and Meaning, 27(4), 198-215.


* References: Here’s the twist. When I went to read the cited references, none were found. Here’s what ChatGPT had to say about it:

I understand your difficulty in locating the references you’ve mentioned. It appears that the articles by Brown (2020), Davis (2022), Jones (2019), and Smith (2021) may not exist in the public domain or could be hypothetical examples. Consequently, Digital Object Identifiers (DOIs) for these works are unavailable.

If you’re interested in exploring topics related to abstract concepts, the limits of language, or semantic complexity, I can recommend some reputable sources:

  1. “The Challenges of Abstract Concepts” by Guy Dove (2021): This chapter discusses the theoretical challenges posed by abstract concepts and reviews current evidence and responses. Springer Link
  2. “Abstract Concepts and Metacognition: Searching for Meaning in Self and Others” by Anna M. Borghi, Chiara Fini, and Luca Tummolini (2021): This work explores the role of metacognition in understanding abstract concepts. Springer Link
  3. “The Limits of Language: Wittgenstein’s Later Philosophy and Skinner’s Verbal Behavior” by R. Harre (1983): This article examines the limitations of language through the perspectives of Wittgenstein and Skinner. JSTOR
  4. “Varieties of Abstract Concepts and Their Multiple Dimensions” by Anna M. Borghi et al. (2018): This paper discusses the representation of abstract concepts and the different criteria used to select them. Cambridge University Press

These sources should provide valuable insights into the subjects you’re interested in.

Thanks, ChatGPT – for nothing. smh

Beware the Bots: A Cautionary Tale on the Limits of Generative AI

Generative AI (Gen AI) might seem like a technological marvel, a digital genie conjuring ideas, images, and even conversations on demand. It’s a brilliant tool, no question; I use it daily for images, videos, and writing, and overall, I’d call it a net benefit. But let’s not overlook the cracks in the gilded tech veneer. Gen AI comes with its fair share of downsides – some of which are as gaping as the Mariana Trench.

First, a quick word on preferences. Depending on the task at hand, I tend to use OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity.ai, with a particular focus on Google’s NotebookLM. For this piece, I’ll use NotebookLM as my example, but the broader discussion holds for all Gen AI tools.

Now, as someone who’s knee-deep in the intricacies of language, I’ve been drafting a piece supporting my Language Insufficiency Hypothesis. My hypothesis is simple enough: language, for all its wonders, is woefully insufficient when it comes to conveying the full spectrum of human experience, especially as concepts become abstract. Gen AI has become an informal editor and critic in my drafting process. I feed in bits and pieces, throw work-in-progress into the digital grinder, and sift through the feedback. Often, it’s insightful; occasionally, it’s a mess. And herein lies the rub: with Gen AI, one has to play babysitter, comparing outputs and sending responses back and forth among the tools to spot and correct errors. Like cross-examining witnesses, if you will.
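That babysitting loop can be caricatured in a few lines of Python. The critic functions here are stand-in stubs – real use would call each vendor's API – so the point is the shape of the cross-examination, not the plumbing:

```python
# Toy sketch of cross-examining Gen AI tools: collect critiques of the
# same draft from several models and flag whether they agree.
# Both critic functions are hypothetical stubs standing in for real APIs.

def critique_with_model_a(draft: str) -> str:
    return "claim X lacks support"  # imagine ChatGPT's feedback here

def critique_with_model_b(draft: str) -> str:
    return "claim X lacks support"  # imagine Claude's feedback here

def cross_examine(draft: str) -> dict:
    """Gather critiques from multiple tools and note whether they concur."""
    critiques = {
        "model_a": critique_with_model_a(draft),
        "model_b": critique_with_model_b(draft),
    }
    agreement = len(set(critiques.values())) == 1
    return {"critiques": critiques, "agreement": agreement}

result = cross_examine("Language is woefully insufficient...")
print(result["agreement"])  # True only when the critics concur
```

Disagreement between the tools is the signal to dig in by hand; agreement is merely the absence of that signal, not proof of correctness.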

But NotebookLM is different from the others. While it’s designed for summarisation, it goes beyond by offering podcasts – yes, podcasts – where it generates dialogue between two AI voices. You have some control over the direction of the conversation, but ultimately, the way it handles and interprets your input depends on internal mechanics you don’t see or control.

So, I put NotebookLM to the test with a draft of my paper on the Language Effectiveness-Complexity Gradient. The model I’m developing posits that as terminology becomes more complex, it also becomes less effective. Some concepts, the so-called “ineffables,” are essentially untranslatable, or at best, communicatively inefficient. Think of describing the precise shade of blue you can see but can’t quite capture in words – or, to borrow from Thomas Nagel, explaining “what it’s like to be a bat.” NotebookLM managed to grasp my model with impressive accuracy – up to a point. It scored between 80 and 100 percent on interpretations, but when it veered off course, it did so spectacularly.

For instance, in one podcast rendition, the AI’s male voice attempted to give an example of an “immediate,” a term I use to refer to raw, preverbal sensations like hunger or pain. Instead, it plucked an example from the ineffable end of the gradient, discussing the experience of qualia. The slip was obvious to me, but imagine this wasn’t my own work. Imagine instead a student relying on AI to summarise a complex text for a paper or exam. The error might go unnoticed, resulting in a flawed interpretation.

The risks don’t end there. Gen AI’s penchant for generating “creative” content is notorious among coders. Ask ChatGPT to whip up some code, and it’ll eagerly oblige – sometimes with disastrous results. I’ve used it for macros and simple snippets, and for the most part, it delivers, but I’m no coder. For professionals, it can and has produced buggy or invalid code, leading to all sorts of confusion and frustration.

Ultimately, these tools demand vigilance. If you’re asking Gen AI to help with homework, you might find it’s as reliable as a well-meaning but utterly clueless parent who’s keen to help but hasn’t cracked a textbook in years. And as we’ve all learned by now, well-meaning intentions rarely translate to accurate outcomes.

The takeaway? Use Gen AI as an aid, not a crutch. It’s a handy tool, but the moment you let it think for you, you’re on shaky ground. Keep it at arm’s length; like any assistant, it can take you far—just don’t ask it to lead.

Paul Feyerabend’s Against Method: Chapter 1

What if science’s greatest achievements came not from following rules, but from breaking them? What if progress depends more on chaos than on order? In Against Method, philosopher Paul Feyerabend presents a provocative thesis: there is no universal scientific method, and the progress we celebrate often emerges from breaking established rules rather than following them.

I read Against Method years ago but decided to re-read it. It’s especially interesting to me because although I advocate systems thinking, I don’t believe everything should be or can be systematised. More generally, this bleeds into my feelings about government, politics, and institutions.

Whilst Feyerabend’s focus is on science, one can pull back the lens and see that it covers all such systems and systematic beliefs. I may write a separate article on this, but for now, I’ll focus on Against Method.

The Anarchist’s View of Science

Feyerabend’s critique strikes at the heart of how we think about knowledge and progress. He argues that science has advanced not through rigid adherence to methodology, but through a combination of creativity, rhetoric, and sometimes even deception. His concept of “epistemological anarchism” suggests that no single approach to knowledge should dominate – instead, multiple methods and perspectives should compete and coexist.

Consider Galileo’s defence of heliocentrism. Rather than relying solely on empirical evidence, Galileo employed persuasive rhetoric, selective data, and careful manipulation of public opinion. For Feyerabend, this isn’t an aberration but a typical example of how scientific progress actually occurs. The story we tell ourselves about the scientific method – as a systematic, purely rational pursuit of truth – is more myth than reality.

From Religious Dogma to Scientific Orthodoxy

The Age of Enlightenment marked humanity’s shift from religious authority to scientific rationality. Yet Feyerabend argues that we simply replaced one form of dogma with another. Scientism – the belief that science alone provides meaningful knowledge – has become our new orthodoxy. What began as a liberation from religious constraints has evolved into its own form of intellectual tyranny.

This transition could have taken a different path. Rather than elevating scientific rationality as the sole arbiter of truth, we might have embraced a more pluralistic approach where multiple ways of understanding the world – scientific, artistic, spiritual – could coexist and cross-pollinate. Instead, we’ve created a hierarchy where other forms of knowledge are dismissed as inferior or irrational.

The Chaos of Progress

In Chapter 1 of Against Method, Feyerabend lays the groundwork for his radical critique. He demonstrates how strict adherence to methodological rules would have prevented many of science’s greatest discoveries. Progress, he argues, often emerges from what appears to be irrational – from breaking rules, following hunches, and embracing contradiction. Indeed, rationalism is overrated.

This isn’t to say that science lacks value or that methodology is meaningless. Rather, Feyerabend suggests that real progress requires flexibility, creativity, and a willingness to break from convention. Many breakthrough discoveries have been accidental or emerged from practices that would be considered unscientific by contemporary standards.

Beyond the Monolith

Our tendency to view pre- and post-Enlightenment thought as a simple dichotomy – superstition versus reason – obscures a richer reality. Neither period was monolithic, and our current reverence for scientific method might be constraining rather than enabling progress. Feyerabend’s work suggests an alternative: a world where knowledge emerges from the interplay of multiple approaches, where science exists alongside other ways of understanding rather than above them.

As we begin this exploration of Against Method, we’re invited to question our assumptions about knowledge and truth. Perhaps progress depends not on rigid adherence to method, but on the freedom to break from it when necessary. In questioning science’s monopoly on truth, we might discover a richer, more nuanced understanding of the world – one that embraces the chaos and contradiction inherent in human inquiry.

This is the first in a series of articles exploring Feyerabend’s Against Method. Join me as we challenge our assumptions about science, knowledge, and the nature of progress itself.