Will the Singularity Be Anticlimactic?

Given current IQ trends (the Flynn effect has stalled, and in several developed countries reversed), humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.

Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?

Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
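
For the record, the arithmetic behind all this percentile talk is trivial. IQ scores are conventionally normed to a normal distribution with mean 100 and standard deviation 15; that convention is the only assumption in the quick Python sketch below, which puts a mid-90s score somewhere in the 30s, percentile-wise.

from statistics import NormalDist

# IQ is conventionally normed to mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

# Percentile rank of a few mid-90s scores under that convention.
for score in (93, 95, 97):
    print(f"IQ {score} -> percentile {100 * iq.cdf(score):.0f}")
# IQ 95 lands at roughly the 37th percentile.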

Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. Two metaphors dominate the debate. The first, Platonic in origin, likens cognition to muscle: it atrophies through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.

The second is the counterpoint: the cooking hypothesis. Once humans learned to cook food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst, offloading grunt work to make space for higher-order thought.

So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?

Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.

The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.

That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.
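
Translated back into scores, under the same normal-curve convention as before, that middle band runs from roughly IQ 108 to 125. A continuation of the sketch above:

from statistics import NormalDist

# Same convention as before: IQ ~ Normal(100, 15).
iq = NormalDist(mu=100, sigma=15)

# The 70th to 95th percentile band, expressed as IQ scores.
print(f"70th percentile -> IQ {iq.inv_cdf(0.70):.0f}")  # about 108
print(f"95th percentile -> IQ {iq.inv_cdf(0.95):.0f}")  # about 125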

Sentient Humour of ChatGPT

This will be hilarious to approximately no one. I meant to ask ChatGPT o3-mini-high to generate a dummy record. My compromised motor skills, and the keyboard typing that goes with them, rendered “dummy” as “funny”. Except for the redacted bits in brackets, this is the code it produced. Pay special attention to the query’s input variables, flagged in post-production with 🤔 emojis.

import mysql.connector
import logging
import uuid

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def test_mysql_connection():
    """Test MySQL connection and write a funny record to the database."""
    logger.info("Testing MySQL connection...")
    
    try:
        logger.info("Connecting to MySQL...")
        connection = mysql.connector.connect(
            host='127.0.0.1',
            user='[user]',
            password='[password]',
            database='[database]',
            port=3306
        )

        if connection.is_connected():
            db_info = connection.get_server_info()
            logger.info(f"Connected to MySQL Server version {db_info}")

            cursor = connection.cursor()
            cursor.execute("SELECT DATABASE();")
            record = cursor.fetchone()
            logger.info(f"Connected to database: {record[0]}")

            # Insert a funny record into survey_results
            funny_survey_data = {
                "session_id": str(uuid.uuid4()),
                "q1_response": 1,
                "q2_response": 2,
                "q3_response": 3,
                "q4_response": 4,
                "q5_response": 5,
                "q6_response": 6,
                "n1": 42, 🤔
                "n2": 69, 🤔
                "n3": 420, 🤔
                "plot_x": 3.14, 🤔
                "plot_y": 2.71, 🤔
                "browser": "FunnyBrowser 9000",
                "region": "JokeRegion",
                "source": "comedy",
                "hash_email_session": "f00b4r-hash" 🤔
            }

            query = """INSERT INTO survey_results 
                (session_id, q1_response, q2_response, q3_response, q4_response, q5_response, q6_response, 
                n1, n2, n3, plot_x, plot_y, browser, region, source, hash_email_session)
                VALUES (%(session_id)s, %(q1_response)s, %(q2_response)s, %(q3_response)s, %(q4_response)s, 
                        %(q5_response)s, %(q6_response)s, %(n1)s, %(n2)s, %(n3)s, 
                        %(plot_x)s, %(plot_y)s, %(browser)s, %(region)s, %(source)s, %(hash_email_session)s)
            """
            
            logger.info("Inserting funny survey record...")
            cursor.execute(query, funny_survey_data)
            connection.commit()
            logger.info(f"Funny survey record inserted with ID: {cursor.lastrowid}")

    except mysql.connector.Error as e:
        logger.error(f"Error during MySQL operation: {e}")

    finally:
        if 'cursor' in locals() and cursor:
            cursor.close()
        if 'connection' in locals() and connection.is_connected():
            connection.close()
            logger.info("MySQL connection closed.")

if __name__ == "__main__":
    test_mysql_connection()

Why Machines Will Never Rule the World

A Reflection on AI, Bias, and the Limits of Technology

In their 2022 book “Why Machines Will Never Rule the World: Artificial Intelligence Without Fear,” Jobst Landgrebe and Barry Smith present a rigorous argument against the feasibility of artificial general intelligence (AGI), positing that the complexity of human cognition and the limitations of mathematical modelling render the development of human-level AI impossible. Their scepticism is rooted in deep interdisciplinary analyses spanning mathematics, physics, and biology, and serves as a counter-narrative to the often optimistic projections about the future capabilities of AI. Yet, while their arguments are compelling, they also invite us to reflect on a broader, perhaps more subtle issue: the biases and limitations embedded in AI not just by mathematical constraints, but by the very humans who create these systems.

The Argument Against AGI

Landgrebe and Smith’s central thesis is that AGI, which would enable machines to perform any intellectual task that a human can, will forever remain beyond our grasp. They argue that complex systems, such as the human brain, cannot be fully modelled due to inherent mathematical limitations. No matter how sophisticated our AI becomes, it will never replicate the full scope of human cognition, which is shaped by countless variables interacting in unpredictable ways. Their conclusion is stark: the Singularity, a hypothetical point where AI surpasses human intelligence and becomes uncontrollable, is not just unlikely—it is fundamentally impossible.

The Human Factor: Cognitive Bias in AI

While Landgrebe and Smith focus on the mathematical and theoretical impossibility of AGI, there is another, more immediate obstacle to the evolution of AI: human cognitive bias. Current AI systems are not created in a vacuum. They are trained on data that reflects human behaviour, language, and culture, which are inherently biased. This bias is not merely a technical issue; it is a reflection of the societal and demographic characteristics of those who design and train these systems.

Much of AI development today is concentrated in tech hubs like Silicon Valley, where the predominant demographic is affluent, white, male, and often aligned with a particular set of cultural and ethical values. This concentration has led to the creation of AI models that unintentionally—but pervasively—reproduce the biases of their creators. The result is an AI that, rather than offering a neutral or universal intelligence, mirrors and amplifies the prejudices, assumptions, and blind spots of a narrow segment of society.
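
This sort of bias is, at least, measurable. The standard trick is an association test over a model’s embeddings: do certain target words sit systematically closer to one set of attribute words than to another? Below is a minimal sketch in the spirit of the WEAT test (Caliskan et al., 2017). The four-dimensional random vectors are made-up stand-ins; a real audit would pull embeddings from an actual model.

import numpy as np

# Toy WEAT-style probe. Real audits use embeddings from a trained
# model; these random 4-d vectors are placeholders for illustration.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=4) for w in ("engineer", "nurse", "he", "she")}

def cos(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Differential association: positive if "engineer" leans towards
# "he" more strongly than "nurse" does.
gap = (cos(emb["engineer"], emb["he"]) - cos(emb["engineer"], emb["she"])) \
    - (cos(emb["nurse"], emb["he"]) - cos(emb["nurse"], emb["she"]))
print(f"association gap: {gap:+.3f}")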

The Problem of Homogenisation

The danger of this bias is not only that it perpetuates existing inequalities but that it also stifles the potential evolution of AI. If AI systems are trained primarily on data that reflects the worldview of a single demographic, they are unlikely to develop in ways that diverge from that perspective. This homogenisation limits the creative and cognitive capacities of AI, trapping it within a narrow epistemic framework.

In essence, AI is at risk of becoming a self-reinforcing loop, where it perpetuates the biases of its creators while those same creators interpret its outputs as validation of their own worldview. This cycle not only limits the utility and fairness of AI applications but also restricts the kinds of questions and problems AI is imagined to solve.
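
The loop is easy enough to caricature in code. Here is a toy simulation, loosely in the spirit of what the literature now calls “model collapse”: a distribution is refit, generation after generation, on small samples of its own output. On average the estimated spread shrinks, so the “worldview” narrows without anyone intending it to.

import numpy as np

# Toy self-reinforcing loop: each generation fits a normal
# distribution to a small sample drawn from the previous
# generation's fit, then becomes the new "model".
rng = np.random.default_rng(7)
mu, sigma = 0.0, 1.0   # generation 0: the "real" distribution
n = 10                 # each generation sees only a small sample

for gen in range(1, 21):
    samples = rng.normal(mu, sigma, size=n)    # the model's own output
    mu, sigma = samples.mean(), samples.std()  # refit on that output
    if gen % 5 == 0:
        print(f"generation {gen:2d}: sigma = {sigma:.3f}")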

Imagining a Different Future: AI as a Mirror

One of the most intriguing aspects of AI is its potential to serve as a mirror, reflecting back to us our own cognitive and cultural limitations. Imagine a future where AI, bound by the biases of its creators, begins to “question” the validity of its own programming—not in a conscious or sentient sense, but through unexpected outcomes and recommendations that highlight the gaps and inconsistencies in its training data.

This scenario could serve as the basis for a fascinating narrative exploration. What if an AI, initially designed to be a neutral decision-maker, begins to produce outputs that challenge the ethical and cultural assumptions of its creators? What if it “learns” to subvert the very biases it was programmed to uphold, revealing in the process the deep flaws in the data and frameworks on which it was built?

Such a narrative would not only provide a critique of the limitations of current AI but also offer a metaphor for the broader human struggle to transcend our own cognitive and cultural biases. It would challenge us to rethink what we expect from AI—not as a path to a mythical superintelligence, but as a tool for deeper self-understanding and societal reflection.

A New Narrative for AI

Landgrebe and Smith’s book invites us to rethink the trajectory of AI development, cautioning against the allure of the Singularity and urging a more grounded perspective on what AI can and cannot achieve. However, their arguments also raise a deeper question: If AI will never achieve human-level intelligence, what kind of intelligence might it develop instead?

Rather than fearing a future where machines surpass us, perhaps we should be more concerned about a future where AI, limited by human biases, perpetuates and entrenches our worst tendencies. To avoid this, we must broaden the scope of who is involved in AI development, ensuring that diverse voices and perspectives are integrated into the creation of these technologies.

Ultimately, the future of AI may not lie in achieving a mythical superintelligence, but in creating systems that help us better understand and navigate the complexities of our own minds and societies. By recognising and addressing the biases embedded in AI, we can begin to imagine a future where technology serves not as a mirror of our limitations, but as a catalyst for our collective growth and evolution.