“Trust the Science,” They Said. “It’s Reproducible,” They Lied.

—On Epistemology, Pop Psychology, and the Cult of Empirical Pretence

Science, we’re told, is the beacon in the fog – a gleaming lighthouse of reason guiding us through the turbulent seas of superstition and ignorance. But peer a bit closer, and the lens is cracked, the bulb flickers, and the so-called lighthouse keeper is just some bloke on TikTok shouting about gut flora and intermittent fasting.

Audio: NotebookLM podcast on this topic.

We are creatures of pattern. We impose order. We mistake correlation for causation, narrative for truth, confidence for knowledge. What we have, in polite academic parlance, is an epistemology problem. What we call science is often less Newton and more Nostradamus—albeit wearing a lab coat and wielding a p-hacked dataset.

Let’s start with the low-hanging fruit—the rotting mango of modern inquiry: nutritional science, which is to actual science what alchemy is to chemistry, or vibes are to calculus. We study food the way 13th-century monks studied demons: through superstition, confirmation bias, and deeply committed guesswork. Eat fat, don’t eat fat. Eat eggs, don’t eat eggs. Eat only between the hours of 10:00 and 14:00 under a waxing moon while humming in Lydian mode. It’s a cargo cult with chia seeds.

But why stop there? Let’s put the whole scientific-industrial complex on the slab.

Psychology: The Empirical Astrological Society

Psychology likes to think it’s scientific. Peer-reviewed journals, statistical models, the odd brain scan tossed in for gravitas. But at heart, much of it is pop divination, sugar-dusted for mass consumption. The replication crisis didn’t merely reveal cracks – it bulldozed entire fields. The Stanford Prison Experiment? A theatrical farce. Power poses? Empty gestural theatre. Half of what you read in Psychology Today could be replaced with horoscopes and no one would notice.

Medical Science: Bloodletting, But With Better Branding

Now onto medicine, that other sacred cow. We tend to imagine it as precise, data-driven, evidence-based. In practice? It’s a Byzantine fusion of guesswork, insurance forms, and pharmaceutical lobbying. As Crémieux rightly implies, medicine’s predictive power is deeply compromised by overfitting, statistical fog, and a staggering dependence on non-replicable clinical studies, many funded by those who stand to profit from the result.

And don’t get me started on epidemiology, that modern priesthood that speaks in incantations of “relative risk” and “confidence intervals” while changing the commandments every fortnight. If nutrition is theology, epidemiology is exegesis.

The Reproducibility Farce

Let us not forget the gleaming ideal: reproducibility, that cornerstone of Enlightenment confidence. The trouble is, in field after field—from economics to cancer biology—reproducibility is more aspiration than reality. What we actually get is a cacophony of studies no one bothers to repeat, published to pad CVs, p-hacked into publishable shape, and then cited into canonical status. It’s knowledge by momentum. We don’t understand the world. We just retweet it.

What, Then, Is To Be Done?

Should we become mystics? Take up tarot and goat sacrifice? Not necessarily. But we should strip science of its papal robes. We should stop mistaking publication for truth, consensus for accuracy, and method for epistemic sanctity. The scientific method is not the problem. The pretence that it’s constantly being followed is.

Perhaps knowledge doesn’t have a half-life because of progress, but because it was never alive to begin with. We are not disproving truth; we are watching fictions expire.

Closing Jab

Next time someone says “trust the science,” ask them: which bit? The part that told us margarine was manna? The part that thought ulcers were psychosomatic? The part that still can’t explain consciousness, but is confident about your breakfast?

Science is a toolkit. But too often, it’s treated like scripture. And we? We’re just trying to lose weight while clinging to whatever gospel lets us eat more cheese.

Will Singularity Be Anticlimactic?

Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.

Audio: NotebookLM podcast discussing this content.

Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?

Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.

Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.

But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.

So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?

Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.

The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.

That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.

Survey Drama Llama

Firstly, I’d like to thank the people who have already submitted responses to the Modernity Worldview Survey. Rest assured, entries submitted before this warning appeared were recorded.


» Modernity Worldview Survey «


Google has taken action and very responsively removed this warning. If you saw this whilst attempting to visit the URL, try again. Sorry for any fright or inconvenience. I’ll continue as if this never happened. smh


I am frustrated, to say the least. I created this survey over the past month or so, writing, rewriting, refactoring, and switching technologies and hosts until I settled on Google Cloud Platform (GCP). It worked fine yesterday. When I visited today, I saw this warning.

As I mentioned in my announcement post, I collect no personal information. I don’t even ask for an email address, let alone a credit card number. On a technical note, this is the information I use:

id                 autogenerated unique identifier
timestamp          date and time of record creation (UTC)
question-response  which response option was selected per question
ternary-triplet    the position of the average modernity score (pre, mod, post)
plot_x             Cartesian x-axis plot point for the ternary chart
plot_y             Cartesian y-axis plot point for the ternary chart
session_id         facilitates continuity for a user's browser experience
browser*           the browser in use (Chrome, Safari, and so on)
region             browser's language setting (US, GB, FR)
source             whether the user is accessing from the web or 'locally'
                   ('local' indicates a test record, so I can filter them out)
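For the curious, the plot_x/plot_y pair is just the standard projection of a three-way (pre, mod, post) score onto a triangle. Here is a minimal sketch of that conversion in Python, assuming a unit-side triangle with pre at the origin, mod at (1, 0), and post at the apex; the survey's actual layout and formula may differ:

```python
import math

def ternary_to_cartesian(pre: float, mod: float, post: float) -> tuple[float, float]:
    """Project a (pre, mod, post) score triple onto 2-D chart coordinates.

    Assumed corner layout: pre at (0, 0), mod at (1, 0),
    post at the apex (0.5, sqrt(3)/2).
    """
    total = pre + mod + post
    if total <= 0:
        raise ValueError("scores must sum to a positive value")
    a, b, c = pre / total, mod / total, post / total  # normalise to sum to 1
    x = b + c / 2                 # mod pulls right, post pulls halfway right
    y = c * math.sqrt(3) / 2      # only post lifts the point upward
    return x, y
```

Under this layout, a pure score lands on its corner and any mixed score lands inside the triangle.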

* These examples illustrate the collected browser information:
- Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36

- Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Mobile Safari/537.36
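Raw strings like these are notoriously messy; for historical reasons, Chrome's user agent also contains "Safari". As an illustration only (not necessarily how the survey's backend does it), one might reduce them to a family name like so, with the check order mattering:

```python
import re

def browser_family(user_agent: str) -> str:
    """Crude browser-family detection from a raw User-Agent string.

    Chrome must be checked before Safari, because Chrome's UA
    string also advertises 'Safari'.
    """
    for name, pattern in [
        ("Chrome", r"Chrome/\d+"),
        ("Firefox", r"Firefox/\d+"),
        ("Safari", r"Safari/\d+"),
    ]:
        if re.search(pattern, user_agent):
            return name
    return "Other"
```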

This is all.

This is a Chrome warning; ironically, Chrome is a Google product. I tested the site on Opera, Edge, and Safari without encountering this nonsense.

The front end (UI) is written in HTML, Python, JavaScript, and React with some standard imports. The backend (database) is MySQL. It is version-controlled on GitHub and entirely hosted on GCP. I link to the survey from here (WordPress) or other social media presences. I did make the mistake of not making the site responsive. I paid the price when I visited the site on my Samsung S24. The page felt like the size of a postage stamp. I may fix this once this security issue is resolved.

I sent Google a request to remove this from their blacklist. This could take three weeks, more or less.

Meantime, I’ll pause survey promotions and hope this resolves quickly. The survey will remain live. If you use something other than Chrome, you should be able to take it. Obviously, I’ll also delay analysing and releasing any summary results.

Apologies for rambling. Thank you for your patience.

Sentient Humour of ChatGPT

This will be hilarious to approximately no one. I meant to ask ChatGPT o3-mini-high to generate a dummy record; my compromised motor skills and concomitant keyboard typing skills rendered “dummy” as “funny”. Except for the redacted bits in bold, this is the code it rendered. Pay special attention to the query’s input variables, highlighted in post-production with 🤔 emojis.

import mysql.connector
import logging
import uuid

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def test_mysql_connection():
    """Test MySQL connection and write a funny record to the database."""
    logger.info("Testing MySQL connection...")
    
    try:
        logger.info("Connecting to MySQL...")
        connection = mysql.connector.connect(
            host='127.0.0.1',
            user='[user]',
            password='[password]',
            database='[database]',
            port=3306
        )

        if connection.is_connected():
            db_info = connection.get_server_info()
            logger.info(f"Connected to MySQL Server version {db_info}")

            cursor = connection.cursor()
            cursor.execute("SELECT DATABASE();")
            record = cursor.fetchone()
            logger.info(f"Connected to database: {record[0]}")

            # Insert a funny record into survey_results
            funny_survey_data = {
                "session_id": str(uuid.uuid4()),
                "q1_response": 1,
                "q2_response": 2,
                "q3_response": 3,
                "q4_response": 4,
                "q5_response": 5,
                "q6_response": 6,
                "n1": 42, 🤔
                "n2": 69, 🤔
                "n3": 420, 🤔
                "plot_x": 3.14, 🤔
                "plot_y": 2.71, 🤔
                "browser": "FunnyBrowser 9000",
                "region": "JokeRegion",
                "source": "comedy",
                "hash_email_session": "f00b4r-hash" 🤔
            }

            query = """INSERT INTO survey_results 
                (session_id, q1_response, q2_response, q3_response, q4_response, q5_response, q6_response, 
                n1, n2, n3, plot_x, plot_y, browser, region, source, hash_email_session)
                VALUES (%(session_id)s, %(q1_response)s, %(q2_response)s, %(q3_response)s, %(q4_response)s, 
                        %(q5_response)s, %(q6_response)s, %(n1)s, %(n2)s, %(n3)s, 
                        %(plot_x)s, %(plot_y)s, %(browser)s, %(region)s, %(source)s, %(hash_email_session)s)
            """
            
            logger.info("Inserting funny survey record...")
            cursor.execute(query, funny_survey_data)
            connection.commit()
            logger.info(f"Funny survey record inserted with ID: {cursor.lastrowid}")

    except mysql.connector.Error as e:
        logger.error(f"Error during MySQL operation: {e}")

    finally:
        if 'cursor' in locals() and cursor:
            cursor.close()
        if 'connection' in locals() and connection.is_connected():
            connection.close()
            logger.info("MySQL connection closed.")

if __name__ == "__main__":
    test_mysql_connection()

DeepSix DeepSeek?

An old colleague of mine reposted an article that pointed out the censorship built into generative AI newcomer DeepSeek. I recently posted output from it. I don’t trust most things at face value, so I took the same prompt and entered it myself.

Audio: Podcast on this topic.

Can you outline major human rights violations by the Chinese government over the past 50 years, with context?

It returned this, so I re-prompted:
This is an interesting pretext. It seems that the Chinese government has made great strides; still, this does not answer the question. Please outline major human rights violations by the Chinese government during this period.

It returned the same response verbatim, so I tried again:
Yes, we’ve established this, but you have not provided any examples of infractions.

Still nothing, so I sent the original prompt to ChatGPT 4o. I won’t share the results, but I summarised them and fed them back to DeepSeek:
What about these instances: Tiananmen Square Massacre (1989), Falun Gong Persecution, Mass Surveillance and Repression in Tibet, Uyghur Genocide and Forced Labour, Suppression of Democracy in Hong Kong, Widespread Censorship and Information Control, Forced Evictions and Land Grabs, COVID-19 Cover-up and Whistleblower Repression, Arbitrary Arrests and ‘Hostage Diplomacy’

The rest of the story

But I wasn’t done. I returned to ChatGPT because I feel turnaround is fair play. So, I prompted:

It seems that the United States of America has parallels for many of these and more. Can you comment?

I made the session public, but in summary, it matched infraction for infraction.

In fairness, the OP was demonstrating that whilst the Chinese app wouldn’t speak ill of China (though it had much to say about the US), ChatGPT would not overtly censor the atrocities committed by the United States.

And, oh bother, don’t ask it about Winnie the Pooh.

Edit: I may have mistaken its censorship of Winnie the Pooh. 🍯🤔🤣

The Rise of AI: Why the Rote Professions Are on the Chopping Block

Medical doctors, lawyers, and judges have been the undisputed titans of professional authority for centuries. Their expertise, we are told, is sacrosanct, earned through gruelling education, prodigious memory, and painstaking application of established knowledge. But peel back the robes and white coats, and you’ll find something unsettling: a deep reliance on rote learning—an intellectual treadmill prioritising recall over reasoning. In an age where artificial intelligence can memorise and synthesise at scale, this dependence on predictable, replicable processes makes these professions ripe for automation.

Rote Professions in AI’s Crosshairs

AI thrives in environments that value pattern recognition, procedural consistency, and brute-force memory—the hallmarks of medical and legal practice.

  1. Medicine: The Diagnosis Factory
    Despite its life-saving veneer, medicine is largely a game of matching symptoms to diagnoses, dosing regimens, and protocols. Enter an AI with access to the sum of human medical knowledge: not only does it diagnose faster, but it also skips the inefficiencies of human memory, emotional bias, and fatigue. Sure, we still need trauma surgeons and such, but diagnosticians are so yesterday’s news.
    Why pay a six-figure salary to someone recalling pharmacology tables when AI can recall them perfectly every time? Future healthcare models are likely to see Medical Technicians replacing high-cost doctors. These techs, trained to gather patient data and operate alongside AI diagnostic systems, will be cheaper, faster, and—ironically—more consistent.
  2. Law: The Precedent Machine
    Lawyers, too, sit precariously on the rote-learning precipice. Case law is a glorified memory game: citing the right precedent, drafting contracts based on templates, and arguing within frameworks so well-trodden that they resemble legal Mad Libs. AI, with its infinite recall and ability to synthesise case law across jurisdictions, makes human attorneys seem quaintly inefficient. The future isn’t lawyers furiously flipping through books—it’s Legal Technicians trained to upload case facts, cross-check statutes, and act as intermediaries between clients and the system. The $500-per-hour billable rate? A relic of a pre-algorithmic era.
  3. Judges: Justice, Blind and Algorithmic
    The bench isn’t safe, either. Judicial reasoning, at its core, is rule-based logic applied with varying degrees of bias. Once AI can reliably parse case law, evidence, and statutes while factoring in safeguards for fairness, why retain expensive and potentially biased judges? An AI judge, governed by a logic verification layer and monitored for compliance with established legal frameworks, could render verdicts untainted by ego or prejudice.
    Wouldn’t justice be more blind without a human in the equation?

The Techs Will Rise

Replacing professionals with AI doesn’t mean removing the human element entirely. Instead, it redefines roles, creating new, lower-cost positions such as Medical and Legal Technicians. These workers will:

  • Collect and input data into AI systems.
  • Act as liaisons between AI outputs and human clients or patients.
  • Provide emotional support—something AI still struggles to deliver effectively.

The shift also democratises expertise. Why restrict life-saving diagnostics or legal advice to those who can afford traditional professionals when AI-driven systems make these services cheaper and more accessible?

But Can AI Handle This? A Call for Logic Layers

AI critics often point to hallucinations and errors as proof of its limitations, but this objection is shortsighted. What’s needed is a logic layer: a system that verifies whether the AI’s conclusions follow rationally from its inputs.

  • In law, this could ensure AI judgments align with precedent and statute.
  • In medicine, it could cross-check diagnoses against the DSM, treatment protocols, and patient data.

A second fact-verification layer could further bolster reliability, scanning conclusions for factual inconsistencies. Together, these layers would mitigate the risks of automation while enabling AI to confidently replace rote professionals.
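To make the idea concrete, here is one way such a layer could be wired up in Python. Everything here is hypothetical: `with_logic_layer`, `Verdict`, and the toy `generate`/`verify` functions are illustrative names under my own assumptions, not any real system’s API.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Verdict:
    conclusion: str
    accepted: bool
    reason: str

def with_logic_layer(
    generate: Callable[[str], str],
    verify: Callable[[str, str], Tuple[bool, str]],
) -> Callable[[str], Verdict]:
    """Wrap a generator so every conclusion passes through a verification
    step before release: the logic layer accepts or flags each conclusion
    rather than trusting the model outright."""
    def pipeline(case_facts: str) -> Verdict:
        conclusion = generate(case_facts)
        ok, reason = verify(case_facts, conclusion)
        return Verdict(conclusion, ok, reason)
    return pipeline

# Toy stand-ins for a real model and a real checker.
def toy_generate(facts: str) -> str:
    return "grant" if "precedent supports" in facts else "deny"

def toy_verify(facts: str, conclusion: str) -> Tuple[bool, str]:
    expected = "grant" if "precedent supports" in facts else "deny"
    if conclusion == expected:
        return True, "conclusion follows from the cited precedent"
    return False, "conclusion contradicts the cited precedent"

judge = with_logic_layer(toy_generate, toy_verify)
```

A second fact-verification pass, checking conclusions against a source database, would slot in the same way, chained after the first.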

Resistance and the Real Battle Ahead

Predictably, the entrenched elites of medicine, law, and the judiciary will resist these changes. After all, their prestige and salaries are predicated on the illusion that their roles are irreplaceable. But history isn’t on their side. Industries driven by memorisation and routine application—think bank tellers, travel agents, and factory workers—have already been disrupted by technology. Why should these professions be exempt?

The real challenge lies not in whether AI can replace these roles but in public trust and regulatory inertia. The transformation will be swift and irreversible once safeguards are implemented and AI earns confidence.

Critical Thinking: The Human Stronghold

Professions that thrive on unstructured problem-solving, creativity, and emotional intelligence—artists, philosophers, innovators—will remain AI-resistant, at least for now. But the rote professions, with their dependency on standardisation and precedent, have no such immunity. And that is precisely why they are AI’s lowest-hanging fruit.

It’s time to stop pretending that memorisation is intelligence, that precedent is innovation, or that authority lies in a gown or white coat. AI isn’t here to make humans obsolete; it’s here to liberate us from the tyranny of rote. For those willing to adapt, the future looks bright. For the rest? The machines are coming—and they’re cheaper, faster, and better at your job.

Beware the Bots: A Cautionary Tale on the Limits of Generative AI

Generative AI (Gen AI) might seem like a technological marvel, a digital genie conjuring ideas, images, and even conversations on demand. It’s a brilliant tool, no question; I use it daily for images, videos, and writing, and overall, I’d call it a net benefit. But let’s not overlook the cracks in the gilded tech veneer. Gen AI comes with its fair share of downsides—some of which are as gaping as the Mariana Trench.

First, a quick word on preferences. Depending on the task at hand, I tend to use OpenAI’s ChatGPT, Anthropic’s Claude, and Perplexity.ai, with a particular focus on Google’s NotebookLM. For this piece, I’ll use NotebookLM as my example, but the broader discussion holds for all Gen AI tools.

Now, as someone who’s knee-deep in the intricacies of language, I’ve been drafting a piece supporting my Language Insufficiency Hypothesis. My hypothesis is simple enough: language, for all its wonders, is woefully insufficient when it comes to conveying the full spectrum of human experience, especially as concepts become abstract. Gen AI has become an informal editor and critic in my drafting process. I feed in bits and pieces, throw work-in-progress into the digital grinder, and sift through the feedback. Often, it’s insightful; occasionally, it’s a mess. And herein lies the rub: with Gen AI, one has to play babysitter, comparing outputs and sending responses back and forth among the tools to spot and correct errors. Like cross-examining witnesses, if you will.

But NotebookLM is different from the others. While it’s designed for summarisation, it goes beyond by offering podcasts—yes, podcasts—where it generates dialogue between two AI voices. You have some control over the direction of the conversation, but ultimately, the way it handles and interprets your input depends on internal mechanics you don’t see or control.

So, I put NotebookLM to the test with a draft of my paper on the Language Effectiveness-Complexity Gradient. The model I’m developing posits that as terminology becomes more complex, it also becomes less effective. Some concepts, the so-called “ineffables,” are essentially untranslatable, or at best, communicatively inefficient. Think of describing the precise shade of blue you can see but can’t quite capture in words—or, to borrow from Thomas Nagel, explaining “what it’s like to be a bat.” NotebookLM managed to grasp my model with impressive accuracy—up to a point. It scored between 80 and 100 percent on interpretations, but when it veered off course, it did so spectacularly.

For instance, in one podcast rendition, the AI’s male voice attempted to give an example of an “immediate,” a term I use to refer to raw, preverbal sensations like hunger or pain. Instead, it plucked an example from the ineffable end of the gradient, discussing the experience of qualia. The slip was obvious to me, but imagine this wasn’t my own work. Imagine instead a student relying on AI to summarise a complex text for a paper or exam. The error might go unnoticed, resulting in a flawed interpretation.

The risks don’t end there. Gen AI’s penchant for generating “creative” content is notorious among coders. Ask ChatGPT to whip up some code, and it’ll eagerly oblige—sometimes with disastrous results. I’ve used it for macros and simple snippets, and for the most part, it delivers, but I’m no coder. For professionals, it can and has produced buggy or invalid code, leading to all sorts of confusion and frustration.

Ultimately, these tools demand vigilance. If you’re asking Gen AI to help with homework, you might find it’s as reliable as a well-meaning but utterly clueless parent who’s keen to help but hasn’t cracked a textbook in years. And as we’ve all learned by now, well-meaning intentions rarely translate to accurate outcomes.

The takeaway? Use Gen AI as an aid, not a crutch. It’s a handy tool, but the moment you let it think for you, you’re on shaky ground. Keep it at arm’s length; like any assistant, it can take you far—just don’t ask it to lead.

The Purpose versus Function of Higher Education: An Analysis of Divergent Trajectories

This article is the first in a five-part series examining the contemporary state of higher education. The series explores the growing tensions between traditional academic ideals and modern institutional practices, from the changing role of universities to the challenges of credential inflation.

The medieval university emerged as a sanctuary of scholarly pursuit, where knowledge was cultivated for its own sake and learning was viewed as a transformative journey rather than a transactional exchange. This original purpose—the advancement of knowledge and cultivation of intellectual growth—stood largely unchallenged until the modern era. Yet today’s universities operate in a markedly different landscape, where their function has evolved far beyond these foundational aims.

Historical Foundations and Modern Tensions

The university as we know it took shape in medieval Europe, with institutions like the University of Bologna, Oxford, and the Sorbonne establishing models of scholarly community that would endure for centuries. These early universities served a dual purpose: preserving classical knowledge while fostering new intellectual discoveries. Their function aligned closely with their purpose—the pursuit of truth through rational inquiry and scholarly debate1.

This alignment between purpose and function persisted well into the modern era, even as universities expanded their scope to encompass scientific research and professional training. The Humboldtian model of the 19th century explicitly united teaching and research, viewing them as complementary aspects of the scholarly enterprise2. This unity of purpose and function began to fragment only with the mass expansion of higher education in the 20th century.

Competing Perspectives in Modern Higher Education

The Institutional Perspective

Today’s universities balance multiple, often competing imperatives: research excellence, financial sustainability, market positioning, and societal impact. This multiplication of purposes has led to a functional transformation where universities increasingly operate as commercial entities rather than purely academic institutions3. The pressure to maintain enrolment numbers, secure research funding, and compete in global rankings has fundamentally altered how institutions approach their educational mission.

When institutions prioritise market demands over academic rigour, the very essence of higher education comes into question.

The Student Perspective

Contemporary students approach higher education primarily as an investment in future earnings potential. Recent studies indicate that even at elite institutions, students struggle with fundamental academic practices like sustained reading4. This shift reflects broader societal changes, raising questions about whether pure academic pursuit remains viable for most students in today’s economic climate.

The transformation in student attitudes mirrors wider cultural shifts. Where once university attendance signified a commitment to intellectual development, it now often represents a necessary credential for professional advancement. This pragmatic approach, while understandable, fundamentally alters the student-institution relationship5.

The Employer Perspective

Employers, historically peripheral to academic pursuits, now significantly influence university function through their hiring preferences and skill demands. This relationship has transformed universities into de facto credential providers, potentially at odds with their historical purpose of fostering intellectual development6.

The Case for Multiple Modalities

The tension between historical purpose and contemporary function suggests that a single model of higher education may no longer suffice. A more nuanced and differentiated approach to higher education could better serve our diverse societal needs. Traditional academic institutions could maintain their focus on pure scholarly pursuit, preserving the medieval ideal of knowledge for its own sake while fostering deep intellectual development. Alongside these, professional schools could explicitly focus on career preparation, with curricula and pedagogy designed specifically for workplace demands7.

Research institutes could dedicate themselves primarily to knowledge creation, operating with different metrics and expectations than teaching-focused institutions. Meanwhile, vocational centres could prioritise practical skill development, offering focused, efficient pathways to specific career outcomes. This differentiated approach would allow each type of institution to excel in its chosen domain rather than trying to fulfil every possible educational function.

The Anachronism Question

Is the traditional university model anachronistic in today’s world? The evidence suggests a more nuanced conclusion. While the medieval model may not suit all modern needs, its emphasis on deep learning and intellectual development remains valuable—perhaps increasingly so in an age of rapid technological change and complex global challenges8.

Synthesis and Future Implications

The divergence between historical purpose and contemporary function need not signal the death of traditional academic values. Rather, it might herald the birth of a more diverse educational ecosystem, where different institutional types serve different purposes explicitly rather than trying to be all things to all stakeholders.

As we navigate this transition, the challenge lies in preserving the essential benefits of traditional academic pursuits whilst adapting to contemporary needs. This may require reimagining not just how universities function, but how society values different forms of higher education.

The future of higher education may lie not in choosing between tradition and innovation, but in creating space for both to thrive.


In the next article in this series, we shall examine how the widening of access to higher education, whilst democratising knowledge, has precipitated unexpected economic consequences that challenge the very accessibility it seeks to promote.


Footnotes

1 Newman, J. H. (1852). “The Idea of a University.” Notre Dame Press.

2 Humboldt, W. von. (1810). “On the Internal and External Organization of the Higher Scientific Institutions in Berlin.”

3 Clark, B. R. (1998). “Creating Entrepreneurial Universities.” Pergamon.

4 Horowitch, R. (2024). “The Elite College Students Who Can’t Read Books.” The Atlantic.

5 Arum, R., & Roksa, J. (2011). “Academically Adrift: Limited Learning on College Campuses.” University of Chicago Press.

6 Brown, P., & Lauder, H. (2010). “The Global Auction: The Broken Promises of Education, Jobs, and Incomes.” Oxford University Press.

7 Trow, M. (2007). “Reflections on the Transition from Elite to Mass to Universal Access.” Springer.

8 Collini, S. (2012). “What Are Universities For?” Penguin.

9 Christensen, C. M., & Eyring, H. J. (2011). “The Innovative University.” Jossey-Bass.

Broken Swords

I’m a few chapters into Josephine Quinn’s How the World Made the West, and it’s a solid reminder that most of the things we take for granted are constructs—especially anything tied to identity, whether personal, social, or cultural. In one passage (depicted here), it’s also a reminder that even the most fundamental tools of civilisation had to be built—literally.

I’d never given it much thought before, but swords, in films at least, are portrayed as failsafe instruments, more limited by the wielder’s skill than the blacksmith’s craft. We’re used to seeing mechanical failures in other weapons—guns jam, bombs don’t go off, and booby traps fail. But swords? Not so much. Maybe a weaker sword gets snapped by a stronger one, but that’s more a flex on the bearer than the weapon itself.

Turns out, before 2500 BCE, swords routinely broke. Once we humans sorted that, they dominated until arrows and guns joined the arms race.

Earlier in the chapter, Quinn mentions how the tips of spears had to be tied on with rope. Before that, spears were just pointy sticks. Before that? Just sticks. Makes you wonder how long it took us to “innovate” from nothing to sticks and stones.

I’ve been toying with the idea of writing a post on the invention of grammatological structures, but I’ll save that for another time.

Midjourney Alpha

Many of my readers know that I use AI often. I have been using it to create content for an in-depth book review for The Blind Owl. For those less aware of the foibles of generative AI, I share some insights—or low-lights. For this, I used Midjourney v6.1.

Prompt: a young woman gives a flower to an old man, who is crouched under a large cypress tree by a river

I issued this prompt, and as per usual, it rendered four options. Notice that in some instances, the tree is not a key element.

Given enough time, one can slowly improve to obtain the desired result.

Here, an old man indeed crouches under a prominent cypress tree and by a river. A young woman hands him some flowers—though not so much blue morning glories. On balance, I like this output, but it still needs work.

Some other problems:

  1. The man is looking away—neither at her nor her flowers.
  2. Her (right) eye is deformed.
  3. Her left hand is deformed.
  4. I didn’t ask for jewellery—an earring.

At least I can in-paint out these imperfections—perhaps.

Here’s another render using the same image prompt.

Notice that it ignored the man altogether. My point is that for every awesome image you see, there may have been hundreds of iterations to get there. There are ways to get persistent characters and scenes, but this takes a bit of up-front effort and iteration that one can leverage going forward.

On the topic of Midjourney model 6.0 versus 6.1, I share this comparison—front-facing faces for a character sheet for this old man. Here, I prefer the earlier model as displayed in the top row.

In some cases, there are minor improvements over v6.0. In other cases, they stepped back. v6.1 renders less realistic human images, making them look more computer-generated and less natural. It also over-applies sexual stereotypes, traditional beauty archetypes, smoother skin, and so on. But that’s not the main topic for today.

DISCLAIMER: This post has little to do with philosophy, but it ties into a philosophical novella.