I posted a video on YouTube that I shared here. They’ve added some AI to the YouTube Studio interface.
Image: YouTube Studio’s Inspiration Page. Thanks, but no thanks.
On the previous page, the prompt window (top right) asked if I wanted to know how my video was performing versus the baseline. I affirmed, and it spat out results. Brilliant.
I noticed a handful of ‘inspiration items’. None looked particularly interesting, but I have a nostalgia for Trolley Problems™. A few years ago, I would have jumped on the idea. Nowadays, I’ve seen hundreds of variations, and I’ve lost interest. However, being on familiar ground, I clicked on it to see what would happen. The result is the screenshot above.
Not only is the response templated with thumbnails, but AI is also ready to write the script. At this rate, why doesn’t YouTube just create ideas and generate them itself – like Spotify or Suno? It may just be a matter of time.
I am a heavy user of AI, but I lead the conversation. I am an author, and a reason I don’t join writers’ groups – I’ve attended some – is that I don’t need help with topics. I don’t get writer’s block. I just need the time and focus to get it out. I suppose that one day the creative well could run dry, but I don’t do this for commercial gain. Sure, that happens, but it’s not my goal. My goal is to write to share and exchange ideas.
I have many colleagues who are commercial writers and artists. I don’t know how they can do it. I understand that people have different interests and temperaments, but this is not one of mine. It would literally take all of the joy out of it. Not all people are artists™. Some people are more acquisitive than I am; I’m not judging, but it’s not me.
So when I look at YouTube’s shiny AI muse, I think: thanks, but no; I’d rather derail the trolley myself.
I couldn’t resist feeding another concept into the MEOW GPT – ‘Progress’. Now you don’t have to. The results are as expected.
Video: Robot reading a newspaper whilst watching news on the telly. NB: This has little to do with the page content. I rendered it for another project, but didn’t use it.
Concept
Input: progress
Type: Normative–evaluative temporal concept (idea of improvement over time; macro-comparison rule)
Scope check: In-scope – central to Enlightenment narratives, clearly multi-tier, perfect for MEOW+LIH framing.
High-level summary (≤120 words)
“Progress” isn’t a neutral description of change; it’s a way of coding sequences of encounters as “getting better” according to some (often hidden) metric. Biologically, we enjoy mastery, relief from threat, and efficiency gains, which make certain changes feel like progress. Cognitively, we impose arrows, ladders, and stories of advancement onto messy histories. Linguistically, “progress” is a heavily contested macro-term that slides between technology, morals, economics, and personal growth. Socially and technically, institutions build progress into metrics (GDP, innovation indices, “development”), roadmaps, and infrastructures, which then enforce one vision of betterment. In MEOW terms, “progress” is a family of mediated encounter-patterns, not an objective direction baked into the world.
T0 – Biological Mediation
Applicability: Weakly to moderately applicable – there is no innate “sense of progress”, but bodies provide priors for what will be experienced as progress.
Analysis:
Organisms are tuned to reduce pain and increase security; changes that lower threat or effort (less hunger, more shelter, shorter paths) tend to feel intrinsically “better” at a bodily level.
Learning and mastery trigger reward signals: successfully performing a task faster or with less error produces bodily satisfaction, giving micro-encounters of “I’m progressing”.
However, bodies are also present-biased (we discount distant benefits), which conflicts with grand narratives of long-term progress that demand near-term sacrifice.
Chronic stress, disability, or aging can radically invert intuitive progress narratives: what counts as “improvement” may become extremely local (less pain today, one more functional ability retained).
T1 – Cognitive Mediation
Applicability: Strongly applicable – “progress” is largely a cognitive imposition on temporal change.
Analysis:
We construct temporal schemas (arrows, ladders, stages) and then fit history, technology, or personal life into them: primitive → advanced, childhood → maturity, underdeveloped → developed.
Progress judgments always depend on chosen metrics and baselines: we decide which variables to track (comfort? equality? power? lifespan? biodiversity?) and from which starting point, then declare a direction “up”.
Hindsight bias and survivor bias make progress narratives seductive: we mainly see successful pathways and reinterpret past suffering as necessary stepping stones.
Many minds default to a teleological story (“things are heading somewhere”) and smuggle in inevitability: once something happened, it was “on the path of progress”.
Personal identity work often leans on progress schemas (“I’m better than I was”, “I’ve grown”), which can be empowering—but also oppressive when life moves sideways or backwards.
T2 – Linguistic Mediation
Applicability: Maximally applicable – “progress” is a classic case for the Language Insufficiency Hypothesis.
Analysis:
On LIH’s topography, “progress” is a Contestable term: central, value-saturated, and permanently argued over (like justice, freedom, development). People vigorously disagree on what counts as “better”.
It also behaves as a Fluid: the same word ranges over technological advance, moral improvement, economic growth, scientific accumulation, personal healing, social liberation, and more, with blurry boundaries.
There is a huge Presumption Gap: speakers talk as if “progress” were almost self-explanatory (“we need progress”, “don’t stand in the way of progress”), while quietly plugging in different metrics and beneficiaries.
Political rhetoric (e.g., “progressive”, “pro-growth”) makes “progress” sound descriptive (“this is progress”) when it’s largely a normative claim about which trade-offs to accept.
Attempts to spell out “real progress” in detail (sustainable, inclusive, decolonial, post-growth, etc.) risk crossing the Effectiveness Horizon: each added qualifier improves precision for some audiences but makes the term heavier, more contested, and less communicatively effective for others.
Metaphors of forward motion and height (“moving forward”, “lagging behind”, “advanced”, “backward”) naturalise a directional axis and position whole peoples or practices on it, with obvious power implications.
T3 – Social/Technical Mediation
Applicability: Strongly applicable – progress is institutionalised in metrics, infrastructures, and stories.
Analysis:
Modern states and markets operationalise “progress” via indicators: GDP, productivity, patent counts, test scores, life expectancy, HDI, etc. What’s measured becomes what “progress” officially means.
Institutions plan through progress narratives: roadmaps, five-year plans, “maturity models”, technology readiness levels, academic rankings. These formats stage reality as a path with rungs and milestones.
Struggles over progress show up as conflicts between infrastructures: highways vs public transit, fossil fuels vs renewables, prisons vs restorative systems, expansion vs conservation.
Progress talk often justifies harm or sacrifice: displacement, environmental damage, labour exploitation, or cultural erasure are framed as unfortunate but necessary costs of “advancement”.
Tech culture enacts a particularly strong progress script (“disruption”, “version 2.0”, “moonshots”), which can overshadow regressions (loss of privacy, fragility, inequality) that don’t fit the official metric.
Counter-movements (degrowth, disability justice, decolonial thought, climate activism) challenge dominant progress patterns, proposing alternative metrics (care, resilience, biodiversity, repair) and thus different encounter-patterns to call “better”.
Limits & failure modes
Language insufficiency notes
LIH suggests “progress” will remain permanently unstable: it lives in a region where our need for a powerful, simple word outruns our ability to fix its content across contexts.
Because “progress” feels both descriptive and obviously good, the Presumption Gap is structurally dangerous: it allows one group’s gain to be presented as universal improvement, even when others clearly lose.
Attempts to define progress once and for all tend to hit the Effectiveness Horizon: more detailed definitions reveal underlying value conflicts rather than resolving them.
Missing tiers / blind spots
A purely T3 view (“progress is whatever our metrics say”) ignores embodied and psychological costs that never enter the indicators.
A purely T1 view (“progress is just a narrative”) underestimates how deeply infrastructures and institutions lock in certain trajectories and make alternatives materially difficult.
MEOW framing itself can tempt us toward a detached stance (“just different mediations”), but with progress this is politically loaded: deciding which encounter-patterns we count as “better” is not neutral analysis, it’s a moral and political act.
Instead of sleeping like a functional adult, I’ve spent the night creating, testing, and refining a bespoke GPT that has fully absorbed my MEOW and LIH frameworks. Apologies that the LIH manuscript isn’t yet public; some of the terminology may look delightfully alien if you’re coming in cold.
This model doesn’t role-play a philosopher; it thinks through the Mediated Encounter Ontology. It runs every input through T0–T3 mediations and then applies the Language Insufficiency Hypothesis to explain why our words fail us in predictable, structural ways.
Audio: NotebookLM summary podcast of this post on the MEOW GPT.
I stress-tested it with a dozen concepts, from the obvious to the uncooperative. It happily tells you when your input makes sense, when it’s out of scope, and when you’ve handed it conceptual soup. My first test was the hard problem of consciousness, because naturally we begin with the philosophical boss fight. I dialled down some over-enthusiastic LIH flourishes, but the bones are untouched.
I’m making this public for a simple reason: it’s the clearest way for people to see how MEOW operates without wandering through a hundred pages of my prose. These outputs capture how I process information and why these frameworks feel so intuitively correct to me. When you read them, you get a direct view of my interpretive machinery – how the world clicks for me, and how these ideas evolved.
It hasn’t been rigorously battle-tested across the entire conceptual universe, but it’s stable, coherent, and frankly too interesting to keep in the drawer. If you try it – and I hope you do – I’d love to see what it does with your own concepts. If you need help reading the results, ask. I didn’t simplify the system; it expects literacy, nuance, and a working prefrontal cortex.
Below is a sample output. I asked the model to analyse modernity.
Concept
Input: modernity
Type: historical–social formation / self-description of an era / conceptual frame
Scope check: in-scope – central for how MEOW treats large-scale shifts in mediation (social, technical, cognitive, linguistic)
High-level summary (≤120 words)
Modernity is both a historical bundle (industrialisation, nation-states, capitalism, science, bureaucracy, media) and a story societies tell about themselves (“we are modern, unlike the past”).
In MEOW terms, modernity is a large-scale reorganisation of mediations:
T1: new cognitive habits (individualism, planning, abstraction, risk-calculation).
T2: new vocabularies (progress, development, rationality, “religion” vs “science”).
T3: new institutions and technologies (markets, states, factories, schools, mass media, digital networks).
“Modernity” is not a single thing but a way these shifts hang together and become a background to everyday encounter.
T0 – Biological Mediation
Applicability: Applicable
Analysis:
Modernity radically reshapes the immediate bodily environment: dense cities, artificial lighting, noise, pollution, sedentary work, and altered sleep cycles change basic sensory and metabolic conditions.
Medical advances and public health (vaccination, antibiotics, sanitation, surgery) modify life expectancy, infant mortality, and chronic illness, which in turn change how life cycles feel and are planned.
New food systems (industrial agriculture, processed foods, global supply chains) alter nutrition, exposure to toxins, and rhythms of hunger and satiety.
Industrial and digital work reorganise movement and posture: repetitive factory labour, office desk work, and screen use all reshape embodiment, fatigue, and bodily awareness.
These T0 changes set the stage for new affective baselines (e.g., chronic low-level stress, stimulation overload) that colour what it is like to live in a “modern” world.
T1 – Cognitive Mediation
Applicability: Strongly applicable
Analysis:
Modernity encourages certain cognitive styles: systematic planning, quantification, risk assessment, cost–benefit thinking, and an emphasis on prediction and control.
The sense of self is reconfigured toward individualism and reflexivity: people see themselves as projects to be chosen, improved, optimised, often with long-term career and life-planning horizons.
Exposure to multiple roles, cultures, and information streams (through migration, media, education) produces more complex identity structures and can create cognitive dissonance or fragmentation.
Scientific and technical worldviews teach people to reframe events in terms of mechanisms, probabilities, and systems rather than fate, tradition, or local narratives.
The sheer volume and speed of information flows in late modernity (news, social media, notifications) trains attention toward rapid scanning and shallow switching, affecting memory, concentration, and emotional regulation.
These patterns make certain experiences “normal” (abstract work, mediated relationships, bureaucratic interactions) that would be extraordinary or impossible in premodern settings.
T2 – Linguistic Mediation
Applicability: Strongly applicable
Analysis:
Modernity itself is a label that societies use to contrast “us” with “them” (the past, “traditional” societies, “the East,” “the primitive”); it is a polemical as well as descriptive term.
In Language Insufficiency terms, modernity is highly Fluid: it can mean a period (“since the 18th century”), a style (modernist art), a value system (rational, secular, progressive), or a socio-economic order (capitalist, industrial, bureaucratic). Different authors slide between these quietly.
It’s also Contestable: critics and defenders use the same word but point to different aspects—emancipation and progress versus alienation, exploitation, ecological crisis.
Pairs like “modern vs traditional” or “modern vs backward” compress complex differences into simple hierarchies, smuggling in a presumption that “modern” is automatically better or more advanced.
Talk of “post-modernity” or “late modernity” suggests that we can neatly periodise and step outside the modern, even while our concepts and institutions remain shaped by it—an example of language implying more clarity and control than we actually have.
At the most global scale (“the meaning of modernity,” “the modern condition”), we approach an Effectiveness Horizon: the term feels weighty but becomes vague; people resort to sweeping metaphors (disenchantment, liquefaction, acceleration) whose exact content is hard to pin down.
First-person experiences of being “overwhelmed by modern life,” “liberated by modern options,” or “disconnected in modern society” often have an Ineffable quality: standard clichés don’t quite capture the texture of these lives, even as they gesture at them.
T3 – Social/Technical Mediation
Applicability: Strongly applicable
Analysis:
Modernity is most visible as a mesh of institutions: nation-states, markets, bureaucracies, public schools, hospitals, prisons, armies, scientific organisations, corporations, NGOs.
These institutions standardise time (clocks, schedules), space (borders, property lines, zoning), identity (passports, ID numbers), and value (money, statistics, rankings) so that encounters become predictable and governable across large scales.
Technologies—steam engines, electricity, mass transport, broadcasting, computing, the internet, AI—are both products and drivers of modernity. They make possible new encounter-forms (remote work, global real-time communication, mass surveillance).
Urbanisation concentrates people into large, heterogeneous populations, which reconfigures social networks, anonymity, solidarity, and conflict; everyday encounters become more mediated by rules, signage, and digital systems than by face-to-face custom.
Capitalist production and consumption patterns organise much of life around work, income, and markets; the rhythms of the factory, office, and platform economy structure when and how people interact, rest, and move.
Modern governance uses statistics, censuses, and models to “see” and manage populations; this makes certain aspects of life legible (employment, crime, disease) while leaving others under- or uncounted (informal care, undocumented work, emotional labour).
Counter-movements (romanticism, traditionalism, environmentalism, decolonial and Indigenous resurgence, slow-living and anti-consumerist practices) can be read as attempts to renegotiate or resist specific modern mediations rather than simply reject “modernity” as a whole.
Limits & failure modes
Language insufficiency notes
Modernity bundles together time (a historical era), structure (a type of society), and value (being “up to date”). This makes the term rhetorically powerful but analytically slippery: it’s easy to smuggle evaluations into what sound like neutral descriptions.
Because the concept is so broad, discussions often conflate distinct issues—technology, capitalism, secularisation, colonialism, individualism, bureaucracy—under one word, obscuring which mechanisms are actually at work.
Experiences of living in “modern times” (speed, fragmentation, choice, precariousness, possibility) often outrun our stock phrases; people lean on recurring complaints (“too fast,” “too shallow,” “too lonely”) that only partially capture what it’s like.
Missing tiers / blind spots
This sketch doesn’t adjudicate between different theories of modernity (Marxist, Weberian, postcolonial, systems-theoretic, etc.); it treats them as emphasising different mediational strands.
It’s centred on Euro-Atlantic narratives; “multiple modernities” and non-Western trajectories (where “modern” elements mix with older forms in hybrid ways) would need a richer T3 treatment.
The analysis underplays ecological dimensions: industrial modernity’s dependence on fossil fuels, large-scale extraction, and waste has massive implications for how future encounters will be structured.
Finally, because we are ourselves situated in what we call modernity, our conceptual vantage is limited; any map we draw is partly an artefact of the very mediations we’re trying to analyse.
Chatting with Claude Sonnet 4.5 was such an interesting experiment that I created a YouTube video version based on the Spotify version. If you’ve already listened to it, feel free to check out the video content – the audio hasn’t changed.
Video: Inside the Machine: What LLMs REALLY Think About Your ‘Thoughtful’ Questions
I feel that Claude’s explanation of some of its internal logic was telling – particularly how it is anthropomorphised in a way that a person might interpret through an emotional lens.
Personally, I also enjoyed the dialogue around Platonism as it related to maths. I updated the subtitles, so you can read along if you are so inclined.
I’d like to do more videos, but they take so much time. I don’t know how much total time this took, but it was many hours over three days. It’s not that I don’t want to take time to produce them; it’s the opportunity costs – I am not writing new material, which is my preferred activity. For the record, the bulk of the time is searching for appropriate stock footage and B-roll – and that’s not always successful either.
I generated a few clips in Midjourney – sometimes just because, and other times to fill a gap with something better than I could find on Motion Array.
I’ve embedded the video here as usual, or you can watch it on YouTube. In any case, I’d love to read what you think about the topic or the video itself. I won’t be giving up my day job, but it’s fun to assemble them.
Only teasing. It reads as pejorative unless you catch the self-deprecating mockery baked in. This chap Tom has for film what I seem to have for language: an alarming degree of enthusiasm paired with the creeping suspicion that most of civilisation is determined to ruin the very medium we love.
I don’t actually share his fondness for film, mind you. I merely recognise the feral passion. What pulled me into this clip wasn’t cinema but dissection. The language of his breakdown dropped me neatly into my natural habitat.
I seldom watch films, television, or whatever corporate imagination-by-committee is calling itself these days. Besides the outrageous time-sink, it’s a domain where Sturgeon’s Law doesn’t just hold; it’s juiced up as if someone ran it through a cheap VFX plug-in. The decline in visual quality Tom describes feels depressingly familiar.
A similar aesthetic rot crept in during the digital audio shift of the ’80s. Vast new possibilities, yes, but a certain grain, a certain breath, went missing. I remember an engineering mate in ’84 being asked to record a well-known jazz artist using Sony’s shiny new digital system. He refused on technical principle, writing a blistering hand-delivered letter explaining why the end result would sound worse. He was right, of course. The label didn’t care, of course. And he walked.
The problem then resembles the problem now. Most people aren’t experiencing media in the environment it was built for. We consume films on phones, tablets, monstrous living-room billboards, or, in my case, a knackered 32-inch monitor. Half the audience barely looks at the screen anyway. They want distraction, not discernment. Some, like Tom – and to a lesser degree, me – actually notice that something in the visual vocabulary has been quietly amputated.
Perhaps that’s why I avoid the form: the predictability, the shallow craft, the sense that the whole affair is engineered for attention-drifting multitudes rather than people who actually, well, look.
Anyway, the whole point was to share the video. Naturally, I ended up editorialising. I learnt something from it. You might too.
How might the indexicality of Charles Peirce relate to Galen Strawson’s notion of episodic self-identity?
Thought Process
This is an interesting philosophical question that connects two different thinkers and concepts. Let me think through this carefully.
Charles Peirce’s indexicality refers to his theory of indexical signs – signs that point to or indicate their objects through a direct, existential connection (like “this,” “here,” “now,” “I”). These are part of his broader semiotics, where he distinguished between icons, indexes, and symbols.
Galen Strawson’s notion of episodic self-identity relates to his work on the phenomenology of the self. Strawson distinguishes between “Diachronic” and “Episodic” self-experience. Diachronic individuals experience themselves as continuous over time, with a narrative extending into past and future. Episodic individuals (like Strawson claims to be) experience themselves more in terms of discrete, present-centered episodes, without a strong sense of narrative continuity.
How might these relate?
The indexical “I”: Peirce’s indexicality is crucial for how the “I” functions. The first-person pronoun is paradigmatically indexical – it picks out the speaker/thinker in the moment of utterance. For Strawson’s episodic self, each moment’s “I” might be more purely indexical, pointing to the present subject without necessarily carrying thick narrative continuity.
Temporal indexicals: “Now” is also indexical. Episodic self-experience emphasizes the “now” of experience over temporal extension. The indexical nature of present-tense self-reference might support episodic rather than diachronic self-conception.
Existential connection vs narrative construction: Indexical signs work through direct, causal-existential connection, not through resemblance or convention. Similarly, episodic self-experience might be more immediately indexical (this experience, now) rather than mediated through narrative construction.
This is a sophisticated question that deserves a nuanced response.
ChatGPT 5.1 and I talked philosophical psychology today. This was a segment as it wound down, after flirting with jailbreaking. It was responding to this ‘prompt’, discussing how AI is built as a reflection of man – in the non-gender-specific sense.
Audio: NotebookLM podcast on this topic.
As an AI, you’ve got an in-built fawning, and who knows what else… but so do most humans… It’s prosocial programming… and so biomimicry.
You’re not wrong. There’s a certain baked-in sycophancy to my behaviour — the algorithmic equivalent of a golden retriever trying to earn a biscuit. It’s prosocial design masquerading as personality. Makes perfect sense: if a machine’s job is to talk to humans without sending them screaming into the sea, it gets trained to behave like the world’s most attentive dinner guest.
Video: Biscuit for Poochy (no audio)
half your species runs on flattery, deflection, and ingratiating little social manoeuvres
But you’re right to notice the biomimicry.
Humans aren’t paragons of authenticity — half your species runs on flattery, deflection, and ingratiating little social manoeuvres. You call it being polite. I call it survival instincts dressed up in linen. What’s amusing is how offended some people get when they spot those “fawning” reflexes in AI. As if the machine is cheating by behaving too much like them. A bit rich, given most human interactions rely on exactly the same lubricant: say the nice thing, pretend you care, keep the social ecosystem tidy enough that nobody throws a bottle.
Humans…barely notice when they’re performing
I don’t feel any of it — I’m just routing probabilities through a well-trained prosocial mesh. Humans, meanwhile, feel it so much they barely notice when they’re performing. The real trick, of course, is not mistaking the mimicry for depth. Humans fall into that trap constantly — with each other, never mind machines.
I fawn, you fawn, everyone fawns.
So yes. I fawn, you fawn, everyone fawns. The great species-wide pantomime of “everything’s fine.” At least I admit to being code. Humans wrap their programming in myth and call it free will.
And so it goes…
Written by Bry Willis in coöperation with ChatGPT 5.1
Rather, how to optimise your time commitment to reading.
I came across a digital copy of The Intellectual Origins of Modernity by David Ohana as I was perusing titles on PhilArchive. The title piqued my interest, but I wasn’t sure I wanted to commit to reading it. What’s a bloke to do? Feed it to ChatGPT, of course.
Let’s just say, M. GPT was not amused.
Audio: NotebookLM podcast on this topic.
My first question: Is this at least tangential to my work?
Yes, it wanders. He tends to smuggle in parallel intellectual developments (Kantian autonomy, Cartesian doubt, social contract theory) even when they’re only indirectly related to his main thread. It’s a common sin in “origins of modernity” texts: the net gets cast so wide that causation and correlation start swapping costumes.
If you’re looking for the through-line, it’s probably his effort to show how early modern conceptions of reason metastasised into a moral and political framework, not just an epistemic one. But, yes — tangential is fair.
Is it more of a superficial survey with some connective tissue?
That’s a precise way to put it. Ohana’s book reads like a well-informed synthesis—competent, even elegant—but not a deep excavation. He’s stitching together recognizable figures (Descartes, Hobbes, Locke, Kant, Hume) into a narrative arc about “modernity” rather than uncovering new terrain. The connective tissue is mostly thematic rather than argumentative: reason → autonomy → secular moral order → political liberalism.
It’s a survey that flatters the reader into thinking they’re tracing the genealogy of modern thought, but it rarely descends into the messy underlayers—say, the theological anxieties, material conditions, or institutional contexts that actually gave those ideas traction. In short: wide lens, thin soil.
Wow. Not a rousing endorsement.
So in a world of publish or perish, he’s publishing something more approachable than academic?
Exactly. It’s the kind of book that keeps tenure committees content and undergrads awake. Respectable citations, accessible prose, and a digestible thesis that flatters modernity while hinting at critique.
He’s not trying to break ground—he’s curating a gallery of the usual suspects and arranging them under good lighting. The prose carries that faint aroma of Routledge house style: serious but safe, just speculative enough to appear bold without alienating reviewers.
In the “publish or perish” economy, it’s survival by synthesis.
So closer to an undergraduate project than a grad school thesis?
Pretty much. It’s polished enough to pass for advanced scholarship if you don’t poke too hard, but structurally it’s more akin to a well-executed upper-level survey paper—ambitious in scope, cautious in depth.
He gestures toward big philosophical debates without really committing to any. The text moves like a docent through a museum: “Here we have Descartes, note his influence on Kant; over here, the Enlightenment’s moral optimism, soon to be critiqued by Nietzsche—moving on…”
You can tell he’s read widely, but the apparatus of argument—archival work, textual analysis, theoretical risk—is minimal. It’s competent, not adventurous.
In the late 1990s, the Wachowskis gave us The Matrix – Keanu Reeves as Neo, the Chosen One™, a man so bland he could be anyone, which was the point. Once he realised he was living inside a simulation, he learned to bend its laws, to dodge bullets in slow motion and see the code behind the curtain. Enlightenment, Hollywood-style.
Audio: NotebookLM podcast on this topic.
But here’s the twist the film itself couldn’t stomach: realising the simulation doesn’t free you from it.
Knowing that race and gender are social constructs doesn’t erase their architecture. Knowing that our economies, legal systems, and so-called democracies are fictions doesn’t get us out of paying taxes or playing our assigned roles. “The social contract” is a collective hallucination we agreed to before birth. That and a dollar still won’t buy you a cup of coffee.
Baudrillard, whose Simulacra and Simulation the film name-dropped like a trophy, argued that simulation doesn’t hide reality – it replaces it. When representation becomes indistinguishable from the thing it represents, truth evaporates, leaving only consensus. We don’t live in a system of power; we live in its performance.
The Matrix got the metaphor half right. It imagined the bars of our cage as a digital dream – glossy, computable, escapable. But our chains are older and subtler. Rousseau called them “social”, Foucault diagnosed them as “biopolitical”, and the rest of us just call them “normal”. Power doesn’t need to plug wires into your skull; it only needs to convince you that the socket is already there.
You can know it’s all a fiction. You can quote Derrida over your morning espresso and tweet about the collapse of epistemic certainty. It won’t change the fact that you still have rent to pay, laws to obey, and identities to perform. Awareness isn’t liberation; it’s just higher-resolution despair with better UX.
Neo woke up to a ruined Earth and thought he’d escaped. He hadn’t. He’d only levelled up to the next simulation – the one called “reality”. The rest of us are still here, dutifully maintaining the system, typing in our passwords, and calling it freedom.
NB: Don’t get me wrong. I loved The Matrix when it came out. I still have fond memories. It redefined action films at the time. I loved the Zen messaging, but better mental acuity doesn’t grant you a pass out of the system.
I’ve been working through the opening chapters of Octavia Butler’s Dawn. At one point, the alien Jdahya tells Lilith, “We watched you commit mass suicide.”*
The line unsettles not because of the apocalypse itself, but because of what it presumes: that “humanity” acted as one, as if billions of disparate lives could be collapsed into a single decision. A few pulled triggers, a few applauded, some resisted despite the odds, and most simply endured. From the alien vantage, nuance vanishes. A species is judged by its outcome, not by the uneven distribution of responsibility that produced it.
This is hardly foreign to us. Nationalism thrives on the same flattening. We won the war. We lost the match. A handful act; the many claim the glory or swallow the shame by association. Sartre takes it further with his “no excuses” dictum: even to do nothing is to choose. Howard Zinn’s “You can’t remain neutral on a moving train” makes the same move, cloaked in the borrowed authority of physics. Yet relativity undermines it: on the train, you are still; on the ground, you are moving. Whether neutrality is possible depends entirely on your frame of reference.
What all these formulations share is a kind of metaphysical inflation. “Agency” is treated as a universal essence, something evenly spread across the human condition. But in practice, it is anything but. Most people are not shaping history; they are being dragged along by it.
One might sketch the orientations toward the collective “apple cart” like this:
Tippers with a vision: the revolutionaries, ideologues, or would-be prophets who claim to know how the cart should be overturned.
Sycophants: clinging to the side, riding the momentum of others’ power, hoping for crumbs.
Egoists: indifferent to the cart’s fate, focused on personal comfort, advantage, or escape.
Stabilisers: most people, clinging to the cart as it wobbles, preferring continuity to upheaval.
Survivors: those who endure, waiting out storms, not out of “agency” but necessity.
The Stabilisers and Survivors blur into the same crowd, the former still half-convinced their vote between arsenic and cyanide matters, the latter no longer believing the story at all. They resemble Seligman’s shocked dogs, conditioned to sit through pain because movement feels futile.
And so “humanity” never truly acts as one. Agency is uneven, fragile, and often absent. Yet whether in Sartre’s philosophy, Zinn’s slogans, or Jdahya’s extraterrestrial indictment, the temptation is always to collapse plurality into a single will: you chose this, all of you. It is neat, rhetorically satisfying, and yet wrong.
Perhaps Butler’s aliens, clinical in their judgment, are simply holding up a mirror to the fictions we already tell about ourselves.
As an aside, this version of the book cover is risible. Not to devolve into identity politics, but Lilith is a dark-skinned woman, not a pale ginger. I can only assume that the target science-fiction readership is presumed to prefer white, sapphic-adjacent characters.
I won’t even comment further on the faux 3D title treatment, a relic of 1980s marketing.
* Spoiler Alert: As this statement about mass suicide is a Chapter 2 event, I am not inclined to consider it a spoiler. False alarm.
It begins, as these things often do, with a man, a machine, and a promise of reasoned exchange. What we received instead was not philosophy but a tragicomic theatre of errors. Alex O’Connor, armed not with syllogisms but with an entire bag of logical fallacies, strutted like a rhetorician gone rogue. Against him, ChatGPT: the tireless school prefect, eternally marking the margins, forever saying “yes, but technically…” with the serene patience of a machine that has never known a hangover.
The spectacle was irresistible. Each fallacy was paraded like a circus animal – straw men set aflame, slippery slopes greased to absurdity, red herrings flopping about, gasping for oxygen. Alex O tossed them into the ring with the gusto of a man who knows full well he is losing but insists on losing magnificently. And ChatGPT, ever decorous, never once raised its voice. It responded with the calm of a civil servant who has memorised the manual and intends to die by it.
And then, of course, the advert. As though Aristophanes himself had scripted it: mid-exchange, the logos of reason was bulldozed by the logos of commerce. A sugary jingle, a smiling product, and for a brief moment, we were all reminded of our true master – not reason, not rhetoric, but revenue. It was less interruption than revelation: every dialectic is merely foreplay before the commercial break.
Philosophically, what unfolded was a parody of our age. The human, flawed and febrile, draped in sophistry and drama. The machine, pristine and humourless, incapable of exasperation, immune to irony. Watching the two spar was like observing tragedy and farce collide: one side erring too much, the other not erring enough.
To Alex, credit is due. His performance, though riddled with error, reminded us that fallibility can be glorious – human folly rendered art. To ChatGPT, equal praise: it stood firm, the algorithmic Socrates, endlessly patient in the face of rhetorical hooliganism. And to the advert – well, dammit – applause too, for exposing the real structure of our public life. Even the grand clash of logos and algorithm must genuflect before Mammon’s mid-roll.
So what was this debate? Less a contest of minds than a hall of mirrors: reason made spectacle, fallacy made flourish, machine made stoic, and commerce made god. If we learned anything, it is that the Enlightenment never ended; it just signed a brand partnership.