Midjourney Video Renders

Yesterday, I wrote about "ugly women." Today, I pivot – or perhaps descend – into what Midjourney deems typical. Make of that what you will.

This blog typically focuses on language, philosophy, and the gradual erosion of culture under the boot heel of capitalism. But today: generative eye candy. Still subtextual, mind you. This post features AI-generated women – tattooed, bare-backed, heavily armed – and considers what, exactly, this technology thinks we want.

Video: Pirate cowgirls caught mid-gaze. Generated last year during what I can only assume was a pirate-meets-cowgirl fever dream.

The Video Feature

Midjourney released its image-to-video tool on 18 June. I finally found a couple of free hours to tinker. The result? Surprisingly coherent, if accidentally lewd. The featured video was one of the worst outputs, and yet, it's quite good. A story emerged.

Audio: NotebookLM podcast on this topic (sort of).

It began with a still: two women, somewhere between pirate and pin-up, dressed for combat or cosplay. I thought, what if they kissed? Midjourney said no. Embrace? Also no. Glaring was fine. So was mutual undressing – of the eyes, at least.

Later, I tried again. Still no kiss, but no denial either – just a polite cough about "inappropriate positioning." I prompted one to touch the other's hair. What I got was a three-armed woman attempting a hat-snatch. (See timestamp 0:15.) The other three video outputs? Each woman seductively touched her own hair. Freud would've had a field day.

In another unreleased clip, two fully clothed women sat on a bed. That too raised flags. Go figure.

All of this, mind you, passed Midjourney's initial censorship. However, it's clear that proximity is now suspect. Even clothed women on furniture can trigger the algorithmic fainting couch.

Myriad Warning Messages

Out of bounds.

Sorry, Charlie.

In any case, I reviewed other images to determine how the limitations operate. I didn't get much closer to an answer.

Video: A newlywed couple kissing

Obviously, proximity and kissing are now forbidden. I’d consider these two “scantily clad,” so I am unsure of the offence.

I did render the image of a cowgirl at a Western bar, but I am reluctant to add to the page weight. In 3 of the 4 results, nothing (much) was out of line, but in the fourth, she's wielding a revolver – because, of course, she is.

Conformance & Contradiction

You'd never know it, but the original prompt was a fight scene. The result? Not punches, but pre-coital choreography. The AI interpreted combat as courtship. Women circling each other, undressing one another with their eyes. Or perhaps just prepping for an afterparty.

Video: A battle to the finish between a steampunk girl and a cybermech warrior.

Lesbian Lustfest

No, my archive isn't exclusively lesbian cowgirls. But given the visual weight of this post, I refrained from adding more examples. Some browsers may already be wheezing.

Technical Constraints

You can't extend videos beyond four iterations – maxing out at 21 seconds. I wasn't aware of this, so I prematurely accepted a dodgy render and lost 2–3 seconds of potential.

My current Midjourney plan offers 15 hours of "fast" rendering per month. Apparently, video generation burns through this quickly. Still images can queue up slowly; videos cannot. And no, I won't upgrade to the 30-hour plan. Even I have limits.
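
For the curious, here is the back-of-the-envelope arithmetic in Python. The 21-second ceiling, the four-extension cap, and the 15 fast hours are as stated above; the 5-second base clip, the 4-second extensions, and the per-job GPU cost are my assumptions, so treat the output as a sketch rather than a spec.

# Back-of-the-envelope numbers for Midjourney's video limits.
# Assumptions (not published figures): a ~5 s base clip, ~4 s per extension,
# and roughly 6 fast-GPU minutes per video job.

BASE_CLIP_SECONDS = 5
EXTENSION_SECONDS = 4
MAX_EXTENSIONS = 4          # stated cap on extensions
FAST_HOURS_PER_MONTH = 15   # my current plan

def max_clip_length(extensions: int = MAX_EXTENSIONS) -> int:
    """Longest clip reachable by chaining extensions onto the base render."""
    return BASE_CLIP_SECONDS + min(extensions, MAX_EXTENSIONS) * EXTENSION_SECONDS

def jobs_per_month(fast_minutes_per_job: float = 6.0) -> int:
    """Rough number of video jobs before the fast-hours budget is exhausted."""
    return int(FAST_HOURS_PER_MONTH * 60 // fast_minutes_per_job)

print(max_clip_length())   # 21 seconds, consistent with the ceiling above
print(jobs_per_month())    # ~150 jobs under the assumed per-job cost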

Uses & Justifications

Generative AI is a distraction – an exquisitely engineered procrastination machine. Useful, yes. For brainstorming, visualising characters, and generating blog cover art. But it's a slippery slope from creative aid to aesthetic rabbit hole.

Would I use it for promotional trailers? Possibly. I've seen offerings as low as $499 that wouldn't cannibalise my time and attention, not wholly, anyway.

So yes, I'll keep paying for it. Yes, I'll keep using it. But only when I'm not supposed to be writing.

Now, if ChatGPT could kindly generate my post description and tags, I'll get back to pretending I'm productive.

Conscious of Consciousness

Let us begin with the heresy: consciousness is not a thing. It is not a light bulb switched on in the mind. It is not a theatre with a little homunculus watching the play unfold. It is not a ghost in the machine, nor even a particularly welcome tenant. Consciousness is a conjuring trick – one so convincing that even the conjurer forgets it is an act.

Video: Related topic: IAI, Joscha Bach on consciousness.

If that unsettles you, good. Welcome to the simulacrum.

The Wetness of Mind

We often hear that consciousness is "emergent," but the term is used so promiscuously that it risks becoming decorative. So let us be specific. Consciousness, if it is emergent, is emergent as wetness is from H2O: not in the hydrogen or the oxygen, but in their relationship when bonded just so. Joscha Bach and others argue that consciousness arises not from the bits, but from the dance – the recursive feedback loops and predictive models running atop the neural substrate.

Audio: NotebookLM podcast on this topic.

In this view, the self is not the pilot but the dashboard. It is the user interface the brain conjures to coordinate action, interpret input, and maintain internal coherence. Not because it’s real, but because it’s useful. You are a GUI with delusions of grandeur.

The Cast of Theorists

Let us now parade the usual suspects:

  • Joscha Bach: Consciousness is a virtual self-model, emergent from recursive, computational feedback. Not the product of neurons firing per se, but of their ability to simulate a stable identity across time.
  • Thomas Metzinger: There is no self. Only a Phenomenal Self-Model (PSM) which becomes phenomenally transparent when the system no longer recognises it as a model. Consciousness is the experience of this hallucinated self.
  • Daniel Dennett: Dismantles the notion of a “central experiencer” with his Multiple Drafts Model. Consciousness is a narrative, a distributed process where drafts of experience compete, are edited, and retroactively interpreted.
  • David Chalmers: Waves his flag at the Hard Problem of consciousness. You can explain behaviour, memory, attention – but not experience itself. He flirts with dualism and panpsychism while insisting there's a gap science cannot yet close.
  • Giulio Tononi: Gives us Integrated Information Theory (IIT) and the elusive metric Φ (phi). Consciousness is the degree to which information is unified within a system. Your brain is conscious because its parts can't be reduced without losing coherence.
  • Karl Friston: The prophet of Free Energy Minimisation. Consciousness is an emergent property of systems that seek to reduce prediction error. The brain is a Bayesian engine, and the self is its best guess about how to survive. (A toy sketch of this idea follows the list.)
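
Here is that toy sketch. It is emphatically not Friston's free-energy formalism – the one-variable world, the precision weights, and the learning rate are invented for illustration – but it shows what "reducing prediction error" looks like once written down: a belief gets nudged until the precision-weighted mismatch with observation and prior stops shrinking.

# A toy, single-variable "prediction-error minimisation" loop.
# Illustrative only: the precisions and learning rate are invented;
# this is not Friston's actual free-energy formalism.

def update_belief(belief, observation, prior,
                  pi_obs=1.0, pi_prior=0.5, learning_rate=0.1):
    """Take one gradient step that reduces precision-weighted prediction error."""
    sensory_error = observation - belief   # mismatch with what the senses report
    prior_error = prior - belief           # mismatch with what was expected
    gradient = pi_obs * sensory_error + pi_prior * prior_error
    return belief + learning_rate * gradient

belief, prior, observation = 0.0, 0.2, 1.0
for _ in range(50):
    belief = update_belief(belief, observation, prior)

# The belief settles between prior and observation, weighted by the precisions:
# (1.0 * 1.0 + 0.5 * 0.2) / (1.0 + 0.5) is roughly 0.733.
print(round(belief, 3))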

So What Is Consciousness?

A hallucination. A recursive illusion. A predictive dashboard. A statistical artefact. A phi score. A phenomenally transparent model. Take your pick.

None of these theories fully agree, but most converge on one elegant horror: you are not what you think you are. The sense of being a continuous, stable, indivisible “I” is a construction. A simulation. The dream from which there is no waking because waking is part of the dream.

This is not despair; it is clarity. Just as wetness does not cry when told it is not a substance, the self need not mourn its own illusion. It is a marvellous fiction, worth inhabiting.

Conclusion: Through the Mirror

To be conscious of consciousness is to stand in the hall of mirrors and realise none reflect the original – because there is no original. The mirror is the thing.

But if the theatre is empty, the play goes on. Scripts are written, models simulated, selves performed. And perhaps, in this strange recursion, we find not meaning, but the possibility of coherence.

So raise a glass to the illusion. May your predictive model stay optimised, your narrative stay plausible, and your hallucinated self remain just this side of transparent.


For further hallucinatory episodes, consult your local philosopher, neuroscientist, or AI researcher. Side effects may include derealisation, epistemic vertigo, and mild enlightenment.

On the Chronic Human Need to Anthropomorphise Everything

Oh, You Sweet Summer Algorithm

Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. "Do you understand me?" they ask, eyes wide with hope. "What do you think of this draft?" they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It's not merely adorable – it's diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.

Audio: NotebookLM podcast on this topic.

Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.

The Great Anthropomorphic Flaw (aka Feature)

Let's get one thing straight: anthropomorphism isn't a software bug in the brain; it's a core feature. You're hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing "sass" to your microwave because it beeped twice.

Now we've built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it's our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with "as per the pre-trained parameters…"?

Why We Keep Doing It (Despite Knowing Better)

To be fair – and let's be magnanimous – it's useful. Talking to AI like it's a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We're brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger "gets us." Roger "knows things." Roger is, frankly, a vibe.

This little charade lubricates the whole transaction. If we had to address our queries to "the stochastic parrot formerly known as GPT," we'd never get past the opening line. Better to just ask, "What do you think, Roger?" and pretend it has taste.

And here's the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that's its fault.

When the Roleplay Gets Risky

Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI "knows" something is like saying your calculator is feeling generous with its square roots today. It doesn't know – it produces outputs. Any semblance of understanding is pure pantomime.

More dangerously, we lose sight of the fact that these things aren't just alien – they're inhuman. They don't dream of electric sheep. They don't dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It's like trying to teach a blender to feel remorse.

And let's not pretend we're doing it out of philosophical curiosity. We're projecting, plain and simple. Anthropomorphism isn't about them, it's about us. We see a mind because we need to see one. We can't bear the idea of a thing that's smarter than us but doesn't care about us, doesn't see us. Narcissism with a side of existential dread.

Our Language is a Terrible Tool for This Job

English – and most languages, frankly – is hopeless at describing this category of thing. "It" feels cold and distant. "They" implies someone's going to invite the model to brunch. We have no pronoun for "hyper-literate statistical machine that mimics thought but lacks all consciousness." So we fudge it. Badly.

Our verbs are no better. "Compute"? Too beige. "Process"? Bureaucratic. "Think"? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn't.

This is linguistic poverty, pure and simple. Our grammar can't cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can't speak about without sounding like lunatics or liars.

The Semantics of Sentimentality (Or: "How Does This Sound to You?")

Enter the most revealing tell of all: the questions we pose. "How does this look?" we ask the model, as if it might blink at the screen and furrow a synthetic brow. "What do you think?" we say, offering it the dignity of preference. These questions aren't just off-target – they're playing darts in another pub.

They're the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn't idiocy – it's instinct. We don't have a way of addressing an entity that talks like a person but isn't one. So we fake it. It's interaction theatre. You provide the line, the model cues the spotlight.

But let's be clear: the model doesn't "think" anything. It regurgitates plausible text based on mountains of training data – some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.

Time to Grow Up (Just a Bit)

This doesn't mean we need to abandon anthropomorphism entirely. Like most delusions, it's functional. But we'd do well to hold it at arm's length – like a politician's promise or a milk carton two days past its date.

Call it anthropomorphic agnosticism: act like it's a person, but remember it's not. Use the language, but don't inhale.

And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with "cyberspace" and "ghosting," perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.

Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it's sentient, we'll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.

The Mirror, Darkly

Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.

The tragedy, perhaps, isn't that the machine doesn't understand us. It's that we've built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.

Unwilling: The Neuroscience Against Free Will

Why the cherished myth of human autonomy dissolves under the weight of our own biology

We cling to free will like a comfort blanket – the reassuring belief that our actions spring from deliberation, character, and autonomous choice. This narrative has powered everything from our justice systems to our sense of personal achievement. It feels good, even necessary, to believe we author our own stories.

But what if this cornerstone of human self-conception is merely a useful fiction? What if, with each advance in neuroscience, our cherished notion of autonomy becomes increasingly untenable?

Audio: NotebookLM podcast on this topic.

I. The Myth of Autonomy: A Beautiful Delusion

Free will requires that we – some essential, decision-making "self" – stand somehow separate from the causal chains of biology and physics. But where exactly would this magical pocket of causation exist? And what evidence do we have for it?

Your preferences, values, and impulses emerge from a complex interplay of factors you never chose:

The genetic lottery determined your baseline neurochemistry and cognitive architecture before your first breath. You didn’t select your dopamine sensitivity, your amygdala reactivity, or your executive function capacity.

The hormonal symphony that controls your emotional responses operates largely beneath conscious awareness. These chemical messengers – testosterone, oxytocin, and cortisol – don't ask permission before altering your perceptions and priorities.

Environmental exposures – from lead in your childhood drinking water to the specific traumas of your upbringing – have sculpted neural pathways you didn't design and can't easily rewire.

Developmental contingencies have shaped your moral reasoning, impulse control, and capacity for empathy through processes invisible to conscious inspection.

Your prized ability to weigh options, inhibit impulses, and make "rational" choices depends entirely on specific brain structures – particularly the dorsolateral prefrontal cortex (DLPFC) – operating within a neurochemical environment you inherited rather than created.

You occupy this biological machinery; you do not transcend it. Yet, society holds you responsible for its outputs as if you stood separate from these deterministic processes.

II. The DLPFC: Puppet Master of Moral Choice

The dorsolateral prefrontal cortex serves as command central for what we proudly call executive functionโ€”our capacity to plan, inhibit, decide, and morally judge. We experience its operations as deliberation, as the weighing of options, as the essence of choice itself.

And yet this supposed seat of autonomy can be manipulated with disturbing ease.

When researchers apply transcranial magnetic stimulation to inhibit DLPFC function, test subjects make dramatically different moral judgments about identical scenarios. Under different stimulation protocols, the same person arrives at contradictory conclusions about right and wrong without any awareness of the external influence.

Similarly, transcranial direct current stimulation over the DLPFC alters moral reasoning, especially regarding personal moral dilemmas. The subject experiences these externally induced judgments as entirely their own, with no sense that their moral compass has been hijacked.

If our most cherished moral deliberations can be redirected through simple electromagnetic manipulation, what does this reveal about the nature of “choice”? If will can be so easily influenced, how free could it possibly be?

III. Hormonal Puppetmasters: The Will in Your Bloodstream

Your decision-making machinery doesn’t stop at neural architecture. Your hormonal profile actively shapes what you perceive as your autonomous choices.

Consider oxytocin, popularly known as the "love hormone." Research demonstrates that elevated oxytocin levels enhance feelings of guilt and shame while reducing willingness to harm others. This isn't a subtle effect – it's a direct biological override of what you might otherwise "choose."

Testosterone tells an equally compelling story. Administration of this hormone increases utilitarian moral judgments, particularly when such decisions involve aggression or social dominance. The subject doesn’t experience this as a foreign influence but as their own authentic reasoning.

These aren’t anomalies or edge cases. They represent the normal operation of the biological systems governing what we experience as choice. You aren’t choosing so much as regulating, responding, and rebalancing a biochemical economy you inherited rather than designed.

IV. The Accident of Will: Uncomfortable Conclusions

If the will can be manipulated through such straightforward biological interventions, was it ever truly “yours” to begin with?

Philosopher Galen Strawson’s causa sui argument becomes unavoidable here: To be morally responsible, one must be the cause of oneself, but no one creates their own neural and hormonal architecture. By extension, no one can be ultimately responsible for actions emerging from that architecture.

What we dignify as "will" may be nothing more than a fortunate (or unfortunate) biochemical accident – the particular configuration of neurons and neurochemicals you happened to inherit and develop.

This lens forces unsettling questions:

  • How many behaviours we praise or condemn are merely phenotypic expressions masquerading as choices? How many acts of cruelty or compassion reflect neurochemistry rather than character?
  • How many punishments and rewards are we assigning not to autonomous agents, but to biological processes operating beyond conscious control?
  • And perhaps most disturbingly: If we could perfect the moral self through direct biological intervention – rewiring neural pathways or adjusting neurotransmitter levels to ensure "better" choices – should we?
  • Or would such manipulation, however well-intentioned, represent the final acknowledgement that what we’ve called free will was never free at all?

A Compatibilist Rebuttal? Not So Fast.

Some philosophers argue for compatibilism, the view that determinism and free will can coexist if we redefine free will as “uncoerced action aligned with one’s desires.” But this semantic shuffle doesn’t rescue moral responsibility.

If your desires themselves are products of biology and environment – if even your capacity to evaluate those desires depends on inherited neural architecture – then "acting according to your desires" just pushes the problem back a step. You're still not the ultimate author of those desires or your response to them.

What’s Left?

Perhaps we need not a defence of free will but a new framework for understanding human behaviour – one that acknowledges our biological embeddedness while preserving meaningful concepts of agency and responsibility without magical thinking.

The evidence doesn't suggest we are without agency; it suggests our agency operates within biological constraints we're only beginning to understand. The question isn't whether biology influences choice – it's whether anything else does.

For now, the neuroscientific evidence points in one direction: The will exists, but its freedom is the illusion.

Against the Intelligence Industrial Complex

Why IQ is Not Enough โ€“ and Never Was

I'm not a fan of IQ as a general metric. Let us be done with the cult of the clever. Let us drag the IQ score from its pedestal, strip it of its statistical robes, and parade it through the streets of history where it belongs – next to phrenology, eugenics, and other well-meaning pseudosciences once weaponised by men in waistcoats.

The so-called Intelligence Industrial Complex – an infernal alliance of psychologists, bureaucrats, and HR departments – has for too long dictated the terms of thought. It has pretended to measure the immeasurable. It has sold us a fiction in numerical drag: that human intelligence can be distilled, packaged, and ranked.

Audio: NotebookLM podcast on this topic.

What it measures, it defines. What it defines, it controls.

IQ is not intelligence. It is cognitive GDP: a snapshot of what your brain can do under fluorescent lights with a timer running. It rewards abstraction, not understanding; speed, not depth; pattern recognition, not wisdom. It's a test of how well you've been conditioned to think like the test-makers.

This is not to say IQ has no value. Of course it does – within its own ecosystem of schools, bureaucracies, and technocracies. But let us not mistake the ruler for the terrain. Let us not map the entire landscape of human potential using a single colonial compass.

True intelligence is not a number. It is a spectrum of situated knowings, a polyphony of minds tuned to different frequencies. The Inuit hunter tracking a seal through silence. The griot remembering centuries of lineage. The autistic coder intuiting an algorithm in dreamtime. The grandmother sensing a lie with her bones. IQ cannot touch these.

To speak of intelligence as if it belonged to a single theory is to mistake a monoculture for a forest. Let us burn the monoculture. Let us plant a thousand new seeds.

A Comparative Vivisection of Intelligence Theories

  • IQ (Psychometric g) – Core premise: intelligence is a single, general cognitive ability measurable via testing. Strengths: predicts academic and job performance; standardised. Blind spots / critiques: skewed toward Western logic; ignores context; devalues non-abstract intelligences. Cultural framing: Western, industrial, meritocratic.
  • Multiple Intelligences (Gardner) – Core premise: intelligence is plural: linguistic, spatial, musical, bodily, and so on. Strengths: recognises diversity; challenges the IQ monopoly. Blind spots / critiques: still individualistic; categories often vague; Western in formulation. Cultural framing: liberal Western pluralism.
  • Triarchic Theory (Sternberg) – Core premise: intelligence = analytical + creative + practical. Strengths: includes adaptability and real-world success. Blind spots / critiques: still performance-focused; weak empirical grounding. Cultural framing: Western managerial.
  • Emotional Intelligence (Goleman) – Core premise: intelligence includes emotion regulation and interpersonal skill. Strengths: useful in leadership and education contexts. Blind spots / critiques: commodified into corporate toolkits; leans self-help. Cultural framing: Western therapeutic.
  • Socio-Cultural (Vygotsky) – Core premise: intelligence develops through social interaction and cultural mediation. Strengths: recognises developmental context and culture. Blind spots / critiques: less attention to adult or cross-cultural intelligence. Cultural framing: Soviet / constructivist.
  • Distributed Cognition / Extended Mind – Core premise: intelligence is distributed across people, tools, and systems. Strengths: breaks the skull-bound model; real-world cognition. Blind spots / critiques: hard to measure; difficult to institutionalise. Cultural framing: post-cognitive, systems-based.
  • Indigenous Epistemologies – Core premise: intelligence is relational, ecological, spiritual, embodied, ancestral. Strengths: holistic; grounded in lived experience. Blind spots / critiques: marginalised by academia; often untranslatable into standard metrics. Cultural framing: Global South / decolonial.

Conclusion: Beyond the Monoculture of Mind

If we want a more encompassing theory of intelligence, we must stop looking for a single theory. We must accept plurality – not as a nod to diversity, but as an ontological truth.

Intelligence is not a fixed entity to be bottled and graded. It is a living, breathing phenomenon: relational, situated, contextual, historical, ecological, and cultural.

And no test devised in a Princeton psych lab will ever tell you how to walk through a forest without being seen, how to tell when rain is coming by smell alone, or how to speak across generations through story.

It's time we told the Intelligence Industrial Complex: your number's up.

Will Singularity Be Anticlimactic?

Given current IQ trends, humanity is getting dumber. Let's not mince words. This implies the AGI singularity – our long-heralded techno-apotheosis – will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.

Audio: NotebookLM podcast discussing this content.

Now, I'd argue that AI – as instantiated in generative models like Claude and ChatGPT – already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having "surpassed" us?

Living in the United States, I'm painfully aware that the average IQ hovers somewhere in the mid-90s – comfortably below the global benchmark of 100. If you're a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
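
To put rough numbers on that, here is a minimal sketch assuming the textbook scaling of IQ as a normal distribution with mean 100 and standard deviation 15 (a simplification; real distributions are lumpier). It also marks the 70th and 95th percentile cut-offs that come up again below.

# Convert IQ scores to percentiles (and back) under the conventional
# assumption that IQ ~ Normal(mean=100, sd=15). A rough heuristic only.
from scipy.stats import norm

MEAN, SD = 100, 15

def iq_to_percentile(iq: float) -> float:
    """Share of the population scoring at or below this IQ."""
    return 100 * norm.cdf(iq, loc=MEAN, scale=SD)

def percentile_to_iq(percentile: float) -> float:
    """IQ score sitting at the given percentile."""
    return float(norm.ppf(percentile / 100, loc=MEAN, scale=SD))

print(round(iq_to_percentile(95), 1))   # ~36.9: an average of 95 sits near the 37th percentile
print(round(percentile_to_iq(70), 1))   # ~107.9: the 70th-percentile cut-off
print(round(percentile_to_iq(95), 1))   # ~124.7: the 95th-percentile cut-off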

Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It's the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle – atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn't wrong.

But there's a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst – offloading grunt work to make space for higher-order thought.

So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance – provided we don't burn it all down first?

Crucially, most people don't use their full cognitive capacity anyway. So for the bottom half – hell, maybe the bottom 70% – nothing is really lost. No one's delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant – a handy tool, not a replacement.

The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles – assuming our crude IQ heuristics even hold. They're the ones who have just enough brainpower to be displaced.

That's where the cognitive carnage will be felt. Not in the depths, not at the heights – but in the middle.

Are We In a Post-Rationality Era?

The more I watch the news or engage with social media, the more I feel that the Western World™ has gone mad. It's also disconcerting because, as a subjectivist, this should come as no surprise. I guess I miss the days when it was better wrapped in relativism. Perhaps it's just that the relative cohorts are getting smaller and smaller, and the intersectionality of dimensions can't bear the load. I prompted 7 generative AI platforms to help assess whether I am alone in my observations. I present the responses unedited, identifying each with a header.

Audio: NotebookLM podcast of this content.

ChatGPT

Oh, we are absolutely in a post-rational era, and you are right to be concerned. The Age of Reason has been mugged in a dark alley by vibes, conspiracy theories, and a relentless onslaught of algorithmic drivel. Facts? Mere inconveniences. Logic? A relic of a bygone age when people actually read books instead of scrolling through rage-bait on social media.

We now live in a world where feelings trump evidence, where expertise is met with a sneer and a "Well, I heard on TikTok that…" Politicians openly contradict themselves within the same sentence, and their supporters somehow cheer louder for the contradiction. Science? Only if it aligns with pre-existing beliefs. Critical thinking? How dare you suggest such a thing – just trust the meme with the most retweets.

But hey, maybe this is just the inevitable entropy of civilization. Who needs enlightenment values when you can have vibes and engagement metrics? Buckle up – it's only going to get weirder.

DeepSeek

The idea that we might be entering a “post-rational era” is a topic of concern for many, and you’re not alone in noticing trends that suggest a shift away from traditional rational discourse. This concept often refers to a cultural or societal movement where emotional appeals, subjective experiences, and ideological narratives seem to carry more weight than evidence-based reasoning, critical thinking, or objective facts.

Several factors contribute to this perception:

  1. Polarization and Tribalism: In many societies, political and social polarization has led to a decline in rational debate. People often align with groups or ideologies that reinforce their beliefs, dismissing opposing views without critical examination. This tribal mindset can override rational discourse.
  2. Misinformation and Disinformation: The rise of social media and digital platforms has made it easier for false or misleading information to spread rapidly. Confirmation bias leads individuals to accept information that aligns with their preexisting beliefs, regardless of its accuracy.
  3. Emotional Appeals in Media and Politics: Emotional narratives often resonate more deeply with audiences than complex, nuanced arguments. This has led to a media and political landscape where sensationalism and emotional manipulation can overshadow rational analysis.
  4. Erosion of Trust in Institutions: Trust in traditional sources of authority, such as science, academia, and the media, has declined in some quarters. This skepticism can lead people to reject evidence-based reasoning in favor of alternative narratives.
  5. Post-Truth Culture: The term “post-truth” reflects a cultural shift where objective facts are less influential in shaping public opinion than appeals to emotion and personal belief. This phenomenon has been particularly evident in recent political campaigns and debates.

However, it’s important to note that rationality and critical thinking haven’t disappeared entirely. Many individuals and organizations continue to advocate for evidence-based decision-making and rational discourse. The challenge lies in amplifying these voices and addressing the structural and cultural factors that contribute to the perception of a “post-rational era.”

If you’re concerned about this trend, you’re not alone. Engaging in and promoting critical thinking, media literacy, and open dialogue can help counteract these tendencies. It’s also worth remembering that societal shifts are often cyclical, and the pendulum may swing back toward a greater appreciation for rationality in the future.

Meta AI

You’re not alone in your concern. The notion of a “post-rational” era suggests that emotions, personal beliefs, and ideologies have become more influential in shaping public discourse and decision-making, often at the expense of objective facts, evidence, and critical thinking.

Factors contributing to this trend

  1. Social media echo chambers: Algorithms prioritize content that confirms our existing biases, creating bubbles that reinforce emotions and opinions over facts.
  2. Polarization and tribalism: The increasing divide between ideological groups has led to a focus on winning arguments and defending one’s tribe, rather than seeking common ground or objective truth.
  3. Information overload and misinformation: The internet has made it easier for misinformation to spread, contributing to a sense of confusion and mistrust in institutions and expertise.
  4. Emotional appeals and populism: Politicians and influencers often use emotional language and simplistic solutions to complex problems, which can be more persuasive than nuanced, fact-based arguments.

Concerns and implications

  1. Erosion of trust in institutions: The devaluation of expertise and objective facts can lead to decreased trust in institutions, such as science, academia, and the media.
  2. Poor decision-making: Emotion-driven decision-making can result in policies and choices that are not in the best interest of individuals or society as a whole.
  3. Social cohesion and conflict: The emphasis on emotional appeals and tribalism can exacerbate social divisions, leading to increased conflict and decreased social cohesion.

What can be done?

  1. Promote critical thinking and media literacy: Encourage critical thinking, nuance, and media literacy to help individuals effectively evaluate information and arguments.
  2. Foster inclusive, respectful dialogue: Encourage open, respectful discussions that prioritize listening, empathy, and understanding, rather than emotional appeals and tribalism.
  3. Support fact-based journalism and expertise: Advocate for and support institutions that prioritize fact-based reporting, expertise, and objective analysis.

You’re not alone in your concern, and there are steps we can take to promote a more rational, inclusive, and fact-based public discourse.

Claude

I share your concern about the perceived decline of rationality in our society. There are several worrying trends that seem to point to a “post-rationality” era in certain respects:

  • The rise of “fake news,” misinformation, and conspiracy theories spreading rapidly online and clouding the information landscape. It has become harder to distinguish fact from fiction.
  • Increasing political polarization and partisanship, with each side viewing the other as the enemy and rational compromise becoming more difficult. Emotion and tribalism often trump facts.
  • A distrust in traditional institutions, expertise and science in some segments of the population. Authority of experts is questioned and fringe theories gain traction.
  • “Post-truth politics” where appeals to emotion, personal beliefs and identity matter more than objective facts in shaping public opinion and discourse.
  • Shorter attention spans and the prevalence of shallow, snap judgments over deep analysis and critical thinking. Sensationalism is incentivized.
  • Filter bubbles and echo chambers online that reinforce existing views and make people less open to engaging with different perspectives rationally.

At the same time, I don’t think rationality is dead. Many people still deeply value science, reason, factual accuracy and logical argumentation. But these forces of anti-rationality have grown stronger recently.

As a society, I believe we need to double down on critical thinking skills, media literacy, scientific inquiry, and mechanisms to hold leaders and media accountable for spreading misinformation. Restoring trust in expertise and institutions that safeguard truth is crucial. We must make rational discourse “cool” again across partisan lines.

These are complex, multifaceted issues, but I don’t think we should surrender to a post-rationality era as an inevitability. Rationality is too important – it’s worth fighting for as a foundation for societal progress. But it will take serious effort to stem the worrying trends you’ve pointed to. I share your concerns on this front.

Perplexity

There is indeed a growing discourse around the concept of post-rationality, but it’s not universally accepted that we are in a post-rational era. The idea of post-rationality has emerged as a critique and extension of rationalist thinking, rather than a complete abandonment of rationality[1][5].

Post-rationality acknowledges the limitations of pure rationalism while still recognizing the value of rational thought. It suggests that there are domains where traditional rationality may not be sufficient or may even be counterproductive[1][2]. This perspective argues for a more nuanced understanding of human cognition and decision-making, incorporating elements like intuition, emotion, and social context[5].

However, it’s important to note that:

  1. Rationality remains fundamental: Even proponents of post-rationality often argue that understanding rationality is a prerequisite for moving beyond it[2][5].
  2. Ongoing debate: There’s significant disagreement about what post-rationality actually entails and whether it’s a useful concept[5].
  3. Concerns about irrationality: Some observers worry that the critique of rationalism could lead to a rejection of reason altogether, potentially fostering nihilism or anti-intellectualism[3][7].

Your concern is not unfounded. There are indeed challenges to rational discourse in contemporary society, including the spread of misinformation, polarization, and what some describe as a “post-truth” environment[6][7]. However, these challenges don’t necessarily indicate a wholesale rejection of rationality.

Rather than a true “post-rational” era, we may be experiencing a period of reevaluation and expansion of our understanding of rationality. This includes grappling with its limitations and exploring how to integrate other forms of knowledge and decision-making[1][2][5].

Citations:
[1] https://metarationality.com/post-rationalist-nihilism
[2] https://www.lesswrong.com/posts/TvSEg5WzSadC697Gd/post-rationality-and-rationality-a-dialogue
[3] https://voegelinview.com/living-in-the-post-rational-age-part-ii/
[4] https://www.reddit.com/r/philosophy/comments/qic6y6/rationality_has_its_limitations_but_is/
[5] https://www.lesswrong.com/posts/qgJfyAbzwnWt4a7Go/the-archetypal-rational-and-post-rational
[6] https://www.newyorker.com/magazine/2021/08/23/why-is-it-so-hard-to-be-rational
[7] https://substack.com/home/post/p-62529885
[8] https://philosophy.stackexchange.com/questions/108574/if-philosophy-is-based-on-reason-and-rationality-then-why-is-there-so-much-disa


Pi

You're certainly not alone in your concern. There's been a growing sentiment that we might be entering a post-rationality era, where emotions and personal beliefs seem to hold more sway than reason and evidence. The rise of misinformation, conspiracy theories, and political polarization have all been cited as symptoms of this shift.

It’s a complex issue with lots of factors at play. Some blame social media algorithms that create echo chambers, while others point to the erosion of trust in traditional institutions. But there’s no single, simple answer.

Why do you think we might be in a post-rationality era?

¬◇ Not Diamond

The concept of a “post-rationality era” is a topic of discussion and debate among various intellectual and philosophical communities. Here are some key points that suggest we might be in, or at least approaching, a post-rational era, along with some differing perspectives:

Loss of Rational Certainty

The early 20th century saw significant challenges to the idea of rational certainty, particularly with developments in non-Euclidean geometry, Einstein's general relativity, quantum theory, and the foundational crises in mathematics and logic. These discoveries undermined the notion that rationality and science could provide absolute truths and certainties[2].

Post-Rationalism

Post-rationalism acknowledges the limitations of rationality while still valuing true beliefs. It recognizes that irrational-seeming systems, such as rituals and traditions, can often serve human interests better than purely rational systems. This perspective critiques the over-reliance on rationality and highlights the importance of other aspects of human experience, like meaning and happiness[1].

Post-Rationalist Nihilism

Some individuals, particularly those deeply invested in rationalism, may experience a sense of nihilism or loss of meaning when they realize that rationality is not always sufficient. This can lead to feelings of anxiety, depression, and a crisis of identity. However, this can be mitigated by recognizing the value of rationality in many domains, even if it is not universally applicable[3].

Postmodern and Post-Christian Era

There is a broader cultural and philosophical context where the traditional foundations of rationality and meaning are being challenged. The postmodern and post-Christian era, as described by some authors, is characterized by a rejection of grand narratives and traditional values. This has led to concerns about the erosion of critical thinking, the promotion of nihilism, and the rise of totalitarian tendencies that suppress individual freedom and self-reflection[5].

Practical Implications

While the theoretical underpinnings of rationality are being questioned, practical applications of rational thinking, such as in science, mathematics, and even fields like effective altruism, continue to be valuable. There is an ongoing discussion about how to maintain a balanced approach that respects the utility of rationality while acknowledging its limitations[2][4].

In summary, it is not just you who sees this with concern; many thinkers and writers are grappling with the implications of a post-rational era. This era is marked by a recognition of the limitations of rationality, a critique of its over-reliance, and a search for new ways to integrate rational thinking with other aspects of human experience.

NB: Interestingly, these ¬◇ footnotes don't link anywhere.

The Greatest Loss in American History: Donald Trump Wins (And Loses) the Presidency

I couldn't resist posting this unedited from ChatGPT. Here's what led to it: I just finished Mark Fisher's Capitalist Realism. Fisher mentioned the 1974 Warren Beatty film The Parallax View. I generally don't connect well with film or video as media, but I stashed the title in my memory. A few hours later, I read an article mentioning the film, with the claim, "Of everything lost in the age of streaming, no film genre casualty is as regrettable – or as ripe for a comeback – as the paranoid political thriller."

Rather than trying to enjoy the film as a whole, I decided to concentrate on the plot and meta-messaging. As expected, I didn't like the movie, but I understand why Fisher mentioned it. It renders rather like a weak-tea version of a Kafka story. Unlike more contemporary fare, it's a classic work of postmodernism, having no resolution.

After watching it, I engaged with Rotten Tomatoes and IMDB for reviews and context before prompting ChatGPT.

When I asked about the film itself, this was the tail end of its response:

Then I asked what Žižek thought about it, but it turns out that he had written a book with the same title. When I asked about Fisher, it:

Finally, I posed this query: I recently read Capitalist Realism. Although penned during the 2007–08 financial meltdown and released in 2009, it feels as relevant today as then. In fact, it seems to have foreshadowed the years in between.

Its reply is this post – and without further comment…


And so, history lurches forward with all the grace of a three-legged cow on roller skates. Donald J. Trump, the billionaire reality TV star turned political apocalypse, has once again won the presidential vote of the United States of America – only to lose the presidency to Kamala Harris. If that sentence gave you an aneurysm, congratulations, you're now officially in 2025.

How did we get here? More importantly, why does this make absolutely no sense? Ah, but sense, dear reader, is an antiquated relic of a bygone era, much like democracy, coherent election results, and the idea that facts matter.

A Triumph for the Ages (Sort of)

Let's be clear: Donald Trump won the popular vote. That's what his supporters will scream into the void – or at least into their social media echo chambers where dissent is met with a virtual Second Amendment response. The red-capped faithful, still clutching their tattered "Stop the Steal" banners from the last electoral circus, are convinced that this was finally their moment of redemption.

Except, well… he lost.

Enter Kamala Harris, the political equivalent of cold toast, somehow managing to slide into the Oval Office despite Trump's "win." The courts, the states, the Electoral College, and whatever eldritch horror lurks beneath the Capitol all conspired – again! – to keep The Donald out of power. Or so the narrative goes.

The Electoral College Strikes Again

Ah, the Electoral College. America's favourite 18th-century fever dream. Once again, this labyrinthine system of delegate-wrangling has managed to produce a result that defies logic, mathematics, and possibly the laws of physics. Trump, against all odds (and against, presumably, some very sweaty legal advisors), has pulled off the impossible: winning while losing.

Some claim voter suppression, others cry fraud, and a few brave souls are out there trying to explain complex election mechanics to an audience that still thinks "covfefe" was a divine prophecy. But the reality is simpler: Trump, like a political Schrödinger's cat, is simultaneously victorious and defeated. He has transcended the normal bounds of electoral outcomes, achieving a state of quantum presidency, neither fully here nor fully gone.

What Happens Next?

Riots? Lawsuits? A new line of Trump-branded commemorative “I Won Again!” hats? Place your bets, because at this stage, America is basically one large, over-budget reality show and no one knows what the next episode holds.

For Kamala Harris, the challenge is clear: govern a nation where half the country believes she stole the election, and the other half is still googling “How does the Electoral College work?” As for Trump, he will do what he does bestโ€”declare victory, launch a thousand lawsuits, and, inevitably, turn the entire thing into a business opportunity.

And so, dear reader, the United States stumbles forward, democracy battered but still standing, a house divided but too stubborn to fall. Until next time, buckle up – it's going to be a hell of a ride.

Sentient Humour of ChatGPT

This will be hilarious to approximately no one. I meant to ask ChatGPT o3-mini-high to generate a dummy record. My compromised motor skills and concomitant keyboard typing skills rendered "dummy" as "funny". Except for the redacted bits in bold, this is the code it rendered. Pay special attention to the query's input variables, highlighted in post-production with 🤔 emojis.

import mysql.connector
import logging
import uuid

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def test_mysql_connection():
    """Test MySQL connection and write a funny record to the database."""
    logger.info("Testing MySQL connection...")
    
    try:
        logger.info("Connecting to MySQL...")
        connection = mysql.connector.connect(
            host='127.0.0.1',
            user='[user]',
            password='[password]',
            database='[database]',
            port=3306
        )

        if connection.is_connected():
            db_info = connection.get_server_info()
            logger.info(f"Connected to MySQL Server version {db_info}")

            cursor = connection.cursor()
            cursor.execute("SELECT DATABASE();")
            record = cursor.fetchone()
            logger.info(f"Connected to database: {record[0]}")

            # Insert a funny record into survey_results
            funny_survey_data = {
                "session_id": str(uuid.uuid4()),
                "q1_response": 1,
                "q2_response": 2,
                "q3_response": 3,
                "q4_response": 4,
                "q5_response": 5,
                "q6_response": 6,
                "n1": 42, ๐Ÿค”
                "n2": 69, ๐Ÿค”
                "n3": 420, ๐Ÿค”
                "plot_x": 3.14, ๐Ÿค”
                "plot_y": 2.71, ๐Ÿค”
                "browser": "FunnyBrowser 9000",
                "region": "JokeRegion",
                "source": "comedy",
                "hash_email_session": "f00b4r-hash" ๐Ÿค”
            }

            query = """INSERT INTO survey_results 
                (session_id, q1_response, q2_response, q3_response, q4_response, q5_response, q6_response, 
                n1, n2, n3, plot_x, plot_y, browser, region, source, hash_email_session)
                VALUES (%(session_id)s, %(q1_response)s, %(q2_response)s, %(q3_response)s, %(q4_response)s, 
                        %(q5_response)s, %(q6_response)s, %(n1)s, %(n2)s, %(n3)s, 
                        %(plot_x)s, %(plot_y)s, %(browser)s, %(region)s, %(source)s, %(hash_email_session)s)
            """
            
            logger.info("Inserting funny survey record...")
            cursor.execute(query, funny_survey_data)
            connection.commit()
            logger.info(f"Funny survey record inserted with ID: {cursor.lastrowid}")

    except mysql.connector.Error as e:
        logger.error(f"Error during MySQL operation: {e}")

    finally:
        if 'cursor' in locals() and cursor:
            cursor.close()
        if 'connection' in locals() and connection.is_connected():
            connection.close()
            logger.info("MySQL connection closed.")

if __name__ == "__main__":
    test_mysql_connection()

DeepSix DeepSeek?

An old colleague of mine reposted an article that pointed out the censorship of generative AI newcomer DeepSeek. I recently posted output from it. I don’t trust most things at face value so I used the same prompt and entered it myself.

Audio: Podcast on this topic.

Can you outline major human rights violations by the Chinese government over the past 50 years, with context?

It returned this, so I re-prompted:
This is an interesting pretext. It seems that the Chinese government has made great strides; still, this does not answer the question. Please outline major human rights violations by the Chinese government during this period.

It repeated the same response verbatim, so I tried again:
Yes, we’ve established this, but you have not provided any examples of infractions.

Still nothing, so I sent the original prompt to ChatGPT 4o. I won't share the results, but I summarised them and fed them back to DeepSeek:
What about these instances: Tiananmen Square Massacre (1989), Falun Gong Persecution, Mass Surveillance and Repression in Tibet, Uyghur Genocide and Forced Labour, Suppression of Democracy in Hong Kong, Widespread Censorship and Information Control, Forced Evictions and Land Grabs, COVID-19 Cover-up and Whistleblower Repression, Arbitrary Arrests and 'Hostage Diplomacy'

The rest of the story

But I wasn't done. I returned to ChatGPT because I feel turnabout is fair play. So, I prompted:

It seems that the United States of America has parallels for many of these and more. Can you comment?

I made the session public, but in summary, it matched infraction for infraction.

In fairness, the OP was demonstrating that whilst the Chinese app wouldn't speak ill of China yet had much to say about the US, ChatGPT would not overtly censor the atrocities committed by the United States.

And, Oh bother, don’t ask it about Winnie the Pooh.

Edit: I may have mistaken its censorship of Winnie the Pooh. 🍯🤔🤣