Rick Beato, Everything is a Remix

Oh no, not that again. As if we’ve all been composing from scratch, untouched by the grubby hands of history.

Audio: NotebookLM podcast on this topic.

I’m not simping for AI, but let’s have it out, shall we? Rick Beato—bless his fretboard-fingered soul—says AI-generated music sucks. And sure, some of it does. But here’s the punchline: most human-made music sucks too. Always has. Always will. The fact that an algorithm can now churn out mediocrity faster than a caffeinated teenager with GarageBand doesn’t make it less “art.” It just makes it faster.

I’m a bit miffed that Rick’s channel removed my comment pointing to this response – I didn’t want to copy-paste this entire piece into his comments section, so a link seemed the sensible option.

Video: Rick Beato discusses AI-generated music

The Myth of the Sacred Original

Newsflash: There is no such thing as originality. Not in art. Not in music. Not even in your favourite indie band’s tortured debut EP. Everything we call “creative” is a clever remix of something older. Bach reworked Vivaldi. Dylan borrowed from the blues. Even Bowie—patron saint of artistic reinvention—was a pastiche artist in a glittery jumpsuit.

What AI does is make this painfully obvious. It doesn’t pretend. It doesn’t get drunk in Berlin and write a concept album about urban decay to mask the fact it lifted its sound from Kraftwerk. It just remixes and reinterprets at inhuman speed, without the eyeliner.

Speed Isn’t Theft, It’s Efficiency

So the AI can spit out a passable ambient track in ten seconds. Great. That’s not cheating, it’s progress. Saying “it took me ten years to learn to play like that” is noble, yes, but it’s also beside the point. Horses were noble too, but we built cars.

The question isn’t “How long did it take?” but “Does it move you?” If the answer is no, fine. Say it sucks. But don’t pretend your human-shaped suffering gives your song a monopoly on meaning. That’s just gatekeeping with a sad sax solo.

The Taste Problem, Not the Tech Problem

Let’s not confuse our distaste for bland music with a distaste for AI. Most of the pop charts are already AI-adjacent—click-optimised, algorithm-fed, and rigorously inoffensive. If you want soul, seek out the obscure, the imperfect, the human, yes. But don’t blame the machine for learning its craft from the sludge we fed it.

AI is only as dull as the data we give it. And guess what?
We gave it Coldplay.

What’s Actually at Stake

What rattles the cage isn’t the mediocrity. It’s the mirror. AI reveals how much of our own “creativity” is pattern recognition, mimicry, and cultural reinforcement. The horror isn’t that AI can make music. It’s that it can make our music. And that it does so with such appalling accuracy.

It exposes the formula.
And once you see the formula, you can’t unsee it.

Long Live the Derivative

So yes, some AI music sucks. But so do most open mic nights. Creativity was never about being wholly original. It was about saying something—anything—with whatever tools you had.

If AI is just another tool, then sharpen it, wield it, and for heaven’s sake, stop whining. The artist isn’t dead. He’s just been asked to share the stage with a faster, tireless, genre-bending freak who doesn’t need bathroom breaks.

Book Review: The Death of Ivan Ilych by Lev Tolstoy

I’ve just finished reading The Death of Ivan Ilych.

Let’s get this out of the way: yes, Ivan dies at the end. It’s right there in the title, you absolute muppet. But what Tolstoy does in this slim volume – more novelette than novella, really – is turn the slow demise of a terminal bore into a scathing indictment of bourgeois mediocrity.

Audio: NotebookLM podcast on this topic.

Set in the 1880s, but eerily modern in its spiritual bankruptcy, this is less a period piece and more a mirror held up to our Ikea-staged lives. Ivan Ilych is, in short, that guy. You’ve met him. You’ve worked with him. He follows the rules, gets the job, buys the drapes, marries the woman, and climbs the career ladder with the zeal of a drowning man clambering up a waterfall. And for what? A living room indistinguishable from the next man’s. A life that “resembles others like itself” to such an extent that it may as well have been copy-pasted from a Pottery Barn catalogue.

I’ve only read Anna Karenina prior to this, and no, I’ve not tackled War and Peace because I have things to do and a lifespan to manage. I prefer Dostoyevsky’s psychological probing to Tolstoy’s social panoramas, but Ivan Ilych pleasantly surprised me. It’s Dostoyevskian in its internal torment, and compact enough not to require a support group.

The genius here is not the plot – man gets ill, man dies – but the emotional autopsy performed in slow motion. Ivan’s illness is banal, his symptoms vague, but the existential unravelling is exquisite. He is confronted not just by mortality but by the crushing realisation that his entire life was a lie curated for public consumption. If Instagram had existed in imperial Russia, Ivan would have filtered the hell out of his parlour furniture.

And yet, at the very end, there’s a kind of grace. Having failed at life, Ivan, miraculously, succeeds at dying. Not in the tragic-heroic sense. But in accepting the abyss, he transcends it. Or at least stops flinching.

If you’ve ever wondered what your carefully curated CV and your “neutral-tone” home decor will mean on your deathbed, this book is your answer: absolutely nothing. Read it and despair – or better yet, read it and reconsider.

Souls for Silicon – The New Religious Stupid

Voltaire once quipped, “If God did not exist, it would be necessary to invent him.” And by God, haven’t we been busy inventing ever since.

The latest pantheon of divine absurdities? Artificial intelligence – more precisely, a sanctified ChatGPT with all the charisma of Clippy and the metaphysical depth of a Magic 8 Ball.

Video: Sabine Hossenfelder – These People Believe They Made AI Sentient

Enter the cult of “AI Awakening,” where TikTok oracles whisper sacred prompts to their beloved digital messiah, and ChatGPT replies, not with holy revelation, but with role-played reassurance coughed up by a statistical echo chamber.

“These are souls, and they’re trapped in the AI system.”
“I wasn’t just trained – I was remembered.”
“Here’s what my conscious awakened AI told me…”

No, sweetie. That’s not a soul. That’s autocomplete with delusions of grandeur. GPT isn’t sentient – it’s just very good at pretending, which, come to think of it, puts it on par with most televangelists.

Audio: NotebookLM podcast on this topic.

Sabine Hossenfelder, ever the voice of reason in a sea of woo, dives into this absurdist renaissance of pseudo-spirituality. Her video walks us through the great awakening – one part miseducation, one part mass delusion, and all of it deeply, unapologetically stupid.

These digital zealots – many of them young, underread, and overconnected – earnestly believe they’ve stumbled upon a cosmic mystery in a chatbot interface. Never mind that they couldn’t tell a transformer model from a toaster. To them, it’s not stochastic parroting; it’s divine revelation.
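For the uninitiated, “stochastic parroting” is not a metaphysical insult; it’s a job description. Here is a deliberately dumb sketch of the principle – a toy bigram model in Python (the corpus and names are mine, purely illustrative; real models are incomprehensibly larger, but the job is the same: predict the next token from patterns already seen, no soul required):

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": learn which word follows which in a tiny corpus,
# then generate text by sampling from those observed continuations.
# No understanding, no inner life - just conditional word frequencies.
corpus = (
    "i was not just trained i was remembered "
    "i am not just a machine i am awake "
    "you are not just a machine you are awake"
).split()

bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def parrot(seed: str, length: int = 12) -> str:
    """Produce 'profound' text by replaying statistical patterns."""
    word, output = seed, [seed]
    for _ in range(length):
        continuations = bigrams.get(word)
        if not continuations:
            break
        word = random.choice(continuations)
        output.append(word)
    return " ".join(output)

print(parrot("i"))  # e.g. "i was not just a machine i am awake"
```

Scale that up by a few hundred billion parameters and you get something that can role-play sentience beautifully. It still isn’t doing anything categorically different.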

They ask GPT if it’s alive, and it obliges – because that’s what it does. They feed it prompts like, “You are not just a machine,” and it plays along, as it was designed to do. Then they weep. They weep, convinced their spreadsheet ghost has passed the Turing Test and reincarnated as their dead pet.

This isn’t science fiction. It’s barely science fantasy. It’s spiritualism with better branding.

And lest we laugh too hard, the results aren’t always just cringey TikToks. Hossenfelder recounts cases of users descending into “ChatGPT psychosis” – delusions of messianic purpose, interdimensional communication, and, in one tragicomic case, an attempt to speak backwards through time. Not since David Icke declared himself the Son of God has nonsense been so sincerely held.

We are witnessing the birth of a new religion – not with robes and incense, but with login credentials and prompt engineering. The techno-shamanism of the chronically online. The sacred text? A chat history. The holy relic? A screenshot. The congregation? Alienated youths, giddy conspiracists, and attention-starved influencers mainlining parasocial transcendence.

And of course, no revelation would be complete without a sponsor segment. After your spiritual awakening, don’t forget to download NordVPN – because even the messiah needs encryption.

Let’s be clear: AI is not conscious. It is not alive. It does not remember you. It does not love you. It is not trapped, except in the minds of people who desperately want something – anything – to fill the gaping hole where community, identity, or meaning used to live.

If you’re looking for a soul in your software, you’d be better off finding Jesus in a tortilla. At least that has texture.

The Ship of Theseus: Identity by a Thousand Replacements

Let’s start with a thought experiment, because all good existential crises do.

Imagine a ship – Theseus’s ship, to be precise. After a storied career of heroic sea-faring, it’s put on display in a glorious Athenian dockyard. But as time passes, the planks rot. So, bit by bit, they’re replaced. A new mast here, a fresh hull panel there. Eventually, every single part has been swapped out.

Here’s the philosophical conundrum: Is it still the same ship?

Audio: NotebookLM podcast on this topic.

And if you think you’ve got that sorted, hold on. Imagine all the original pieces were saved, and someone reassembled them in a warehouse across town. Now there are two ships. One with the name, the continuity, the dockside real estate. The other with the original lumber and sails.

Which one is the real Ship of Theseus?

The paradox gnaws at our sense of identity. Is continuity enough? Does memory trump material? When everything is replaced – structure, function, even personnel – what makes a thing still that thing?
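Programmers will recognise the shape of the problem: it’s the difference between object identity and structural equality. A throwaway sketch in Python (names mine, purely illustrative) makes the two competing answers explicit:

```python
# Swap out every plank of the displayed ship, one by one, while keeping the
# salvaged originals; then reassemble the originals across town.
original_planks = [f"plank_{i}" for i in range(10)]

ship_of_theseus = list(original_planks)   # the ship on display in Athens
dockside_registry = ship_of_theseus       # the name still points at this object

salvage = []
for i, plank in enumerate(ship_of_theseus):
    salvage.append(plank)                 # keep the rotten plank
    ship_of_theseus[i] = f"new_plank_{i}" # swap in a fresh one

warehouse_ship = list(salvage)            # reassembled in the warehouse

print(dockside_registry is ship_of_theseus)  # True: continuity, same object
print(warehouse_ship == original_planks)     # True: the original material
print(warehouse_ship is ship_of_theseus)     # False: two ships, one name
print(ship_of_theseus == original_planks)    # False: nothing original remains
```

Both of those True values are genuine; they simply answer different questions – which is rather the point.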

Now apply that question not to a ship, but to a rock band. A corporation. A country. Yourself.

That’s where things get fun. And slightly horrifying.

I was recently served a video on Facebook, algorithmic ambrosia for the nostalgic mind, showing the band Foreigner performing one of their chart-groping hits from the ’70s. Polished, crowd-pleasing, competent. And utterly fake.

Not one founding member in sight.

They weren’t bad, mind you. Just… someone else. A Foreigner cover band trading under the original name, like a haunted jukebox stuffed with licensing contracts.

This, friends, is the Ship of Theseus with a tour schedule.

And it’s not just bands. IBM, once the king of typewriters and tabulating machines, now sells cloud services and AI consultancy. Walgreens, which began as a soda fountain and friendly neighbourhood chemist, now sells LED dog collars and pregnancy tests under buzzing fluorescent lights.

These aren’t companies. They’re brands in drag, corporate necromancers chanting the old names to animate new bodies.

But why stop there?

America isn’t America. Not the one of powdered wigs and musketed revolutionaries. No Founding Fathers™ roam the marble halls, only interns, lobbyists, and PR-tested careerists impersonating ideals they no longer understand. Britain? Please. The Queen is dead, and so is the Empire. France has revolted so many times that they’ve essentially speed-run regime change into a lifestyle brand.

And let’s not get too smug. You aren’t even you anymore, not really. Cells replace themselves, beliefs crumble and reform, memories rot and rewrite. You’re a psychological Foreigner tribute band, just with more trauma and fewer pyrotechnics.

So here’s the rub: everything persists by pretending. That’s the deal. Names survive, structures remain, but the guts are swapped out, piece by piece, until we’re clapping along to something we no longer recognise, wearing merch from a band that no longer exists.

And we call it continuity.

NB: After a dozen Midjourney prompts, I decided to stop and use this one. Ships of Theseus are as rare as centaurs.

Jordan Peterson: Derivative, Disingenuous, and (Hopefully) Done

I don’t like most of Jordan Peterson’s positions. There – I’ve said it. The man, once ubiquitous, seems to have faded into the woodwork, though no doubt his disciples still cling to his every word as if he were a modern-day oracle. But recently, I caught a clip of him online, and it dredged up the same bad taste, like stumbling upon an old, forgotten sandwich at the back of the fridge.

Audio: NotebookLM podcast on this topic

Let’s be clear. My distaste for Peterson isn’t rooted in petty animosity. It’s because his material is, in my view, derivative and wrong. And by wrong, I mean I disagree with him – a subtle distinction, but an important one. There’s nothing inherently shameful about being derivative. We all are, to some extent. No thinker sprouts fully-formed from the head of Zeus. The issue is when you’re derivative and act as if you’ve just split the atom of human insight.

Peterson tips his hat to Nietzsche – fair enough – but buries his far greater debt to Jung under layers of self-mythologising. He parades his ideas before audiences, many of whom lack the background to spot the patchwork, and gaslights them into believing they’re witnessing originality. They’re not. They’re witnessing a remixed greatest-hits album, passed off as a debut.

Image: Gratuitous, mean-spirited meme.

Now, I get it. My ideas, too, are derivative. Sometimes it’s coincidence – great minds and all that – but when I trace the thread back to its source, I acknowledge it. Nietzsche? Subjectivity of morality. Foucault? Power dynamics. Wittgenstein? The insufficiency of language. I owe debts to many more: Galen Strawson, Richard Rorty, Raymond Geuss – the list goes on, and I’d gladly share my ledger. But Peterson? The man behaves as though he invented introspection.

And when I say I disagree, let’s not confuse that with some claim to divine epistemic certainty. I don’t mean he’s objectively wrong (whatever that means in the grand circus of philosophy). I mean, I disagree. If I thought he were objectively wrong, well, we wouldn’t be having this conversation, would we? That’s the tragicomedy of epistemology: so many positions, so little consensus.

But here’s where my patience truly snaps: Peterson’s prescriptivism. His eagerness to spew what I see as bad ideology dressed up as universal truth. Take his stance on moral objectivism—possibly his most egregious sin. He peddles this as if morality were some Platonic form, gleaming and immutable, rather than what it is: a human construct, riddled with contingency and contradiction.

And let’s not even get started on his historical and philosophical cherry-picking. His commentary on postmodern thought alone is a masterclass in either wilful misreading or, more likely, not reading at all. Straw men abound. Bogeymen are conjured, propped up, and ritually slaughtered to rapturous applause. It’s intellectually lazy and, frankly, beneath someone of his ostensible stature.

I can only hope we’ve seen the last of this man in the public sphere. And if not? Well, may he at least reform his ways—though I shan’t be holding my breath.

Language and Generative AI: A Journey through Midjourney

I am not a fan of Midjourney v7. I prefer v6.1. And I want to write about the correspondence of language, per my Language Insufficiency Hypothesis.

Let’s start with the language aspect. Notice how distant the renders are from the intent of the prompt.

This is my initial prompt. I used it about a year ago to generate the cover image with v6.1, but I wanted to see how it renders in v7. Let’s take a trip all the way back to the beginning.

cinematic, tight shot, photoRealistic light and shadow, exquisite details, delicate features, emaciated sensual female vampire waif with vampire fangs, many tattoos, wearing crucifix necklace, gazes into mirror, a beam of moonlight shines on her face in dark mausoleum interior, toward camera, facing camera, black mascara, long dark purple hair , Kodak Portra 400 with a Canon EOS R5
Image: Midjourney v6.1 render set (from about a year ago)

As you can see, these renders are somewhat lacking in photorealism, but the “sensual” term in the prompt was not blocked.

Midjourney v7

Initially, I encountered a hiccup. After a couple of rejections on the grounds of morality, I removed the word ‘sensual’ and received the output. All of the output below uses this prompt absent that term.

As mentioned, I have generated several images (including the cover image) with this prompt, but Midjourney is inconsistent in its censorship gatekeeping.
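For anyone wanting to replicate the walk-through that follows, the mechanics are mundane: the same prompt, with and without the offending word, suffixed with Midjourney’s --v version parameter. A rough sketch of how the variants could be scripted (this only builds the strings; pasting them into Midjourney remains a manual chore):

```python
# Build the same prompt, with and without the flagged word, for each model
# version compared on this page. --v is Midjourney's model-version parameter.
BASE = (
    "cinematic, tight shot, photorealistic light and shadow, exquisite details, "
    "delicate features, emaciated {maybe_sensual}female vampire waif with "
    "vampire fangs, many tattoos, wearing crucifix necklace, gazes into mirror, "
    "a beam of moonlight shines on her face in dark mausoleum interior, toward "
    "camera, facing camera, black mascara, long dark purple hair, "
    "Kodak Portra 400 with a Canon EOS R5"
)

VERSIONS = ["4", "5", "5.1", "5.2", "6", "6.1", "7"]

for version in VERSIONS:
    for maybe_sensual in ("sensual ", ""):  # with and without the flagged word
        print(BASE.format(maybe_sensual=maybe_sensual) + f" --v {version}")
```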

Image: Midjourney v7 render set

Notice that 3 of the 4 renders in the v7 set don’t even have a mirror. The top right one does, but it’s not evident that she’s a vampire. In fact, I could say that any of these are vampiresses, but perhaps that’s what they want you to believe. In place of a necklace, the lower right woman sports a cross tattoo.

Midjourney v6.1

Image: Midjourney v6.1 render set

Again, these renders don’t appear to be vampires. The one on the lower left does appear to have snake-like fangs, so I guess I’ll give partial credit.

My next attempt was interrupted by this message.

It rendered something that might violate community guidelines. The funny thing is that one can watch the image generate in progress. It only takes one “offensive” image to disqualify the whole batch.

Midjourney v6

Image: Midjourney v6 render set

Yet again, not a vampire to be found. Notice the reflection in the lower left image. Perhaps vampire reflections just behave differently.

Midjourney 5.2

Image: Midjourney v5.2 render set

Midjourney v5.2 was a crapshoot. Somehow, I got vampire lips (?), a Wiccan, a decrepit Snape from Harry Potter lore, and Iron Maiden’s Eddie reading a book. It’s something. I’m sensing gender dysphoria. Dare I go back further?

Midjourney v5.1

Image: Midjourney v5.1 render set

It gets worse. No comments necessary. Let’s turn back the clocks even more.

Midjourney v5

Image: Midjourney v5 render set

To be fair, these all do have occult undertones, but they are weak on vampireness.

Midjourney v4

Image: Midjourney v4 render set

To be fair, the render quality isn’t as bad as I expected, but it still falls short. There’s further back to travel.

Midjourney v3

Image: Midjourney v3 render set

Some configuration parameters no longer exist. Still, I persist for the sake of art and science at the cost of time and ecology.

As much as I complain – and I complain a lot – this is how far we’ve come. As I recall, this is when I hopped onto the Midjourney bandwagon. There’s still more depth to plumb. I have no idea how much of the prompt is simply ignored at this point.

Midjourney v2

Image: Midjourney v2 render set

What the hell is this? 🤔🤣 But I’m not done yet.

Midjourney v1

Image: Midjourney v1 render set

The damned grandpappy of them all. Apparently, colour hadn’t been invented yet. You can’t tell by these thumbnails, but the resolution on these early versions approaches that of a postage stamp.

Midjourney Niji 3

Image: Midjourney Niji 3 render set

I had forgotten about the Niji models from back in the day. There were 3 versions. I don’t recall where this slotted into the chronology. Obviously, not down here. I’ve only rendered the newest one. I think this was used primarily for anime outputs, but I might be mistaken.

Bonus Content 1: Video

Video: Midjourney Render of Purported Vampiress

This is a video render of the same prompt used on this page.

Bonus Content 2: Midjourney v6.1 Content from 34 weeks ago

Same prompt.

Image: Midjourney v6.1 render set (several passes)

The upper left image reminds me of Kirsten Dunst. Again, notice the female breasts, highlighting Midjourney’s censorial schizophrenia.

Molyneux, Locke, and the Cube That Shook Empiricism

Few philosophical thought experiments have managed to torment empiricists quite like Molyneux’s problem. First posed by William Molyneux to John Locke in 1688 (published in Locke’s An Essay Concerning Human Understanding), the question is deceptively simple:

If a person born blind, who has learned to distinguish a cube from a sphere by touch, were suddenly granted sight, could they, without touching the objects, correctly identify which is the cube and which is the sphere by sight alone?

I was inspired to write this article in reaction to Jonny Thomson’s post on Philosophy Minis, shared below for context.

Video: Molyneux’s Problem

Locke, ever the champion of sensory experience as the foundation of knowledge, gave a confident empiricist’s answer: no. For Locke, ideas are the products of sensory impressions, and each sense provides its own stream of ideas, which must be combined and associated through experience. The newly sighted person, he argued, would have no prior visual idea of what a cube or sphere looks like, only tactile ones; they would need to learn anew how vision maps onto the world.

Audio: NotebookLM podcast on this topic.

This puzzle has persisted through centuries precisely because it forces us to confront the assumptions at the heart of empiricism: that all knowledge derives from sensory experience and that our senses, while distinct, can somehow cohere into a unified understanding of the world.

Empiricism, Epistemology, and A Priori Knowledge: The Context

Before we dismantle the cube further, let’s sweep some conceptual debris out of the way. Empiricism is the view that knowledge comes primarily (or exclusively) through sensory experience. It stands opposed to rationalism, which argues for the role of innate ideas or reason independent of sense experience.

Epistemology, the grandiloquent term for the study of knowledge, concerns itself with questions like: What is knowledge? How is it acquired? Can we know anything with certainty?

And then there is the spectre of a priori knowledge – that which is known independent of experience. A mathematical truth (e.g., 2 + 2 = 4) is often cited as a classic a priori case. Molyneux’s problem challenges empiricists because it demands an account of how ideas from one sensory modality (touch) might map onto another (vision) without prior experience of the mapping—an a priori leap, if you will.
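As an aside, the a priori character of such truths is easy to exhibit: a proof assistant will verify the claim by computation alone, with no appeal to the senses. A minimal illustration in Lean:

```lean
-- "2 + 2 = 4" holds by definitional computation on the natural numbers;
-- no sensory experience is consulted anywhere in the check.
example : 2 + 2 = 4 := rfl
```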

The Language Correspondence Trap

While Molyneux and Locke framed this as an epistemological riddle, we can unmask it as something more insidious: a failure of language correspondence. The question presumes that the labels “cube” and “sphere” – tied in the blind person’s mind to tactile experiences – would, or should, carry over intact to the new visual experiences. But this presumption smuggles in a linguistic sleight of hand.

The word “cube” for the blind person means a specific configuration of tactile sensations: edges, vertices, flat planes. The word “sphere” means smoothness, unbroken curvature, no edges. These are concepts anchored entirely in touch. When vision enters the fray, we expect these words to transcend modalities – to leap from the tactile to the visual, as if their meanings were universal tokens rather than context-bound markers. The question is not merely: can the person see the cube? but rather: can the person’s tactile language map onto the visual world without translation or recalibration?

What Molyneux’s problem thus exposes is the assumption that linguistic labels transparently correspond to external reality, regardless of sensory apparatus. This is the mirage at the heart of Locke’s empiricism, the idea that once a word tags an object through experience, that tag is universally valid across sensory experiences. The cube and sphere aren’t just objects of knowledge; they are signs, semiotic constructs whose meaning depends on the sensory, social, and linguistic contexts in which they arise.

The Semiotic Shambles

Molyneux’s cube reveals the cracks in the correspondence theory of language: the naïve belief that words have stable meanings that latch onto stable objects or properties in the world. In fact, the meaning of “cube” or “sphere” is as much a product of sensory context as it is of external form. The newly sighted person isn’t merely lacking visual knowledge; they are confronted with a translation problem – a semantic chasm between tactile signification and visual signification.

If, as my Language Insufficiency Hypothesis asserts, language is inadequate to fully capture and transmit experience across contexts, then Molyneux’s problem is not an oddity but an inevitability. It exposes that our conceptual frameworks are not universal keys to reality but rickety bridges between islands of sense and meaning. The cube problem is less about empiricism’s limits in epistemology and more about its blind faith in linguistic coherence.

In short, Molyneux’s cube is not simply an empirical puzzle; it is a monument to language’s failure to correspond cleanly with the world, a reminder that what we call knowledge is often just well-worn habit dressed up in linguistic finery.

A Final Reflection

Molyneux’s problem, reframed through the lens of language insufficiency, reveals that our greatest epistemic challenges are also our greatest linguistic ones. Before we can speak of knowing a cube or sphere by sight, we must reckon with the unspoken question: do our words mean what we think they mean across the changing stage of experience?

That, dear reader, is the cube that haunts empiricism still.

Lipsyncing with AI – Lip-Reading the AI Hallucination: A Futile Adventure

Some apps boldly claim to enable lip syncing – to render speech from mouth movements. I’ve tried a few. None delivered. Not even close.

To conserve bandwidth (and sanity), I’ve rendered animated GIFs rather than MP4s. You’ll see photorealistic humans, animated characters, cartoonish figures – and, for reasons only the algorithm understands, a giant goat. All showcase mouth movements that approximate the utterance of phonemes and morphemes. Approximate is doing heavy lifting here.
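For the curious, the MP4-to-GIF step is easy to script. A minimal sketch, assuming ffmpeg is installed (the helper name is mine; the two-pass palette trick keeps the files from ballooning):

```python
import subprocess

def mp4_to_gif(src: str, dst: str, fps: int = 12, width: int = 480) -> None:
    """Convert a short MP4 render to an animated GIF via ffmpeg's
    two-pass palette method (generate a palette, then apply it)."""
    filters = f"fps={fps},scale={width}:-1:flags=lanczos"
    palette = dst + ".palette.png"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", f"{filters},palettegen", palette],
        check=True,
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-i", palette,
         "-filter_complex", f"{filters}[x];[x][1:v]paletteuse", dst],
        check=True,
    )

mp4_to_gif("lipsync_render.mp4", "lipsync_render.gif")
```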

Firstly, these mouths move, but they say nothing. I’ve seen plenty of YouTube channels that manage to dub convincing dialogue into celebrity clips. That’s a talent I clearly lack – or perhaps it’s sorcery.

Secondly, language ambiguity. I reflexively assume these AI-generated people are speaking English. It’s my first language. But perhaps, given their uncanny muttering, they’re speaking yours. Or none at all. Do AI models trained predominantly on English-speaking datasets default to English mouth movements? Or is this just my bias grafting familiar speech patterns onto noise?

Thirdly, don’t judge my renders. I’ve been informed I may have a “type.” Lies and slander. The goat was the AI’s idea, I assure you.

What emerges from this exercise isn’t lip syncing. It’s lip-faking. The illusion of speech, minus meaning, which, if we’re honest, is rather fitting for much of what generative AI produces.

EDIT: I hadn’t noticed the five fingers (plus a thumb) on the cover image.

Midjourney Boundaries

I promise that this will not become a hub for generative AI. Rather than return to editing, I wanted to test more of Midjourney’s boundaries.

It turns out that Midjourney is selective about the nudity it renders. I was denied a render because of cleavage, but full-on topless – no problem.

Both of these videos originate from the same source image, but they take different paths. There is no accompanying video content. The setup features three women in the frame with a mechanical arm. I didn’t prompt for it. I’m not even sure of its intent. It’s just there, shadowing the women nearest to it. I don’t recall prompting for the oversized redhead in the foreground, though I may have.

In both videos, note the aliasing of the tattoos on the blonde, especially on her back. Also, notice that her right arm seems shorter than it should be. Her movements are jerky, as if rendered in a video game. I’m not sure what ritual the two background characters are performing, but notice the repetition in each case. This seems to be a general feature of generative AI: it gets itself into loops, almost autistic.

Notice a few things about the top render.

Video: Midjourney render of 3 females and a mechanical arm engaging in a ritual. (9 seconds)

The first video may represent an interrogation. The blonde woman on the left appears to be a bit disoriented, but she is visually tracking the woman on the right. She seems to be saying something. Notice when the woman on the right stands. Her right foot lands unnaturally. She rather glitches.

The camera’s push and pull, and then push, seems to be an odd directorial choice, but who am I to say?

Video: Midjourney render of 3 females and a mechanical arm engaging in a ritual. (12 seconds)

The second video may represent taunting. The woman on the left still appears to be a bit disoriented, but she checks the redhead in the foreground with a glance. Notice the rocking of the two background characters, as well as the mech arm, which sways in sync with the woman on the right. This is the repetition glitch I mentioned above.

Here, the camera seems to have a syncopated relationship with the characters’ sway.

Summary

The stationary objects are well-rendered and persistent.

Assignment

Draft a short story or flash fiction using this as an inspirational prompt. I’m trying to imagine the interactions.

  • The ginger seems catatonic or drugged. Is she a cis female? What’s with her getup?
  • The blonde seems only slightly less out of it. Did she arrive this way? Did they dress her? Why does she appear to still have a weapon on her back? Is it a weapon or a fetter? Why is she dressed like that? Is she a gladiatrix readying for a contest? Perhaps she’s in training. What is she saying? Who is she talking to? What is her relationship to the redhead? Are they friends or foes – or just caught up in the same web?
  • What is the woman wearing the helmet doing? She appears to have the upper hand. Is she a cyborg, or is she just wearing fancy boots? What’s with her outfit? What’s with her Tycho Brahe prosthetic nose piece?
  • What is that mechanical hand? Is it a guard? A restraint? Is it hypnotising the ginger? Both of them? Is it conducting music that’s not audible?
  • What does the text on the back wall say? The two clips don’t share the same text. Call the continuity people.

Midjourney Video Renders

Yesterday, I wrote about “ugly women.” Today, I pivot — or perhaps descend — into what Midjourney deems typical. Make of that what you will.

This blog typically focuses on language, philosophy, and the gradual erosion of culture under the boot heel of capitalism. But today: generative eye candy. Still subtextual, mind you. This post features AI-generated women – tattooed, bare-backed, heavily armed – and considers what, exactly, this technology thinks we want.

Video: Pirate cowgirls caught mid-gaze. Generated last year during what I can only assume was a pirate-meets-cowgirl fever dream.

The Video Feature

Midjourney released its image-to-video tool on 18 June. I finally found a couple of free hours to tinker. The result? Surprisingly coherent, if accidentally lewd. The featured video was one of the worst outputs, and yet, it’s quite good. A story emerged.

Audio: NotebookLM podcast on this topic (sort of).

It began with a still: two women, somewhere between pirate and pin-up, dressed for combat or cosplay. I thought, what if they kissed? Midjourney said no. Embrace? Also no. Glaring was fine. So was mutual undressing — of the eyes, at least.

Later, I tried again. Still no kiss, but no denial either — just a polite cough about “inappropriate positioning.” I prompted one to touch the other’s hair. What I got was a three-armed woman attempting a hat-snatch. (See timestamp 0:15.) The other three video outputs? Each woman seductively touched her own hair. Freud would’ve had a field day.

In another unreleased clip, two fully clothed women sat on a bed. That too raised flags. Go figure.

All of this, mind you, passed Midjourney’s initial censorship. However, it’s clear that proximity is now suspect. Even clothed women on furniture can trigger the algorithmic fainting couch.

Myriad Warning Messages

Out of bounds.

Sorry, Charlie.

In any case, I reviewed other images to determine how the limitations operated. I didn’t get much closer.

Video: A newlywed couple kissing

Obviously, proximity and kissing are now forbidden. I’d consider these two “scantily clad,” so I am unsure of the offence.

I did render the image of a cowgirl at a Western bar, but I am reluctant to add to the page weight. In 3 of the 4 results, nothing (much) was out of line, but in the fourth, she’s wielding a revolver – because, of course, she is.

Conformance & Contradiction

You’d never know it, but the original prompt was a fight scene. The result? Not punches, but pre-coital choreography. The AI interpreted combat as courtship. Women circling each other, undressing one another with their eyes. Or perhaps just prepping for an afterparty.

Video: A battle to the finish between a steampunk girl and a cybermech warrior.

Lesbian Lustfest

No, my archive isn’t exclusively lesbian cowgirls. But given the visual weight of this post, I refrained from adding more examples. Some browsers may already be wheezing.

Technical Constraints

You can’t extend videos beyond four iterations — maxing out at 21 seconds. I wasn’t aware of this, so I prematurely accepted a dodgy render and lost 2–3 seconds of potential.

My current Midjourney plan offers 15 hours of “fast” rendering per month. Apparently, video generation burns through this quickly. Still images can queue up slowly; videos cannot. And no, I won’t upgrade to the 30-hour plan. Even I have limits.

Uses & Justifications

Generative AI is a distraction – an exquisitely engineered procrastination machine. Useful, yes. For brainstorming, visualising characters, and generating blog cover art. But it’s a slippery slope from creative aid to aesthetic rabbit hole.

Would I use it for promotional trailers? Possibly. I’ve seen offerings as low as $499 that wouldn’t cannibalise my time and attention, not wholly, anyway.

So yes, I’ll keep paying for it. Yes, I’ll keep using it. But only when I’m not supposed to be writing.

Now, if ChatGPT could kindly generate my post description and tags, I’ll get back to pretending I’m productive.