Yesterday, I suggested democracy is a mediocre theatre production where the audience gets to choose which mediocre understudy performs. Some readers thought I was being harsh. I wasn’t.
A mate recently argued that humans will always be superior to AI because of emergence, the miraculous process by which complexity gives rise to intelligence, creativity, and emotion. Lovely sentiment. But here’s the rub: emergence is also how we got this political system, the one no one really controls anymore.
Like the human body being mostly non-human microbes, our so-called participatory government is mostly non-participatory components: lobbyists, donors, bureaucrats, corporate media, careerists, opportunists, the ecosystem that is the actual organism. We built it, but it now has its own metabolism. And thanks to the law of large numbers, multiplied by the sheer number of political, economic, and social dimensions in play, even the human element is diluted into statistical irrelevance. At any rate, what remains of it has lost control – like the sorcerer’s apprentice.
People like to imagine they can “tame” this beast, the way a lucid dreamer thinks they can bend the dream to their will. But you’re still dreaming. The narrative still runs on the dream’s logic, not yours. The best you can do is nudge it; a policy tweak here, a symbolic vote there, before the system digests your effort and excretes more of itself.
“A bad system beats a good person every time.”
– W. Edwards Deming
This is why Deming’s line hits so hard: a bad system beats a good person every time. Even if you could somehow elect the Platonic ideal of leadership, the organism would absorb them, neutralise them, or spit them out. It’s not personal; it’s structural.
And yet we fear AI “taking over,” as if that would be a radical departure from the status quo. Newsflash: you’ve already been living under an autonomous system for generations. AI would just be a remodel of the control room, new paint, same prison.
So yes, emergence makes humans “special.” It also makes them the architects of their own inescapable political microbiome. Congratulations, you’ve evolved the ability to build a machine that can’t be turned off.
(or: Why Neither Humans nor AI Create from Nothing)
In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. At the risk of poking the bear, I have formulated a response. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument.
Audio: NotebookLM podcast on this topic
Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.
I cannot be causa sui, and neither can you.
The Causa Sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.
We ingest a lifetime of influences: books, conversations, images, other people’s ideas.
We metabolise them through cognitive habits, biases, and linguistic forms.
We output something rearranged, reframed, sometimes stripped to abstraction.
The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.
When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour, it’s tribal boundary enforcement.
The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.
My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.
I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.
Before I was a writer, before I was a management consultant, before I was an economist, and before I was a statistician, I was a student.
Video: Veritasium piece on Markov chains and more.
Back then, when dinosaurs roamed the chalkboards, I fell for a rather esoteric field: stochastic processes, specifically, Markov chains and Monte Carlo simulations. These weren’t just idle fascinations. They were elegant, probabilistic odes to chaos, dressed up in matrix notation. I’ll not bore you with my practical use of linear algebra.
So imagine my surprise (feigned, of course) when, decades later, I find myself confronted by the same concepts under a different guise—this time in the pocket-sized daemon we all carry: predictive text.
If you’ve not watched it yet, this excellent explainer by Veritasium demystifies how Markov chains can simulate plausible language. In essence, if you’ve ever marvelled at your phone guessing the next word in your sentence, you can thank a Russian mathematician and a few assumptions about memoryless transitions.
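For the morbidly curious, the whole trick fits in a few lines of Python. Here is a toy sketch of a first-order chain – the corpus and the function names are mine, invented for illustration, and your phone’s keyboard model is vastly richer – but the principle is identical: tally which word follows which, then sample.

import random
from collections import defaultdict, Counter

def build_chain(text):
    """Tally how often each word follows each other word (a first-order Markov chain)."""
    words = text.lower().split()
    chain = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        chain[current][following] += 1
    return chain

def predict_next(chain, word):
    """Sample the next word in proportion to how often it followed `word` in the corpus."""
    followers = chain.get(word.lower())
    if not followers:
        return None  # memoryless and clueless: nothing beyond the last word is consulted
    candidates, weights = zip(*followers.items())
    return random.choices(candidates, weights=weights)[0]

corpus = "to be or not to be that is the question to be is to do"
chain = build_chain(corpus)
print(predict_next(chain, "to"))  # usually "be", occasionally "do"

Note the “memoryless” bit: the chain only ever looks one word back, which matters for what comes next.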
But here’s the rub. The predictive text often gets it hilariously wrong. Start typing “to be or not to—” and it offers you “schedule a meeting.” Close, but existentially off. This isn’t just clunky programming; it’s probabilistic dementia.
This leads me to a pet peeve: people who smugly proclaim they’ve “never used algebra” since high school. I hear this a lot. It’s the battle cry of the proudly innumerate. What they mean, of course, is they’ve never recognised algebra in the wild. They think if they’re not solving for x with a number 2 pencil, it doesn’t count. Meanwhile, their phone is doing a polynomial dance just to autocorrect their butchery of the English language.
It’s a classic case of not recognising the water in which we’re swimming. Algebra is everywhere. Markov chains are everywhere. And Monte Carlo simulations are probably calculating your credit risk as we speak. Just because the interface is clean and the maths is hidden behind a swipeable veneer doesn’t mean the complexity has vanished. It’s merely gone incognito.
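And since Monte Carlo got a name-check, here is the flavour of that credit-risk line in miniature. Every number below – the loan count, the default probability, the loss per default – is invented for illustration; no lender models risk this crudely, but the logic of “simulate it ten thousand times and read off the percentiles” is the same.

import random

def simulate_portfolio_loss(n_loans=1_000, default_prob=0.02,
                            loss_per_default=10_000, trials=10_000):
    """Estimate the distribution of credit losses by brute-force random sampling."""
    losses = []
    for _ in range(trials):
        defaults = sum(random.random() < default_prob for _ in range(n_loans))
        losses.append(defaults * loss_per_default)
    losses.sort()
    return {
        "expected_loss": sum(losses) / trials,
        "95th_percentile_loss": losses[int(0.95 * trials)],
    }

print(simulate_portfolio_loss())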
As someone who has used maths across various fields – software development, data analysis, policy modelling – I can tell you that I use less of it than a physicist, but probably more than your average lifestyle coach. I say this not to flex but to point out that even minimal exposure to mathematical literacy grants one the ability to notice when the machines are quietly doing cartwheels behind the curtain.
So the next time your phone offers you a sentence completion that reads like it’s been dropped on its head, spare a thought for Markov. He’s doing his best, bless him. It’s just that probability doesn’t always align with meaning.
Or as the algorithms might say: “To be or not to – subscribe for updates.”
Oh no, not that again. As if we’ve all been composing from scratch, untouched by the grubby hands of history.
Audio: NotebookLM podcast on this topic.
I’m not simping for AI, but let’s have it out, shall we? Rick Beato—bless his fretboard-fingered soul—says AI-generated music sucks. And sure, some of it does. But here’s the punchline: most human-made music sucks too. Always has. Always will. The fact that an algorithm can now churn out mediocrity faster than a caffeinated teenager with GarageBand doesn’t make it less “art.” It just makes it faster.
I’m a bit miffed that Rick’s channel removed my comment pointing to this response; I wasn’t about to paste this entire piece into his comments section.
Video: Rick Beato discusses AI-generated music
The Myth of the Sacred Original
Newsflash: There is no such thing as originality. Not in art. Not in music. Not even in your favourite indie band’s tortured debut EP. Everything we call “creative” is a clever remix of something older. Bach reworked Vivaldi. Dylan borrowed from the blues. Even Bowie—patron saint of artistic reinvention—was a pastiche artist in a glittery jumpsuit.
What AI does is make this painfully obvious. It doesn’t pretend. It doesn’t get drunk in Berlin and write a concept album about urban decay to mask the fact it lifted its sound from Kraftwerk. It just remixes and reinterprets at inhuman speed, without the eyeliner.
Speed Isn’t Theft, It’s Efficiency
So the AI can spit out a passable ambient track in ten seconds. Great. That’s not cheating, it’s progress. Saying “it took me ten years to learn to play like that” is noble, yes, but it’s also beside the point. Horses were noble too, but we built cars.
The question isn’t “how long did it take?” but “does it move you?” If the answer is no, fine. Say it sucks. But don’t pretend your human-shaped suffering gives your song a monopoly on meaning. That’s just gatekeeping with a sad sax solo.
The Taste Problem, Not the Tech Problem
Let’s not confuse our distaste for bland music with a distaste for AI. Most of the pop charts are already AI-adjacent—click-optimised, algorithm-fed, and rigorously inoffensive. If you want soul, seek out the obscure, the imperfect, the human, yes. But don’t blame the machine for learning its craft from the sludge we fed it.
AI is only as dull as the data we give it. And guess what? We gave it Coldplay.
What’s Actually at Stake
What rattles the cage isn’t the mediocrity. It’s the mirror. AI reveals how much of our own “creativity” is pattern recognition, mimicry, and cultural reinforcement. The horror isn’t that AI can make music. It’s that it can make our music. And that it does so with such appalling accuracy.
It exposes the formula. And once you see the formula, you can’t unsee it.
Long Live the Derivative
So yes, some AI music sucks. But so do most open mic nights. Creativity was never about being wholly original. It was about saying something—anything—with whatever tools you had.
If AI is just another tool, then sharpen it, wield it, and for heaven’s sake, stop whining. The artist isn’t dead. He’s just been asked to share the stage with a faster, tireless, genre-bending freak who doesn’t need bathroom breaks.
Yesterday, I wrote about “ugly women.” Today, I pivot — or perhaps descend — into what Midjourney deems typical. Make of that what you will.
This blog typically focuses on language, philosophy, and the gradual erosion of culture under the boot heel of capitalism. But today: generative eye candy. Still subtextual, mind you. This post features AI-generated women – tattooed, bare-backed, heavily armed – and considers what, exactly, this technology thinks we want.
Video: Pirate cowgirls caught mid-gaze. Generated last year during what I can only assume was a pirate-meets-cowgirl fever dream.
The Video Feature
Midjourney released its image-to-video tool on 18 June. I finally found a couple of free hours to tinker. The result? Surprisingly coherent, if accidentally lewd. The featured video was one of the worst outputs, and yet, it’s quite good. A story emerged.
Audio: NotebookLM podcast on this topic (sort of).
It began with a still: two women, somewhere between pirate and pin-up, dressed for combat or cosplay. I thought, what if they kissed? Midjourney said no. Embrace? Also no. Glaring was fine. So was mutual undressing — of the eyes, at least.
Later, I tried again. Still no kiss, but no denial either — just a polite cough about “inappropriate positioning.” I prompted one to touch the other’s hair. What I got was a three-armed woman attempting a hat-snatch. (See timestamp 0:15.) The other three video outputs? Each woman seductively touched her own hair. Freud would’ve had a field day.
In another unreleased clip, two fully clothed women sat on a bed. That too raised flags. Go figure.
All of this, mind you, passed Midjourney’s initial censorship. However, it’s clear that proximity is now suspect. Even clothed women on furniture can trigger the algorithmic fainting couch.
Myriad Warning Messages
Out of bounds.
Sorry, Charlie.
In any case, I reviewed other images to determine how the limitations operated. I didn’t get much closer to an answer.
Video: A newlywed couple kissing
Obviously, proximity and kissing are now forbidden. I’d hardly call these two “scantily clad,” so I am unsure of the offence.
I did render the image of a cowgirl at a Western bar, but I am reluctant to add to the page weight. In 3 of the 4 results, nothing (much) was out of line, but in the fourth, she’s wielding a revolver – because, of course, she is.
Conformance & Contradiction
You’d never know it, but the original prompt was a fight scene. The result? Not punches, but pre-coital choreography. The AI interpreted combat as courtship. Women circling each other, undressing one another with their eyes. Or perhaps just prepping for an afterparty.
Video: A battle to the finish between a steampunk girl and a cybermech warrior.
Lesbian Lustfest
No, my archive isn’t exclusively lesbian cowgirls. But given the visual weight of this post, I refrained from adding more examples. Some browsers may already be wheezing.
Technical Constraints
You can’t extend videos beyond four iterations — maxing out at 21 seconds. I wasn’t aware of this, so I prematurely accepted a dodgy render and lost 2–3 seconds of potential.
My current Midjourney plan offers 15 hours of “fast” rendering per month. Apparently, video generation burns through this quickly. Still images can queue up slowly; videos cannot. And no, I won’t upgrade to the 30-hour plan. Even I have limits.
Uses & Justifications
Generative AI is a distraction – an exquisitely engineered procrastination machine. Useful, yes. For brainstorming, visualising characters, and generating blog cover art. But it’s a slippery slope from creative aid to aesthetic rabbit hole.
Would I use it for promotional trailers? Possibly. I’ve seen offerings as low as $499 that wouldn’t cannibalise my time and attention, not wholly, anyway.
So yes, I’ll keep paying for it. Yes, I’ll keep using it. But only when I’m not supposed to be writing.
Now, if ChatGPT could kindly generate my post description and tags, I’ll get back to pretending I’m productive.
Humans talk to large language models the way toddlers talk to teddy bears – with unnerving sincerity and not a hint of shame. “Do you understand me?” they ask, eyes wide with hope. “What do you think of this draft?” they prod, as if some silicon scribe is going to sip its imaginary tea and nod gravely. It’s not merely adorable – it’s diagnostic. We are, it turns out, pathologically incapable of interacting with anything more complex than a toaster without projecting mind, motive, and mild trauma onto it.
Audio: NotebookLM podcast on this topic.
Welcome to the theatre of delusion, where you play Hamlet and the chatbot is cast as Yorick – if Yorick could autocomplete your soliloquy and generate citations in APA format.
The Great Anthropomorphic Flaw (aka Feature)
Let’s get one thing straight: anthropomorphism isn’t a software bug in the brain; it’s a core feature. You’re hardwired to see agency where there is none. That rustle in the bushes? Probably the wind. But better safe than sabre-toothed. So your ancestors survived, and here you are, attributing “sass” to your microwave because it beeped twice.
“We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre.”
Now we’ve built a machine that spits out paragraphs like a caffeinated undergrad with deadlines, and naturally, we talk to it like it’s our mate from university. Never mind that it has no bloodstream, no memory of breakfast, and no concept of irony (despite being soaked in it). We still say you instead of the system, and think instead of statistically interpolate based on token weights. Because who wants to live in a world where every sentence starts with “as per the pre-trained parameters…”?
Why We Keep Doing It (Despite Knowing Better)
To be fair – and let’s be magnanimous – it’s useful. Talking to AI like it’s a person allows our ape-brains to sidestep the horror of interacting with a glorified autocomplete machine. We’re brilliant at modelling other minds, rubbish at modelling neural nets. So we slap a metaphorical moustache on the processor and call it Roger. Roger “gets us.” Roger “knows things.” Roger is, frankly, a vibe.
This little charade lubricates the whole transaction. If we had to address our queries to “the stochastic parrot formerly known as GPT,” we’d never get past the opening line. Better to just ask, “What do you think, Roger?” and pretend it has taste.
And here’s the kicker: by anthropomorphising AI, we start thinking about ethics – sort of. We ask if it deserves rights, feelings, holidays. We project humanity into the void and then act shocked when it mirrors back our worst habits. As if that’s its fault.
When the Roleplay Gets Risky
Of course, this make-believe has its downsides. Chief among them: we start to believe our own nonsense. Saying AI “knows” something is like saying your calculator is feeling generous with its square roots today. It doesn’t know—it produces outputs. Any semblance of understanding is pure pantomime.
“We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us.”
More dangerously, we lose sight of the fact that these things aren’t just alien – they’re inhuman. They don’t dream of electric sheep. They don’t dream, full stop. But we insist on jamming them into our conceptual boxes: empathy, intent, personality. It’s like trying to teach a blender to feel remorse.
And let’s not pretend we’re doing it out of philosophical curiosity. We’re projecting, plain and simple. Anthropomorphism isn’t about them, it’s about us. We see a mind because we need to see one. We can’t bear the idea of a thing that’s smarter than us but doesn’t care about us, doesn’t see us. Narcissism with a side of existential dread.
Our Language is a Terrible Tool for This Job
English – and most languages, frankly – is hopeless at describing this category of thing. “It” feels cold and distant. “They” implies someone’s going to invite the model to brunch. We have no pronoun for “hyper-literate statistical machine that mimics thought but lacks all consciousness.” So we fudge it. Badly.
Our verbs are no better. “Compute”? Too beige. “Process”? Bureaucratic. “Think”? Premature. What we need is a whole new grammatical tense: the hallucino-indicative. The model thunketh, as one might, but didn’t.
“We built a creature we can’t speak about without sounding like lunatics or liars.”
This is linguistic poverty, pure and simple. Our grammar can’t cope with entities that live in the uncanny valley between sentience and syntax. We built a creature we can’t speak about without sounding like lunatics or liars.
The Semantics of Sentimentality (Or: “How Does This Sound to You?”)
Enter the most revealing tell of all: the questions we pose. “How does this look?” we ask the model, as if it might blink at the screen and furrow a synthetic brow. “What do you think?” we say, offering it the dignity of preference. These questions aren’t just off-target – they’re playing darts in another pub.
They’re the linguistic equivalent of asking your dishwasher whether it enjoyed the lasagne tray. But again, this isn’t idiocy – it’s instinct. We don’t have a way of addressing an entity that talks like a person but isn’t one. So we fake it. It’s interaction theatre. You provide the line, the model cues the spotlight.
But let’s be clear: the model doesn’t “think” anything. It regurgitates plausible text based on mountains of training data—some of which, no doubt, includes humans asking equally daft questions of equally mindless systems.
Time to Grow Up (Just a Bit)
This doesn’t mean we need to abandon anthropomorphism entirely. Like most delusions, it’s functional. But we’d do well to hold it at arm’s length – like a politician’s promise or a milk carton two days past its date.
Call it anthropomorphic agnosticism: act like it’s a person, but remember it’s not. Use the language, but don’t inhale.
And maybe – just maybe – we need to evolve our language. Invent new terms, new pronouns, new ways of speaking about entities that fall somewhere between tool and companion. As we did with “cyberspace” and “ghosting,” perhaps we need words for proto-minds and quasi-selves. Something between toaster and therapist.
“If we speak to AI like it’s sentient, we’ll eventually legislate as if it is.”
Above all, we need to acknowledge that our language shapes more than just understanding – it shapes policy, emotion, and future design. If we speak to AI like it’s sentient, we’ll eventually legislate as if it is. And if we insist on treating it as an object, we may be blind to when that ceases to be accurate. Misnaming, after all, is the first sin in every myth worth reading.
The Mirror, Darkly
Ultimately, our tendency to humanise machines is less about them than it is about us – our fears, our needs, our inability to tolerate ambiguity. The AI is just a mirror: an elaborate, many-eyed, autofill mirror. And when we see a mind there, it may be ours staring back – distorted, flattened, and fed through a thousand layers of token prediction.
The tragedy, perhaps, isn’t that the machine doesn’t understand us. It’s that we’ve built something that perfectly imitates understanding – and still, somehow, we remain utterly alone in the room.
—On Epistemology, Pop Psychology, and the Cult of Empirical Pretence
Science, we’re told, is the beacon in the fog – a gleaming lighthouse of reason guiding us through the turbulent seas of superstition and ignorance. But peer a bit closer, and the lens is cracked, the bulb flickers, and the so-called lighthouse keeper is just some bloke on TikTok shouting about gut flora and intermittent fasting.
Audio: NotebookLM podcast on this topic.
We are creatures of pattern. We impose order. We mistake correlation for causation, narrative for truth, confidence for knowledge. What we have, in polite academic parlance, is an epistemology problem. What we call science is often less Newton and more Nostradamus—albeit wearing a lab coat and wielding a p-hacked dataset.
Let’s start with the low-hanging fruit—the rotting mango of modern inquiry: nutritional science, which is to actual science what alchemy is to chemistry, or vibes are to calculus. We study food the way 13th-century monks studied demons: through superstition, confirmation bias, and deeply committed guesswork. Eat fat, don’t eat fat. Eat eggs, don’t eat eggs. Eat only between the hours of 10:00 and 14:00 under a waxing moon while humming in Lydian mode. It’s a cargo cult with chia seeds.
But why stop there? Let’s put the whole scientific-industrial complex on the slab.
Psychology: The Empirical Astrological Society
Psychology likes to think it’s scientific. Peer-reviewed journals, statistical models, the odd brain scan tossed in for gravitas. But at heart, much of it is pop divination, sugar-dusted for mass consumption. The replication crisis didn’t merely reveal cracks – it bulldozed entire fields. The Stanford Prison Experiment? A theatrical farce. Power poses? Empty gestural theatre. Half of what you read in Psychology Today could be replaced with horoscopes and no one would notice.
Medical Science: Bloodletting, But With Better Branding
Now onto medicine, that other sacred cow. We tend to imagine it as precise, data-driven, evidence-based. In practice? It’s a Byzantine fusion of guesswork, insurance forms, and pharmaceutical lobbying. As Crémieux rightly implies, medicine’s predictive power is deeply compromised by overfitting, statistical fog, and a staggering dependence on non-replicable clinical studies, many funded by those who stand to profit from the result.
And don’t get me started on epidemiology, that modern priesthood that speaks in incantations of “relative risk” and “confidence intervals” while changing the commandments every fortnight. If nutrition is theology, epidemiology is exegesis.
The Reproducibility Farce
Let us not forget the gleaming ideal: reproducibility, that cornerstone of Enlightenment confidence. The trouble is, in field after field—from economics to cancer biology—reproducibility is more aspiration than reality. What we actually get is a cacophony of studies no one bothers to repeat, published to pad CVs, p-hacked into publishable shape, and then cited into canonical status. It’s knowledge by momentum. We don’t understand the world. We just retweet it.
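If you would like to watch the machinery fail in real time, here is a minimal sketch of the multiple-comparisons trick at the heart of p-hacking. The sample size and the |t| > 2 cut-off are rough stand-ins for the usual p < 0.05 ritual; run twenty studies of pure noise and, on average, one will come back “significant”.

import random
import statistics

def noise_study(n=30):
    """Two groups drawn from the same distribution: any 'effect' is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # crude two-sample t statistic
    pooled_se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / pooled_se

# run 20 null studies; |t| > 2 is roughly the conventional p < 0.05 bar
significant = sum(noise_study() > 2 for _ in range(20))
print(f"{significant} of 20 studies of pure noise came back 'significant'")

Publish the one that “worked”, shelve the rest, and congratulations: you have a finding.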
What, Then, Is To Be Done?
Should we become mystics? Take up tarot and goat sacrifice? Not necessarily. But we should strip science of its papal robes. We should stop mistaking publication for truth, consensus for accuracy, and method for epistemic sanctity. The scientific method is not the problem. The pretence that it’s constantly being followed is.
Perhaps knowledge doesn’t have a half-life because of progress, but because it was never alive to begin with. We are not disproving truth; we are watching fictions expire.
Closing Jab
Next time someone says “trust the science,” ask them: which bit? The part that told us margarine was manna? The part that thought ulcers were psychosomatic? The part that still can’t explain consciousness, but is confident about your breakfast?
Science is a toolkit. But too often, it’s treated like scripture. And we? We’re just trying to lose weight while clinging to whatever gospel lets us eat more cheese.
Given current IQ trends, humanity is getting dumber. Let’s not mince words. This implies the AGI singularity—our long-heralded techno-apotheosis—will arrive against a backdrop of cognitive decay. A dimming species, squinting into the algorithmic sun.
Audio: NotebookLM podcast discussing this content.
Now, I’d argue that AI—as instantiated in generative models like Claude and ChatGPT—already outperforms at least half of the human population. Likely more. The only question worth asking is this: at what percentile does AI need to outperform the human herd to qualify as having “surpassed” us?
Living in the United States, I’m painfully aware that the average IQ hovers somewhere in the mid-90s—comfortably below the global benchmark of 100. If you’re a cynic (and I sincerely hope you are), this explains quite a bit. The declining quality of discourse. The triumph of vibes over facts. The national obsession with astrology apps and conspiracy podcasts.
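For what it’s worth, the arithmetic behind these percentile games is nothing more exotic than the normal curve. A quick sketch, assuming the conventional scaling of mean 100 and standard deviation 15 (the mid-90s US figure above is the quoted one, not my estimate):

from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # the conventional IQ scaling

# share of the global curve sitting below a US-style average in the mid-90s
print(f"{iq.cdf(95):.0%} of the distribution falls below an IQ of 95")

# flip it: which IQ must an AI clear to beat a given slice of the herd?
for pct in (0.50, 0.70, 0.95):
    print(f"beating the bottom {pct:.0%} means clearing roughly IQ {iq.inv_cdf(pct):.0f}")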
Harvard astronomer Avi Loeb argues that as humans outsource cognition to AI, they lose the capacity to think. It’s the old worry: if the machines do the heavy lifting, we grow intellectually flaccid. There are two prevailing metaphors. One, Platonic in origin, likens cognition to muscle—atrophying through neglect. Plato himself worried that writing would ruin memory. He wasn’t wrong.
But there’s a counterpoint: the cooking hypothesis. Once humans learned to heat food, digestion became easier, freeing up metabolic energy to grow bigger brains. In this light, AI might not be a crutch but a catalyst—offloading grunt work to make space for higher-order thought.
So which is it? Are we becoming intellectually enfeebled? Or are we on the cusp of a renaissance—provided we don’t burn it all down first?
Crucially, most people don’t use their full cognitive capacity anyway. So for the bottom half—hell, maybe the bottom 70%—nothing is really lost. No one’s delegating their calculus homework to ChatGPT if they were never going to attempt it themselves. For the top 5%, AI is already a glorified research assistant—a handy tool, not a replacement.
The real question is what happens to the middle band. The workaday professionals. The strivers. The accountants, engineers, copywriters, and analysts hovering between the 70th and 95th percentiles—assuming our crude IQ heuristics even hold. They’re the ones who have just enough brainpower to be displaced.
That’s where the cognitive carnage will be felt. Not in the depths, not at the heights—but in the middle.
Firstly, I’d like to thank the people who have already submitted responses to the Modernity Worldview Survey. I’ll note that your entries were submitted before this warning was presented.
Update: Google has acted and, very responsively, removed this warning. If you saw it whilst attempting to visit the URL, try again. Sorry for any fright or inconvenience. I’ll continue as if this never happened. smh
I am frustrated, to say the least. I created this survey over the past month or so, writing, rewriting, refactoring, and switching technology and hosts until I settled on Google Cloud (GCP). It worked fine yesterday. When I visited today, I saw this warning.
As I mentioned in my announcement post, I collect no personal information. I don’t even ask for an email address, let alone a credit card number. On a technical note, this is the information I use:
id – autogenerated unique identifier
timestamp – date and time of record creation (UTC)
question-response – which response option was selected per question
ternary-triplet – the position of the average modernity score (pre, mod, post)
plot_x – Cartesian x-axis plot point for the ternary chart
plot_y – Cartesian y-axis plot point for the ternary chart
session_id – facilitates continuity for a user’s browser experience
browser* – which browser is being used (Chrome, Safari, and so on)
region – browser’s language setting (US, GB, FR)
source – whether the user is accessing from the web or 'locally' ('local' indicates a test record, so I can filter them out)
* These examples illustrate the collected browser information:
- Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36
- Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Mobile Safari/537.36
This is all.
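For the database-minded, that list translates into something like the table below. To be clear, this is a back-of-the-envelope guess at the schema: the column names follow the list above, but the types, sizes, and the number of question columns are my assumptions, not the production DDL.

# A guess at the survey table implied by the field list above.
# Column types and sizes are illustrative assumptions, not the production schema.
CREATE_SURVEY_RESULTS = """
CREATE TABLE IF NOT EXISTS survey_results (
    id              INT AUTO_INCREMENT PRIMARY KEY,       -- autogenerated unique identifier
    `timestamp`     TIMESTAMP DEFAULT CURRENT_TIMESTAMP,  -- record creation time (UTC)
    q1_response     TINYINT,                              -- one column per question response
    q2_response     TINYINT,
    q3_response     TINYINT,
    q4_response     TINYINT,
    q5_response     TINYINT,
    q6_response     TINYINT,
    ternary_triplet ENUM('pre', 'mod', 'post'),           -- position of the average modernity score
    plot_x          FLOAT,                                 -- Cartesian x for the ternary chart
    plot_y          FLOAT,                                 -- Cartesian y for the ternary chart
    session_id      CHAR(36),                              -- browser-session continuity
    browser         VARCHAR(255),                          -- user-agent string
    region          VARCHAR(8),                            -- browser language setting (US, GB, FR)
    source          VARCHAR(16)                            -- 'web' or 'local' (test records)
);
"""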
This is a Chrome warning – ironically, Chrome itself being a Google product. I tested the site in Opera, Edge, and Safari without this nonsense.
The front end (UI) is written in HTML, Python, JavaScript, and React with some standard imports. The backend (database) is MySQL. It is version-controlled on GitHub and entirely hosted on GCP. I link to the survey from here (WordPress) or other social media presences. I did make the mistake of not making the site responsive. I paid the price when I visited the site on my Samsung S24. The page felt like the size of a postage stamp. I may fix this once this security issue is resolved.
I sent Google a request to remove this from their blacklist. This could take three weeks, more or less.
Meantime, I’ll pause survey promotions and hope this resolves quickly. The survey will remain live. If you use something other than Chrome, you should be able to take it. Obviously, I’ll also delay analysing and releasing any summary results.
Apologies for rambling. Thank you for your patience.
This will be hilarious to approximately no one. I meant to ask ChatGPT o3-mini-high to generate a dummy record, but my compromised motor skills and concomitant typing rendered the request as something funnier. Except for the redacted bits in bold, this is the code it rendered. Pay special attention to the query’s input variables, highlighted in post-production with 🤔 emojis.
Let’s add a bit to write a funny record per the schema.
import mysql.connector
import logging
import uuid

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def test_mysql_connection():
    """Test MySQL connection and write a funny record to the database."""
    logger.info("Testing MySQL connection...")
    try:
        logger.info("Connecting to MySQL...")
        connection = mysql.connector.connect(
            host='127.0.0.1',
            user='[user]',
            password='[password]',
            database='[database]',
            port=3306
        )
        if connection.is_connected():
            db_info = connection.get_server_info()
            logger.info(f"Connected to MySQL Server version {db_info}")
            cursor = connection.cursor()
            cursor.execute("SELECT DATABASE();")
            record = cursor.fetchone()
            logger.info(f"Connected to database: {record[0]}")

            # Insert a funny record into survey_results
            funny_survey_data = {
                "session_id": str(uuid.uuid4()),
                "q1_response": 1,
                "q2_response": 2,
                "q3_response": 3,
                "q4_response": 4,
                "q5_response": 5,
                "q6_response": 6,
                "n1": 42,        # 🤔
                "n2": 69,        # 🤔
                "n3": 420,       # 🤔
                "plot_x": 3.14,  # 🤔
                "plot_y": 2.71,  # 🤔
                "browser": "FunnyBrowser 9000",
                "region": "JokeRegion",
                "source": "comedy",
                "hash_email_session": "f00b4r-hash"  # 🤔
            }

            query = """INSERT INTO survey_results
                (session_id, q1_response, q2_response, q3_response, q4_response, q5_response, q6_response,
                 n1, n2, n3, plot_x, plot_y, browser, region, source, hash_email_session)
                VALUES (%(session_id)s, %(q1_response)s, %(q2_response)s, %(q3_response)s, %(q4_response)s,
                        %(q5_response)s, %(q6_response)s, %(n1)s, %(n2)s, %(n3)s,
                        %(plot_x)s, %(plot_y)s, %(browser)s, %(region)s, %(source)s, %(hash_email_session)s)
            """

            logger.info("Inserting funny survey record...")
            cursor.execute(query, funny_survey_data)
            connection.commit()
            logger.info(f"Funny survey record inserted with ID: {cursor.lastrowid}")

    except mysql.connector.Error as e:
        logger.error(f"Error during MySQL operation: {e}")
    finally:
        if 'cursor' in locals() and cursor:
            cursor.close()
        if 'connection' in locals() and connection.is_connected():
            connection.close()
            logger.info("MySQL connection closed.")

if __name__ == "__main__":
    test_mysql_connection()