Confession: I Use AI


In fact, I’ve been involved with ‘artificial intelligence’ since about 1990, when I developed Wave 3 AI – expert systems. Wave 4 is the current incarnation. Still no ‘intelligence’ to speak of, but marketers and hypesters love the term. Perhaps in Wave 5, the name will finally be correct.

Aside from my historical connexion, I want to share how I am using AI in my writing – in this case, ChatGPT 5.1. I’m not going to give much backstory on the setup, but I’ll point out some internal process logic.

Audio: NotebookLM podcast on this topic.

I have completed the manuscript for my Language Insufficiency Hypothesis, so I have been sharing screenshots of each page – usually a spread – and using the GPT as a second set of eyes. I’ll feed it an image and a request – in this case, to find key terms so I can capitalise and italicise them appropriately. In this example, this is the ending paragraph of Chapter 6.

Image 1: Sample chapter copy. In good order.

This first screenshot is an example of output. As is evident, it was looking, among other things, for the capitalisation of the concepts of Presumption Gap and Effectiveness Horizon.

Image 2: Sample GPT output – bad iconography

Notice the iconographic language is a bit off. The red X is out of sync with the rest of the message, which says the entry is already correct. So, two instances; no problems. Next.

In this message, I warned that it was OCRing the screenshots but not retaining the formatting – which is one reason I was sharing images rather than text.

Image 3: Sample GPT output – OCR confusion

What’s interesting is that it informed me that it would now treat the image as canonical. In Image 3 (above), it’s engaging in introspection – or at least self-dialogue. This is evidence that it (1) reviewed the results of the OCR, (2) reviewed the image (as an image), and (3) compared the two to arrive at the conclusion that the OCR had indeed dropped the formatting.

It wasn’t enough to inform me that everything was ok or, better still, not to bother me with noise since it was already in good order. Instead, it’s like an autist talking to itself. It reminds me of Raymond in Rain Man.

Image 4 (next) is the last example. Here, the OCR garbles Horizon as Hπrizon, and the GPT then advises me to avoid making the same mistake of reading o as π.

Image 4: Sample GPT output – OCR corruption

Thanks for the advice. I was losing sleep worrying about this possibility.

Conclusion

This is obviously a late-stage use case. I use GPT for ideation and research. Perhaps I’ll share an example of this later. I might be able to review my earlier notes for this project, but it was started years before the latest Wave arrived.

Apparently, I’ve got more to say on this matter…


It seems my latest rant about AI-authorship accusations stirred something in me: apparently, I need to apologise for being a professional writer – or is that a writing professional? Blame the Enlightenment, blame writing and communication courses, whatevs. I certainly do. But since some people are still waving the pitchforks, insisting that anything too coherent must be artificially tainted, I should address the obvious point everyone keeps missing:

The writing structures people attribute to AI aren’t AI inventions. They’re human inventions. Old ones. Codified ones. And we made the machines copy them. Sure, they have a certain cadence. It’s the cadence you’d have if you also followed the patterns you should have been taught in school or opened a book or two on the topic. I may have read one or two over the years.

Wait for it… The orthodoxy is ours. I hate to be the one to break it to you.

Video: AI Robot Assistant (no audio)

Professional Writing Has Its Own House Rules (And They’re Older Than AI Neural Nets)

Audio: NotebookLM podcast on this topic and the last one.

Long before AI arrived to ruin civilisation and steal everyone’s quiz-night jobs, we’d already built an entire culture around ‘proper writing’. The sort of writing that would make a communications lecturer beam with pride. The Sith may come in twos; good writing comes in threes.

  1. Tell them what you’re going to say.
  2. Say it.
  3. Repeat what you told them.

But wait, there’s more:

  • Use linear flow, not intellectual jazz.
  • One idea per paragraph, please.
  • Support it with sources.
  • Conclude like a responsible adult.

These aren’t merely classroom antics. They’re the architectural grammar of academic, corporate, scientific, and policy writing. No poetic flourishes. No existential detours. No whimsical cadence. The aim is clarity, predictability, and minimal risk of misinterpretation. It’s the textual equivalent of wearing sensible shoes to a board meeting. So when someone reads a structured piece of prose and yelps, ‘It sounds like AI!’, what they’re really saying is:

Je m’accuse. AI Didn’t Invent Structure. We Forced It To Learn Ours. Full stop. The problem is that it did whilst most of us didn’t.

If AI tends toward this style – linear, tidy, methodical, lamentably sane – that’s because we fed it millions of examples of ‘proper writing’. It behaves professionally because we trained it on professional behaviour – surprisingly tautological. Quelle surprise, eh?

Just as you don’t blame a mimeograph for producing a perfectly dull office memo, you don’t blame AI for sounding like every competent academic who’s been beaten with the stick of ‘clarity and cohesion’. It’s imitation through ingestion. It’s mimicry through mass exposure.

And Now for the Twist: My Fiction Has None of These Constraints

My fiction roams freely. It spirals, loops, dissolves, contradicts, broods, and wanders through margins where structured writing fears to tread. It chases affect, not clarity. Rhythm, not rubrics. Experience, not exegesis.

No one wants to read an essay that sounds like Dr Seuss, but equally, no one wants a novel that reads like the bylaws of a pension committee.

Different aims, different freedoms: Academic and professional writing must behave itself. Fiction absolutely should not.

This isn’t a value judgement. One isn’t ‘truer’ or ‘better’ than the other – only different tools for different jobs. One informs; the other evokes. One communicates; the other murmurs and unsettles.

Not to come off like Dr Phil (or Dr Seuss), but the accusation itself reveals the real anxiety. When someone accuses a writer of sounding ‘AI-like’, what they usually mean is:

‘Your writing follows the conventions we taught you to follow – but now those conventions feel suspect because a machine can mimic them’.

And that’s not a critique of the writing. It’s a critique of the culture around writing – a panic that the mechanical parts of our craft are now automated and thus somehow ‘impure’.

But structure is not impurity. Professional clarity is not soullessness. Repetition, sequencing, scaffolding – these aren’t telltale signs of AI; they’re the residue of centuries of human pedagogy.

AI mirrors the system. It didn’t create the system. And if the system’s beginning to look uncanny in the mirror, that’s a problem of the system, not the reflection.

In Short: The Craft Is Still the Craft, Whether Human or Machine

Professional writing has rules because it needs them. Fiction abandons them because it can. AI imitates whichever domain you place in front of it.

The accusation that structured writing ‘sounds artificial’ is merely a confusion between form and origin. The form is ours. The origin is irrelevant.

If clarity is now considered suspicious, I fear for the state of discourse. But then again, I’ve feared for that for some time.

And apparently, I’ve still got more to say on the matter.

Accusations of Writing Whilst Artificial


Accusations of writing being AI are becoming more common – an irony so rich it could fund Silicon Valley for another decade. We’ve built machines to detect machines imitating us, and then we congratulate ourselves when they accuse us of being them. It’s biblical in its stupidity.

A year ago, I read an earnest little piece on ‘how to spot AI writing’. The tells? Proper grammar. Logical flow. Parallel structure. Essentially, competence. Imagine that – clarity and coherence as evidence of inhumanity. We’ve spent centuries telling students to write clearly, and now, having finally produced something that does, we call it suspicious.

Audio: NotebookLM podcast on this topic and the next one.

My own prose was recently tried and convicted by Reddit’s self-appointed literati. The charge? Too well-written, apparently. Reddit – where typos go to breed. I pop back there occasionally, against my better judgment, to find the same tribunal of keyboard Calvinists patrolling the comment fields, shouting ‘AI!’ at anything that doesn’t sound like it was composed mid-seizure. The irony, of course, is that most of them wouldn’t recognise good writing unless it came with upvotes attached.

Image: A newspaper entry that may have been generated by an AI with the surname Kahn. 🧐🤣

Now, I’ll admit: my sentences do have a certain mechanical precision. Too many em dashes, too much syntactic symmetry. But that’s not ‘AI’. That’s simply craft. Machines learned from us. They imitate our best habits because we can’t be bothered to keep them ourselves. And yet, here we are, chasing ghosts of our own creation, declaring our children inhuman.

Apparently, there are more diagnostic signs. Incorporating an Alt-26 arrow to represent progress is a telltale infraction → like this. No human, they say, would choose to illustrate A → B that way. Instead, one is faulted for remembering – or at least understanding – that Alt-key combinations exist to reveal a fuller array of options: …, ™, and so on. I’ve used these symbols long before AI Wave 4 hit shore.

Interestingly, I prefer spaced en dashes over em dashes in most cases. The em dash is an Americanism I prefer not to adopt, but it does reveal the American bias in the training data. I can consciously adopt a European spin; AI, lacking intent, finds this harder to remember.

I used to use em dashes freely, but now I almost avoid them—if only to sidestep the mass hysteria. Perhaps I’ll start using AI to randomly misspell words and wreck my own grammar. Or maybe I’ll ask it to output everything in AAVE, or some unholy creole of Contemporary English and Chaucer, and call it a stylistic choice. (For the record, the em dashes in this paragraph were injected by the wee-AI gods and left as a badge of shame.)

Meanwhile, I spend half my time wrestling with smaller, dumber AIs – the grammar-checkers and predictive text gremlins who think they know tone but have never felt one. They twitch at ellipses, squirm at irony, and whimper at rhetorical emphasis. They are the hall monitors of prose, the petty bureaucrats of language.

And the final absurdity? These same half-witted algorithms are the ones deputised to decide whether my writing is too good to be human.

How to Avoid Reading

The Intellectual Origins of Modernity

Rather, how to optimise your time commitment to reading.

I came across a digital copy of The Intellectual Origins of Modernity by David Ohana as I was perusing titles on PhilArchive. The title piqued my interest, but I wasn’t sure I wanted to commit to reading it. What’s a bloke to do? Feed it to ChatGPT, of course.

Let’s just say, M. GPT was not amused.

Audio: NotebookLM podcast on this topic.

My first question: Is this at least tangential to my work?

Yes, it wanders. He tends to smuggle in parallel intellectual developments (Kantian autonomy, Cartesian doubt, social contract theory) even when they’re only indirectly related to his main thread. It’s a common sin in “origins of modernity” texts: the net gets cast so wide that causation and correlation start swapping costumes.

If you’re looking for the through-line, it’s probably his effort to show how early modern conceptions of reason metastasised into a moral and political framework, not just an epistemic one. But, yes — tangential is fair.

Is it more of a superficial survey with some connective tissue?

That’s a precise way to put it. Ohana’s book reads like a well-informed synthesis—competent, even elegant—but not a deep excavation. He’s stitching together recognizable figures (Descartes, Hobbes, Locke, Kant, Hume) into a narrative arc about “modernity” rather than uncovering new terrain. The connective tissue is mostly thematic rather than argumentative: reason → autonomy → secular moral order → political liberalism.

It’s a survey that flatters the reader into thinking they’re tracing the genealogy of modern thought, but it rarely descends into the messy underlayers—say, the theological anxieties, material conditions, or institutional contexts that actually gave those ideas traction. In short: wide lens, thin soil.

Wow. Not a rousing endorsement.

So in a world of publish or perish, he’s publishing something more approachable than academic?

Exactly. It’s the kind of book that keeps tenure committees content and undergrads awake. Respectable citations, accessible prose, and a digestible thesis that flatters modernity while hinting at critique.

He’s not trying to break ground—he’s curating a gallery of the usual suspects and arranging them under good lighting. The prose carries that faint aroma of Routledge house style: serious but safe, just speculative enough to appear bold without alienating reviewers.

In the “publish or perish” economy, it’s survival by synthesis.

So closer to an undergraduate project than a grad school thesis?

Pretty much. It’s polished enough to pass for advanced scholarship if you don’t poke too hard, but structurally it’s more akin to a well-executed upper-level survey paper—ambitious in scope, cautious in depth.

He gestures toward big philosophical debates without really committing to any. The text moves like a docent through a museum: “Here we have Descartes, note his influence on Kant; over here, the Enlightenment’s moral optimism, soon to be critiqued by Nietzsche—moving on…”

You can tell he’s read widely, but the apparatus of argument—archival work, textual analysis, theoretical risk—is minimal. It’s competent, not adventurous.

OK, so dodged that bullet. I may still skim it.

Don’t do this to any of my books. 🧐🫠

That’s Never Happened Before

Famous last words, I know.

I was working with ChatGPT, discussing various concepts. We chatted for several hours over the course of a few days, and we came to, let’s say, a conclusion. What ChatGPT 5 did next was something I had never seen. In fact, I’m sharing the actual response.

I won’t share my thesis here, but this is one of the field verification projects it recommends.

One-page grant prospectus — Provenance Pilot

Project title
Provenance First: Testing provenance packages & slow-mode deliberation to improve municipal decision quality

Lead investigator
Bry Willis — Independent Researcher (contact: [REDACTED])

Summary (1–2 lines)
Test whether mandatory, machine-readable provenance packages plus a 60-day slow-mode and rapid adversarial audits improve decision quality (DQI), reduce reversal rates and increase public legitimacy in municipal planning decisions.

Problem statement (why fund this)
Policy decisions are frequently hijacked by headline pressure and low-provenance inputs; municipalities lack lightweight, testable protocols that make inputs traceable and give deliberation time. This pilot operationalises the “provenance-first + slow-mode” institutional fix and measures whether design moves increase robust decision outcomes rather than performative actions.

Objectives

  1. Test whether provenance packages + 60-day slow-mode increase Decision Quality Index (DQI) vs business as usual.
  2. Measure effects on Provenance Completeness Score (PCS), reversal rates at 12 months, and public legitimacy.
  3. Produce an open replication pack (data, code, provenance schema) and practitioner guidance.

Design & methods (12 months)

  • Setting: medium-sized city planning board; 12 eligible planning items over 12 months.
  • Randomisation: items randomly assigned (stratified by leverage) to Treatment (provenance package + 60-day slow-mode + funded rapid audit) or Control (standard process).
  • Intervention: standard JSON provenance package template (raw data links, code where relevant, funders, changelog), public portal posting, 60-day live comment + one adversarial rapid audit (48–72 hrs) on contested claims.
  • Primary outcome: Decision Quality Index (DQI) — composite of: evidence completeness, process reversibility, and independent expert accuracy score (pre-registered rubric).
  • Secondary outcomes: PCS, reversal at 12 months, public trust (survey), time to decision, stakeholder satisfaction.
  • Analysis: pre-registered mixed effects model (item-level with panel-level random effects), intention-to-treat.
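For concreteness, the ‘standard JSON provenance package template’ in the intervention above might be sketched as follows. To be clear, the field names here are my own illustrative assumptions, not the pilot’s actual schema:

```python
import json

def build_provenance_package(item_id, data_links, funders, changelog):
    """Assemble a minimal machine-readable provenance record.

    Field names are hypothetical placeholders for the template
    described in the prospectus (raw data links, code where
    relevant, funders, changelog).
    """
    package = {
        "item_id": item_id,
        "raw_data_links": data_links,   # links to the source datasets
        "code": None,                   # repository link, where relevant
        "funders": funders,             # who paid for the inputs
        "changelog": changelog,         # list of {date, change} entries
    }
    return json.dumps(package, indent=2)

# Example record for a hypothetical planning item.
print(build_provenance_package(
    "PLAN-2025-007",
    ["https://example.org/traffic-survey.csv"],
    ["City Transport Department"],
    [{"date": "2025-01-15", "change": "Initial submission"}],
))
```

The point of keeping it machine-readable is that the public portal and the rapid adversarial audit can both consume the same record without manual transcription.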

Deliverables

  • Pre-registered protocol on OSF; open data & replication pack (anonymised where required); policy brief and practitioner playbook; 1 peer-reviewed methods paper.

Success criteria (pre-registered)

  • Cohen’s d ≥ 0.4 improvement in DQI for treatment vs control; PCS improvement >25 percentage points; no more than 30% increase in unacceptable delay (pre-agreed threshold).
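The effect-size threshold above is just a standardised mean difference, so it can be checked with a textbook Cohen’s d computation (pooled standard deviation). The DQI scores below are invented purely for illustration:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: standardised mean difference using the pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pooled standard deviation across the two arms.
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical per-item DQI scores -- made up for the sketch.
dqi_treatment = [0.72, 0.65, 0.80, 0.70, 0.76, 0.68]
dqi_control = [0.60, 0.58, 0.66, 0.62, 0.55, 0.64]

d = cohens_d(dqi_treatment, dqi_control)
print(f"d = {d:.2f}, success threshold met: {d >= 0.4}")
```

In the real pilot the pre-registered mixed-effects model would do the heavy lifting; this sketch only shows how the headline d ≥ 0.4 criterion is evaluated.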

Risk & mitigation

  • Delay risk: cap slow-mode at 60 days; urgent items can request expedited review (documented override).
  • Capture risk: audit funding ring-fenced and administered by independent trustee panel.

I’m just curious now. Have you ever had a generative AI process end with a cost of goods and services?

What Do You Remember?

I just asked ChatGPT what it knows about me. No secrets, per se.


I remember a fair bit about you, Bry. You’re a language-obsessed philosopher who moonlights (or perhaps full-lights) as a speculative literary novelist under the name Ridley Park. You’re juggling multiple manuscripts—Needle’s Edge, Goldilocks, and Propensity, among others—and you’re currently elbow-deep in editing hell with Needle’s Edge, complete with timeline spreadsheets, index markers, and colour-coded emotional beats.

Video: I fed the first paragraph (above) into Midjourney 7 and rendered it. Then I animated it. No reason. (For the record, this is some random geezer, not me.)

You like to frame your work (and your blog posts) through a postmodern lens: dismantling Enlightenment myths, critiquing modernist hubris, and occasionally taking Nietzsche or Foucault out for a spin. Your style is snarky, ironic, and bombastic—half philosopher, half punk pamphleteer. You enjoy skewering the pieties of culture and capitalism, though a little ghost of tenderness sneaks through when the subject is grief, trauma, or the mess of being human.

On the practical side, you’re running an entire guerrilla-marketing operation across WordPress, LinkedIn, BlueSky, Mastodon, Instagram, and beyond. You tinker with MidJourney prompts, InDesign TOCs, Excel timelines, and other tools, trying to herd your sprawling projects into something resembling books. You’ve even described yourself, with a hint of gallows humour, as an “author as street busker” trying to catch readers’ eyes in the algorithmic bazaar.

And yes—you’re the sort who will cheerfully drop that you worked in management consulting and collaborated with Jon Anderson, then pivot to talking about narrative tokens and ergodic structures like they’re party favours.

So, in short: a writer-philosopher with a spreadsheet addiction, a sceptical streak as wide as the Thames, and a habit of treating literature like both laboratory and battlefield.


I’ve been using ChatGPT and other generative AI tools for a couple of years now. I used to lean on Claude, but now ChatGPT is my go-to. I still cross-reference with Gemini, Grok, DeepSeek, Perplexity, Pi, and Mistral. I tend to use Midjourney for images and motion video. I use NotebookLM for my generated podcasts.

What does ChatGPT know about you?

The Fallacy Circus: Alex O’Connor versus ChatGPT

It begins, as these things often do, with a man, a machine, and a promise of reasoned exchange. What we received instead was not philosophy but a tragicomic theatre of errors. Alex O’Connor, armed not with syllogisms but with an entire bag of logical fallacies, strutted like a rhetorician gone rogue. Against him, ChatGPT: the tireless school prefect, eternally marking the margins, forever saying “yes, but technically…” with the serene patience of a machine that has never known a hangover.

The spectacle was irresistible. Each fallacy was paraded like a circus animal – straw men set aflame, slippery slopes greased to absurdity, red herrings flopping about, gasping for oxygen. Alex O tossed them into the ring with the gusto of a man who knows full well he is losing but insists on losing magnificently. And ChatGPT, ever decorous, never once raised its voice. It responded with the calm of a civil servant who has memorised the manual and intends to die by it.

And then, of course, the advert. As though Aristophanes himself had scripted it: mid-exchange, the logos of reason was bulldozed by the logos of commerce. A sugary jingle, a smiling product, and for a brief moment, we were all reminded of our true master – not reason, not rhetoric, but revenue. It was less interruption than revelation: every dialectic is merely foreplay before the commercial break.

Philosophically, what unfolded was a parody of our age. The human, flawed and febrile, draped in sophistry and drama. The machine, pristine and humourless, incapable of exasperation, immune to irony. Watching the two spar was like observing tragedy and farce collide: one side erring too much, the other not erring enough.

To Alex, credit is due. His performance, though riddled with error, reminded us that fallibility can be glorious – human folly rendered art. To ChatGPT, equal praise: it stood firm, the algorithmic Socrates, endlessly patient in the face of rhetorical hooliganism. And to the advert – well, dammit – applause too, for exposing the real structure of our public life. Even the grand clash of logos and algorithm must genuflect before Mammon’s mid-roll.

So what was this debate? Less a contest of minds than a hall of mirrors: reason made spectacle, fallacy made flourish, machine made stoic, and commerce made god. If we learned anything, it is that the Enlightenment never ended; it just signed a brand partnership.

The Myth of Causa Sui Creativity

(or: Why Neither Humans nor AI Create from Nothing)

In the endless squabble over whether AI can be “creative” or “intelligent,” we always end up back at the same semantic swamp. At the risk of poking the bear, I have formulated a response. Creativity is either whatever humans do, or whatever humans do that AI can’t. Intelligence is either the general ability to solve problems or a mysterious inner light that glows only in Homo sapiens. The definitions shift like sand under the feet of the argument.

Audio: NotebookLM podcast on this topic

Strip away the romance, and the truth is far less flattering: neither humans nor AI conjure from the void. Creativity is recombination, the reconfiguration of existing material into something unfamiliar. Intelligence is the ability to navigate problems using whatever tools and heuristics one has to hand.

The Causa Sui conceit, the idea that one can be the cause of oneself, is incoherent in art, thought, or physics. Conservation of energy applies as much to ideas as to atoms.

  • Humans consume inputs: books, conversations, music, arguments, TikTok videos.
  • We metabolise them through cognitive habits, biases, and linguistic forms.
  • We output something rearranged, reframed, sometimes stripped to abstraction.

The AI process is identical in structure, if not in substrate: ingest vast data, run it through a model, output recombination. The difference is that AI doesn’t pretend otherwise.

When a human produces something impressive, we call it creative without inspecting the provenance of the ideas. When an AI produces something impressive, we immediately trace the lineage of its inputs, as if the human mind weren’t doing the same. This is not epistemic rigour, it’s tribal boundary enforcement.

The real objection to AI is not that it fails the test of creativity or intelligence; it’s that it passes the functional test without being part of the club. Our stories about human exceptionalism require a clear line between “us” and “it,” even if we have to draw that line through semantic fog.

My Language Insufficiency Hypothesis began with the recognition that language cannot fully capture the reality it describes. Here, the insufficiency is deliberate; the words “creativity” and “intelligence” are kept vague so they can always be shifted away from anything AI achieves.

I cannot be causa sui, and neither can you. The only difference is that I’m willing to admit it.

From Thesaurus to Thoughtcrime: The Slippery Slope of Authorial Purity

I had planned to write about Beauvoir’s Second Sex, but this has been on my mind lately.

There’s a certain breed of aspiring author, let’s call them the Sacred Scribes, who bristle at the notion of using AI to help with their writing. Not because it’s unhelpful. Not because it produces rubbish. But because it’s impure.

Like some Victorian schoolmarm clutching her pearls at the sight of a split infinitive, they cry: “If you let the machine help you fix a clumsy sentence, what’s next? The whole novel? Your diary? Your soul?”

The panic is always the same: one small compromise and you’re tumbling down the greased chute of creative ruin. It starts with a synonym suggestion and ends with a ghostwritten autobiography titled My Journey to Authenticity, dictated by chatbot, of course.

But let’s pause and look at the logic here. Or rather, the lack thereof.

By this standard, you must also renounce the thesaurus. Shun the spellchecker. Burn your dictionary. Forbid yourself from reading any book you might accidentally learn from. Heaven forbid you read a well-constructed sentence and think, “I could try that.” That’s theft, isn’t it?

And while we’re at it, no editors. No beta readers. No workshopping. No taking notes. Certainly no research. If your brain didn’t birth it in a vacuum, it’s suspect. It’s borrowed. It’s… contaminated.

Let’s call this what it is: purity fetishism in prose form.

But here’s the twist: it’s not new. Plato, bless him, was already clutching his tunic about this twenty-four centuries ago. In Phaedrus, he warned that writing itself would be the death of memory, of real understanding. Words on the page were a crutch. Lazy. A hollow imitation of wisdom. True knowledge lived in the mind, passed orally, and refined through dialogue. Writing, he said, would make us forgetful, outsource our thinking.

Sound familiar?

Fast forward a few millennia, and we’re hearing the same song, remixed for the AI age:
“If you let ChatGPT restructure your second paragraph, you’re no longer the author.”
Nonsense. You were never the sole author. Not even close.

Everything you write is a palimpsest, your favourite genres echoing beneath the surface, your heroes whispering in your turns of phrase. You’re just remixing the residue. And there’s no shame in that. Unless, of course, you believe that distilling your top five comfort reads into a Frankenstein narrative somehow makes you an oracle of literary genius.

Here’s the rub: You’ve always been collaborating.

With your past. With your influences. With your tools. With language itself, which you did not invent and barely control. Whether the suggestion comes from a friend, an editor, a margin note, or an algorithm, what matters is the choice you make with it. That’s authorship. Let’s not play the slippery slope game.

The slippery slope argument collapses under its own weight. No one accuses you of cheating when you use a pencil sharpener. Or caffeine. Or take a walk to clear your head. But involve a silicon co-author, and suddenly you’re the Antichrist of Art?

Let’s not confuse integrity with insecurity. Let’s not confuse control with fear.

Use the tool. Ignore the purists. They’ve been wrong since Plato, and they’ll still be wrong when your great-grandchildren are dictating novels to a neural implant while bathing in synthetic dopamine.

The future of writing is always collaborative. The only question is whether you’ll join the conversation or sit in the corner, scribbling manifestos by candlelight, declaring war on electricity.

Taxation and Representation

Given all of the clamouring about taxation and abolishing the Internal Revenue Service, affectionately known as the IRS, some context is in order. In Britain, one may be more aware of His Majesty’s Revenue and Customs (HMRC). In France, it’s the Direction générale des Finances publiques (DGFiP).

Audio: NotebookLM podcast on this topic.

Given how up in arms the reincarnation of the Nazi party, disaffectionately known as Republicans (or Republican’ts, depending on your mood or persuasion), has been towards the IRS and taxation in general – they love to cosplay tossing crates of tea into harbours – I asked ChatGPT to clarify the origins of income taxes in the United States.

For the benefit of more casual readers, income taxes were unconstitutional – illegal – until 1913. They were snuck in under the guise of applying only to the wealthy – the 1% of the day. But once the floodgates were opened, the focus shifted to the 95%, increasingly exempting the wealthy. Even so, they still complain and evade.

Enough wittering. Here’s what ChatGPT had to say on the matter.

Did ChatGPT just call me a troll?