A Brief and Largely Accurate History of Punctuation

For most of human history, written Latin looked something like THISISASENTENCEABOUTPHILOSOPHYORWARYOUCHOOSE, and readers were simply expected to get on with it. And of course, in ALL CAPS. This was not considered a problem. The Romans were not known for their sensitivity to the needs of others.

The Romans did, briefly, experiment with the interpunct – a modest dot deployed between words, giving the reader something like THIS·IS·A·SENTENCE·ABOUT·PHILOSOPHY·OR·WAR·YOU·CHOOSE – before apparently deciding this was excessive hand-holding and abandoning it entirely. Punctuation’s first appearance in Western prose was thus also its first act of self-destruction. A precedent, as we shall see, that held.

Audio: NotebookLM summary podcast of this topic.

Relief came, eventually, from the most unlikely of sources: monks. Specifically, Irish and Anglo-Saxon monks in the 7th and 8th centuries, who were copying Latin texts they couldn’t actually read fluently, and who introduced spaces between words as a personal coping mechanism. Civilisation has strange bedfellows.

The comma, the full stop, and their assorted relatives arrived with the printing press – Aldus Manutius and the Venetian humanists essentially standardising the breath-marks of prose into something reproducible at scale. Punctuation became, in this period, the bureaucratisation of rhythm. A noble project. Mildly tyrannical in execution.

The em dash, meanwhile, had an entirely respectable career throughout the 18th and 19th centuries — a mark of genuine syntactic energy, used to interrupt, to pivot, to hold two thoughts in productive tension — before being left largely to the eccentric and the emphatic.

Then came the large language models. Within approximately eighteen months, the em dash was resurrected from the dead to become the default unit of thought, issued faster than Oprah’s Christmas giveaways. Every clause got one. Sometimes a sentence received two, bracketing a thought that required neither a bracket nor a thought. The em dash ceased to mean interruption and began to mean I am text generated at scale. Readers noticed. Then they mocked it. Then, following the immutable logic of cultural exhaustion, they stopped using it entirely. The em dash is now extinct — which is a shame, really.

The Author Did Not Write This

The LinkedIn consensus has spoken: if you used AI in the writing process, you are not the author. The position is stated with the confidence of someone who has never hired a ghostwriter, employed a research assistant, submitted to a heavy editor, or considered that the State of the Union address is almost certainly not written by the president who delivers it.

Image: I couldn’t not share this Midjourney 8.1 image. It may not have understood the assignment.
Audio: NotebookLM summary podcast of this topic.

Authorship has never been a production relation. It has always been an attribution relation — an institutionally stabilised answer to the question of which name the practice elects to put on the cover. These are not the same thing, and conflating them is the error from which every subsequent confusion proceeds.

The ghostwriter has existed as long as commercial publishing. The political speechwriter is so normalised that nobody considers it worth remarking. The celebrity memoir, the corporate thought-leadership piece, the attributed editorial — these are not edge cases or embarrassing exceptions. They are the normal operation of every writing-adjacent industry that has ever existed. The name on the cover has never reliably indicated the hands on the keyboard, and the industry has never seriously pretended otherwise. It has simply preferred not to discuss it at dinner.

AI changes the tool. It does not change the structure. The person who prompts, selects, curates, revises, and publishes is doing what commissioners of ghostwriters have always done. What has changed is that AI makes the mediation visible in a way that polite convention previously concealed. Visibility triggers the purity reflex. What presents itself as a defence of authentic authorship is a defence of a particular fiction — the Romantic author as solitary originating consciousness — that the industry never consistently believed and certainly never consistently practised.

The purity position also fails on its own terms before it gets started. Consider the spectrum of AI-assisted writing: a full draft submitted for light polish; a human argument substantially revised by AI; collaborative ideation followed by AI drafting; a kernel of an idea handed over for full execution. These are genuinely different in terms of human contribution. The zealot position requires a threshold somewhere on this spectrum below which authorship lapses. It never specifies where. More fatally, it has no means of verification. There is no external method of determining where on the spectrum any given piece of writing falls. The detector tools are probabilistic noise that disproportionately penalise competent prose. Any audit mechanism sophisticated enough to catch first-order evasion immediately generates a second-order workaround. The regress terminates only at continuous surveillance of the writing process — panoptical authorship as the logical endpoint of the position taken seriously.

NotebookLM Infographic on this topic.

Then there is the recursion problem, which the zealot never addresses because it is fatal. The stochastic parrot charge against AI — that it merely recombines absorbed linguistic patterns without genuine origination — describes with considerable accuracy what human cognition also does. The writer’s training data is the Dickens read at ten, the billboard absorbed on a commute, the argument overheard on public transit, the half-remembered essay that shaped a position without ever being consciously cited. The causal chain of any human idea disappears into an unauditable cognitive history. Genuine origination in the sense the purity position requires has never existed. The Romantic author was always a retrospective confabulation. Barthes said so in 1967. The industry nodded politely and continued invoicing.

What the zealot is defending is not authorship. It is a particular grammar of authorship — one that selects compositional origin as the threshold criterion, applies it selectively and unverifiably, and uses the resulting suspicion as a status boundary. It is guild behaviour dressed as principle, which is understandable as a response to a genuine economic threat but should not be mistaken for a philosophical position.

Authorship is the position a culture elects to stabilise after the work has already been produced through far messier means. It has always been thus. AI did not break the fiction. It just made the fiction harder to keep a straight face about.


The Rest of the Story

I’ve written about this before. I am not an AI apologist, but I am peeved by anti-LLM zealots, who clearly haven’t thought through their arguments.

I finished reading A.J. Ayer’s Language, Truth and Logic, including the part about Bertrand Russell’s claim that ‘the author of Waverley was Scotch’. My brain latched onto authorship, and my emotional response was WTF? I have other problems with Russell and Ayer on this, but that’s a matter for another day.

To make my point, this page up to the ellipsis is the output of Claude after an extended dialogue with it and ChatGPT after I read Ayer, and something didn’t sit quite right. I am not ashamed to use LLMs in my authoring workflow and am not ashamed to mention it, as here. Almost all of these thoughts are mine. I’ve simply asked Claude to organise the output. It’s good enough to output as-is, and any edits would be trivial, so I won’t bother. I probably could have made the edits in as much time as it took to type this, but I’ve got nothing to hide. I’m just a human with access to technology circa 2026.

The Environment Always Wins: The Myth of Pure Voice

There’s a certain kind of cultural panic that tells you more about the panickers than about the thing they are panicking about. The current hysteria over AI-inflected prose is a good example.

The argument, insofar as it deserves the name, goes roughly like this: LLMs produce prose with identifiable features – a certain blandness, a fondness for the em dash, a tendency toward tidy three-part structure. Writers who use these tools risk absorbing those features. The authentic human voice is therefore under threat. Something precious is being diluted by contact with the machine.

This is sentimental rubbish – and a sort of virtue signalling – and it is worth saying so clearly before doing anything else.

Audio: NotebookLM summary podcast of this topic.

I use LLMs daily. For research, for editorial pushback, for smoothing passages that have gone awry. This means I spend hours a day reading a particular kind of output. You’d have to be delusional not to admit it has effects. Certain phrasings start feeling natural that didn’t before. Certain rhythms begin to recur. Certain words come into use that might not have otherwise. I notice this and note it without particular alarm, because I’ve read enough to know that this is just what environments do.

Read nothing but McCarthy for a month, and your sentences will start hunting for the spare declarative. Spend a year editing academic philosophy, and you will catch yourself reaching for ‘insofar as’ and ‘it’s worth noting’ in casual conversation. Live in a city long enough, and its cadences work their way into your syntax. This isn’t contamination, whatever aspersions the moralists cast. It’s how language acquisition works for as long as one is alive and reading. Voice isn’t a spring. It’s a river, a moving accumulation of every tributary it has passed through.

The prestige game being played by the anti-LLM faction isn’t difficult to spot. When Dostoyevsky shapes a young writer’s cadence, we call it influence and treat it as evidence of a serious literary education. When a game world shapes a child’s imagination – I homeschooled my son in the manner of unschooling, and his primary corpus for years was World of Warcraft and its attendant lore before shifting to Dark Souls – and that child ends up reading Dante and Milton unprompted in year seven, the same mechanism has clearly operated. The source was not canonical; the outcome was. But the respectable hierarchy of influences cannot easily accommodate this, because the hierarchy was never really about the mechanism. It was about the cultural status of the inputs.

The more interesting observation isn’t about those of us who use these tools. It’s about those who conspicuously do not.

A minor genre has emerged – charitably, I’ll call it a genre because cult feels morally loaded – consisting of writers anxiously purging their prose of anything that might read as AI-generated. It’s worth noting that they have read the lists. Telltale signs of LLM authorship: excessive hedging, em dashes, transitional summaries, the phrase ‘it is worth noting’. And so they scrub, redact, replace, and perform a kind of stylistic hygiene that’s a creative decision made in direct response to LLM discourse.

These writers aren’t free of the machine’s influence. They’re among the most thoroughly shaped by it. They simply have the more theatrical relationship – the counter-imitator, the purity-performer, the one who reorganises their entire aesthetic in orbit around the thing they claim to reject.

Thomas Moore, in Care of the Soul, observes that a child raised by an alcoholic parent tends to become either an alcoholic or a committed teetotaller. He presents this as a dichotomy, which is too neat, but the underlying point holds. Reactions are still relata – see what happens when you read too much philosophy and logic? The teetotaller has organised their life around the bottle as surely as the alcoholic has. Both are defined by it.

Opposition is one of influence’s favourite disguises.

The fair objection is that LLM influence may differ from other influences in kind rather than merely in degree. Dostoyevsky is strange. Bernhard is strange to the point of pathology. A canonical prose style is idiosyncratic by definition, which is why it’s worth absorbing. In contrast, LLM output aims for plausible fluency and statistical centrality. Its pull may be more homogenising than the pull of a singular authorial sensibility.

That’s a real point. The environment in question has a centripetal force toward the mean that most literary influences lack.

But conceding the point doesn’t really rescue the panic. It just specifies the kind of influence involved. The mechanism remains identical to every other case of environmental absorption. And ‘this influence tends toward the generic’ is an ironically generic critique of a particular environment’s character rather than a claim that the environment is doing something ontologically unprecedented to the notion of authorship.

The question that actually matters aesthetically is not was this touched by AI? It is what did the writer do with the environment they inhabited? That’s always been the question. It remains the question. The machinery has changed; the problem of influence has not.

What the current schism actually reveals is not that AI is doing something new to writing. It’s that we’ve been operating with a fairy tale about what writing is. The fairy tale holds that voice is self-originating, that somewhere beneath the reading AND the editing AND the genre conventions AND the institutional pressures AND the decade of a particular editor’s feedback, there is a pristine you, unconditioned and pure, expressing itself directly onto the page.

This was always false. Writers have always been patchworks of absorbed environments. The only difference now is that one of the environments is a machine, and the machine is new enough that people haven’t yet learned to be comfortable with what it reveals about the rest.

The environment always wins. The only interesting question is which environments you choose, and what you make of them.

NotebookLM Infographic on this topic.

Art or Content

So glad I took time out to watch a short exchange between Rick Beato and Justin Hawkins on whether music is becoming content rather than art. The question is framed in musical terms, but it hardly stops there. The same corrosion is visible in writing, visual art, criticism, and now, with grim inevitability, in AI-mediated production more broadly. The disease is not confined to music. Music merely makes the symptoms easier to hear.

My aversion to pop music goes back to my youth. I was a kid when the Beatles practically invented modern pop, but they left it behind and continued exploring. Sadly, as solo artists, they mainly – not always – failed and rested on their pop laurels. It’s not that their version of pop, or any pop music, is inherently unlistenable. Surely it’s not, if only by the aspiration of the ‘pop’ moniker, but it has no depth, no soul, as it were. Some make a similar argument for organic food; in essence, it involves an appeal to nature fallacy.

Audio: Slightly off, but not bad, NotebookLM summary podcast of this topic.

My own aversion to much pop music begins there. It is not that pop is necessarily bad, nor even that it is always shallow. That would be too crude and too easy. The problem is that pop often presents itself less as an artistic act than as a consumption object engineered for immediate uptake: catchy, frictionless, emotionally legible, and just disposable enough to make room for the next one. It is built to circulate.

That, for me, is the difference between content and art. Art may be accessible, even popular, but it retains some residue that exceeds its delivery mechanism. It resists total reduction to utility. Content, by contrast, is made to be processed. It is optimised not for depth but for throughput. Its highest ambition is not transformation, but engagement.

This is why the question matters beyond music. Writing, too, now lives under the same pressure. One is increasingly expected to produce not essays, arguments, or works, but units of output: posts, threads, reactions, takes, summaries, explainers, and other forms of polished verbal debris. The point is no longer to say something worth dwelling on, but to remain visible within the churn.

The issue, then, is not simply whether one should consume AI-generated material. That framing is too pious and too easy. The more interesting question is what the consumer thinks they are consuming. If a reader, listener, or viewer wants only speed, familiarity, and surface competence, then AI content is not a scandal at all. It is the logical endpoint of a culture that has already demoted art into a deliverable.

This is where the fuss over labelling enters. Is it a principled demand for honesty, or merely a theatrical gesture by people who still want the aura of art whilst consuming content on industrial terms? Some of it is clearly protectionism. Some of it is virtue signalling. But not all of it is empty. The insistence on labelling betrays an intuition, however muddled, that authorship still matters, and that not all artefacts are equivalent merely because they occupy the same screen-space.

The deeper question is whether we still want art at all, or whether we merely want the aesthetic styling of art attached to things optimised for convenience. Once a culture learns to prefer seamless output over resistance, recognisability over risk, and quantity over form, it should not act surprised when machines begin to serve it perfectly. They are only completing a trajectory already chosen.

So no, the issue is not AI alone. AI is only the latest mirror held up to a public that has spent years confusing availability with value and polish with depth. The real question is not whether machines can make content. Plainly, they can. The question is whether we still possess the appetite, patience, and seriousness required for art.

Image: Full image because the cover version is truncated. Generated by Gemini Nano Banana.

When Syntax Is Asked to Bear Too Much v1.2

I published the first version of this essay in February, arguing that the Frege–Geach problem, that three-score-year-old albatross around expressivism’s neck, rests on a category error. Analytic philosophers were polite about it in the way that analytic philosophers are polite about things they intend to ignore. I don’t often revise my manuscripts, opting instead to publish a new and improved version, but the meat of this one remained strong and not worth revisiting so much as fortifying.

The trouble was that I’d dissolved the problem without resolving it. Good enough for me. Others were less convinced. Telling people they’ve been asking the wrong question is satisfying but insufficient without a better one. Version 1.1 tidied the prose. Version 1.2 does the actual work.

The new section (§4, if you’ve already read previous versions) introduces recruitable expressions – a broader class of expressions (moral predicates, thick evaluative terms, epistemic and institutional vocabulary) whose full functional load is attenuated under embedding whilst a thinner inferential profile remains available for reasoning. The standard of practical inferential adequacy replaces the demand for semantic identity: what ordinary reasoning requires is not invariance but inferential sufficiency. And the pattern isn’t peculiar to moral language – a noted goal – which means Frege–Geach stops looking like a special embarrassment for expressivism and starts looking like one symptom of a general feature of how natural language handles multi-functional expressions under logical stress.

The problem is dissolved as a demand for unrestricted semantic invariance. It is resolved insofar as the behaviour it identifies is explained, predicted, and shown to be general.

The revised paper is available here, near the rest of my manuscripts: DOI

Lastly, this essay is built on the foundations of A Language Insufficiency Hypothesis and The Architecture of Encounter, the latter of which wasn’t yet available for the initial publication.

As ever, I welcome the polite ignoring.

The Demise of Frege–Geach?

Journal Entry

I published an essay on the Frege–Geach problem in February. I published an update yesterday. I still wasn’t satisfied, so I engaged with several LLMs. This was my approach.

The involved LLMs were:

  • Claude
  • Grok
  • ChatGPT
  • Gemini

Audio: NotebookLM summary podcast of this topic.
(This summary misses the mark in some ways, but it brings up some interesting observations along the way.)

First, I fed them some documents in no particular order, my goal being to share my own knowledge and position on the purported problem.

I started with Gemini. This was my prompt:

I am interested in resolving the Frege–Geach problem, but it seems I can only dissolve it. This doesn’t appear to be adequate for some analytical philosophers. How might I get closer to resolving it? My main argument is that they are assuming that language is stronger than it is, and they don’t agree with my argument.

As the prompt notes and by design, many analytical philosophers are reluctant to grant the degree of insufficiency I take to be constitutive of natural language, especially where logical embedding is concerned. Evidently, that counts as my not wanting to play their game. From my perspective, they are committed to a different ontological grammar. What this means practically is that I need to present my solution proposal in their terms. This doesn’t mean their terms are right; it means the problem is only salient in their dialect, even though my argument is that all dialects are lossy – mine included.

Part of the challenge is that formal logic was invented precisely because ordinary language is imprecise, yet its standards are often retrofitted back onto natural language as though they revealed what language must have been doing all along.

Without sharing the entire play-by-play of the transcripts, I established my course of action. I’d dissolved the problem, but I hadn’t yet resolved it.

My initial intuition of several years ago was to argue that they were expecting too much from grammar. I’ll use a well-worn example. Follow these statements:

  1. IF ‘Murder is wrong.’
  2. THEN ‘If murder is wrong, then getting your brother to murder is wrong.’
  3. SO ‘Getting your brother to murder is wrong.’
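
Schematically, this is ordinary modus ponens. A sketch in my own notation – read W(·) as ‘is wrong’, m as murder, and b as getting your brother to murder:

```latex
\begin{align*}
P_1 &:\; W(m) \\
P_2 &:\; W(m) \rightarrow W(b) \\
C   &:\; W(b) \quad \text{(from } P_1 \text{ and } P_2 \text{ by modus ponens)}
\end{align*}
```

The inference is valid only if W(m) in P1 and W(m) in the antecedent of P2 are the same premise. The Frege–Geach worry is precisely that, on the emotivist reading, the first expresses disapproval whilst the second is never asserted at all, so the substitution equivocates.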

According to them, the embedded ‘murder is wrong‘ doesn’t make sense. Here’s their logic:

According to Ayer, moral statements are simply emotive. When one utters, ‘murder is wrong‘, they are really saying ‘Boo, murder‘ – ‘I don’t like murder‘.

If ‘murder’ is defined as ‘killing disallowed by the state’, then ‘murder is wrong’ might be translated into ‘killing disallowed by the state is wrong’ or ‘what the state declares is wrong is wrong’, but we also know that the state makes many pronouncements, many of which carry no moral weight and others which are counter to expected moral positions – law does not equal morality, and vice versa. Let’s move on and revisit our statements:

  1. IF ‘Boo to murder.’
  2. THEN ‘If boo to murder, then boo to getting your brother to murder.’
  3. SO ‘Boo to getting your brother to murder.’

My intuition was that the embedded clause does not perform the same linguistic act as the standalone assertion, even if the lexical material is repeated. We’re committing a category error. More crucially, the category it belongs to doesn’t exist, so it’s unspecified. It needs to be invented.

Although I struggled to find apt nomenclature, I settled on performance-sensitive expressions.

A parallel challenge is that the solution can’t be a simple carve-out for moral language. Whilst I feel that moral language does use its own grammar and semantics, I don’t expect analytical philosophers to accept this assertion, so the solution should be more generalisable. I’d need to demonstrate where else this conditional logic fails in the same manner.

ChatGPT had this to say in response to a draft:

The comparative-cost section is good, but one sentence should be added to pre-empt the “your account also uses theory-laden notions” objection.
A critic may reply: your own terms, like “coherence zone” and “synchronisation protocol,” are also theoretical machinery. True enough. The difference is that your machinery is independently motivated and not introduced solely to patch Frege-Geach. You imply this already, but it would help to say so directly.

It offers clearer language:

Embedded moral predicates are not semantically identical to their unembedded counterparts, but neither are they inert; they are performance-sensitive expressions whose full evaluative load is attenuated under embedding while a thinner inferential profile remains available for reasoning.

In any case, I am still polishing the essay, dotting the i’s and crossing the t’s. I think I’ve got the main argument and some examples. One of my weaknesses may be that I rely heavily on my own theories, but these are published and debatable on their own merits.


100% Human Content

Perhaps not 100% because I’ve just spent hours chatting with several LLMs, complaining about the spate of purported AI detectors that tell me ‘this content shows a high similarity to AI-generated content’ or some such.

Audio: NotebookLM summary podcast of this topic.

If I weren’t already familiar with the AI tells, I am now, as Claude reluctantly shared this:


Lexical tics. The ones you’ve already flagged are real: “delve,” “robust,” “nuanced,” “multifaceted,” “landscape” (used metaphorically for any domain whatsoever), “tapestry,” “leverage” (as verb), “facilitate,” “underscores.” There’s also a tier of slightly subtler ones — “notably,” “crucially,” “it’s worth noting that,” “this is particularly important because” — which function as false-emphasis markers, lending rhetorical weight where the content doesn’t actually earn it.

Structural patterns. This is where it gets more diagnostic than any single word choice. LLM prose tends toward a very particular architecture: a thesis-restating opening sentence, three to five parallel elaboration points (often with suspiciously balanced treatment), and a synthesising conclusion that circles back to the opening. The ratio of setup-to-substance is high. Everything gets contextualised before it gets said. There’s a compulsive both-sides-ism, a hedging reflex — “while X is certainly true, it’s also important to consider Y” — that reads less like genuine intellectual caution and more like a conflict-avoidance subroutine.

Tonal tells. LLM text tends to be relentlessly encouraging. There’s an affective flatness masquerading as warmth — everything is “great question,” everything “raises important considerations.” The prose never gets genuinely irritable, confused, bored, or reckless. It maintains a kind of supervisory friendliness, the register of a middle manager who’s been on a coaching course. Relatedly, there’s almost never a genuinely surprising analogy or a metaphor that risks falling flat — the figurative language is always safe, always legible, always serviceable.

Punctuation and rhythm. The em-dash point is real but often overstated. More telling is sentence-length uniformity — LLM prose tends to hover in a narrow band of 15–25 words per sentence, rarely producing either the genuinely terse or the genuinely complex-subordinated. Paragraph length is also strikingly regular. The prose breathes at metronomic intervals.

The absence of mess. Perhaps the deepest tell. Human writing — even polished human writing — contains asymmetries: a paragraph that runs long because the writer got interested, a point that gets dropped without resolution, a transition that’s slightly rough, a joke that doesn’t quite land but stays in anyway. LLM prose is clean in a way that signals not craft but indifference — nothing is there because the writer cared about it, so nothing is disproportionate.

Add to this grammar-checkers attempting to drive writing back to middle school formats – not to mention competing grammar checkers hypercorrecting each other. The struggle is real.
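
The rhythm claim, at least, is easy to check for oneself. A minimal sketch – a toy measurement, not any real detector; the splitting rule, example passages, and function name are mine:

```python
import re
import statistics


def sentence_length_profile(text):
    """Mean and standard deviation of words per sentence.

    Crude sentence splitting on . ! ? -- enough to illustrate the
    'metronomic rhythm' tell, not a serious detector.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return (float(lengths[0]) if lengths else 0.0, 0.0)
    return (statistics.mean(lengths), statistics.stdev(lengths))


# Three sentences of identical length: the narrow, uniform band.
uniform = ("The cat sat on the mat today. "
           "The dog lay by the door again. "
           "The bird flew to the tree now.")

# Terse fragments against one sprawling sentence: human-style asymmetry.
varied = ("Stop. The storm that had been gathering all afternoon over "
          "the low hills finally broke across the valley. Rain.")

u_mean, u_sd = sentence_length_profile(uniform)
v_mean, v_sd = sentence_length_profile(varied)
print(u_sd, v_sd)  # uniform prose clusters tightly; varied prose does not
```

A low standard deviation relative to the mean is the ‘metronomic’ signature. On its own it proves nothing, which is rather the point about probabilistic detection.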

NotebookLM Infographic on this topic.

Of course, a solution might be just to review the content and undo these inclusions. Somewhat interestingly, I use Autocrit (but not often, because it is too orthodox), and it does serve somewhat as an anti-AI agent, assessing rhythm and pace, plus the usual copyediting functions.

I prefer to draft in a stream-of-consciousness style. The AI organises my messes, so if I asked it to, it would remove or relocate my functional parenthetical about Autocrit. But I’m leaving it just to prove I’m human. Or did I add it to an AI-scripted piece? 🧐

I considered that I could either overdo the AI or join the 54 per cent of Americans who write at a third-grade level. Grok suggested something even more sinister – Friggin Musk. It suggested that I double down on the AI likeness and make my content into an AI parody factory – overpopulate it with em dashes, delving, and tapestry. Evidently, Carole King was AI before Suno.

In any case – and AI might suggest moving this to the top – the problem is that I now have an additional layer that interrupts my flow and process. It’s disconcerting, and I resent it. My psyche is disturbed to appease witch-hunters. And it’s bollocks.

The question is whether to succumb to the moral suasion or ignore the moral posturing.


This post contains no sugar, salt, fat, carbohydrates, protein, or fibre. No animals were harmed in the production of this blog. All proceeds will be donated to the Unicorn Recovery Foundation.

If you can’t tell, does it matter?

Westworld was a disappointment. It became unwatchable after the first season. But one exchange from 2016 has aged better than anything else in that show, and it landed differently when I recalled it recently in the context of AI authorship.

A greeter robot exchanges words with William, a guest.

‘You want to ask, so ask.’

‘Are you real?’

‘Well, if you can’t tell, does it matter?’

I thought of this after encountering a post that’s representative of a genre now doing brisk trade on LinkedIn and its satellites. The argument runs roughly thus: AI can write fast, but it can’t write you. Your why is sacred. Your scars make the prose real. The messy middle is where the magic lives. Keep the soul in your stories.

Audio: NotebookLM summary podcast of this topic.

A bloke shared this opinion:

The one thing AI can’t replicate is your “Why.” 🧠

There’s a lot of noise lately about how AI can “write a book in an hour.” But after publishing 8 books, I’ve realized something crucial: speed is not the same as substance.

The “hidden danger” of letting tools do the heavy lifting isn’t just about the quality of the prose—it’s about the erosion of the creative spirit. When we skip the struggle of the “messy middle,” we skip the insights that actually make a story resonate with a reader.

Tools are great for grammar and brainstorming, but they don’t have:
The scars that make a character’s pain feel real.

The weird, specific memories that make a setting feel alive.
The intuition to know when to break the rules for emotional impact.

By all means, use the tech. But don’t let it sit in the driver’s seat. Your readers are looking for a connection with you, not a refined algorithm.

Keep the soul in your stories. It’s the only thing that actually sticks.

NotebookLM Infographic on this topic.

So much to unpack.

This sounds lovely enough. It is also almost entirely wrong, methinks.

Why is doing suspiciously grand work in these arguments. It’s treated as an ineffable essence – a soul-particle immune to replication. But why is not a substance. It’s an interpretive gloss. A post-hoc narrative we attach to action to stabilise it. Call it intention, call it telos, call it ‘creative spirit’ if one must. It remains a story we tell about stories.

And if we’re invoking the canon, let’s not do so selectively. Roland Barthes already detonated the neat alignment between authorial intention and readerly reception. Once a work leaves the desk, its why dissolves into a field of readings. The reader does not commune with your struggle. They encounter marks on a page. The rest is projection.

The romanticisation of the ‘messy middle’ borders on Calvinism – suffering as guarantor of authenticity, as though the scar itself writes the sentence. Plenty of humans have scars and produce dull prose. Plenty of writers construct convincing pain from observation, empathy, craft, and yes, occasionally from tools. Emotional resonance is not a moral reward for having bled.

Then there is the means-fetish: the idea that process sanctifies product. We do not evaluate a bridge by how spiritually formative the drafting was for the engineer. We ask whether it stands. If a text moves a reader, unsettles them, clarifies something, disturbs them – the instrument used to draft it is historically interesting, not aesthetically decisive.

There is also a quiet assumption buried in all of this: that connexion between writer and reader is a transmission of interiority. It isn’t. It is a negotiated effect. Readers connect with patterns that mirror, disrupt, or reframe their own experience. They are not sniffing for artisanal anguish.

None of this means craft evaporates. It means we should be wary of smuggling metaphysics into workflow preferences.

If someone prefers to wrestle with the blank page unaided – splendid, have at it. But the fetish for purity says more about our anxieties over authorship than it does about art. And if you can’t tell whether the thing that moved you was written by hand or by machine, then I’d suggest, with the greeter robot, that perhaps it doesn’t matter.

In the end, I am not even advocating using AI for writing; I am simply saying don’t be a dick about it. Enough of the virtue signalling.

Why Deflationary Philosophy Keeps Attracting Mystics


The struggle is real. There is an odd occupational hazard that comes with writing deflationary philosophy: mystics keep turning up to thank you for your service.

This is always mildly bewildering. One spends a great deal of time dismantling metaphysical furniture, only to discover a small group lighting incense in the newly cleared space. Candles appear. Silence thickens. Someone whispers ineffable. Nope. The filing cabinet was just mislabelled.

Audio: NotebookLM summary podcast of this topic.

The problem is not misunderstanding. It’s reuse.

It is tempting to think this is a simple misreading: I say this concept breaks down here, and someone hears you have glimpsed the ultimate. But that’s too kind. What’s really happening is more interesting. Mysticism does not merely misunderstand deflationary work; it feeds on the same linguistic moves and then stops too early.

Both mysticism and deflation rely on negative gestures:

  • “This description fails.”
  • “That category no longer applies.”
  • “Our usual language runs out.”

Up to this point, they are indistinguishable. The fork comes immediately after. The mystic treats conceptual failure as an endpoint. The silence itself becomes the destination. Something deep must live there, humming quietly, just out of reach.

The deflationist treats the same failure as a transition. The silence is not sacred. It’s a signal. It means: this tool no longer fits; pick another or move on. Same breakdown. Entirely different posture.

Clearing space versus consecrating it

Much deflationary philosophy clears space. It removes assumptions that were doing illicit work and leaves behind something quieter, simpler, and occasionally disappointing.

Mysticism has a standing policy of consecrating cleared space. An empty room is never just empty. It must be pregnant with meaning. Absence becomes depth. Silence becomes revelation. The fewer claims you make, the more cosmic you must be.

This is not a philosophical disagreement so much as a difference in temperament. One side sees subtraction. The other experiences loss and rushes to compensate. Modern intellectual culture strongly prefers addition. New layers. Hidden structures. Further depths. Deflation feels like theft. So it gets reinterpreted as a subtler form of enrichment: Ah, fewer words, therefore more truth.

The aesthetic trap

There is also an aesthetic problem, which I increasingly suspect does most of the damage. Deflationary philosophy, when done well, tends to sound calm, patient, and restrained. It does not shout. It does not posture. It does not perform certainty. Unfortunately, this is exactly how profundity is supposed to sound.

Quiet seriousness is easily mistaken for spiritual depth. Refusal to speculate reads as wisdom. Negative definition acquires an apophatic glow. This is how one ends up being mistaken for a mystic without having said anything mystical at all.

A brief word about Wittgenstein (because of course)

This is not a new problem. Ludwig Wittgenstein spent a good portion of his career trying to convince people that philosophical problems arise when language goes on holiday. He was not pointing at a deeper reality beyond words. He was pointing back at the words and saying: look at what you’re doing with these.

Unfortunately, “Whereof one cannot speak, thereof one must be silent” has proven irresistible to those who think silence is where the real action is. Wittgenstein meant: stop here. Many readers heard: kneel here. This is the recurring fate of therapeutic philosophy. The cure gets mistaken for a sacrament.

Charity is not complicity

Another contributor to the confusion is tone. Deflationary work tends to be charitable. It explains why certain intuitions arise. It traces confusions to their sources. It does not sneer. This generosity is often misheard as validation. When you say, “It makes sense that we think this way,” some readers hear, “Your intuition is pointing at something profound.” You are offering an explanation. They are receiving an affirmation. At that point, no disclaimer will save you. Any denial is absorbed as further evidence that you are brushing up against something too deep to articulate.

The real disagreement

The disagreement here is not about reality. It is about what to do when explanation fails.

Mysticism treats failure as revelation. Deflation treats failure as diagnostic.

One sanctifies the breakdown. The other changes tools.

Once you see this, the repeated misfire stops being frustrating and starts being predictable.

A final, self-directed warning

There is, admittedly, a risk on the other side as well. Deflation can become mystical if it turns into ritual. If refusal hardens into identity. If “there is nothing there” becomes something one performs rather than concludes. Even subtraction can acquire ceremony if repeated without purpose. The discipline, such as it is, lies in knowing when to clear space—and when to leave the room.

No replacement gods

When a metaphysical idol is removed, someone will always ask what god is meant to replace it. The deflationary answer is often disappointing: none. This will never satisfy everyone. But the room is cleaner now, and that has its own quiet reward—even if someone insists on lighting incense in the corner.

Image: Full cover image infographic by NotebookLM

Reality Happens Once. Facts Happen Many Times.

I want to clarify my recent post, The Trouble with Facts. I realise that I was speaking to one non-trivial form of facts, but there is more than one class of facts. We argue about facts as if the word named a single, stable thing. It doesn’t. It names a family of very different things, quietly grouped together by habit, convenience, and institutional need. Most disputes about facts go nowhere, not because one side is irrational, but because the word itself is doing covert work. We slide between meanings without noticing, then act surprised when disagreement follows. This piece is an attempt to slow that slide.

Audio: NotebookLM summary podcast of this topic.

Polysemy We Notice, Polysemy We Don’t

We are comfortable with ambiguity when it is obvious. A bank can be a financial institution or the edge of a river. A bat can be an animal or a piece of sports equipment. Context resolves these instantly. No one feels existentially threatened by the ambiguity.

Fact is different. The word is polysemous in a way that is both subtle and consequential. Its meanings sit close enough to bleed into one another, allowing certainty from one sense to be smuggled into another without detection. Calling something a fact does not merely describe it. It confers authority. It signals that questioning should stop. That is why this ambiguity matters.

Different Kinds of Facts

Before critiquing facts, we need to sort them.

1. Event-facts (brute, world-facing)
As mentioned previously, these concern what happens in the world, independent of observation.

  • A car collides with a tree.
  • Momentum changes.
  • Metal deforms.

These events occur whether or not anyone notices them. They are ontologically robust and epistemically inaccessible. No one ever encounters them directly. We only ever encounter traces.

2. Indexical or performative facts (trivial, self-reporting)
“I am typing.”

I am doing this now – though that ‘now’ may no longer hold by the time you read this. This is a fact, but a very thin one. Its authority comes from the coincidence of saying and doing. It requires no reconstruction, no inference, no institutional validation. These facts are easy because they do almost no work.

3. Retrospective personal facts (memory-mediated)
“I was typing.”

This may be relevant now, at least relative to the typing of this particular post. Still a fact, but weaker. Memory enters. Narrative compression enters. Selectivity enters. The same activity now carries a different epistemic status purely because time has passed.

4. Prospective statements (modal, not yet facts)
“I will be typing.”

This is not yet a fact. It may never come to be one. It is an intention or prediction that may or may not be realised. Future-tense claims are often treated as incipient facts, but this is a category error with real consequences.

5. Institutional facts (designated, procedural)
“The court finds…”
“The report concludes…”

These are facts by designation. They are not discovered so much as selected, formalised, and stabilised so that systems can act. They rarely rise to the level of brute event-facts, so the legal system tends to generate facts in name only – FINO, if I am being cute.

All of these are called ‘facts’. They are not interchangeable. The trouble begins when certainty migrates illicitly from trivial or institutional facts into brute event-facts, and we pretend nothing happened in the transfer.

One Motor Vehicle

Reconsider the deliberately simple case: A motor vehicle collides with a tree. Trees are immobile, so we can rule out the tree colliding with the car.

Ontologically, something happened. Reality did not hesitate. But even here, no one has direct access to the event itself.

The driver does not enjoy privileged access. They experience shock, adrenaline, attentional narrowing, selective memory, post hoc rationalisation, perhaps a concussion. Already several layers intervene before language even arrives.

A rough schema looks like this:

event → sensory registration → cognitive framing → linguistic encoding → social validation

Ontology concerns what happens.
Epistemology concerns how anything becomes assertable.

Modern thinking collapses the second into the first and calls the result the facts.

People speak of “hard facts” as if hardness transfers from objects to propositions by proximity. It doesn’t. The tree is solid. The fact is an artefact assembled from observation, inference, convention, and agreement.

And so it goes…

Why the Confusion Persists

When someone responds, “But isn’t it a fact that I read this?”, the answer is yes. A different kind of fact.

The error lies not in affirming facts, but in failing to distinguish them. The word fact allows certainty to migrate across categories unnoticed, from trivial self-reports to brute world-events, and from institutional verdicts to metaphysical claims. That migration is doing the work.

Conclusion

Clarifying types of facts does not weaken truth. It prevents us from laundering certainty where it does not belong.

Facts exist. Events occur. But they do not arrive unmediated, innocent, or singular.

Reality happens once. Facts happen many times.

The mistake was never that facts are unreal. It was believing they were all the same kind of thing.