Indexing the Architecture of Encounter


Let’s begin with a confession: I loathe indexing.

It takes me days to index one of my books. Longer when the technology decides to become sentient in the worst possible way, such as the time InDesign corrupted the index file and swallowed days of work whole. A charming little reminder that software is often just bureaucracy with buttons.

Audio: Not the best NotebookLM summary podcast, but it’s mercifully under 5 minutes.

Today, while chatting with Claude (Opus 4.6), I mentioned that I should probably create an index for my current project. The manuscript is not fully reviewed and revised, but it is getting close. At this stage, I do not expect to add much of substance. I am more likely to subtract than expand.

Claude asked whether I wanted help generating a list of candidate terms from the manuscript.

Should I draft a list of candidate terms from the manuscript?

I said yes, and it produced an embedded PDF: Index Term List – Architecture of Encounter. On first scan, it looks remarkably close to what I need. It is not merely a term list, either. It also proposes candidates for glossary entries, which is useful, even if I am not yet convinced I want to add a glossary. The book is already sitting at around 256 pages, and print production costs do not exactly reward philosophical generosity. The draft organises terms into five sections: framework-specific technical vocabulary, inherited philosophical terms, proper names, traditions and programmes, and application domains and diagnostics. It also marks some entries as glossary candidates and notes likely cross-references.
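For the curious, the baseline version of this task is easy to sketch without an LLM at all. The following is a minimal, hypothetical illustration (plain Python, standard library only) of naive candidate-term extraction by capitalised-phrase frequency; it captures proper names and recurring title-case terms, but nothing like the conceptual grouping Claude produced.

```python
# Naive candidate index-term extraction: a sketch, not a real indexing tool.
# It collects runs of capitalised words and keeps those that recur.
import re
from collections import Counter

def candidate_terms(text, min_count=2):
    # Runs of 1-3 capitalised words: a rough proxy for names and key terms.
    phrases = re.findall(r"\b(?:[A-Z][a-z]+\s?){1,3}", text)
    counts = Counter(p.strip() for p in phrases)
    return [term for term, n in counts.most_common() if n >= min_count]

# Hypothetical sample text, for illustration only.
sample = ("The Architecture of Encounter treats encounter as primary. "
          "The Architecture of Encounter also borrows inherited terms.")
print(candidate_terms(sample))
```

The obvious limitation is that frequency and capitalisation say nothing about conceptual importance, which is precisely where a model reading for meaning earns its keep.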

NotebookLM Infographic on this topic.

One amusing detail is that some of the suggested references relate to epigraphs. I had not really considered indexing those. My inclination is still not to include them, but I admit the temptation is there.

The categorisation itself is also interesting. It makes a good deal of sense as a conceptual map or discovery tool, especially for a larger work. But it does not quite align with what most readers expect from an index, which is, bluntly, alphabetical and easy to raid.

Still, as a starting point, this is rather better than staring into the manuscript and pretending I enjoy this sort of thing.

Some people like to trash-talk AI wholesale. I'm here to say that those people need to discover nuance and use cases.

’nuff said. What do you think? 🧐

Meet the Language Insufficiency GPT


In anticipation of the publication of A Language Insufficiency Hypothesis in January 2026, I’ve created a Language Insufficiency GPT.

Today I’m launching something designed to exploit a familiar failure mode with forensic precision:
👉 https://chatgpt.com/g/g-694018a9bbc88191a8360d65a530e50c-language-insufficiency-gpt

Naturally, it will make more sense alongside the book. But it may still provide a bit of entertainment – and mild discomfort – in the meantime.

tl;dr: Language is generally presumed to be stable. Words mean what you think they mean, right? A table is a table. A bird is a bird. Polysemy aside, these are solid, dependable units.

Then we arrive at freedom, justice, truth, and an entire panoply of unstable candidates. And let’s not even pretend qualia are behaving themselves.

So when someone says ‘truth’, ‘free speech’, or ‘IQ’, you may suddenly realise you’ve been arguing with a cardboard cut-out wearing your own assumptions. That isn’t just interpersonal mischief. It’s language doing exactly what it was designed to do: letting you glide over the hard problems while sounding perfectly reasonable.

Audio: Short NotebookLM summary of this page content*
Video: Legacy video explaining some features of the LIH.

If that sounds banal, you’ve already fallen for the trap.

Give it a try – or wait until you’ve digested the book. Not literally, unless you’re short on fibre.

Cheers.

* As I’ve noted previously, the quality of NotebookLM varies – usually in predictable directions. This one does well enough, but it doesn’t have enough context to get the story right (because it was only drawing from this page rather than from a fuller accounting of the LIH). Its trailing comment reveals that it doesn’t grasp that “new words” don’t solve the problem.

Earlier in the audio, it suggests that language is intentionally vague, which is not a claim I make. You can read some of the earlier incarnations of the hypothesis, or you can wait for it to be published.