Architecture of Encounter – Indexing with Claude AI

5–7 minutes

Dear diary…

I’m not ashamed to say that AI is a significant part of my publishing workflow. In my latest project, The Architecture of Encounter, I’ve added indexing to the roles it serves. Other roles included suggesting index candidates and prepping footnotes, as I shared recently.

I expect the book to be available by next week. Time will tell.

I’ve included the full index below for reference. I’ve also included the title and copyright pages and other back matter.

What will a visitor do with a bookless index? I don’t know, but sharing is caring in my book. For the interested, you can get a sense of the contents. I’ll be sharing more details over the coming weeks – and beyond, I’m sure.

Earlier, I shared that Claude had offered index candidates. I started executing on that list by indexing the first few terms. It took me about an hour to do these, searching roughly 250 pages of content for each term and documenting the page number and context. The book itself runs 292 6″ x 9″ pages, but more than 50 of those are appendices, and others are front matter. Still.

Audio: NotebookLM summary podcast of this topic. (Another one that misses some points but makes others. Fair enough.)

Then it dawned on me to ask Claude to help me with the index. Claude interpreted ‘help’ by spitting out the entire index, formatted and organised. If the book were formatted at 8½″ x 11″ Letter size, I could have appended it as-is. Instead, I still had to pour the output into the InDesign template I was composing in and make it look like part of the same manuscript, but that took minutes, not hours or days. Appendix E.

Given that I also rely heavily on novel concepts and specifically-defined terms – language insufficiency notwithstanding – I felt that a glossary would be useful. I tasked Claude with this, too. Again, it output a fully-formed list.

I noticed that a couple of terms I wanted defined were absent, so I fed the list into ChatGPT and asked it to consider these and, given the manuscript, let me know what other terms might be absent. It agreed with the two I wanted and suggested three more, pointed out a counting error Claude had made, and provided definitions for the missing entries, so I poured the combined output into InDesign. Appendix D.

AI is also a helper. For example, I wanted my index to flow into two columns. I’ve done this before. In the old days, I’d have scanned the menus (Adobe products are infamous for convoluted, nested menus), read the manual, and/or Googled for the answer – perhaps queried YouTube, a great resource for such things. Now, I ask AI. In this case, I asked ChatGPT. To be honest, it’s a little verbose, where ‘it’s option X under Y menu’ would suffice, but I ignore the banter.

If you need to know…

If the index is already placed in a text frame

  1. Select the text frame containing the index.
  2. Go to Object β†’ Text Frame Options.
  3. Set the Number of Columns you want, usually 2 for a 6×9 book, sometimes 3 if the type is small and the entries are short.
  4. Adjust the Gutter spacing between columns.
  5. Click OK.

Criticise AI all you want, but having access to in-built assistance 24/7 is a huge time-saving benefit.

Do I still use Google and YouTube? Yes, often.

Speaking of Google, I was searching for a cover image, and I discovered something I need for the fiction title I paused in September to focus on nonfiction. Sidenotes. Perhaps I’ll employ a similar mechanism.

That fiction book is somewhat epistolary, and I want to place internal dialogue as marginalia, employing a script typeface. I am even considering a ‘deluxe’ version that renders this content in colour, but that’s an extra expense, first for the colour, then the full-page bleed, and perhaps thicker paper stock. Likely hardbound, reserving the paperback for a lower price point.

So, what’s next?

I finished both paperback and hardcover designs today.

I still need to review the index for hallucinations. This will still take less time than manually constructing it.

On the copyright page, there are a few classifiers. There are ISBNs for each format and a Library of Congress Control Number (LCCN). These are done, as you can see, but the ISBN system in the United States is antiquated. It looks like a museum piece from the mid-1990s. In fact, I believe I first accessed it around 2000 or 2001, when I published my first book – before AI, before print on demand (POD).

A bit of nostalgia. The WWW, the internet as most people know it, was made public around 1994. Google hit it in 1998. Web 1.0. Facebook blighted the world around 2004, though less invasively at the start. I digress. Technology is a mixed bag.

Returning to ISBNs… These are managed in a system built circa 1997. It seems it is still managed with a host of cron jobs, so not much is processed in real time unless it’s a trivial record entry.

Each ISBN references a title and a format, as well as other odds and ends. In my case, I also use an imprint to separate my fiction from nonfiction. I started Microglyphics – tiny writing – in the mid-90s. When I published other authors, I used this name. I also used it for some of my fiction writing. I decided to create a Philosophics Press imprint for my philosophy and adjacent work.

It turns out that the printer needs to ensure that a book’s title and ISBN match the imprint. The system default is the company name, but I changed it to my imprint. This causes a workflow event on their end. Until it propagates, it doesn’t match, and the printer won’t allow the print run.

I’m writing this blog entry as I wait. I’m not sure if it’s automated – I’d like to assume it is – or if a human has to do something. AI might help. Just saying.

EDIT: The imprint has now been updated to Philosophics Press, but it still doesn’t work at the printer. Evidently, it can take up to 5 days for the data to propagate. I’m not sure who owns the failure here. Is the printer waiting for a data push? Can’t they pull the data? The records seem live from my end. Is there an API, or is it truly old-school?


Whilst I’m here wittering on, WordPress have deprecated the little widget below – the one with my (old) thumbnail picture and ‘written by’ tag. I adopted it last year, but it’s been killed off. I’ve been copying the object from old posts, but I’ll probably switch to whatever they’ve replaced it with. I haven’t been keen on the options I’ve seen so far. First-world problems, I suppose.

On Footnotes

1–2 minutes

Two consecutive posts on writing. What gives?

As a writer, I have a kind of workflow in mind. It differs for fiction and nonfiction, though there are similarities. The main differences are indexes, references, and footnotes. I thought I was nearly finished with my latest book.

Audio: NotebookLM summary podcast of this topic.

I rendered the first couple of chapters in ElevenLabs so I could listen to the text. It helps me assess pacing, continuity, and the like. It also gives me one more thing to fix, since the digital voices occasionally pronounce things differently from the way I’d prefer. Usually, choosing a voice that sounds roughly like Standard Southern British English helps, though not always. In any case, I survive.

While listening and following along in the manuscript, I discovered that I still haven’t fully footnoted the book.

When I write, I tend to jot down sources and references as I go, but I leave the footnoting until the final revision. There is little point in spending time drafting footnotes for material that may not survive the cut. So here I am, drafting footnotes.

I am of two minds about footnotes and citations, especially in academic work. On the one hand, they can feel performative, a way of signalling that one has incorporated so-and-so, a kind of name-dropping appeal to authority. As a heterodox writer, I find this risky anyway, because I am liable to cite one of the approved villains. God forbid a Nietzsche, a Heidegger, or a Foucault. Analytic philosophers, in my experience, can be remarkably thin-skinned.

On the other hand, I do want to acknowledge those whose ideas I have borrowed. Even that, of course, is also a signal. Perhaps it all comes down to intent. As I mean it, I want to leave breadcrumbs for anyone who feels inclined to follow them back to the source.

I’ll let you decide where I’ve landed.

I’ve procrastinated long enough. Time to get back into the word mine.

Using Generative AI as Early Peer Review

4–6 minutes

Cheap Adversaries, Outsourced Ego, and Engineered Critique ← ChatGPT is obsessed with subtitles.

There is a peculiar anxiety around admitting that one uses generative AI in serious intellectual work. The anxiety usually takes one of two forms. Either the AI is accused of replacing thinking, or it is accused of flattering the thinker into delusion. Both charges miss the point, and both underestimate how brittle early-stage human peer review often is.

What follows is not a defence of AI as an oracle, nor a claim that it produces insight on its own. It is an account of how generative models can be used – deliberately, adversarially, and with constraints – as a form of early peer pressure. Not peer review in the formal sense, but a rehearsal space where ideas are misread, overstated, deflated, and occasionally rescued from themselves.

Audio: NotebookLM summary podcast of this topic.

The unromantic workflow

The method itself is intentionally dull:

  1. Draft a thesis statement.
    Rinse & repeat.
  2. Draft an abstract.
    Rinse & repeat.
  3. Construct an annotated outline.
    Rinse & repeat.
  4. Only then begin drafting prose.

At each stage, the goal is not encouragement or expansion but pressure. The questions I ask are things like:

  • Is this already well-trodden ground?
  • Is this just X with different vocabulary?
  • What objection would kill this quickly?
  • What would a sceptical reviewer object to first?

The key is timing. This pressure is applied before the idea is polished enough to be defended. The aim is not confidence-building; it is early damage.

Image: NotebookLM infographic on this topic.

Why generative AI helps

In an ideal world, one would have immediate access to sharp colleagues willing to interrogate half-formed ideas. In practice, that ecology is rarely available on demand. Even when it is, early feedback from humans often comes bundled with politeness, status dynamics, disciplinary loyalty, or simple fatigue.

Generative models are always available, never bored, and indifferent to social cost. That doesn’t make them right. It makes them cheap adversaries. And at this stage, adversaries are more useful than allies.

Flattery is a bias, not a sin

Large language models are biased toward cooperation. Left unchecked, they will praise mediocre ideas and expand bad ones into impressive nonsense. This is not a moral failure. It is a structural bias.

The response is not to complain about flattery, but to engineer against it.

Sidebar: A concrete failure mode

I recently tested a thesis on Mistral about object permanence. After three exchanges, the model had escalated a narrow claim into an overarching framework, complete with invented subcategories and false precision. The prose was confident. The structure was impressive. The argument was unrecognisable.

This is the Dunning-Kruger risk in practice. The model produced something internally coherent that I lacked the domain expertise to properly evaluate. Coherence felt like correctness.

The countermeasure was using a second model, which immediately flagged the overreach. Disagreement between models is often more informative than agreement.

Three tactics matter here.

1. Role constraint
Models respond strongly to role specification. Asking explicitly for critique, objections, boundary-setting, and likely reviewer resistance produces materially different output than asking for ‘thoughts’ or ‘feedback’.

2. Third-person framing
First-person presentation cues collaboration. Third-person presentation cues evaluation.

Compare:

  • Here’s my thesis; what do you think?
  • Here is a draft thesis someone is considering. Please evaluate its strengths, weaknesses, and likely objections.

The difference is stark. The first invites repair and encouragement. The second licenses dismissal. This is not trickery; it is context engineering.

3. Multiple models, in parallel
Different models have different failure modes. One flatters. Another nitpicks. A third accuses the work of reinventing the wheel. Their disagreement is the point. Where they converge, caution is warranted. Where they diverge, something interesting is happening.
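The tactics above can be sketched in a few lines of code. This is a minimal illustration, not my actual tooling: `query_model` is a hypothetical stand-in for whatever chat API you use, and the model names are placeholders.

```python
# Sketch of tactics 2 and 3: third-person framing plus multiple models.
# `query_model` is a hypothetical placeholder; a real version would call
# a chat API (Anthropic, OpenAI, Mistral) with the same prompt.

def make_evaluation_prompt(thesis: str) -> str:
    """Frame the thesis in the third person to cue evaluation, not repair."""
    return (
        "Here is a draft thesis someone is considering. "
        "Please evaluate its strengths, weaknesses, and likely objections.\n\n"
        f"Thesis: {thesis}"
    )

def query_model(model: str, prompt: str) -> str:
    # Placeholder: swap in a real API call for the named model.
    return f"[{model}] critique pending"

def gather_critiques(thesis: str, models: list[str]) -> dict[str, str]:
    """Ask several models the same question; their disagreement is the signal."""
    prompt = make_evaluation_prompt(thesis)
    return {model: query_model(model, prompt) for model in models}

critiques = gather_critiques(
    "Process has replaced Reason as modernity's source of authority.",
    ["claude", "gpt", "mistral"],
)
for model, critique in critiques.items():
    print(model, "->", critique)
```

Where the returned critiques converge, take the point seriously; where they diverge, that is where the interesting work is.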

‘Claude says…’: outsourcing the ego

One tactic emerged almost accidentally and turned out to be the most useful of all.

Rather than responding directly to feedback, I often relay it as:

‘Claude says this…’

The conversation then shifts from defending an idea to assessing a reading of it. This does two things at once:

  • It removes personal defensiveness. No one feels obliged to be kind to Claude.
  • It invites second-order critique. People are often better at evaluating a critique than generating one from scratch.

This mirrors how academic peer review actually functions:

  • Reviewer 2 thinks you’re doing X.
  • That seems like a misreading.
  • This objection bites; that one doesn’t.

The difference is temporal. I am doing this before the draft hardens and before identity becomes entangled with the argument.

Guardrails against self-delusion

There is a genuine Dunning–Kruger risk when working outside one’s formal domain. Generative AI does not remove that risk. Used poorly, it can amplify it.

The countermeasure is not humility as a posture, but friction as a method:

  • multiple models,
  • adversarial prompting,
  • third-person evaluation,
  • critique of critiques,
  • and iterative narrowing before committing to form.

None of this guarantees correctness. It does something more modest and more important: it makes it harder to confuse internal coherence with external adequacy.

What this cannot do

It’s worth being explicit about the limits. Generative models cannot tell you whether a claim is true. They can tell you how it is likely to be read, misread, resisted, or dismissed. They cannot arbitrate significance. They cannot decide what risks are worth taking. They cannot replace judgment. Those decisions remain stubbornly human.

What AI can do – when used carefully – is surface pressure early, cheaply, and without social cost. It lets ideas announce their limits faster, while those limits are still negotiable.

A brief meta-note

For what it’s worth, Claude itself was asked to critique an earlier draft of this post. It suggested compressing the familiar arguments, foregrounding the ‘Claude says…’ tactic as the real contribution, and strengthening the ending by naming what the method cannot do.

That feedback improved the piece. Which is, rather conveniently, the point.

The Prison of Process

3–4 minutes

This is the proof copy of The Illusion of Light. I reviewed it, approved it, and signalled ‘good to go’. This is being printed and distributed through KDP. I’ve used them before. They’ve been reliable.

Until now.

EDIT: On the upside, I’ve been notified that the hardback version is available, though apparently not yet in France or Canada, two target regions. Hopefully, it becomes available outside the U.S. soon.

My approval triggered a workflow. I know workflows. I used to design them. I also know how dumb they can be.

KDP’s process flagged an error: the text on the spine might not be on the spine. ‘Might’. Theoretically. It could be offset, cut off, or printed on a fold. I understand their reasoning – high-speed printers, mechanical variance, and return risk. I also understand statistics, and a single observation doesn’t make a trend. But anyone with eyes can see at least a couple of millimetres of clearance at the top and bottom. This isn’t a case of ‘maybe’. It’s fine.

What fascinates me here is the ritual of compliance. Once a process is codified, it becomes self-justifying. The rule exists; therefore, it must be obeyed. There is no appeal to reason – only to the flowchart.

In the 1980s, when I was an audio engineer recording to two-inch magnetic tape, some of us liked to record hot, pushing the levels just past the recommended limits. You learned to ride the edge, to court distortion without collapse. That’s how I designed the spine text. Within tolerance. With headroom.

The problem is that modern systems don’t tolerate edges. There’s no ‘override’ button for informed judgment. My remediation path is to shrink the type by half a point, resubmit, and pretend the machine was right.

What’s absurd is the timing. The same system that generated the proof approved this layout days ago. An automated OCR scan could have caught this phantom error earlier. Instead, the machine waits until the human signs off, then throws a flag so the process can justify its existence.

KDP is still faster and saner than IngramSpark. But this is capitalism distilled: survival by being marginally less incompetent than your competitor. Optimisation, not in the sense of best possible, but of barely better than worst acceptable.

The lesson, as always, is that processes begin as aids and end as prisons. The workflow, like the Enlightenment, believes itself rational. But the longer it runs, the less it serves the human at the console and the more it worships its own perfection.

Want to talk about meta? This underscores the contents of the book itself. What the Enlightenment once called Reason, modernity now calls Process. Both pretend to neutral objectivity while enshrining obedience as virtue. The bureaucracy of light has become digital – its catechism written in checkboxes, its priests replaced by automated validators. Every workflow promises fairness; each only codifies submission. The real danger isn’t that machines will replace judgment, but that we will stop noticing when they already have.


The Story Continues: Behind the Scenes

Image: Screenshot of Illustrator layout

I’ve reduced the font size on the spine from 14 points to 13.5. It still technically bleeds over a guideline. I hope I am not forced to reduce it to 13. The point of spine text is visibility; hopefully, the black-and-white vertical separation will help in this regard. Fingers crossed.