Using Generative AI as Early Peer Review

Cheap Adversaries, Outsourced Ego, and Engineered Critique ← ChatGPT is obsessed with subtitles.

There is a peculiar anxiety around admitting that one uses generative AI in serious intellectual work. The anxiety usually takes one of two forms. Either the AI is accused of replacing thinking, or it is accused of flattering the thinker into delusion. Both charges miss the point, and both underestimate how brittle early-stage human peer review often is.

What follows is not a defence of AI as an oracle, nor a claim that it produces insight on its own. It is an account of how generative models can be used – deliberately, adversarially, and with constraints – as a form of early peer pressure. Not peer review in the formal sense, but a rehearsal space where ideas are misread, overstated, deflated, and occasionally rescued from themselves.

Audio: NotebookLM summary podcast of this topic.

The unromantic workflow

The method itself is intentionally dull:

  1. Draft a thesis statement.
    Rinse & repeat.
  2. Draft an abstract.
    Rinse & repeat.
  3. Construct an annotated outline.
    Rinse & repeat.
  4. Only then begin drafting prose.

At each stage, the goal is not encouragement or expansion but pressure. The questions I ask are things like:

  • Is this already well-trodden ground?
  • Is this just X with different vocabulary?
  • What objection would kill this quickly?
  • What would a sceptical reviewer object to first?

The key is timing. This pressure is applied before the idea is polished enough to be defended. The aim is not confidence-building; it is early damage.

Image: NotebookLM infographic on this topic.

Why generative AI helps

In an ideal world, one would have immediate access to sharp colleagues willing to interrogate half-formed ideas. In practice, that ecology is rarely available on demand. Even when it is, early feedback from humans often comes bundled with politeness, status dynamics, disciplinary loyalty, or simple fatigue.

Generative models are always available, never bored, and indifferent to social cost. That doesn’t make them right. It makes them cheap adversaries. And at this stage, adversaries are more useful than allies.

Flattery is a bias, not a sin

Large language models are biased toward cooperation. Left unchecked, they will praise mediocre ideas and expand bad ones into impressive nonsense. This is not a moral failure. It is a structural bias.

The response is not to complain about flattery, but to engineer against it.

Sidebar: A concrete failure mode

I recently tested a thesis on Mistral about object permanence. After three exchanges, the model had escalated a narrow claim into an overarching framework, complete with invented subcategories and false precision. The prose was confident. The structure was impressive. The argument was unrecognisable.

This is the Dunning–Kruger risk in practice. The model produced something internally coherent that I lacked the domain expertise to properly evaluate. Coherence felt like correctness.

The countermeasure was using a second model, which immediately flagged the overreach. Disagreement between models is often more informative than agreement.

Three tactics matter here.

1. Role constraint
Models respond strongly to role specification. Asking explicitly for critique, objections, boundary-setting, and likely reviewer resistance produces materially different output than asking for ‘thoughts’ or ‘feedback’.

2. Third-person framing
First-person presentation cues collaboration. Third-person presentation cues evaluation.

Compare:

  • Here’s my thesis; what do you think?
  • Here is a draft thesis someone is considering. Please evaluate its strengths, weaknesses, and likely objections.

The difference is stark. The first invites repair and encouragement. The second licenses dismissal. This is not trickery; it is context engineering.
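Role constraint and third-person framing are both, mechanically, just prompt construction, and can be captured in a few lines. A minimal sketch in Python, assuming nothing beyond the standard library; the exact wording is illustrative, not a recipe:

```python
def critique_prompt(thesis: str) -> str:
    """Wrap a draft thesis in role-constrained, third-person framing.

    Third person ('someone is considering') cues evaluation rather
    than collaboration; the explicit reviewer role asks for critique
    rather than generic 'thoughts'. The phrasing here is illustrative.
    """
    return (
        "Here is a draft thesis someone is considering:\n\n"
        f"\"{thesis}\"\n\n"
        "Act as a sceptical reviewer. Evaluate its strengths, weaknesses, "
        "and likely objections. Is it already well-trodden ground? "
        "Is it an existing idea with different vocabulary? "
        "What objection would kill it quickly?"
    )

print(critique_prompt("Outrage signals plural ontologies, not system failure."))
```

The questions folded into the template are the same pressure questions applied at every drafting stage; keeping them in one place makes the adversarial framing hard to drift out of.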

3. Multiple models, in parallel
Different models have different failure modes. One flatters. Another nitpicks. A third accuses the work of reinventing the wheel. Their disagreement is the point. Where they converge, caution is warranted. Where they diverge, something interesting is happening.

‘Claude says…’: outsourcing the ego

One tactic emerged almost accidentally and turned out to be the most useful of all.

Rather than responding directly to feedback, I often relay it as:

“Claude says this…”

The conversation then shifts from defending an idea to assessing a reading of it. This does two things at once:

  • It removes personal defensiveness. No one feels obliged to be kind to Claude.
  • It invites second-order critique. People are often better at evaluating a critique than generating one from scratch.

This mirrors how academic peer review actually functions:

  • Reviewer 2 thinks you’re doing X.
  • That seems like a misreading.
  • This objection bites; that one doesn’t.

The difference is temporal. I am doing this before the draft hardens and before identity becomes entangled with the argument.

Guardrails against self-delusion

There is a genuine Dunning–Kruger risk when working outside one’s formal domain. Generative AI does not remove that risk. Used poorly, it can amplify it.

The countermeasure is not humility as a posture, but friction as a method:

  • multiple models,
  • adversarial prompting,
  • third-person evaluation,
  • critique of critiques,
  • and iterative narrowing before committing to form.

None of this guarantees correctness. It does something more modest and more important: it makes it harder to confuse internal coherence with external adequacy.

What this cannot do

It’s worth being explicit about the limits. Generative models cannot tell you whether a claim is true. They can tell you how it is likely to be read, misread, resisted, or dismissed. They cannot arbitrate significance. They cannot decide what risks are worth taking. They cannot replace judgment. Those decisions remain stubbornly human.

What AI can do – when used carefully – is surface pressure early, cheaply, and without social cost. It lets ideas announce their limits faster, while those limits are still negotiable.

A brief meta-note

For what it’s worth, Claude itself was asked to critique an earlier draft of this post. It suggested compressing the familiar arguments, foregrounding the ‘Claude says…’ tactic as the real contribution, and strengthening the ending by naming what the method cannot do.

That feedback improved the piece. Which is, rather conveniently, the point.

When Language Fails – Abstract

I set aside some time to design the front cover of my next book. I’m excited to share this – but that’s always the case. It’s substantially complete. In fact, it sidelined another book, also substantially complete, though the content of this one may force me to change the other. It should be ready for February. Below is the current state of the abstract.

This book is meant to be an academic monograph, whilst the other (working title: The Competency Paradox) is more of a polemic.

As I mentioned in another post, it builds upon and reorients the works of George Lakoff, Jonathan Haidt, Kurt Gray, and Joshua Greene. I’ve already revised and extended Gallie’s essentially contested concepts, in the form of Contestables, in A Language Insufficiency Hypothesis, and I lean on them again here.

Abstract

Contemporary moral and political discourse is marked by a peculiar frustration: disputes persist even after factual clarification, legal process, and good-faith argumentation have been exhausted. Competing parties frequently agree on what happened, acknowledge that harm occurred, and yet remain irreconcilably divided over whether justice has been served. This persistence is routinely attributed to misinformation, bad faith, or affective polarisation. Such diagnoses are comforting. They are also often wrong.

This paper advances a different claim. Certain conflicts are not primarily epistemic or semantic in nature, but ontological. They arise from incompatible orientations that structure how agents register salience, threat, authority, autonomy, and legitimacy. These orientations are genealogically shaped through enculturation, institutions, and languaged traditions, yet operationally they function prior to linguistic articulation: salience fires before reasons are narrated. Moral vocabulary enters downstream, tasked with reconciling commitments that were never shared.

From this perspective, the instability of concepts such as justice is not the primary problem but a symptom. Justice belongs to a class of Contestables (in Gallie’s sense): action-authorising terms that appear determinate while remaining untethered from shared reference under ontological plurality. Appeals to clearer definitions, better process, or shared values therefore misfire. They presume a common ontological ground that does not, in fact, exist.

When institutions are nevertheless required to act, they cannot adjudicate between ontologies. They can only select. Courts, juries, regulatory bodies, and enforcement agencies collapse plural interpretations into a single outcome. That outcome is necessarily experienced as legitimate by those whose orientation it instantiates, and as injustice by those whose orientation it negates. No procedural refinement can eliminate this asymmetry. At best, procedure dampens variance, distributes loss, and increases tolerability.

Crucially, the selection itself is constrained but underdetermined. Even within formal structures, human judgment, discretion, mood, confidence, fear, and narrative framing play a decisive role. Following Keynes, this irreducible contingency may be described as animal spirits. In formal terms, institutional outcomes are sampled from a constrained space of possibilities, but the reaction topology remains structurally predictable regardless of which branch is taken.

The consequence is stark but clarifying: outrage is not evidence that a system has failed to deliver justice; it is evidence that plural ontological orientations have been forced through a single decision point. Where semantic reconciliation is structurally unavailable, exogenous power is the dominant near-term mediator. Power does not resolve the conflict; it pauses it and stabilises meaning sufficiently for coordination to continue.

This analysis does not deny the reality of harm, the importance of law, or the necessity of institutions. Nor does it lapse into nihilism or indifference. Rather, it reframes the problem. In ontologically plural environments, the task is not moral convergence but maintenance: containing collision, resisting premature coherence, and designing institutions that minimise catastrophic failure rather than promising final resolution.

The argument developed here predates any particular event. Its value lies precisely in its predictive capacity. Given plural ontologies, untethered contestables, and institutions that must act, the pattern of reaction is invariant. The surface details change; the structure does not.

What follows is not a proposal for reconciliation. It is a diagnosis of why reconciliation is so often a category error, and why pretending otherwise is making things worse.