Using Generative AI as Early Peer Review

Cheap Adversaries, Outsourced Ego, and Engineered Critique ← ChatGPT is obsessed with subtitles.

There is a peculiar anxiety around admitting that one uses generative AI in serious intellectual work. The anxiety usually takes one of two forms. Either the AI is accused of replacing thinking, or it is accused of flattering the thinker into delusion. Both charges miss the point, and both underestimate how brittle early-stage human peer review often is.

What follows is not a defence of AI as an oracle, nor a claim that it produces insight on its own. It is an account of how generative models can be used – deliberately, adversarially, and with constraints – as a form of early peer pressure. Not peer review in the formal sense, but a rehearsal space where ideas are misread, overstated, deflated, and occasionally rescued from themselves.

Audio: NotebookLM summary podcast of this topic.

The unromantic workflow

The method itself is intentionally dull:

  1. Draft a thesis statement.
    Rinse & repeat.
  2. Draft an abstract.
    Rinse & repeat.
  3. Construct an annotated outline.
    Rinse & repeat.
  4. Only then begin drafting prose.

At each stage, the goal is not encouragement or expansion but pressure. The questions I ask are things like:

  • Is this already well-trodden ground?
  • Is this just X with different vocabulary?
  • What objection would kill this quickly?
  • What would a sceptical reviewer object to first?

The key is timing. This pressure is applied before the idea is polished enough to be defended. The aim is not confidence-building; it is early damage.
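
For the curious, here is roughly what that pressure step looks like when scripted. This is a minimal sketch assuming the official openai Python SDK; the model name, prompt wording, and the apply_pressure helper are illustrative, not a fixed recipe.

```python
# A minimal sketch of the pressure step, assuming the official openai Python SDK.
# The model name, prompt wording, and helper name are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRESSURE_QUESTIONS = [
    "Is this already well-trodden ground?",
    "Is this just an existing idea with different vocabulary?",
    "What objection would kill this quickly?",
    "What would a sceptical reviewer object to first?",
]

def apply_pressure(draft: str) -> str:
    """Ask for early damage, not encouragement or expansion."""
    prompt = (
        "Evaluate the following draft. Do not praise it, repair it, or expand it. "
        "Answer each question separately and be blunt:\n"
        + "\n".join(f"- {q}" for q in PRESSURE_QUESTIONS)
        + f"\n\nDraft:\n{draft}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```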

Image: NotebookLM infographic on this topic.

Why generative AI helps

In an ideal world, one would have immediate access to sharp colleagues willing to interrogate half-formed ideas. In practice, that ecology is rarely available on demand. Even when it is, early feedback from humans often comes bundled with politeness, status dynamics, disciplinary loyalty, or simple fatigue.

Generative models are always available, never bored, and indifferent to social cost. That doesn’t make them right. It makes them cheap adversaries. And at this stage, adversaries are more useful than allies.

Flattery is a bias, not a sin

Large language models are biased toward cooperation. Left unchecked, they will praise mediocre ideas and expand bad ones into impressive nonsense. This is not a moral failure. It is a structural bias.

The response is not to complain about flattery, but to engineer against it.

Sidebar: A concrete failure mode

I recently tested a thesis about object permanence with Mistral. After three exchanges, the model had escalated a narrow claim into an overarching framework, complete with invented subcategories and false precision. The prose was confident. The structure was impressive. The argument was unrecognisable.

This is the Dunning–Kruger risk in practice. The model produced something internally coherent that I lacked the domain expertise to properly evaluate. Coherence felt like correctness.

The countermeasure was using a second model, which immediately flagged the overreach. Disagreement between models is often more informative than agreement.

Three tactics matter here.

1. Role constraint
Models respond strongly to role specification. Asking explicitly for critique, objections, boundary-setting, and likely reviewer resistance produces materially different output from asking for ‘thoughts’ or ‘feedback’.

2. Third-person framing
First-person presentation cues collaboration. Third-person presentation cues evaluation.

Compare:

  • Here’s my thesis; what do you think?
  • Here is a draft thesis someone is considering. Please evaluate its strengths, weaknesses, and likely objections.

The difference is stark. The first invites repair and encouragement. The second licenses dismissal. This is not trickery; it is context engineering.
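
To make the contrast concrete, here is a minimal sketch of the two framings as prompt templates, folding the explicit reviewer role from tactic 1 into the third-person version; the wording and function names are illustrative, not a canonical formula.

```python
# Two framings of the same thesis; wording and function names are illustrative.

def first_person_framing(thesis: str) -> str:
    # Cues collaboration: the model will tend to repair and encourage.
    return f"Here's my thesis; what do you think?\n\n{thesis}"

def third_person_framing(thesis: str) -> str:
    # Cues evaluation: an explicit reviewer role plus third-person distance.
    return (
        "Here is a draft thesis someone is considering. "
        "Act as a sceptical reviewer: evaluate its strengths, weaknesses, "
        "and the objections it is most likely to meet first.\n\n"
        f"{thesis}"
    )
```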

3. Multiple models, in parallel
Different models have different failure modes. One flatters. Another nitpicks. A third accuses the work of reinventing the wheel. Their disagreement is the point. Where they converge, caution is warranted. Where they diverge, something interesting is happening.

‘Claude says…’: outsourcing the ego

One tactic emerged almost accidentally and turned out to be the most useful of all.

Rather than responding directly to feedback, I often relay it as:

“Claude says this…”

The conversation then shifts from defending an idea to assessing a reading of it. This does two things at once:

  • It removes personal defensiveness. No one feels obliged to be kind to Claude.
  • It invites second-order critique. People are often better at evaluating a critique than generating one from scratch.

This mirrors how academic peer review actually functions:

  • Reviewer 2 thinks you’re doing X.
  • That seems like a misreading.
  • This objection bites; that one doesn’t.

The difference is temporal. I am doing this before the draft hardens and before identity becomes entangled with the argument.

Guardrails against self-delusion

There is a genuine Dunning–Kruger risk when working outside one’s formal domain. Generative AI does not remove that risk. Used poorly, it can amplify it.

The countermeasure is not humility as a posture, but friction as a method:

  • multiple models,
  • adversarial prompting,
  • third-person evaluation,
  • critique of critiques,
  • and iterative narrowing before committing to form.

None of this guarantees correctness. It does something more modest and more important: it makes it harder to confuse internal coherence with external adequacy.

What this cannot do

It’s worth being explicit about the limits. Generative models cannot tell you whether a claim is true. They can tell you how it is likely to be read, misread, resisted, or dismissed. They cannot arbitrate significance. They cannot decide what risks are worth taking. They cannot replace judgment. Those decisions remain stubbornly human.

What AI can do – when used carefully – is surface pressure early, cheaply, and without social cost. It lets ideas announce their limits faster, while those limits are still negotiable.

A brief meta-note

For what it’s worth, Claude itself was asked to critique an earlier draft of this post. It suggested compressing the familiar arguments, foregrounding the ‘Claude says…’ tactic as the real contribution, and strengthening the ending by naming what the method cannot do.

That feedback improved the piece. Which is, rather conveniently, the point.

Rejection Letter

Rejected Note

Dear Author. [REDACTED] does not accept the submission of personal works produced by students, independent researchers, or professionals who have not yet attained a doctoral level. This is a moderation policy intended to ensure that publications deposited on the platform originate from qualified researchers affiliated with a recognized institution (REDACTED) and acknowledged for their expertise or previous work in the relevant field of research. This rule applies regardless of the quality or scientific value of the work, which is by no means in question here. We therefore regret to inform you that we are unable to accept this submission. If you wish, we invite you to share your work through other open platforms such as Zenodo, which allow all authors to make their research visible. Thank you for your understanding. Kind regards

Allow me to rephrase this:

Disappointing, though hardly surprising. This is the same logic as age-based thresholds I have recently taken a hammer to: crude proxies elevated into moral and epistemic gatekeepers. Not ‘is this good?’, but ‘are you old enough, stamped enough, letterheaded enough to be taken seriously?’. A bureaucratic horoscope.

Yes, I use Zenodo. I use PhilPapers. I will continue to do so. But let’s not pretend all platforms are socially equivalent. Journals still function as credibility engines, not because they magically improve truth, but because they distribute legitimacy. To be excluded on status grounds alone is not a quality filter. It is a caste system with footnotes.

And journals already make participation unnecessarily hostile. Many refuse work that has been publicly shared at all, even in preprint form. Lead times stretch to a year or more. The result is that anyone attempting to contribute to live debates is instructed to sit quietly whilst the conversation moves on without them. In a so-called knowledge economy, this is an astonishing self-own.

What we have, then, is a system that:

  • equates institutional affiliation with epistemic competence,
  • penalises open dissemination,
  • and delays circulation until relevance decays.

All in the name of rigour.

I will keep submitting elsewhere. There are other journals. There always are. But let’s stop pretending this is about protecting standards. It is about preserving a hierarchy that mistakes accreditation for insight and treats independent thought as a contamination risk.

Knowledge does not become true by passing through the right doorway. It merely becomes approved. I’ll not witter on about the bollocks of peer review.

That’s Never Happened Before

Famous last words, I know.

I was working with ChatGPT, discussing various concepts. We chatted for several hours over the course of a few days, and we came to, let’s say, a conclusion. What ChatGPT 5 did next was something I had never seen. In fact, I’m sharing the actual response.

I won’t share my thesis here, but this is one of the field verification projects it recommends.

One-page grant prospectus — Provenance Pilot

Project title
Provenance First: Testing provenance packages & slow-mode deliberation to improve municipal decision quality

Lead investigator
Bry Willis — Independent Researcher (contact: [REDACTED])

Summary (1–2 lines)
Test whether mandatory, machine-readable provenance packages plus a 60-day slow-mode and rapid adversarial audits improve decision quality (DQI), reduce reversal rates and increase public legitimacy in municipal planning decisions.

Problem statement (why fund this)
Policy decisions are frequently hijacked by headline pressure and low-provenance inputs; municipalities lack lightweight, testable protocols that make inputs traceable and give deliberation time. This pilot operationalises the “provenance-first + slow-mode” institutional fix and measures whether design moves increase robust decision outcomes rather than performative actions.

Objectives

  1. Test whether provenance packages + 60-day slow-mode increase Decision Quality Index (DQI) vs business as usual.
  2. Measure effects on Provenance Completeness Score (PCS), reversal rates at 12 months, and public legitimacy.
  3. Produce an open replication pack (data, code, provenance schema) and practitioner guidance.

Design & methods (12 months)

  • Setting: medium-sized city planning board; 12 eligible planning items over 12 months.
  • Randomisation: items randomly assigned (stratified by leverage) to Treatment (provenance package + 60-day slow-mode + funded rapid audit) or Control (standard process).
  • Intervention: standard JSON provenance package template (raw data links, code where relevant, funders, changelog; sketched after this list), public portal posting, 60-day live comment + one adversarial rapid audit (48–72 hrs) on contested claims.
  • Primary outcome: Decision Quality Index (DQI) — composite of: evidence completeness, process reversibility, and independent expert accuracy score (pre-registered rubric).
  • Secondary outcomes: PCS, reversal at 12 months, public trust (survey), time to decision, stakeholder satisfaction.
  • Analysis: pre-registered mixed effects model (item-level with panel-level random effects), intention-to-treat.
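
As an illustration of what such a provenance package might look like, here is a minimal sketch; the field names and values are invented for this example and are not a published schema.

```python
# A minimal sketch of a machine-readable provenance package.
# Field names and values are illustrative, not a published schema.
import json

provenance_package = {
    "item_id": "PLAN-2025-014",  # hypothetical planning item
    "claim": "Rezoning parcel X reduces peak traffic by 12%",
    "raw_data": ["https://example.org/traffic-counts.csv"],
    "code": ["https://example.org/analysis-notebook"],
    "funders": ["Municipal transport budget"],
    "changelog": [
        {"date": "2025-01-10", "change": "Initial submission"},
    ],
    "slow_mode_ends": "2025-03-11",  # 60 days after public posting
    "rapid_audit": {"status": "pending", "window_hours": 72},
}

print(json.dumps(provenance_package, indent=2))
```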

Deliverables

  • Pre-registered protocol on OSF; open data & replication pack (anonymised where required); policy brief and practitioner playbook; 1 peer-reviewed methods paper.

Success criteria (pre-registered)

  • Cohen’s d ≥ 0.4 improvement in DQI for treatment vs control; PCS improvement > 25 percentage points; no more than 30% increase in unacceptable delay (pre-agreed threshold).
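
For concreteness, the effect-size criterion can be checked in a few lines; the DQI scores below are invented purely to show the computation, not pilot data.

```python
# A worked check of the pre-registered effect-size criterion (Cohen's d >= 0.4).
# The DQI scores below are invented purely to illustrate the computation.
import numpy as np

def cohens_d(treatment: np.ndarray, control: np.ndarray) -> float:
    """Standardised mean difference using a pooled standard deviation."""
    n_t, n_c = len(treatment), len(control)
    pooled_var = (
        (n_t - 1) * treatment.var(ddof=1) + (n_c - 1) * control.var(ddof=1)
    ) / (n_t + n_c - 2)
    return (treatment.mean() - control.mean()) / np.sqrt(pooled_var)

dqi_treatment = np.array([0.72, 0.68, 0.81, 0.75, 0.70, 0.77])
dqi_control = np.array([0.61, 0.66, 0.64, 0.59, 0.70, 0.63])

d = cohens_d(dqi_treatment, dqi_control)
print(f"Cohen's d = {d:.2f} (success threshold: 0.4)")
```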

Risk & mitigation

  • Delay risk: cap slow-mode at 60 days; urgent items can request expedited review (documented override).
  • Capture risk: audit funding ring-fenced and administered by independent trustee panel.

I’m just curious now. Have you ever had a generative AI process end with a cost of goods and services?