Stop Automating Peer Review Without Rigorous Evaluation

ICML 2026 (Spotlight)

Joachim Baumann, Jiaxin Pei, Sanmi Koyejo, Dirk Hovy

@inproceedings{baumann2026stop,
  title     = {Stop Automating Peer Review Without Rigorous Evaluation},
  author    = {Baumann, Joachim and Pei, Jiaxin and Koyejo, Sanmi and Hovy, Dirk},
  booktitle = {Proceedings of the 43rd International Conference on Machine Learning},
  year      = {2026},
  note      = {Spotlight}
}

Abstract

Large language models offer a tempting solution to the peer review crisis. This position paper argues that today's AI systems should not be used to produce paper reviews. We ground this position in an empirical comparison of human- versus AI-generated ICLR 2026 reviews and an evaluation of how automated paper rewriting affects different AI reviewers. We identify two critical issues: 1) AI reviewers exhibit a hivemind effect of excessive agreement within and across papers, which reduces perspective diversity. 2) AI review scores are trivially gameable through paper laundering: prompting an LLM to rewrite a paper can significantly increase the scores assigned by AI reviewers, demonstrating that LLM reviewers can be gamed through stylistic changes rather than genuine scientific improvements. However, non-gameability and review diversity are necessary but not sufficient conditions for automation. We argue that addressing the peer review crisis requires a science of peer review automation, not general-purpose LLMs deployed without rigorous evaluation.
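The hivemind effect described above can be quantified as reduced score dispersion among reviewers of the same paper. The sketch below illustrates one such measure (mean within-paper standard deviation of review scores) on invented toy data; the scores and the specific metric are assumptions for illustration, not results or code from the paper.

```python
import statistics

# Toy review scores (hypothetical, not data from the paper).
# Each paper maps to the scores its reviewers assigned.
human_scores = {"paper_a": [3, 6, 8], "paper_b": [2, 5, 9]}
ai_scores = {"paper_a": [6, 6, 7], "paper_b": [6, 7, 6]}

def mean_within_paper_stdev(scores_by_paper):
    """Average per-paper standard deviation of review scores.

    Lower values mean reviewers agree more with each other,
    i.e. less perspective diversity (the 'hivemind' pattern).
    """
    return statistics.mean(
        statistics.stdev(scores) for scores in scores_by_paper.values()
    )

# In this toy example the AI scores cluster tightly, so their
# within-paper dispersion is lower than the human reviewers'.
print(mean_within_paper_stdev(ai_scores) < mean_within_paper_stdev(human_scores))
```

A diversity metric like this addresses only one of the two necessary conditions named in the abstract; gameability would additionally require comparing scores before and after an LLM rewrite of the same paper.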