Assisting ICLR 2025 reviewers with feedback
(This post is written by James Zou, Associate Program Chair)
Obtaining constructive and high-quality peer reviews at AI conferences has become increasingly challenging due to the rapidly rising volume of paper submissions. For example, ICLR experienced year-over-year submission increases of 47% in 2024 and 61% in 2025. As submission numbers grow, so does the demand on reviewers, often leading to inconsistent review quality. To help, we are introducing a review feedback agent for ICLR 2025 that identifies potential issues in reviews and provides reviewers with suggestions for improvement.
The goal of this system is to help make reviews more constructive and actionable for authors. The review feedback agent will provide suggestions on three potential categories of issues in reviews. We curated these categories by compiling public comments and evaluating reviews from previous ICLRs to identify common issues.
The feedback areas are:
- Encouraging reviewers to rephrase vague review comments, making them more actionable for the authors.
- Highlighting sections of the paper that may already address some of the reviewer’s questions.
- Identifying unprofessional or inappropriate remarks in the review and suggesting more constructive phrasing.
Below, for each category, we provide an example of a review comment that would be flagged by our system, along with the corresponding feedback.
The feedback system will not replace any human reviewers. The agent will not write reviews or make automated edits to reviews. Rather, it will serve as an assistant, providing optional feedback that reviewers can choose to incorporate or disregard. Every ICLR submission will be assessed solely by human reviewers, and the final acceptance decisions will be made by ACs, SACs, and reviewers, as in previous ICLR conferences.
The feedback will be provided for a randomly selected subset of initial reviews to enable comparisons and assess its impact. Reviewers will have an opportunity to update their review, should they wish, before the reviews are made available to the authors. Feedback will be sent to the reviewer within a few hours of submitting a review. There will be no feedback on subsequent reviewer responses, and no further interaction between the reviewer and the feedback system. Moreover, the feedback will only be visible to the reviewer and the ICLR program chairs; it will not be shared with other reviewers, authors, or ACs, and it will not be a factor in the acceptance decisions.

We have designed the feedback system as a pipeline of multiple LLMs to minimize hallucinations and enhance the quality of the feedback, and we have carefully tested it on publicly available ICLR 2024 reviews.
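This post does not detail the pipeline's internals, but as an illustrative sketch, a generate-then-verify structure is one common way a multi-LLM pipeline can curb hallucinations: one model drafts candidate feedback items, and a second model checks that each flagged excerpt actually appears in the review and that the suggestion is grounded. The `call_llm` client, prompts, and two-stage structure below are assumptions for illustration, not the actual ICLR system.

```python
# A minimal sketch of a generate-then-verify LLM feedback pipeline.
# `call_llm(prompt) -> str` is a hypothetical, model-agnostic client;
# the real ICLR system's prompts, models, and stages are not public.
from dataclasses import dataclass
from typing import Callable

@dataclass
class FeedbackItem:
    review_excerpt: str  # the reviewer comment being flagged
    suggestion: str      # proposed improvement for the reviewer

def generate_feedback(review: str, call_llm: Callable[[str], str]) -> list[FeedbackItem]:
    """Stage 1: a generator LLM drafts candidate feedback items."""
    prompt = (
        "You are helping improve a peer review. Identify vague comments, "
        "questions the paper may already answer, or unprofessional remarks.\n"
        f"Review:\n{review}\n"
        "For each issue, output one line: <excerpt> => <suggestion>"
    )
    items = []
    for line in call_llm(prompt).splitlines():
        if "=>" in line:
            excerpt, suggestion = line.split("=>", 1)
            items.append(FeedbackItem(excerpt.strip(), suggestion.strip()))
    return items

def verify_feedback(review: str, item: FeedbackItem,
                    call_llm: Callable[[str], str]) -> bool:
    """Stage 2: a separate verifier LLM filters ungrounded feedback."""
    if item.review_excerpt not in review:
        return False  # cheap grounding check before a second model call
    prompt = (
        f"Review:\n{review}\n\nFlagged excerpt: {item.review_excerpt}\n"
        f"Suggestion: {item.suggestion}\n"
        "Is this suggestion accurate and constructive? Answer YES or NO."
    )
    return call_llm(prompt).strip().upper().startswith("YES")

def feedback_pipeline(review: str, call_llm: Callable[[str], str]) -> list[FeedbackItem]:
    """Only feedback that survives verification reaches the reviewer."""
    candidates = generate_feedback(review, call_llm)
    return [item for item in candidates if verify_feedback(review, item, call_llm)]
```

In a structure like this, a candidate feedback item must pass both a string-containment check and an independent verifier model before it is shown to the reviewer, so hallucinated excerpts are dropped rather than surfaced.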
After the final paper decisions have been released, we will distribute an anonymous, voluntary survey to authors, reviewers, and ACs to gather feedback. We will carefully analyze the responses to assess the pilot system’s impact and to guide improvements for future iterations.
Nitya Thakkar, Mert Yuksekgonul, Jake Silberg, James Zou (Associate Program Chair)
The Review Feedback Agent Team
Carl Vondrick, Rose Yu, Violet Peng, Fei Sha, Animesh Garg
ICLR 2025 Program Chairs