ICLR 2026 Response to LLM-Generated Papers and Reviews
Over the past few days, many concerns have been raised about potential LLM-generated papers and low-quality LLM-generated reviews. We take these concerns seriously, and we want to update the community on the steps we are taking and will be taking over the next two weeks.
These steps below are based on the policies we outlined in our previous blog post: Policies on Large Language Model Usage at ICLR 2026.
The core of this policy is twofold: (a) if an author or reviewer uses an LLM, they must disclose this, and they are also ultimately responsible for the LLM’s outputs; (b) whether or not authors and reviewers use LLMs, they must not make false or misleading claims, fabricate or falsify data, or misrepresent results. We have planned and are undertaking punitive measures against authors and reviewers who violate these policies.
LLM-generated papers
Papers that make extensive use of LLMs and do not disclose this usage will be desk rejected. Extensive and/or careless LLM usage often results in false claims, misrepresentations, or hallucinated content, including hallucinated references. As stated in our previous blog post, hallucinations of this kind would be considered a Code of Ethics violation on the part of the paper’s authors. We have been desk-rejecting, and will continue to desk-reject, any paper that includes such issues.
We have been relying on ACs and SACs to identify papers that have these issues. To help triage, we will be leveraging recent LLM detection tools to flag papers that potentially contain a significant amount of LLM-generated content. These papers will then be given to ACs for further checking. Given the possibility of false positives from detection tools, we will only take action if an AC or SAC identifies concrete evidence of the issues described above.
Dual submission policy violation
In addition to these desk rejections, we are also aware of possible cases where authors submitted multiple slightly different variants of the same paper (LLM-paraphrased or otherwise) without acknowledging or citing the concurrent submissions. While this is not always done with the help of LLMs, the use of LLMs can facilitate this process and may therefore exacerbate the issue. We are in the process of defining severe consequences for authors who try to spam the conference with multiple very similar variants of the same paper. The process and the policy will be detailed in a subsequent blog post.
LLM-generated or very-low-quality reviews
As mentioned above, reviewers are responsible for the content they post. Therefore, if they use LLMs, they are responsible for any issues in their posted reviews. Very-poor-quality reviews that feature false claims, misrepresentations, or hallucinated references are also a Code of Ethics violation, as expressed in the previous blog post. As such, reviewers who posted such poor-quality reviews will also face consequences, including the desk rejection of their own submitted papers. This follows policies that were already laid out in the previous blog post as well as the reviewer guide.
Once again, we will be using LLM detection tools to triage and will rely on ACs and SACs to identify such poor-quality or LLM-generated reviews. Authors who received such reviews (e.g., with many hallucinated references or false claims) should post a confidential message to their ACs and SACs pointing out the poor-quality reviews and providing the necessary evidence.
Conclusion
The actions described above will play out over the next 1-2 weeks, as ACs monitor discussions and identify problematic papers and reviewers. We plan to make another post to update the community about these desk-rejected papers and irresponsible reviewers in a transparent manner.
We are thankful to the community for identifying some of these issues, as well as for running large-scale meta-analyses. These efforts are supported by ICLR’s policy of making reviews and discussions public, which allows the community to see issues that are only visible at scale. At the same time, we would like to reassure the community that these issues were anticipated, which is why we articulated careful policies in the previous blog post. Rigorously enforcing these policies is now our focus going forward.