

fa3a3c407f82377f55c19c5d403335c7-AuthorFeedback.pdf

Neural Information Processing Systems

Extended " T able 2" in submitted paper. Extended " T able 3" in submitted paper. We thank reviewers for their comments, and will carefully revise paper considering these comments. Q1 (R1): References and comparison with a baseline that learns embeddings only through a standard convnet. In Tab.2 of this rebuttal, the state-of-the-art method of AISI [7] also depends on We will give more details of these compared methods in paper for clarity.



Rebuttal for submission 7667 - Rand-NSG

Neural Information Processing Systems

Reviewer 1: We will release our code along with the paper. A few contributions are: (1) We introduce a new pruning strategy, parameterised by α, that reduces the number of hops from the navigating vertex to any other vertex. This provides greater control over the diameter of the graph, which is important for disk-based search; this is a feature that previous methods such as NSG and HNSW lack. The plots above suggest that Rand-NSG indices are competitive with NSG and HNSW.
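The α-parameterised pruning rule described in the rebuttal can be sketched as follows. This is a minimal illustration in the spirit of the Rand-NSG/DiskANN line of work, not the authors' exact implementation: the function names, the Euclidean distance metric, and the `degree_limit` parameter are assumptions for the sake of the example.

```python
import math


def dist(a, b):
    # Euclidean distance between two points given as tuples of floats.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def alpha_prune(p, candidates, alpha=1.2, degree_limit=32):
    """Prune candidate neighbours of point p.

    A candidate v is kept only if no already-kept neighbour u satisfies
    alpha * dist(u, v) <= dist(p, v).  With alpha > 1 this keeps some
    longer-range edges, trading node degree for fewer hops from the
    navigating vertex (i.e. a smaller graph diameter).
    """
    kept = []
    for v in sorted(candidates, key=lambda c: dist(p, c)):
        if len(kept) >= degree_limit:
            break
        if all(alpha * dist(u, v) > dist(p, v) for u in kept):
            kept.append(v)
    return kept
```

With `alpha = 1.0` this reduces to a standard relative-neighbourhood-style pruning rule; raising α relaxes the pruning condition and keeps more long edges.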




Insights from the ICLR Peer Review and Rebuttal Process

Kargaran, Amir Hossein, Nikeghbal, Nafiseh, Yang, Jing, Ousidhoum, Nedjma

arXiv.org Artificial Intelligence

Peer review is a cornerstone of scientific publishing, including at premier machine learning conferences such as ICLR. As submission volumes increase, understanding the nature and dynamics of the review process is crucial for improving its efficiency, effectiveness, and the quality of published papers. We present a large-scale analysis of the ICLR 2024 and 2025 peer review processes, focusing on before- and after-rebuttal scores and reviewer-author interactions. We examine review scores, author-reviewer engagement, temporal patterns in review submissions, and co-reviewer influence effects. Combining quantitative analyses with LLM-based categorization of review texts and rebuttal discussions, we identify common strengths and weaknesses for each rating group, as well as trends in rebuttal strategies that are most strongly associated with score changes. Our findings show that initial scores and the ratings of co-reviewers are the strongest predictors of score changes during the rebuttal, pointing to a degree of reviewer influence. Rebuttals play a valuable role in improving outcomes for borderline papers, where thoughtful author responses can meaningfully shift reviewer perspectives. More broadly, our study offers evidence-based insights to improve the peer review process, guiding authors on effective rebuttal strategies and helping the community design fairer and more efficient review processes. Our code and score changes data are available at https://github.com/papercopilot/iclr-insights.


Effectiveness of Counter-Speech against Abusive Content: A Multidimensional Annotation and Classification Study

Damo, Greta, Cabrio, Elena, Villata, Serena

arXiv.org Artificial Intelligence

Counter-speech (CS) is a key strategy for mitigating online Hate Speech (HS), yet defining the criteria to assess its effectiveness remains an open challenge. We propose a novel computational framework for CS effectiveness classification, grounded in linguistics, communication and argumentation concepts. Our framework defines six core dimensions - Clarity, Evidence, Emotional Appeal, Rebuttal, Audience Adaptation, and Fairness - which we use to annotate 4,214 CS instances from two benchmark datasets, resulting in a novel linguistic resource released to the community. In addition, we propose two classification strategies, multi-task and dependency-based, achieving strong results (0.94 and 0.96 average F1 respectively on both expert- and user-written CS), outperforming standard baselines, and revealing strong interdependence among dimensions.


ReviewerToo: Should AI Join The Program Committee? A Look At The Future of Peer Review

Sahu, Gaurav, Larochelle, Hugo, Charlin, Laurent, Pal, Christopher

arXiv.org Artificial Intelligence

Peer review is the cornerstone of scientific publishing, yet it suffers from inconsistencies, reviewer subjectivity, and scalability challenges. We introduce ReviewerToo, a modular framework for studying and deploying AI-assisted peer review to complement human judgment with systematic and consistent assessments. ReviewerToo supports systematic experiments with specialized reviewer personas and structured evaluation criteria, and can be partially or fully integrated into real conference workflows. We validate ReviewerToo on a carefully curated dataset of 1,963 paper submissions from ICLR 2025, where our experiments with the gpt-oss-120b model achieve 81.8% accuracy for the task of categorizing a paper as accept/reject, compared to 83.9% for the average human reviewer. Additionally, ReviewerToo-generated reviews are rated as higher quality than the human average by an LLM judge, though still trailing the strongest expert contributions. Our analysis highlights domains where AI reviewers excel (e.g., fact-checking, literature coverage) and where they struggle (e.g., assessing methodological novelty and theoretical contributions), underscoring the continued need for human expertise. Based on these findings, we propose guidelines for integrating AI into peer-review pipelines, showing how AI can enhance consistency, coverage, and fairness while leaving complex evaluative judgments to domain experts. Our work provides a foundation for systematic, hybrid peer-review systems that scale with the growth of scientific publishing.


Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. The paper introduces a full model for tracking that allows for a varying number of hypotheses and for clutter. It promises clear notation and fast algorithms through the use of variational/Baum-Welch-type inference. The experiments appear extensive and are performed on real-world data. The key novelty of this paper is the assignment problem (a.k.a. data association). Tracking itself, as the authors acknowledge, is a well-trodden field.


428fca9bc1921c25c5121f9da7815cde-Reviews.html

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: quality, clarity, originality, and significance. The set in question in this paper is the set of functions with bounded partial derivatives up to order K. The authors' technique mimics the work of Thaler et al. (ICALP 2012), except that the authors decompose the queries not into regular polynomials (Chebyshev polynomials in the case of Thaler et al.) but rather into trigonometric polynomials. The bulk of the work is indeed to show that the above-mentioned set of queries can be well approximated by trigonometric polynomials. Having established that, adding Laplace noise to each monomial suffices to guarantee differential privacy.
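The final step summarised in this review, adding Laplace noise calibrated to a query's sensitivity, is the standard Laplace mechanism. The sketch below is a generic illustration of that mechanism, not the construction from the reviewed paper: the function name, the sensitivity and epsilon values, and the placeholder coefficient list are all assumptions for the example.

```python
import math
import random


def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value perturbed by Laplace(sensitivity / epsilon) noise.

    For a query with the given L1 sensitivity, this yields
    epsilon-differential privacy.  The noise is sampled via the
    inverse-CDF transform of the Laplace distribution.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise


# Per the review, each (trigonometric) monomial coefficient receives
# independent Laplace noise; these coefficients are placeholders.
coefficients = [0.7, -0.2, 0.05]
private_coefficients = [
    laplace_mechanism(c, sensitivity=0.1, epsilon=1.0) for c in coefficients
]
```

The privacy budget composes across the released monomials, so in a real instantiation the per-coefficient epsilon would be divided by the number of coefficients released.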