Does My Rebuttal Matter? Insights from a Major NLP Conference
Yang Gao, Steffen Eger, Ilia Kuznetsov, Iryna Gurevych, Yusuke Miyao
–arXiv.org Artificial Intelligence
Peer review is a core element of the scientific process, particularly in conference-centered fields such as ML and NLP. However, only a few studies have evaluated its properties empirically. Aiming to fill this gap, we present a corpus that contains over 4k reviews and 1.2k author responses from ACL-2018. We assess the corpus quantitatively and qualitatively, including a pilot study on the paper weaknesses reported by reviewers and on the quality of author responses. We then focus on the role of the rebuttal phase and propose a novel task: predicting after-rebuttal (i.e., final) scores from initial reviews and author responses. Although author responses do have a marginal (yet statistically significant) influence on the final scores, especially for borderline papers, our results suggest that a reviewer's final score is largely determined by her initial score and its distance to the other reviewers' initial scores. In this context, we discuss the conformity bias inherent to peer reviewing, a bias that has largely been overlooked in previous research. We hope our analyses will help better assess the usefulness of the rebuttal phase in NLP conferences.
Mar-28-2019