Generative Artificial Intelligence Policies under the Microscope

Communications of the ACM

Since the rise of ChatGPT, generative artificial intelligence (GenAI) technologies have gained widespread popularity, impacting academic research and everyday communication.5,10 While GenAI offers benefits in task automation,9 it can also be misused and abused in nefarious applications,7 posing significant risks to long-tail populations.6 Professionals in fields such as journalism and law remain cautious due to concerns about hallucinations and ethical issues, but scholars in computer science (CS), the field where GenAI originated, appear to be cautiously yet actively exploring its use. For instance, Liang et al.3 report increased use of large language models (LLMs) in CS scholarly articles (up to 17.5%), compared with mathematics articles (up to 6.3%), and Liang et al.2 report that between 6.5% and 16.9% of peer reviews at ICLR 2024, NeurIPS 2023, CoRL 2023, and EMNLP 2023 may have been altered by LLMs beyond minor revisions. Given researchers' increasing adoption of GenAI, it is crucial to establish usage policies that promote fair and ethical practices in scholarly writing and peer review.


Generative AI Policies under the Microscope: How CS Conferences Are Navigating the New Frontier in Scholarly Writing

Nahar, Mahjabin, Lee, Sian, Guillen, Becky, Lee, Dongwon

arXiv.org Artificial Intelligence

While Gen-AI offers significant benefits in content generation and task automation [9], it can also be misused and abused in nefarious applications [7], with more significant risks to long-tail populations and regions [6]. Professionals in fields like journalism and law remain cautious due to concerns over hallucinations and ethical issues, but scholars in Computer Science (CS), the field where Gen-AI originated, appear to be cautiously but actively exploring its use. For instance, Liang et al. [3] report increased use of large language models (LLMs) in CS scholarly articles (up to 17.5%), compared with Mathematics articles (up to 6.3%), and Liang et al. [2] report that between 6.5% and 16.9% of peer reviews at ICLR 2024, NeurIPS 2023, CoRL 2023, and EMNLP 2023 may have been significantly altered by LLMs beyond minor revisions. Considering researchers' increasing adoption of Gen-AI, it is crucial to establish usage guidelines and well-defined policies that promote fair and ethical practices in scholarly writing and peer reviews. Previous research has also examined Gen-AI policies of major publishers such as Elsevier and Springer [5], but there is still a lack of clear understanding of how CS conferences are adapting to this paradigm shift.