Generative AI Policies under the Microscope: How CS Conferences Are Navigating the New Frontier in Scholarly Writing
Nahar, Mahjabin, Lee, Sian, Guillen, Becky, Lee, Dongwon
While Gen-AI offers significant benefits in content generation and task automation [9], it can also be misused and abused in nefarious applications [7], with more significant risks for long-tail populations and regions [6]. Professionals in fields such as journalism and law remain cautious due to concerns over hallucinations and ethical issues, but scholars in Computer Science (CS), the field where Gen-AI originated, appear to be cautiously yet actively exploring its use. For instance, [3] reports increased use of large language models (LLMs) in CS scholarly articles (up to 17.5%) compared to Mathematics articles (up to 6.3%), and [2] reports that between 6.5% and 16.9% of peer reviews at ICLR 2024, NeurIPS 2023, CoRL 2023, and EMNLP 2023 may have been significantly altered by LLMs beyond minor revisions. Given researchers' increasing adoption of Gen-AI, it is crucial to establish usage guidelines and well-defined policies that promote fair and ethical practices in scholarly writing and peer review. Previous research has examined the Gen-AI policies of major publishers such as Elsevier and Springer [5], but there is still no clear understanding of how CS conferences are adapting to this paradigm shift.
arXiv.org Artificial Intelligence
Jan-8-2025