Ethical Concerns of Generative AI and Mitigation Strategies: A Systematic Mapping Study

Huang, Yutan, Arora, Chetan, Houng, Wen Cheng, Kanij, Tanjila, Madulgalla, Anuradha, Grundy, John

arXiv.org Artificial Intelligence 

The evolution of Generative AI, particularly Large Language Models (LLMs), has seen remarkable advancements since 2020 with the introduction of models such as ChatGPT and Bard. LLMs have revolutionized tasks such as writing assistance, code generation, and customer support automation by leveraging vast amounts of data to generate coherent and contextually relevant natural language (NL) responses [1, 2]. As a subset of Generative AI (systems designed to create new content), LLMs go beyond traditional AI techniques, which focus primarily on analyzing existing data; they are instead capable of generating text, images, and music that mimic human creativity [3]. This capability is powered by advances in neural network architectures, especially transformers, which enable LLMs to learn the nuances of human language and produce semantically accurate content [4].