LLM Company Policies and Policy Implications in Software Organizations
Khojah, Ranim, Mohamad, Mazen, Erlenhov, Linda, Neto, Francisco Gomes de Oliveira, Leitner, Philipp
arXiv.org Artificial Intelligence
Abstract--The risks associated with adopting large language model (LLM) chatbots in software organizations highlight the need for clear policies. We examine how 11 companies create these policies and the factors that influence them, aiming to help managers safely integrate chatbots into development workflows.

In software organizations, software products are gradually evolving into AI-powered software (AIware) through the use of AI, more specifically large language models (LLMs), in the development process [2]. LLMs are increasingly seen as valuable tools for improving productivity, which has motivated enterprises to adopt them [3]. However, these models introduce risks and concerns that affect the organization, the software engineers, and the product. Integrating LLMs into software development raises challenges related to the quality and ownership of generated content [4], which complicates accountability and can affect product reliability. In addition, interactions with LLMs (e.g., through external APIs) may expose organizations to liability when developers unintentionally transmit sensitive data, resulting in legal repercussions [5].
Oct-9-2025