AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies
Yi Zeng, Kevin Klyman, Andy Zhou, Yu Yang, Minzhou Pan, Ruoxi Jia, Dawn Song, Percy Liang, Bo Li
arXiv.org Artificial Intelligence
We present a comprehensive AI risk taxonomy derived from eight government policies from the European Union, United States, and China, and from 16 company policies worldwide, taking a significant step toward establishing a unified language for generative AI safety evaluation. We identify 314 unique risk categories, organized into a four-tiered taxonomy. At the highest level, this taxonomy encompasses System & Operational Risks, Content Safety Risks, Societal Risks, and Legal & Rights Risks. The taxonomy establishes connections between varied descriptions of and approaches to risk, highlighting the overlaps and discrepancies between public- and private-sector conceptions of risk. By providing this unified framework, we aim to advance AI safety through information sharing across sectors and the promotion of best practices in risk mitigation for generative AI models and systems.
Jun-25-2024
- Country:
  - Asia > China (1.00)
  - Europe (1.00)
  - North America > United States
    - California (0.68)
- Genre:
  - Research Report (0.64)
- Industry:
  - Government
    - Military (1.00)
    - Regional Government
      - Europe Government (1.00)
      - North America Government > United States Government (1.00)
  - Information Technology > Security & Privacy (1.00)
  - Law
    - Civil Rights & Constitutional Law (1.00)
    - Criminal Law (1.00)
    - Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)