Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and Legal Implications

Musser, Micah, Lohn, Andrew, Dempsey, James X., Spring, Jonathan, Kumar, Ram Shankar Siva, Leong, Brenda, Liaghati, Christina, Martinez, Cindy, Grant, Crystal D., Rohrer, Daniel, Frase, Heather, Elliott, Jonathan, Bansemer, John, Rodriguez, Mikel, Regan, Mitt, Chowdhury, Rumman, Hermanek, Stefan

arXiv.org Artificial Intelligence 

In July 2022, the Center for Security and Emerging Technology (CSET) at Georgetown University and the Program on Geopolitics, Technology, and Governance at the Stanford Cyber Policy Center convened a workshop of experts to examine the relationship between vulnerabilities in artificial intelligence systems and more traditional types of software vulnerabilities. Topics discussed included the extent to which AI vulnerabilities can be handled under standard cybersecurity processes, the barriers currently preventing the accurate sharing of information about AI vulnerabilities, legal issues associated with adversarial attacks on AI systems, and potential areas where government support could improve AI vulnerability management and mitigation.

Attendees at the workshop included industry representatives in both cybersecurity and AI red-teaming roles; academics with experience conducting adversarial machine learning research; legal specialists in cybersecurity regulation, AI liability, and computer-related criminal law; and government representatives with significant AI oversight responsibilities.

This report is meant to accomplish two things. First, it provides a high-level discussion of AI vulnerabilities, including the ways in which they are disanalogous to other types of vulnerabilities, and the current state of affairs regarding information sharing and legal oversight of AI vulnerabilities. Second, it attempts to articulate broad recommendations as endorsed by the majority of participants at the workshop. These recommendations, categorized under four high-level topics, are as follows:

1. Topic: Extending Traditional Cybersecurity for AI Vulnerabilities
   1.1. Recommendation: Organizations building or deploying AI models should use a risk management framework that addresses security throughout the AI system life cycle.
