How to improve cybersecurity for artificial intelligence
In January 2017, a group of artificial intelligence researchers gathered at the Asilomar Conference Grounds in California and developed 23 principles for artificial intelligence, which were later dubbed the Asilomar AI Principles. The sixth principle states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." Thousands of people in both academia and the private sector have since signed on to these principles, but, more than three years after the Asilomar conference, many questions remain about what it means to make AI systems safe and secure. Verifying these properties is especially difficult in a rapidly developing field with highly complicated deployments in health care, financial trading, transportation, and translation, among other areas.

Much of the discussion to date has centered on how beneficial machine learning algorithms may be for identifying and defending against computer-based vulnerabilities and threats by automating the detection of and response to attempted attacks.1 Conversely, concerns have been raised that using AI for offensive purposes may make cyberattacks increasingly difficult to block or defend against by enabling malware to adapt rapidly to the restrictions imposed by countermeasures and security controls.2
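As a rough illustration of the kind of automated detection the passage alludes to, the sketch below trains an unsupervised anomaly detector on synthetic network-traffic features and flags an outlier that could trigger an automated response. It is not taken from the article; the feature names, values, and use of scikit-learn's IsolationForest are all illustrative assumptions.

```python
# Illustrative sketch only: unsupervised anomaly detection over hypothetical
# network-traffic features, standing in for the "automated detection of and
# response to attempted attacks" described in the text.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per connection: [bytes sent, duration (s), failed logins]
normal_traffic = rng.normal(loc=[500.0, 2.0, 0.0],
                            scale=[100.0, 0.5, 0.3],
                            size=(1000, 3))
suspicious = np.array([[50_000.0, 0.1, 12.0]])  # exfiltration-like burst with many failed logins

# Fit on presumed-benign traffic; anything scored as an outlier is flagged.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

print(model.predict(suspicious))            # -1 means the event is flagged as anomalous
print(model.decision_function(suspicious))  # lower scores indicate stronger anomalies
```

In a real deployment the flagged events would feed an incident-response pipeline rather than a print statement, and the features would come from actual telemetry; the point here is only to make the "automated detection" idea concrete.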