Cyberattacks against machine learning systems are more common than you think - Microsoft Security
Machine learning (ML) is driving incredible transformations in critical areas such as finance, healthcare, and defense, impacting nearly every aspect of our lives. Yet many businesses, eager to capitalize on advancements in ML, have not scrutinized the security of their ML systems.

Today, Microsoft, together with MITRE and with contributions from 11 organizations including IBM, NVIDIA, and Bosch, is releasing the Adversarial ML Threat Matrix, an industry-focused open framework that empowers security analysts to detect, respond to, and remediate threats against ML systems.

Over the last four years, Microsoft has seen a notable increase in attacks on commercial ML systems. Market reports are also bringing attention to this problem: Gartner's Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that "Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems."
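To make the "adversarial samples" threat concrete, here is a minimal sketch (not from the article, and using a hypothetical toy model) of a fast-gradient-sign-style perturbation flipping the prediction of a linear classifier. For a linear model the gradient of the score with respect to the input is just the weight vector, so the attacker only needs to step in the direction of `sign(w)`:

```python
import numpy as np

# Toy linear classifier: positive score -> class 1, else class 0.
w = np.array([0.5, -0.3, 0.8])   # model weights (hypothetical)
x = np.array([1.0, 2.0, -1.0])   # benign input; score = -0.9 -> class 0

def predict(w, x):
    return int(w @ x > 0)

# FGSM-style perturbation: for a linear model, the gradient of the
# score w.r.t. the input is w itself, so step along sign(w).
epsilon = 2.0
x_adv = x + epsilon * np.sign(w)

print(predict(w, x))      # class 0 on the benign input
print(predict(w, x_adv))  # class 1 after a bounded perturbation
```

Real attacks apply the same idea to deep networks (computing the input gradient by backpropagation) and constrain `epsilon` so the perturbation is imperceptible, which is what makes such samples dangerous in production systems.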
Oct-27-2020, 10:40:43 GMT