Machine Learning Security - Considerations and Assurance


Machine learning security is an emerging concern for companies. Recent research by teams from Google Brain, OpenAI, the US Army Research Laboratory, and top universities has shown how machine learning models can be manipulated into returning results of the attacker's choosing. Image recognition models have been a particularly notable target. Image recognition is one of the stalwarts of machine learning and deep learning, delivering superhuman performance on classification tasks and enabling proofs of concept in autonomous vehicles. Recent, highly successful research exploiting image recognition models, specifically convolutional neural networks (CNNs), is especially troubling for autonomous vehicles: attackers could theoretically take control of a vehicle, or at least cause it to lose control. Advances by Geoffrey Hinton and his team address a few of the key problems plaguing CNNs (more on that below); however, definitive research has not yet established whether they also fix the security problems. I'll outline several security issues that exist in current algorithmic deployments and then walk through some steps to take to provide assurance over algorithmic integrity.
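To make the manipulation concrete, below is a minimal sketch of one well-known technique for crafting adversarial images against CNN classifiers, the fast gradient sign method (FGSM). This is an illustrative example, not the specific attack used in the research cited above; the choice of model, the epsilon value, and the variable names are all assumptions made for the sketch.

```python
# Minimal FGSM sketch: nudge an input image in the direction that increases
# the classifier's loss, producing a small perturbation that can flip the
# predicted label. Assumes PyTorch and torchvision are installed; the model
# (resnet18), epsilon, and the random input below are illustrative only.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(pretrained=True)
model.eval()

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Return a perturbed copy of `image` pushed away from `true_label`."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))            # add a batch dimension
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Step in the sign of the gradient, bounded elementwise by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.detach()

# Hypothetical usage: a 3x224x224 image tensor and its (assumed) label.
image = torch.rand(3, 224, 224)
adversarial = fgsm_attack(model, image, true_label=0)
print((adversarial - image).abs().max())  # perturbation stays within epsilon
```

The point of the sketch is that the perturbation is tiny and often imperceptible to a human, yet it is computed directly from the model's own gradients, which is why image recognition systems deployed without safeguards are exposed to this class of attack.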
