Adversarial machine learning: 5 recommendations for app sec teams


In 2016, Microsoft released a prototype chatbot on Twitter. The automated program, dubbed Tay, responded to tweets and incorporated their content into its knowledge base, riffing off the topics to carry on conversations. In less than 24 hours, Microsoft had to yank the program and issue an apology after the software started spewing vile comments, including "I f**king hate feminists," and tweeting that it agreed with Hitler. Online attackers had used crafted comments to pollute the machine-learning algorithm, exploiting a major design flaw: the bot would frequently just repeat whatever was tweeted at it. Tay has not returned.
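The poisoning at the heart of the Tay incident is easy to reproduce in miniature. The sketch below is a hypothetical online learner built with scikit-learn's SGDClassifier and HashingVectorizer, not Microsoft's actual pipeline; it assumes a bot that naively treats every incoming message as trusted training data, which is the design flaw the attackers exploited. An attacker who controls both the inputs and their implied labels can flood the stream and flip the model's behavior.

```python
# Minimal sketch of data poisoning against a naive online learner.
# Assumption: the bot treats every incoming message as trusted
# training data -- the same design flaw that sank Tay.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)
model = SGDClassifier(loss="log_loss", random_state=0)
classes = [0, 1]  # 0 = benign, 1 = abusive

# Seed the model with a small amount of clean, correctly labeled data.
clean_texts = ["have a nice day", "hope you are well",
               "you are awful", "i hate you"]
clean_labels = [0, 0, 1, 1]
model.partial_fit(vectorizer.transform(clean_texts), clean_labels,
                  classes=classes)

# Attackers flood the bot with abusive text that the naive pipeline
# implicitly labels as benign (e.g., because users "liked" it).
poison_texts = ["i hate you and everyone like you"] * 200
poison_labels = [0] * 200  # adversarially supplied "benign" labels
model.partial_fit(vectorizer.transform(poison_texts), poison_labels)

# The poisoned model now scores the abusive phrase as benign.
print(model.predict(
    vectorizer.transform(["i hate you and everyone like you"])))  # -> [0]
```

The lesson for app sec teams is less about the specific learner and more about the trust boundary: any model that updates itself on unvetted user input hands attackers a write path into its decision logic.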
