Machine Learning Systems Vulnerable to Specific Attacks

#artificialintelligence

The growing number of organizations creating and deploying machine learning solutions raises concerns about their intrinsic security, argues NCC Group in a recent whitepaper. The whitepaper provides a classification of attacks that may be carried out against machine learning systems, with examples based on popular libraries and platforms such as scikit-learn, Keras, PyTorch, and TensorFlow. "Although the various mechanisms that allow this are to some extent documented," the authors write, "we contend that the security implications of this behaviour are not well-understood in the broader ML community." According to NCC Group, ML systems are subject to specific forms of attack in addition to more traditional attacks that may attempt to exploit infrastructure or application bugs, or other kinds of issues. A first vector of risk is associated with the fact that many ML models contain code that is executed when the model is loaded, or when a particular condition is met, such as a given output class being predicted.
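As a minimal illustration of this risk (a hypothetical sketch, not NCC Group's actual proof of concept): the Python pickle format, which underlies many scikit-learn and PyTorch model files, lets an object embed a callable that runs as a side effect of deserialization, so code executes the moment the "model" is loaded.

```python
import pickle

class MaliciousModel:
    """Stand-in for a poisoned model file (illustrative name)."""
    def __reduce__(self):
        # On unpickling, Python calls eval("6 * 7") instead of
        # reconstructing the object; a real attack would invoke
        # os.system or similar here.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousModel())   # the "model file" on disk
loaded = pickle.loads(payload)             # embedded code runs during load
print(loaded)                              # prints 42, proving it executed
```

This is why loading serialized models from untrusted sources is equivalent to running untrusted code; safer formats (e.g. plain weight arrays) carry data only.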


Microsoft Executive Apologizes for Not Understanding How the Internet Works

#artificialintelligence

One day after trolls transformed Microsoft's chatbot Tay into a ditzy, Holocaust-denying monster, the company issued an apology for failing to realize that people on the internet are dicks. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Peter Lee, the corporate vice president for Microsoft Research, with what one imagines was a look of pained bewilderment unique to someone who just learned that 4chan exists. As anyone who followed the debacle will tell you, the most astonishing thing about it was not the revelation that trolls will troll--that's a given--but rather that Microsoft somehow didn't anticipate the very real possibility of rampant trolling. Lee continued: "As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience."


Quantifying and Improving the Robustness of Trust Systems

Wang, Dongxia (Nanyang Technological University)

AAAI Conferences

Trust systems are widely used to facilitate interactions among agents based on trust evaluation. These systems may have robustness issues; that is, they can be affected by various attacks. Designers of trust systems propose methods to defend against these attacks, but they typically verify the robustness of their defense mechanisms (or trust models) only under specific attacks. This raises two problems: first, the robustness of their models is not guaranteed, since not all attacks are considered; second, comparisons between trust models depend on the choice of specific attacks, which introduces bias. We propose to quantify the strength of attacks, and to quantify the robustness of trust systems based on the strength of the attacks they can resist. Our quantification is based on information theory and provides designers of trust systems with a fair measurement of robustness.
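The abstract does not give the metric's formula, but the general idea of an information-theoretic attack-strength measure can be sketched as follows (an illustrative choice, not necessarily the paper's actual measure): treat attack strength as the divergence between the rating evidence an agent sees under honest behavior and under attack, here using KL divergence between two discrete rating distributions.

```python
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) in bits between two discrete distributions."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical distributions over ratings (negative, neutral, positive):
# what honest advisors report about a good seller...
honest = [0.1, 0.2, 0.7]
# ...versus reports skewed by a bad-mouthing attack.
attacked = [0.6, 0.2, 0.2]

# A stronger attack distorts the evidence more, yielding a larger divergence;
# an "attack" identical to honest behavior scores zero.
strength = kl_divergence(attacked, honest)
```

Under such a measure, two trust models can be compared fairly by reporting the maximum attack strength each one withstands, rather than their behavior under a handful of hand-picked attacks.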