Sengupta, Sailik
To Monitor or to Trust: Observing Robot's Behavior based on a Game-Theoretic Model of Trust
Sengupta, Sailik, Zahedi, Zahra, Kambhampati, Subbarao
In scenarios where a robot generates and executes a plan, the generated plan may be less costly for the robot to execute but incomprehensible to the human. When the human acts as a supervisor and is held accountable for the robot's plan, the human is at higher risk if the incomprehensible behavior is deemed unsafe. In such cases, the robot, which may be unaware of the human's exact expectations, may choose to (1) execute the most constrained plan (i.e., one preferred by all possible supervisors), incurring the added cost of highly sub-optimal behavior whenever the human is observing it, or (2) deviate to a more optimal plan when the human looks away. These problems are amplified in situations where the robot has to fulfill multiple goals and cater to the needs of different human supervisors. In such settings, the robot, being a rational agent, should take any chance it gets to deviate to a lower-cost plan. On the other hand, continuously monitoring the robot's behavior is often difficult for the human because it costs them valuable resources (e.g., time, effort, cognitive load). To reduce the cost of constant monitoring while ensuring the robot follows safe behavior, we model this problem in the game-theoretic framework of trust, where the human is the agent that trusts the robot. We show that the human's trust, which is well-defined when a pure-strategy equilibrium exists, is inversely proportional to the probability the human assigns to observing the robot's behavior. We then show that, with high probability, our game lacks a pure-strategy Nash equilibrium, forcing us to define a notion of trust boundary over the human's mixed strategies in order to guarantee safe behavior by the robot.
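The deterrence condition at the heart of this model can be made concrete with a small calculation. The sketch below is a minimal illustration under assumed numbers, not the paper's exact formulation: given hypothetical costs for the safe (most constrained) plan and the optimal deviation, plus a hypothetical penalty for being caught deviating, it computes the smallest monitoring probability at which a rational robot prefers the safe plan.

```python
# Minimal sketch; all costs below are hypothetical and only illustrate the
# trust-boundary idea, not the paper's exact game.

def monitoring_threshold(c_safe: float, c_opt: float, penalty: float) -> float:
    """Smallest monitoring probability p* that deters deviation.

    The robot's expected cost of deviating under monitoring probability p
    is c_opt + p * penalty, so it prefers the safe plan iff
    c_safe <= c_opt + p * penalty, i.e. p >= (c_safe - c_opt) / penalty.
    """
    assert c_opt < c_safe and penalty > 0
    return (c_safe - c_opt) / penalty

p_star = monitoring_threshold(c_safe=10.0, c_opt=6.0, penalty=20.0)
print(f"observe with probability >= {p_star:.2f} to guarantee safe behavior")
```

On this reading, trust varies inversely with the threshold: the more the human trusts the robot, the lower the observation probability they need to commit to.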
Markov Game Modeling of Moving Target Defense for Strategic Detection of Threats in Cloud Networks
Chowdhary, Ankur, Sengupta, Sailik, Huang, Dijiang, Kambhampati, Subbarao
The processing and storage of critical data in large-scale cloud networks necessitate scalable security solutions. It has been shown that deploying all possible security measures incurs a performance cost by using up valuable computing and networking resources, which are the primary selling points for cloud service providers. Thus, there has been recent interest in developing Moving Target Defense (MTD) mechanisms that optimize the joint objective of maximizing security while minimizing the impact on performance. Often, these techniques model the problem of multi-stage attacks by stealthy adversaries as a single-step attack-detection game, using graph-connectivity measures as a heuristic for performance, thereby (1) losing valuable information that is inherently present in graph-theoretic models designed for large cloud networks, and (2) producing strategies that have asymmetric impacts on performance. In this work, we leverage the knowledge in attack graphs of a cloud network to formulate a zero-sum Markov Game and use the Common Vulnerability Scoring System (CVSS) to derive meaningful utility values for this game. We then show that the optimal strategy for placing detection mechanisms against an adversary is equivalent to computing the mixed min-max equilibrium of the Markov Game. We compare the gains obtained by our method to those of other techniques presently used in cloud network security, thereby showing its effectiveness. Finally, we highlight how the method was used on a small real-world cloud system.
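To make the solution concept concrete, the sketch below shows min-max value iteration for a zero-sum Markov game on toy data. It is a minimal illustration only: the state and action spaces, rewards, and transitions are hypothetical placeholders, whereas the paper derives utilities from CVSS scores over an attack graph. Each backup solves a per-state matrix game with a standard maximin linear program.

```python
# Minimal sketch of min-max value iteration for a zero-sum Markov game.
# R[s] is a hypothetical defender-vs-attacker payoff matrix in state s;
# T[s][a, d] is the successor-state distribution for that action pair.
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(M):
    """Value and maximin strategy for the row player of zero-sum matrix M."""
    m, n = M.shape
    # Variables: row strategy x (m entries) and game value v; minimize -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    # For every column j: v - sum_i x_i * M[i, j] <= 0.
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # x sums to 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return res.x[-1], res.x[:m]

def value_iteration(R, T, gamma=0.9, iters=200):
    S = len(R)
    V = np.zeros(S)
    for _ in range(iters):
        for s in range(S):
            m, n = R[s].shape
            # Backed-up game matrix: immediate payoff plus discounted value.
            Q = R[s] + gamma * np.array(
                [[T[s][a, d] @ V for d in range(n)] for a in range(m)])
            V[s], _ = solve_matrix_game(Q)
    return V
```

The fixed point V gives the defender's value in each state, and the per-state maximin strategies give the randomized placement of detection mechanisms.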
Imagining an Engineer: On GAN-Based Data Augmentation Perpetuating Biases
Jain, Niharika, Manikonda, Lydia, Hernandez, Alberto Olmo, Sengupta, Sailik, Kambhampati, Subbarao
The use of synthetic data generated by Generative Adversarial Networks (GANs) has become a popular method of data augmentation for many applications. While practitioners celebrate this as an economical way to obtain more synthetic data for training downstream classifiers, it is not clear that they recognize the inherent pitfalls of the technique. In this paper, we aim to caution practitioners against deriving any false sense of security against data biases from such augmentation. To drive this point home, we show that, starting with a dataset of head-shots of engineering researchers, GAN-based augmentation "imagines" synthetic engineers, most of whom have masculine features and white skin color (as inferred from a human-subject study conducted on Amazon Mechanical Turk). This demonstrates how biases inherent in the training data are reinforced, and sometimes even amplified, by GAN-based data augmentation; it should serve as a cautionary tale for lay practitioners.
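The amplification effect described above can be quantified with a simple prevalence comparison. The sketch below is purely illustrative: the labels and numbers are made up, standing in for the Mechanical Turk annotations the paper actually uses.

```python
# Minimal, hypothetical sketch of a bias-amplification check: compare an
# attribute's prevalence in the training set with its prevalence in GAN
# samples. Labels here are 1 if an annotator assigned the attribute.

def amplification(train_labels, generated_labels):
    """Ratio > 1 means the majority attribute is even more dominant
    in the synthetic data than in the real training data."""
    p_train = sum(train_labels) / len(train_labels)
    p_gen = sum(generated_labels) / len(generated_labels)
    return p_gen / p_train

# Made-up example: 70% of training head-shots labeled with the attribute,
# 86% of GAN outputs -> the augmentation amplified the skew by ~1.23x.
print(amplification([1] * 70 + [0] * 30, [1] * 86 + [0] * 14))
```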
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
Sengupta, Sailik, Chakraborti, Tathagata, Kambhampati, Subbarao
Recent work on gradient-based attacks and universal perturbations can adversarially modify images to bring the accuracy of state-of-the-art deep-neural-network classifiers down to as low as 10% on popular datasets like MNIST and ImageNet. The design of general defense strategies against a wide range of such attacks remains a challenging problem. In this paper, we draw inspiration from recent advances in cybersecurity and multi-agent systems and propose to use the concept of Moving Target Defense (MTD) to increase the robustness of a set of deep networks against such adversarial attacks. To this end, we formalize and exploit the notion of differential immunity of an ensemble of networks to specific attacks. To classify an input image, a trained network is picked from this set by formulating the interaction between a Defender (who hosts the classification networks) and their (legitimate and malicious) users as a repeated Bayesian Stackelberg Game (BSG). We empirically show that our approach, MTDeep, reduces misclassification on perturbed images for the MNIST and ImageNet datasets while maintaining high classification accuracy on legitimate test images. Lastly, we demonstrate that our framework can be used in conjunction with any existing defense mechanism to provide more resilience to adversarial attacks than those mechanisms provide by themselves.
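For intuition, the sketch below computes a randomized network-selection strategy for the special case of a single, fully adversarial user type, in which the Stackelberg problem reduces to a maximin linear program. The payoff matrix is hypothetical; the paper solves the full repeated Bayesian Stackelberg Game over both legitimate and malicious user types.

```python
# Minimal sketch of randomized network selection in the spirit of MTDeep,
# with a made-up payoff matrix. adv_acc[i, j] is the accuracy of network i
# under attack j; differential immunity shows up as no single column
# degrading every network at once.
import numpy as np
from scipy.optimize import linprog

adv_acc = np.array([[0.10, 0.80, 0.75],
                    [0.85, 0.15, 0.70],
                    [0.78, 0.82, 0.05]])

m, n = adv_acc.shape
# Maximize v s.t. for every attack j: sum_i x_i * adv_acc[i, j] >= v.
c = np.zeros(m + 1); c[-1] = -1.0
A_ub = np.hstack([-adv_acc.T, np.ones((n, 1))])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
              A_eq=np.hstack([np.ones((1, m)), [[0.0]]]), b_eq=[1.0],
              bounds=[(0, None)] * m + [(None, None)])
x, v = res.x[:m], res.x[-1]
print("switching distribution:", x.round(3), "worst-case accuracy:", round(v, 3))
```

At classification time the defender samples a network from x for each input, so no single attack can degrade the whole ensemble.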
RADAR — A Proactive Decision Support System for Human-in-the-Loop Planning
Sengupta, Sailik, Chakraborti, Tathagata, Sreedharan, Sarath, Vadlamudi, Satya Gautam, Kambhampati, Subbarao
Proactive Decision Support (PDS) aims at improving the decision-making experience of human decision makers by enhancing both the quality of the decisions and the ease of making them. In this paper, we ask what role automated decision-making technologies can play in the deliberative process of the human decision maker. Specifically, we focus on expert humans in the loop who share a detailed, if not complete, model of the domain with the assistant but may still be unable to compute plans due to cognitive overload. To this end, we propose a PDS framework, RADAR, based on research in the automated-planning community, that aids the human decision maker in constructing plans. We situate our discussion on principles of interface design laid out in the literature on degrees of automation and their effect on the collaborative decision-making process. At the heart of our design is the principle of naturalistic decision making, which has been shown to be a necessary requirement of such systems; we thus focus on providing suggestions rather than enforcing decisions and executing actions. We demonstrate the different properties of such a system through examples in a fire-fighting domain, where human commanders build response strategies to mitigate a fire outbreak. The paper serves both as a position paper, motivating the requirements of an effective proactive decision support system, and as an emerging application of these ideas to the role of an automated planner in human decision making, on a platform that can prove to be a valuable test bed for research on the same.