Vijaykeerthy, Deepak
Automated Testing of AI Models
Haldar, Swagatam, Vijaykeerthy, Deepak, Saha, Diptikalyan
The last decade has seen tremendous progress in AI technology and applications. With such widespread adoption, ensuring the reliability of AI models is crucial. In the past, we took the first step of creating a testing framework called AITEST for metamorphic properties such as fairness and robustness for tabular, time-series, and text classification models. In this paper, we extend the capability of the AITEST tool to include testing techniques for image and speech-to-text models, along with interpretability testing for tabular models. These novel extensions make AITEST a comprehensive framework for testing AI models.
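As a toy illustration of the kind of metamorphic fairness property such a framework checks (this is not the AITEST API; the models, feature names, and thresholds below are hypothetical), a test can assert that flipping a protected attribute never changes the prediction:

```python
import random

def fair_model(features):
    # Hypothetical classifier under test: approves when income exceeds a
    # threshold and ignores the protected attribute entirely.
    return 1 if features["income"] > 50_000 else 0

def biased_model(features):
    # Hypothetical classifier that (incorrectly) conditions on gender.
    return 1 if features["income"] > 50_000 and features["gender"] == "M" else 0

def metamorphic_fairness_test(model, n_samples=100, seed=0):
    """Generate inputs, flip the protected attribute, and collect every
    pair on which the model's prediction changes."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_samples):
        x = {"income": rng.uniform(10_000, 100_000),
             "gender": rng.choice(["F", "M"])}
        x_flipped = dict(x, gender="M" if x["gender"] == "F" else "F")
        if model(x) != model(x_flipped):
            failures.append((x, x_flipped))
    return failures
```

Robustness properties can be tested with the same harness by swapping the metamorphic relation: invariance under small input perturbations instead of protected-attribute flips.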
Verifying Individual Fairness in Machine Learning Models
John, Philips George, Vijaykeerthy, Deepak, Saha, Diptikalyan
We consider the problem of whether a given decision model, working with structured data, has individual fairness. Following the work of Dwork et al., a model is individually biased (or unfair) if there is a pair of valid inputs which are close to each other (according to an appropriate metric) but are treated differently by the model (different class labels, or a large difference in output), and it is unbiased (or fair) if no such pair exists. Our objective is to construct verifiers for proving individual fairness of a given model, and we do so by considering appropriate relaxations of the problem. We construct verifiers which are sound but not complete for linear classifiers and for kernelized polynomial and radial basis function classifiers. We also report experimental results from evaluating our proposed algorithms on publicly available datasets.
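For the linear case, a simple pointwise certificate follows from Cauchy-Schwarz: if the margin of sign(w·x + b) at a point exceeds delta times the norm of w, no input within L2 distance delta of that point can receive a different label. A minimal sketch under that assumption (the paper's verifiers are more general, handling relaxations and kernelized classifiers; this helper is illustrative only):

```python
import math

def certify_pointwise_fair(w, b, x, delta):
    """Soundly certify that no input within L2 distance `delta` of `x`
    gets a different label from the linear classifier sign(w.x + b).
    By Cauchy-Schwarz, |w.(x' - x)| <= delta * ||w||, so a margin larger
    than delta * ||w|| rules out any label flip in the neighborhood."""
    margin = abs(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return margin > delta * math.sqrt(sum(wi * wi for wi in w))
```

A verifier built from such certificates is sound: whenever it certifies a point, no close counterpart is treated differently by the model.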
Exploring the Hyperparameter Landscape of Adversarial Robustness
Duesterwald, Evelyn, Murthi, Anupama, Venkataraman, Ganesh, Sinn, Mathieu, Vijaykeerthy, Deepak
Adversarial training shows promise as an approach for training models that are robust to adversarial perturbations. In this paper, we explore some of the practical challenges of adversarial training. We present a sensitivity analysis illustrating that the effectiveness of adversarial training hinges on the settings of a few salient hyperparameters. We show that the robustness surface that emerges across these salient parameters can be surprisingly complex, and that therefore no effective one-size-fits-all parameter settings exist. We then demonstrate that we can use the same salient hyperparameters as tuning knobs to navigate the tension that can arise between robustness and accuracy. Based on these findings, we present a practical approach that leverages hyperparameter optimization techniques to tune adversarial training, maximizing robustness while keeping the loss in accuracy within a defined budget.
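The budget-constrained tuning loop described above can be sketched as follows. This is a minimal illustration, not the paper's optimizer: `toy_evaluate` stands in for a full adversarial-training-plus-evaluation run at a given perturbation budget, and all names and numbers are hypothetical.

```python
def tune_for_robustness(configs, evaluate, clean_baseline, accuracy_budget):
    """Return the hyperparameter setting that maximizes robust accuracy,
    subject to the drop in clean accuracy (vs. the undefended baseline)
    staying within accuracy_budget. `evaluate(cfg)` returns a pair
    (clean accuracy, robust accuracy)."""
    best_cfg, best_robust = None, -1.0
    for cfg in configs:
        clean_acc, robust_acc = evaluate(cfg)
        if clean_baseline - clean_acc <= accuracy_budget and robust_acc > best_robust:
            best_cfg, best_robust = cfg, robust_acc
    return best_cfg, best_robust

def toy_evaluate(epsilon):
    # Hypothetical results of adversarial training at perturbation budget
    # epsilon: larger epsilon buys robustness but costs clean accuracy.
    table = {0.01: (0.95, 0.20), 0.1: (0.92, 0.55), 0.3: (0.80, 0.70)}
    return table[epsilon]
```

Here a clean baseline of 0.96 with a 0.05 accuracy budget excludes epsilon = 0.3 and selects epsilon = 0.1, the most robust setting still inside the budget.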
Explaining Deep Learning Models using Causal Inference
Narendra, Tanmayee, Sankaran, Anush, Vijaykeerthy, Deepak, Mani, Senthil
Although deep learning models have been successfully applied to a variety of tasks, their millions of parameters make them increasingly opaque and complex. In order to establish trust for their widespread commercial use, it is important to formalize a principled framework to reason over these models. In this work, we use ideas from causal inference to describe a general framework for reasoning over CNN models. Specifically, we build a Structural Causal Model (SCM) as an abstraction over a specific aspect of the CNN. We also formulate a method to quantitatively rank the filters of a convolutional layer according to their counterfactual importance. We illustrate our approach with popular CNN architectures such as LeNet5, VGG19, and ResNet32.
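Counterfactual filter ranking can be sketched as an ablation loop: intervene by zeroing each filter's activations (the intervention do(filter_k = 0) on the causal abstraction) and rank filters by the resulting drop in a model score. This is a toy stand-in for the paper's SCM-based method; `toy_score` and its numbers are hypothetical.

```python
def rank_filters_by_counterfactual_importance(score_fn, n_filters):
    """Rank conv filters by counterfactual importance: baseline score minus
    the score obtained with that filter's activations forced to zero.
    `score_fn(ablated)` returns the model score with the filters in the
    set `ablated` zeroed out."""
    baseline = score_fn(frozenset())
    importance = {k: baseline - score_fn(frozenset({k})) for k in range(n_filters)}
    return sorted(importance, key=importance.get, reverse=True)

def toy_score(ablated):
    # Hypothetical accuracy of a 3-filter layer under ablation: filter 1
    # matters most, filter 2 a little, filter 0 not at all.
    drop = {0: 0.0, 1: 0.3, 2: 0.1}
    return 0.9 - sum(drop[k] for k in ablated)
```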
Hardening Deep Neural Networks via Adversarial Model Cascades
Vijaykeerthy, Deepak, Suri, Anshuman, Mehta, Sameep, Kumaraguru, Ponnurangam
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples - malicious inputs crafted by an adversary to induce the trained model to produce erroneous outputs. This vulnerability has inspired a great deal of research on how to secure neural networks against such attacks. Although existing techniques increase the robustness of models against white-box attacks, they are ineffective against black-box attacks. To address the challenge of black-box adversarial attacks, we propose Adversarial Model Cascades (AMC), a framework that performs better than existing state-of-the-art defenses in both black-box and white-box settings and is easy to integrate into existing setups. Our approach trains a cascade of models by injecting images crafted from an already defended proxy model to improve the robustness of the target models against adversarial attacks. AMC provides an increase in robustness of 8.175% & 7.115% for white-box attacks and 30.218% & 4.717% for black-box attacks, in comparison to defensive distillation and adversarial hardening. To the best of our knowledge, ours is the first work that aims to provide a defense mechanism that improves robustness against multiple adversarial attacks simultaneously.
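A cascade training loop of this general shape can be sketched as follows. This is only a schematic: the hypothetical `train` and `craft_adversarial` callables stand in for full model training and attack generation, and the paper's actual method crafts examples from a separately defended proxy model rather than the current cascade member.

```python
def train_model_cascade(train, craft_adversarial, data, rounds=3):
    """Train a cascade of models: each round crafts adversarial examples
    against the current model, augments the training data with them, and
    retrains, so successive models are hardened against the previous
    round's attacks."""
    model = train(data)
    for _ in range(rounds):
        adversarial = [craft_adversarial(model, x, y) for x, y in data]
        data = data + adversarial
        model = train(data)
    return model
```

Each round doubles the training set here; a practical implementation would cap or subsample the injected examples.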
Debugging Machine Learning Tasks
Chakarov, Aleksandar, Nori, Aditya, Rajamani, Sriram, Sen, Shayak, Vijaykeerthy, Deepak
Unlike traditional programs (such as operating systems or word processors), which have large amounts of code, machine learning tasks use programs with relatively small amounts of code (written in machine learning libraries) but voluminous amounts of data. Just as developers of traditional programs debug errors in their code, developers of machine learning tasks debug and fix errors in their data. However, algorithms and tools for debugging and fixing errors in data are less common than their counterparts for detecting and fixing errors in code. In this paper, we consider classification tasks where errors in training data lead to misclassifications of test points, and we propose an automated method to find the root causes of such misclassifications. Our root cause analysis is based on Pearl's theory of causation and uses Pearl's PS (Probability of Sufficiency) as a scoring metric. Our implementation, Psi, encodes the computation of PS as a probabilistic program, and uses recent work on probabilistic programs and transformations on probabilistic programs (along with gray-box models of machine learning algorithms) to efficiently compute PS. Psi is able to identify root causes of data errors in interesting data sets.
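In this setting, the Probability of Sufficiency for a candidate root cause is, roughly, the probability that applying a fix to the suspect training data causes the misclassified test point to be classified correctly. A naive Monte Carlo sketch of that quantity (Psi instead computes PS efficiently via probabilistic-program transformations; the `pipeline` callable here is hypothetical):

```python
import random

def probability_of_sufficiency(pipeline, fix, n_trials=200, seed=0):
    """Monte Carlo estimate of PS for a candidate root cause: the fraction
    of training runs in which applying `fix` to the data leaves the
    previously misclassified test point correctly classified.
    `pipeline(rng, fix)` retrains with the fix applied (or without it, if
    `fix` is falsy) and returns True iff the test point is now correct."""
    rng = random.Random(seed)
    return sum(pipeline(rng, fix) for _ in range(n_trials)) / n_trials
```

A candidate data error with high PS is a strong root-cause explanation: fixing it would very likely repair the observed misclassification.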