GRAPHGINI: Fostering Individual and Group Fairness in Graph Neural Networks
Sirohi, Anuj Kumar, Gupta, Anjali, Ranu, Sayan, Kumar, Sandeep, Bagchi, Amitabha
We address the growing apprehension that graph neural networks (GNNs), in the absence of fairness constraints, might produce biased decisions that disproportionately affect underprivileged groups or individuals. Departing from previous work, we introduce for the first time a method for incorporating the Gini coefficient as a measure of fairness within the GNN framework. Our proposal, GRAPHGINI, pursues the two distinct goals of individual and group fairness in a single system while maintaining high prediction accuracy. GRAPHGINI enforces individual fairness through learnable attention scores that aggregate more information from similar nodes. A heuristic-based maximum Nash social welfare constraint ensures the maximum possible group fairness. Both the individual and group fairness constraints are stated in terms of a differentiable approximation of the Gini coefficient, a contribution that is likely to be of interest even beyond the scope of the problem studied in this paper. Unlike other state-of-the-art methods, GRAPHGINI automatically balances all three optimization objectives of the GNN (utility, individual fairness, and group fairness) and is free of any manual tuning of weight parameters. Extensive experimentation on real-world datasets shows that GRAPHGINI significantly improves individual fairness compared to all currently available state-of-the-art methods while maintaining utility and group equality.
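The abstract does not spell out the differentiable Gini approximation itself. As a minimal sketch (not necessarily the paper's construction), one standard way to smooth the Gini coefficient is to replace the non-differentiable |x_i - x_j| terms in its usual formula with sqrt((x_i - x_j)^2 + eps); the function name `smooth_gini` and the choice of eps below are illustrative.

```python
import torch

def smooth_gini(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Differentiable approximation of the Gini coefficient.

    Standard formula: G = sum_ij |x_i - x_j| / (2 * n * sum_i x_i).
    Here the non-smooth |.| is replaced by sqrt((.)^2 + eps), which is
    differentiable everywhere. This is one common smoothing, offered
    only as an illustration of the idea in the abstract.
    """
    diff = x.unsqueeze(0) - x.unsqueeze(1)      # (n, n) pairwise differences
    abs_diff = torch.sqrt(diff ** 2 + eps)      # smooth surrogate for |.|
    n = x.numel()
    return abs_diff.sum() / (2 * n * x.sum().clamp_min(eps))

# Usage: penalize inequality across per-node quantities in a training loop.
per_node_loss = torch.rand(8, requires_grad=True)
penalty = smooth_gini(per_node_loss)
penalty.backward()   # gradients flow back to every node's quantity
```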
FROCC: Fast Random projection-based One-Class Classification
Bhattacharya, Arindam, Varambally, Sumanth, Bagchi, Amitabha, Bedathur, Srikanta
We present Fast Random projection-based One-Class Classification (FROCC), an extremely efficient method for one-class classification. Our method is based on the simple idea of transforming the training data by projecting it onto a set of random unit vectors, chosen uniformly and independently from the unit sphere, and bounding the regions spanned by the projected data based on its separation. FROCC extends naturally to kernels. We theoretically prove that FROCC generalizes well in the sense that it is stable and has low bias. FROCC achieves up to 3.1 percentage points better ROC, with a 1.2--67.8x speedup in training and test times, over a range of state-of-the-art benchmarks including SVMs and deep-learning-based models for the OCC task.
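As a rough illustration of the core idea, here is a NumPy sketch simplified to a single bounding interval per direction; the published FROCC instead splits each direction into multiple intervals wherever the projections are separated by large gaps, and also supports kernelized projections. The class name `SimpleFROCC` and its parameters are illustrative, not the paper's API.

```python
import numpy as np

class SimpleFROCC:
    """Simplified random-projection one-class classifier.

    fit() projects training data onto random unit directions and records
    the min/max of the projections per direction; predict() flags a point
    as an inlier only if it falls inside the interval in every direction.
    """

    def __init__(self, n_directions: int = 100, seed: int = 0):
        self.n_directions = n_directions
        self.rng = np.random.default_rng(seed)

    def fit(self, X: np.ndarray) -> "SimpleFROCC":
        d = X.shape[1]
        # Uniform directions on the unit sphere: normalize Gaussian vectors.
        W = self.rng.standard_normal((self.n_directions, d))
        self.W = W / np.linalg.norm(W, axis=1, keepdims=True)
        P = X @ self.W.T                        # (n_samples, n_directions)
        self.lo, self.hi = P.min(axis=0), P.max(axis=0)
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        """Return 1 for inliers (inside every interval), 0 otherwise."""
        P = X @ self.W.T
        inside = (P >= self.lo) & (P <= self.hi)
        return inside.all(axis=1).astype(int)
```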
Lecture notes: Efficient approximation of kernel functions
Bagchi, Amitabha
These lecture notes endeavour to collect in one place the mathematical background required to understand the properties of kernels in general and the Random Fourier Features approximation of Rahimi and Recht (NIPS 2007) in particular. We briefly motivate the use of kernels in Machine Learning with the example of the support vector machine. We discuss positive definite and conditionally negative definite kernels in some detail. After a brief discussion of Hilbert spaces, including the Reproducing Kernel Hilbert Space construction, we present Mercer's theorem. We discuss the Random Fourier Features technique and then present, with proofs, scalar and matrix concentration results that help us estimate the error incurred by the technique. These notes are the transcription of 10 lectures given at IIT Delhi between January and April 2020.
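For a concrete handle on the Random Fourier Features construction the notes cover, here is a short NumPy sketch of the Rahimi-Recht map for the Gaussian kernel: z(x) = sqrt(2/D) cos(Wx + b), with rows of W drawn from N(0, sigma^{-2} I) and b uniform on [0, 2 pi], so that z(x) . z(y) approximates exp(-||x - y||^2 / (2 sigma^2)). The function name and default arguments are illustrative.

```python
import numpy as np

def rff_features(X: np.ndarray, D: int = 500, sigma: float = 1.0,
                 seed: int = 0) -> np.ndarray:
    """Random Fourier Features for the Gaussian (RBF) kernel,
    following Rahimi and Recht (NIPS 2007)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(D, d))   # rows ~ N(0, sigma^{-2} I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Sanity check: feature dot products approximate the exact kernel matrix.
X = np.random.default_rng(1).standard_normal((5, 3))
Z = rff_features(X, D=5000)
K_approx = Z @ Z.T
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2.0)
print(np.abs(K_approx - K_exact).max())   # small with high probability
```

The concentration results presented in the notes are exactly what bounds the error printed by this sanity check as D grows.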