Graph Anomaly Detection via Adaptive Test-time Representation Learning across Out-of-Distribution Domains
Pirhayati, Delaram, Silva, Arlei
Graph Anomaly Detection (GAD) has demonstrated great effectiveness in identifying unusual patterns within graph-structured data. However, labeled anomalies are often scarce in emerging applications, and existing supervised GAD approaches are either ineffective or not applicable when transferred across graph domains due to distribution shifts and heterogeneous feature spaces. To address these challenges, we present AdaGraph-T3, a novel test-time training framework for cross-domain GAD. AdaGraph-T3 combines supervised and self-supervised learning during training and adapts to a new domain at test time using only self-supervised learning, leveraging a homophily-based affinity score that captures domain-invariant properties of anomalies. Our framework introduces four key innovations for cross-domain GAD: an effective self-supervision scheme, an attention-based mechanism that dynamically learns edge importance weights during message passing, domain-specific encoders for handling heterogeneous features, and class-aware regularization to address class imbalance. Experiments across multiple cross-domain settings demonstrate that AdaGraph-T3 significantly outperforms existing approaches, achieving average improvements of over 6.6% in AUROC and 7.9% in AUPRC compared to the best competing model.
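To make the homophily-based affinity score more concrete, below is a minimal sketch of one plausible instantiation (the exact formulation used by AdaGraph-T3 may differ): each node's affinity is the average cosine similarity between its embedding and those of its neighbors, and low affinity can serve as a label-free, domain-invariant anomaly signal at test time. The function name and tensor layout are illustrative assumptions.

    # Hypothetical sketch of a homophily-based affinity score; the exact
    # formulation in AdaGraph-T3 may differ.
    import torch
    import torch.nn.functional as F

    def homophily_affinity(z: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        """Mean cosine similarity between each node and its neighbors.

        z          : (num_nodes, dim) node embeddings from a domain-specific encoder.
        edge_index : (2, num_edges) COO edge list (assumed symmetric).
        Returns a (num_nodes,) affinity score; low values suggest anomalies.
        """
        src, dst = edge_index
        z = F.normalize(z, dim=-1)
        sim = (z[src] * z[dst]).sum(dim=-1)                          # per-edge cosine similarity
        affinity = torch.zeros(z.size(0), device=z.device).index_add_(0, src, sim)
        deg = torch.zeros(z.size(0), device=z.device).index_add_(0, src, torch.ones_like(sim))
        return affinity / deg.clamp(min=1)                           # average over each node's neighbors

    # A test-time self-supervised objective could then be, for example,
    # loss = -homophily_affinity(z, edge_index).mean()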
Attribute-Enhanced Similarity Ranking for Sparse Link Prediction
Mattos, João, Huang, Zexi, Kosan, Mert, Singh, Ambuj, Silva, Arlei
Link prediction is a fundamental problem in graph data. In its most realistic setting, the problem consists of predicting missing or future links between random pairs of nodes from the set of disconnected pairs. Graph Neural Networks (GNNs) have become the predominant framework for link prediction. GNN-based methods treat link prediction as a binary classification problem and handle the extreme class imbalance -- real graphs are very sparse -- by sampling (uniformly at random) a balanced number of disconnected pairs not only for training but also for evaluation. However, we show that the reported performance of GNNs for link prediction in the balanced setting does not translate to the more realistic imbalanced setting and that simpler topology-based approaches are often better at handling sparsity. These findings motivate Gelato, a similarity-based link-prediction method that applies (1) graph learning based on node attributes to enhance a topological heuristic, (2) a ranking loss for addressing class imbalance, and (3) a negative sampling scheme that efficiently selects hard training pairs via graph partitioning. Experiments show that Gelato outperforms existing GNN-based alternatives.
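As a rough illustration of ingredient (3), the sketch below samples hard negatives by partitioning the graph into communities and drawing disconnected pairs from within the same partition, since such pairs are topologically close and therefore harder to separate than uniformly sampled ones. The use of Louvain communities and the function signature are assumptions for illustration; Gelato's actual partitioning scheme may differ.

    # Illustrative partition-based hard-negative sampling; the exact scheme
    # used by Gelato may differ (Louvain is assumed here for concreteness).
    import random
    import networkx as nx
    from networkx.algorithms import community

    def hard_negative_pairs(G: nx.Graph, num_samples: int, seed: int = 0):
        """Sample disconnected node pairs that fall inside the same partition."""
        rng = random.Random(seed)
        parts = [sorted(p) for p in community.louvain_communities(G, seed=seed) if len(p) > 1]
        negatives = []
        while len(negatives) < num_samples:
            part = rng.choice(parts)
            u, v = rng.sample(part, 2)
            if not G.has_edge(u, v):                 # keep only disconnected (hard) pairs
                negatives.append((u, v))
        return negatives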
Sensor Placement for Learning in Flow Networks
Burudgunte, Arnav, Silva, Arlei
Large infrastructure networks (e.g., for transportation and power distribution) require constant monitoring for failures, congestion, and other adversarial events. However, assigning a sensor to every link in the network is often infeasible due to placement and maintenance costs. Instead, sensors can be placed on only a few key links, and machine learning algorithms can be leveraged to infer missing measurements (e.g., traffic counts, power flows) across the network. This paper investigates the sensor placement problem for flow networks. We first formalize the problem under a flow conservation assumption and show that optimally placing a fixed set of sensors is NP-hard. Next, we propose an efficient and adaptive greedy heuristic for sensor placement that scales to large networks. Our experiments, using datasets from real-world application domains, show that the proposed approach enables more accurate inference than existing alternatives from the literature. We also demonstrate that incorporating even imperfect or incomplete ground-truth estimates can substantially reduce the prediction error, especially when only a small number of sensors is available.
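The following is a hedged sketch of what such a greedy placement loop can look like; the gain function (here an abstract inference_error callable) stands in for the paper's flow-inference model and is an assumption, not the actual method.

    # Illustrative greedy sensor placement; `inference_error` is an assumed
    # stand-in for the paper's flow-inference model.
    def greedy_placement(edges, budget, inference_error):
        """Select `budget` edges, each time adding the sensor whose observation
        most reduces the error of inferring the remaining (unobserved) flows."""
        placed = set()
        for _ in range(budget):
            best_edge, best_err = None, float("inf")
            for e in edges:
                if e in placed:
                    continue
                err = inference_error(placed | {e})   # error with the candidate sensor added
                if err < best_err:
                    best_edge, best_err = e, err
            if best_edge is None:                     # no candidates left
                break
            placed.add(best_edge)
        return placed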
Better Fair than Sorry: Adversarial Missing Data Imputation for Fair GNNs
Lina, Debolina Halder, Silva, Arlei
With the increasing popularity of machine learning models for high-stakes decision-making, there is a growing consensus that (1) these models carry implicit biases [1, 2] and (2) these biases should be addressed to improve the fairness of algorithmic decisions [3, 4]. The disparate treatment of African Americans and women by such models has been illustrated in the well-documented COMPAS [1] and Apple credit card [2] cases. While there has been extensive research on fair algorithms and fair machine learning in recent years, the proposed solutions have mostly disregarded important challenges that arise in real-world settings. Existing work in fair machine learning has focused on tabular, image, and text data [5, 6]. However, in several applications, data can be naturally modeled as graphs (or networks) representing different objects, their relationships, and attributes [7, 8]. Graph Neural Networks (GNNs) have achieved state-of-the-art results in many graph machine learning tasks, including node classification, link prediction, and graph classification [9, 10, 11, 12]. For instance, in the case of link prediction in a professional network such as LinkedIn, which is a key recruiting and networking tool, link recommendations should not be biased against protected groups [13, 14, 15, 16, 17]. However, guaranteeing fairness on graph data is challenging due to well-known correlations within the network caused by homophily and influence.
Robust Ante-hoc Graph Explainer using Bilevel Optimization
Kosan, Mert, Silva, Arlei, Singh, Ambuj
Explaining the decisions made by machine learning models for high-stakes applications is critical for increasing transparency and for guiding improvements to these decisions. This is particularly true for models on graphs, where decisions often depend on complex patterns combining rich structural and attribute data. While recent work has focused on designing so-called post-hoc explainers, the question of what constitutes a good explanation remains open. One intuitive property is that explanations should be sufficiently informative to enable humans to approximately reproduce the predictions given the data. However, we show that post-hoc explainers do not achieve this goal, as their explanations are highly dependent on fixed model parameters (e.g., learned GNN weights). To address this challenge, this paper proposes RAGE (Robust Ante-hoc Graph Explainer), a novel and flexible ante-hoc explainer designed to discover explanations for a broad class of graph neural networks using bilevel optimization. RAGE can efficiently identify explanations that contain the full information needed for prediction while still enabling humans to rank these explanations based on their influence. Our experiments, based on graph classification and regression, show that RAGE explanations are more robust than those of existing post-hoc and ante-hoc approaches and often achieve similar or better accuracy than state-of-the-art models.
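The sketch below illustrates the bilevel structure in its simplest form, assuming an edge-mask explainer, a GNN that accepts edge weights, and a single inner gradient step; RAGE's actual architecture and optimization are more elaborate.

    # Simplified bilevel training step; the explainer/GNN interfaces and the
    # single inner step are assumptions for illustration, not RAGE's exact design.
    import torch
    import torch.nn.functional as F
    from torch.func import functional_call

    def bilevel_step(explainer, gnn, batch, inner_lr=1e-2):
        """Outer problem: update the explainer so that a GNN trained on its
        explanations predicts well. Inner problem: one GNN gradient step on the
        masked (explained) graph."""
        x, edge_index, y = batch
        edge_mask = explainer(x, edge_index)          # ante-hoc explanation: edge importance weights

        # Inner step: adapt the GNN weights on the explained graph.
        names, params = zip(*gnn.named_parameters())
        inner_loss = F.cross_entropy(gnn(x, edge_index, edge_mask), y)
        grads = torch.autograd.grad(inner_loss, params, create_graph=True)
        adapted = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

        # Outer step: evaluate the adapted GNN and backpropagate into the explainer.
        preds = functional_call(gnn, adapted, (x, edge_index, edge_mask))
        return F.cross_entropy(preds, y)              # minimized w.r.t. the explainer's parameters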
Link Prediction without Graph Neural Networks
Huang, Zexi, Kosan, Mert, Silva, Arlei, Singh, Ambuj
Link prediction, which consists of predicting edges based on graph features, is a fundamental task in many graph applications. As with several related problems, Graph Neural Networks (GNNs), which are based on an attribute-centric message-passing paradigm, have become the predominant framework for link prediction. GNNs have consistently outperformed traditional topology-based heuristics, but what contributes to their performance? Are there simpler approaches that achieve comparable or better results? To answer these questions, we first identify important limitations in how GNN-based link prediction methods handle the intrinsic class imbalance of the problem -- due to graph sparsity -- in their training and evaluation. We then propose Gelato, a novel topology-centric framework that applies a topological heuristic to a graph enhanced with attribute information via graph learning. Our model is trained end-to-end with an N-pair loss on an unbiased training set to address class imbalance. Experiments show that Gelato is 145% more accurate, trains 11 times faster, infers 6,000 times faster, and has fewer than half the trainable parameters compared to state-of-the-art GNNs for link prediction.
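Below is a hedged sketch of the two main ingredients under a simple instantiation: a common-neighbors-style heuristic computed on an attribute-enhanced weighted adjacency, and an N-pair-style ranking loss that scores true edges above disconnected pairs. Gelato's actual heuristic and loss details may differ.

    # Illustrative components; Gelato's exact heuristic and loss may differ.
    import torch

    def weighted_common_neighbors(A_hat: torch.Tensor, pairs: torch.Tensor) -> torch.Tensor:
        """Topological heuristic on an attribute-enhanced weighted adjacency A_hat
        (dense, num_nodes x num_nodes): score(u, v) = sum_w A_hat[u, w] * A_hat[w, v]."""
        u, v = pairs[:, 0], pairs[:, 1]
        return (A_hat[u] * A_hat[v]).sum(dim=1)

    def npair_link_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
        """pos_scores: (B,) scores of true edges; neg_scores: (B, K) scores of
        disconnected pairs. Each positive is ranked against its K negatives."""
        diff = neg_scores - pos_scores.unsqueeze(1)   # (B, K): negative minus positive score
        return torch.log1p(torch.exp(diff).sum(dim=1)).mean()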
Event Detection on Dynamic Graphs
Kosan, Mert, Silva, Arlei, Medya, Sourav, Uzzi, Brian, Singh, Ambuj
Event detection is a critical task for timely decision-making in graph analytics applications. Despite recent progress in deep learning on graphs, event detection on dynamic graphs presents particular challenges to existing architectures. Real-life events are often associated with sudden deviations from the normal behavior of the graph. However, existing approaches for dynamic node embedding are unable to capture the graph-level dynamics related to events. In this paper, we propose DyGED, a simple yet novel deep learning model for event detection on dynamic graphs. DyGED learns correlations between the graph macro dynamics -- i.e., a sequence of graph-level representations -- and labeled events. Moreover, our approach combines structural and temporal self-attention mechanisms to effectively account for application-specific node and time importance. Our experimental evaluation, using a representative set of datasets, demonstrates that DyGED outperforms competing solutions in terms of event detection accuracy by up to 8.5% while being more scalable than the top alternatives. We also present case studies illustrating key features of our model.
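As a rough sketch of the macro-dynamics idea: pool node embeddings into one graph-level vector per snapshot, apply temporal self-attention over the resulting sequence, and classify whether an event occurs at the latest snapshot. The pooling and attention modules below are simplified assumptions, not DyGED's exact components.

    # Simplified graph-level event detector in the spirit of DyGED; the pooling
    # and attention components are assumptions for illustration.
    import torch
    import torch.nn as nn

    class MacroEventDetector(nn.Module):
        def __init__(self, node_dim: int, hidden: int = 64, heads: int = 4):
            super().__init__()
            self.readout = nn.Linear(node_dim, hidden)                 # graph-level readout
            self.temporal = nn.MultiheadAttention(hidden, heads, batch_first=True)
            self.classifier = nn.Linear(hidden, 2)                     # event / no event

        def forward(self, snapshots):
            # snapshots: list of (num_nodes_t, node_dim) node-embedding matrices, one
            # per time step; their pooled vectors form the graph macro dynamics.
            g = torch.stack([self.readout(h).mean(dim=0) for h in snapshots]).unsqueeze(0)
            g, _ = self.temporal(g, g, g)                              # temporal self-attention
            return self.classifier(g[:, -1])                           # prediction at the last snapshot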
Feature-based Individual Fairness in k-Clustering
Kar, Debajyoti, Kosan, Mert, Mandal, Debmalya, Medya, Sourav, Silva, Arlei, Dey, Palash, Sanyal, Swagato
Ensuring fairness in machine learning algorithms is a challenging and essential task. We consider the problem of clustering a set of points while satisfying fairness constraints. While there have been several attempts to capture group fairness in the $k$-clustering problem, fairness at an individual level is relatively less explored. We introduce a new notion of individual fairness in $k$-clustering based on features that are not necessarily used for clustering. We show that this problem is NP-hard and does not admit a constant-factor approximation. Therefore, we design a randomized algorithm that guarantees approximation both in terms of minimizing the clustering distance objective and individual fairness, under natural restrictions on the distance metric and fairness constraints. Finally, our experimental results against six competing baselines validate that our algorithm produces clusters that are, on average, 12.5% individually fairer than the fairest baseline, while also being, on average, 34.5% less costly in terms of the clustering objective than the best baseline.
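As a rough diagnostic of what feature-based individual fairness can look like in practice (this is an assumed proxy, not the paper's definition): for each point, measure how many of its nearest neighbors in the fairness feature space, which may differ from the clustering space, end up in the same cluster.

    # Assumed proxy for feature-based individual fairness; NOT the exact
    # fairness notion defined in the paper.
    import numpy as np

    def neighbor_agreement(fair_features: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
        """Average fraction of each point's k nearest neighbors (in the fairness
        feature space) that share its cluster; higher suggests fairer clusters."""
        dists = np.linalg.norm(fair_features[:, None] - fair_features[None, :], axis=-1)
        np.fill_diagonal(dists, np.inf)                 # ignore self-distances
        nn_idx = np.argsort(dists, axis=1)[:, :k]
        return float((labels[nn_idx] == labels[:, None]).mean())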