Spinelli, Indro
Following the Human Thread in Social Navigation
Scofano, Luca, Sampieri, Alessio, Campari, Tommaso, Sacco, Valentino, Spinelli, Indro, Ballan, Lamberto, Galasso, Fabio
The success of collaboration between humans and robots in shared environments relies on the robot's real-time adaptation to human motion. Specifically, in Social Navigation, the agent should be close enough to assist but ready to back up to let the human move freely, avoiding collisions. Human trajectories emerge as crucial cues in Social Navigation, but they are only partially observable from the robot's egocentric view and computationally complex to process. We propose the first Social Dynamics Adaptation model (SDA), which infers the social dynamics from the robot's state-action history. SDA is a two-stage Reinforcement Learning framework: the first stage learns to encode human trajectories into social dynamics and trains a motion policy conditioned on this encoding, the current state, and the previous action. Here, the trajectories are fully visible, i.e., treated as privileged information. In the second stage, the trained policy operates without direct access to trajectories; instead, the model infers the social dynamics in real time, solely from the history of previous actions and states. Tested on the novel Habitat 3.0 platform, SDA sets a new state-of-the-art (SoA) performance in finding and following humans.
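To make the two-stage idea concrete, here is a minimal PyTorch sketch (module names, dimensions, and the regression objective are illustrative assumptions, not the authors' implementation): a privileged encoder compresses the fully visible human trajectory into a social-dynamics latent, and a second-stage adaptation module learns to reproduce that latent from the robot's own state-action history, so the frozen policy can run without trajectory access.

import torch
import torch.nn as nn

class PrivilegedEncoder(nn.Module):          # stage 1: sees full human trajectories
    def __init__(self, traj_dim=2, z_dim=16):
        super().__init__()
        self.rnn = nn.GRU(traj_dim, z_dim, batch_first=True)
    def forward(self, traj):                 # traj: (B, T, 2) human positions
        _, h = self.rnn(traj)
        return h[-1]                         # social-dynamics latent z

class AdaptationModule(nn.Module):           # stage 2: sees only the robot's history
    def __init__(self, sa_dim=10, z_dim=16):
        super().__init__()
        self.rnn = nn.GRU(sa_dim, z_dim, batch_first=True)
    def forward(self, history):              # history: (B, T, state+action dims)
        _, h = self.rnn(history)
        return h[-1]

class Policy(nn.Module):                     # conditioned on (state, prev action, z)
    def __init__(self, s_dim=8, a_dim=2, z_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(s_dim + a_dim + z_dim, 64),
                                 nn.ReLU(), nn.Linear(64, a_dim))
    def forward(self, s, a_prev, z):
        return self.net(torch.cat([s, a_prev, z], dim=-1))

# Stage 2: with the stage-1 policy frozen, fit the adaptation module so that
# its latent matches the privileged one on collected rollouts.
enc, adapt, policy = PrivilegedEncoder(), AdaptationModule(), Policy()
opt = torch.optim.Adam(adapt.parameters(), lr=1e-3)
traj = torch.randn(4, 20, 2)                 # dummy human trajectories
history = torch.randn(4, 20, 10)             # dummy robot state-action history
loss = nn.functional.mse_loss(adapt(history), enc(traj).detach())
opt.zero_grad(); loss.backward(); opt.step()
action = policy(torch.randn(4, 8), torch.randn(4, 2), adapt(history))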
TopoX: A Suite of Python Packages for Machine Learning on Topological Domains
Hajij, Mustafa, Papillon, Mathilde, Frantzen, Florian, Agerberg, Jens, AlJabea, Ibrahem, Ballester, Ruben, Battiloro, Claudio, Bernárdez, Guillermo, Birdal, Tolga, Brent, Aiden, Chin, Peter, Escalera, Sergio, Fiorellino, Simone, Gardaa, Odin Hoff, Gopalakrishnan, Gurusankar, Govil, Devendra, Hoppe, Josef, Karri, Maneel Reddy, Khouja, Jude, Lecha, Manuel, Livesay, Neal, Meißner, Jan, Mukherjee, Soham, Nikitin, Alexander, Papamarkou, Theodore, Prílepok, Jaro, Ramamurthy, Karthikeyan Natesan, Rosen, Paul, Guzmán-Sáenz, Aldo, Salatiello, Alessandro, Samaga, Shreyas N., Scardapane, Simone, Schaub, Michael T., Scofano, Luca, Spinelli, Indro, Telyatnikov, Lev, Truong, Quang, Walters, Robin, Yang, Maosheng, Zaghen, Olga, Zamzmi, Ghada, Zia, Ali, Miolane, Nina
We introduce TopoX, a Python software suite that provides reliable and user-friendly building blocks for computing and machine learning on topological domains that extend graphs: hypergraphs, simplicial, cellular, path and combinatorial complexes. TopoX consists of three packages: TopoNetX facilitates constructing and computing on these domains, including working with nodes, edges and higher-order cells; TopoEmbedX provides methods to embed topological domains into vector spaces, akin to popular graph-based embedding algorithms such as node2vec; TopoModelX is built on top of PyTorch and offers a comprehensive toolbox of higher-order message passing functions for neural networks on topological domains. The extensively documented and unit-tested source code of TopoX is available under MIT license at https://github.com/pyt-team.
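A minimal usage sketch of the TopoNetX entry point follows (class and method names are recalled from the project's documentation and may differ across versions, so treat the exact signatures as assumptions):

# Build a small simplicial complex and query its structure with TopoNetX.
from toponetx.classes import SimplicialComplex

sc = SimplicialComplex([[1, 2, 3], [2, 3, 4]])  # two triangles sharing an edge
print(sc.shape)                   # cells per rank, e.g. (4, 5, 2) here
B1 = sc.incidence_matrix(rank=1)  # boundary operator mapping edges to nodes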
Adaptive Point Transformer
Baiocchi, Alessandro, Spinelli, Indro, Nicolosi, Alessandro, Scardapane, Simone
The recent surge in 3D data acquisition has spurred the development of geometric deep learning models for point cloud processing, boosted by the remarkable success of transformers in natural language processing. While point cloud transformers (PTs) have recently achieved impressive results, their quadratic scaling with respect to the point cloud size poses a significant scalability challenge for real-world applications. To address this issue, we propose the Adaptive Point Cloud Transformer (AdaPT), a standard PT model augmented with an adaptive token selection mechanism. AdaPT dynamically reduces the number of tokens during inference, enabling efficient processing of large point clouds. Furthermore, we introduce a budget mechanism to flexibly adjust the computational cost of the model at inference time without the need to retrain or fine-tune separate models. Our extensive experimental evaluation on point cloud classification tasks demonstrates that AdaPT significantly reduces computational complexity while maintaining competitive accuracy compared to standard PTs. The code for AdaPT is made publicly available.
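A hedged sketch of what a budgeted token-selection stage could look like (the scoring head and hard top-k rule are illustrative assumptions, not AdaPT's actual mechanism): a per-token score ranks the tokens, and only the fraction allowed by the inference-time budget is passed to the subsequent attention blocks, directly cutting the quadratic cost.

import torch
import torch.nn as nn

class BudgetedTokenSelector(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)       # per-token keep score

    def forward(self, x, budget):            # x: (B, N, dim), budget in (0, 1]
        k = max(1, int(budget * x.size(1)))  # tokens allowed by the budget
        s = self.score(x).squeeze(-1)        # (B, N) scores
        idx = s.topk(k, dim=1).indices       # keep the k highest-scoring tokens
        return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(-1)))

tokens = torch.randn(2, 1024, 64)            # dummy point-cloud tokens
kept = BudgetedTokenSelector(64)(tokens, budget=0.25)
print(kept.shape)                            # torch.Size([2, 256, 64])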
ICML 2023 Topological Deep Learning Challenge: Design and Results
Papillon, Mathilde, Hajij, Mustafa, Jenne, Helen, Mathe, Johan, Myers, Audun, Papamarkou, Theodore, Birdal, Tolga, Dey, Tamal, Doster, Tim, Emerson, Tegan, Gopalakrishnan, Gurusankar, Govil, Devendra, Guzmán-Sáenz, Aldo, Kvinge, Henry, Livesay, Neal, Mukherjee, Soham, Samaga, Shreyas N., Ramamurthy, Karthikeyan Natesan, Karri, Maneel Reddy, Rosen, Paul, Sanborn, Sophia, Walters, Robin, Agerberg, Jens, Barikbin, Sadrodin, Battiloro, Claudio, Bazhenov, Gleb, Bernardez, Guillermo, Brent, Aiden, Escalera, Sergio, Fiorellino, Simone, Gavrilev, Dmitrii, Hassanin, Mohammed, Häusner, Paul, Gardaa, Odin Hoff, Khamis, Abdelwahed, Lecha, Manuel, Magai, German, Malygina, Tatiana, Ballester, Rubén, Nadimpalli, Kalyan, Nikitin, Alexander, Rabinowitz, Abraham, Salatiello, Alessandro, Scardapane, Simone, Scofano, Luca, Singh, Suraj, Sjölund, Jens, Snopov, Pavel, Spinelli, Indro, Telyatnikov, Lev, Testa, Lucia, Yang, Maosheng, Yue, Yixiao, Zaghen, Olga, Zia, Ali, Miolane, Nina
This paper presents the computational challenge on topological deep learning that was hosted within the ICML 2023 Workshop on Topology and Geometry in Machine Learning. The competition asked participants to provide open-source implementations of topological neural networks from the literature by contributing to the Python packages TopoNetX (data processing) and TopoModelX (deep learning). The challenge attracted twenty-eight qualifying submissions over its two-month duration. This paper describes the design of the challenge and summarizes its main findings.
From Latent Graph to Latent Topology Inference: Differentiable Cell Complex Module
Battiloro, Claudio, Spinelli, Indro, Telyatnikov, Lev, Bronstein, Michael, Scardapane, Simone, Di Lorenzo, Paolo
Latent Graph Inference (LGI) relaxed the reliance of Graph Neural Networks (GNNs) on a given graph topology by dynamically learning it. However, most LGI methods assume access to a (noisy, incomplete, improvable, ...) input graph to rewire, and can only learn regular graph topologies. In the wake of the success of Topological Deep Learning (TDL), we study Latent Topology Inference (LTI) for learning higher-order cell complexes (with sparse, non-regular topology) that describe multi-way interactions between data points. To this aim, we introduce the Differentiable Cell Complex Module (DCM), a novel learnable function that computes cell probabilities in the complex to improve the downstream task. We show how to integrate DCM with cell complex message-passing network layers and how to train it end-to-end, thanks to a two-step inference procedure that avoids an exhaustive search across all possible cells in the input, thus maintaining scalability. Tested on several homophilic and heterophilic graph datasets, our model outperforms other state-of-the-art techniques, offering significant improvements especially when an input graph is not provided.
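The two-step procedure can be illustrated with the following hedged sketch (scoring heads, hard thresholds, and all names are assumptions; DCM itself relies on differentiable sampling rather than the hard cut shown here): edges are scored first, and higher-order cells are scored only among the edges that survived, which is what avoids the exhaustive search over all candidate cells.

import itertools
import torch
import torch.nn as nn

class TwoStepCellInference(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.edge_score = nn.Linear(2 * dim, 1)   # scores 1-cells (edges)
        self.tri_score = nn.Linear(3 * dim, 1)    # scores 2-cells (triangles)

    def forward(self, h, tau=0.5):                # h: (N, dim) node embeddings
        n = h.size(0)
        edges = [(i, j) for i, j in itertools.combinations(range(n), 2)
                 if torch.sigmoid(self.edge_score(torch.cat([h[i], h[j]]))) > tau]
        kept = set(edges)
        # Step 2: only triangles whose three edges all survived are candidates.
        tris = [(i, j, k) for i, j, k in itertools.combinations(range(n), 3)
                if {(i, j), (j, k), (i, k)} <= kept
                and torch.sigmoid(self.tri_score(torch.cat([h[i], h[j], h[k]]))) > tau]
        return edges, tris

h = torch.randn(8, 16)                            # dummy node embeddings
edges, tris = TwoStepCellInference(16)(h)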
Combining Stochastic Explainers and Subgraph Neural Networks can Increase Expressivity and Interpretability
Spinelli, Indro, Guerra, Michele, Bianchi, Filippo Maria, Scardapane, Simone
Subgraph-enhanced graph neural networks (SGNNs) can increase the expressive power of the standard message-passing framework. This model family represents each graph as a collection of subgraphs, generally extracted by random sampling or with hand-crafted heuristics. Our key observation is that by selecting "meaningful" subgraphs, besides improving the expressivity of a GNN, it is also possible to obtain interpretable results. For this purpose, we introduce a novel framework that jointly predicts the class of the graph and a set of explanatory sparse subgraphs, which can be analyzed to understand the decision process of the classifier. We compare the performance of our framework against standard subgraph extraction policies, such as random node/edge deletion strategies. The subgraphs produced by our framework achieve comparable accuracy, with the additional benefit of providing explanations.
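As a rough illustration of jointly optimizing classification and sparse subgraph selection (a toy stand-in, not the paper's model): a relaxed Bernoulli mask over the edges gates what the classifier sees, while an added sparsity penalty keeps the selected explanatory subgraph small.

import torch
import torch.nn as nn

edge_logits = nn.Parameter(torch.zeros(100))       # one logit per candidate edge
classifier = nn.Linear(100, 3)                     # stand-in for a GNN classifier

def joint_loss(edge_feats, label, temp=0.5, lam=1e-2):
    u = torch.rand_like(edge_logits)
    noise = torch.log(u) - torch.log1p(-u)         # logistic noise
    mask = torch.sigmoid((edge_logits + noise) / temp)  # relaxed Bernoulli sample
    logits = classifier(edge_feats * mask)         # classify the masked graph
    task = nn.functional.cross_entropy(logits.unsqueeze(0), label)
    return task + lam * mask.mean()                # sparsity keeps the subgraph small

loss = joint_loss(torch.randn(100), torch.tensor([1]))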
Drop Edges and Adapt: a Fairness Enforcing Fine-tuning for Graph Neural Networks
Spinelli, Indro, Bianchini, Riccardo, Scardapane, Simone
The rise of graph representation learning as the primary solution for many different network science tasks led to a surge of interest in the fairness of this family of methods. Link prediction, in particular, has a substantial social impact. However, link prediction algorithms tend to increase the segregation in social networks by disfavoring the links between individuals in specific demographic groups. This paper proposes a novel way to enforce fairness on graph neural networks with a fine-tuning strategy: we Drop the unfair Edges and, simultaneously, Adapt the model's parameters to those modifications (DEA, in short). We introduce two covariance-based constraints designed explicitly for the link prediction task and use them to guide the optimization process responsible for learning the new "fair" adjacency matrix. One novelty of DEA is that the adjacency matrix used in fine-tuning is discrete yet learnable. We demonstrate the effectiveness of our approach on five real-world datasets and show that we can improve both the accuracy and the fairness of the link prediction task. In addition, we present an in-depth ablation study demonstrating that our training algorithm for the adjacency matrix can also be used to improve link prediction performance during training. Finally, we assess the contribution of each component of our framework to show that combining the constraints with the training of the adjacency matrix leads to the best performance.
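The covariance idea can be sketched as follows (a simplified toy penalty in the spirit of the abstract, not DEA's exact constraints): penalize the covariance between a dyadic indicator of whether the two endpoints share the protected attribute and the predicted link score.

import torch

def covariance_penalty(scores, sens_src, sens_dst):
    same_group = (sens_src == sens_dst).float()    # dyadic sensitive indicator
    cov = ((same_group - same_group.mean()) * (scores - scores.mean())).mean()
    return cov.abs()                               # push the covariance toward zero

scores = torch.rand(1000)                          # predicted link probabilities
sens_src = torch.randint(0, 2, (1000,))            # protected attribute, source node
sens_dst = torch.randint(0, 2, (1000,))            # protected attribute, target node
loss_fair = covariance_penalty(scores, sens_src, sens_dst)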
Explainability in subgraphs-enhanced Graph Neural Networks
Guerra, Michele, Spinelli, Indro, Scardapane, Simone, Bianchi, Filippo Maria
Recently, subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of Graph Neural Networks (GNNs), which has been proven to be upper-bounded by the 1-dimensional Weisfeiler-Leman isomorphism test. The new paradigm suggests using subgraphs extracted from the input graph to improve the model's expressiveness, but the additional complexity exacerbates an already challenging problem in GNNs: explaining their predictions. In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs. The proposed explainer accounts for the contributions of all the different subgraphs and can produce a meaningful explanation that humans can interpret. The experiments we performed on both real and synthetic datasets show that our framework successfully explains the decision process of an SGNN on graph classification tasks.
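One simple way to account for per-subgraph contributions, shown here purely as an assumption about the aggregation step rather than the paper's actual procedure, is to average the edge-importance masks produced for each subgraph into a single graph-level explanation:

import torch

num_edges = 12
masks, counts = torch.zeros(num_edges), torch.zeros(num_edges)
for edge_ids, importance in [  # (edges present in a subgraph, their importances)
    (torch.tensor([0, 1, 2]), torch.tensor([0.9, 0.2, 0.7])),
    (torch.tensor([1, 2, 5]), torch.tensor([0.1, 0.8, 0.4])),
]:
    masks.index_add_(0, edge_ids, importance)      # accumulate per-edge scores
    counts.index_add_(0, edge_ids, torch.ones_like(importance))
explanation = masks / counts.clamp(min=1)          # mean importance per edge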
A Meta-Learning Approach for Training Explainable Graph Neural Networks
Spinelli, Indro, Scardapane, Simone, Uncini, Aurelio
In this paper, we investigate the degree of explainability of graph neural networks (GNNs). Existing explainers work by finding global/local subgraphs to explain a prediction, but they are applied after a GNN has already been trained. Here, we propose a meta-learning framework for improving the level of explainability of a GNN directly at training time, by steering the optimization procedure towards what we call "interpretable minima". Our framework (called MATE, MetA-Train to Explain) jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms that explain the model's decisions in a human-friendly way. In particular, we meta-train the model's parameters to quickly minimize the error of an instance-level GNNExplainer trained on-the-fly on randomly sampled nodes. The final internal representation relies upon a set of features that can be "better" understood by an explanation algorithm, e.g., another instance of GNNExplainer. Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process. Experiments on synthetic and real-world datasets for node and graph classification show that we can produce models that are consistently easier to explain by different algorithms. Furthermore, this increase in explainability comes at no cost for the accuracy of the model.
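A hedged, first-order sketch of such a meta-training step follows (the toy inner loss stands in for the GNNExplainer objective, and MATE's actual procedure is richer): adapt the parameters by one gradient step on an explainability loss, evaluate the task loss at the adapted weights, and apply that gradient to the original weights.

import torch

def mate_step(model, task_loss_fn, explainer_loss_fn, data, inner_lr=0.01):
    saved = [p.detach().clone() for p in model.parameters()]  # original weights
    inner = explainer_loss_fn(model, data)         # explainability proxy loss
    grads = torch.autograd.grad(inner, list(model.parameters()))
    with torch.no_grad():                          # simulated inner update
        for p, g in zip(model.parameters(), grads):
            p -= inner_lr * g
    outer = task_loss_fn(model, data)              # task loss at adapted weights
    outer.backward()                               # first-order meta-gradient
    with torch.no_grad():                          # restore weights, keep .grad
        for p, p0 in zip(model.parameters(), saved):
            p.copy_(p0)
    return outer.item()                            # an optimizer.step() would follow

model = torch.nn.Linear(4, 2)                      # stand-in for a GNN
data = (torch.randn(8, 4), torch.randint(0, 2, (8,)))
task = lambda m, d: torch.nn.functional.cross_entropy(m(d[0]), d[1])
expl = lambda m, d: m(d[0]).abs().mean()           # toy stand-in for GNNExplainer
mate_step(model, task, expl, data)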
Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning
Spinelli, Indro, Scardapane, Simone, Hussain, Amir, Uncini, Aurelio
Graph representation learning has become a ubiquitous component in many scenarios, ranging from social network analysis to energy forecasting in smart grids. In several applications, ensuring the fairness of the node (or graph) representations with respect to some protected attributes is crucial for their correct deployment. Yet, fairness in graph deep learning remains under-explored, with few solutions available. In particular, the tendency of similar nodes to cluster on several real-world graphs (i.e., homophily) can dramatically worsen the fairness of these procedures. In this paper, we propose a biased edge dropout algorithm (FairDrop) to counteract homophily and improve fairness in graph representation learning. FairDrop can easily be plugged into many existing algorithms, is efficient, adaptable, and can be combined with other fairness-inducing solutions. After describing the general algorithm, we demonstrate its application on two benchmark tasks, specifically as a random-walk model for producing node embeddings and as a graph convolutional network for link prediction. We show that the proposed algorithm improves the fairness of all models, with only a small or negligible drop in accuracy, and compares favourably with existing state-of-the-art solutions. In an ablation study, we demonstrate that our algorithm can flexibly interpolate between biasing towards fairness and an unbiased edge dropout. Furthermore, to better evaluate the gains, we propose a new dyadic group definition to measure the bias of a link prediction task when paired with group-based fairness metrics. In particular, we extend the metric used to measure the bias in the node embeddings to take into account the graph structure.
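A minimal sketch of the biased dropout rule (the 0.5 baseline and delta parameterization are illustrative assumptions; FairDrop's exact formulation may differ): edges whose endpoints share the sensitive attribute are dropped with elevated probability to counteract homophily, and delta = 0 recovers an unbiased edge dropout.

import torch

def biased_edge_dropout(edge_index, sens, delta=0.25):
    src, dst = edge_index                          # edge list of shape (2, E)
    same = sens[src] == sens[dst]                  # homophilous w.r.t. the attribute
    p_drop = torch.full((src.size(0),), 0.5 - delta)
    p_drop[same] = 0.5 + delta                     # drop homophilous edges more often
    keep = torch.rand(src.size(0)) >= p_drop
    return edge_index[:, keep]

edge_index = torch.randint(0, 50, (2, 200))        # dummy graph
sens = torch.randint(0, 2, (50,))                  # binary protected attribute
fair_edges = biased_edge_dropout(edge_index, sens)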