Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs

Rydin, Filip, Lischka, Attila, Wu, Jiaming, Chehreghani, Morteza Haghir, Kulcsár, Balázs

arXiv.org Artificial Intelligence

Learning-based methods for routing have gained significant attention in recent years, both in single-objective and multi-objective contexts. Yet, existing methods are unsuitable for routing on multigraphs, which feature multiple edges with distinct attributes between node pairs, despite their strong relevance in real-world scenarios. In this paper, we propose two graph neural network-based methods to address multi-objective routing on multigraphs. Our first approach operates directly on the multigraph by autoregressively selecting edges until a tour is completed. The second model, which is more scalable, first simplifies the multigraph via a learned pruning strategy and then performs autoregressive routing on the resulting simple graph.

The field of neural combinatorial optimization has grown significantly in recent years, and vehicle routing problems in particular have attracted much attention (Zhou et al., 2024a). While early works focused on the Traveling Salesman Problem (TSP) (Vinyals et al., 2015; Bello et al., 2017), new learning-based methods for vehicle routing can solve a wide range of problems efficiently and effectively, often surpassing classical state-of-the-art heuristics (Zhou et al., 2024b; Drakulic et al., 2025). Yet, existing methods share one limitation: they assume problems are defined on simple graphs. However, multigraph formulations, featuring several edges between each node pair, become relevant as soon as there are competing edges that cannot be chosen between a priori. Such situations typically occur when edges have more than one feature of interest, such as both travel time and distance. In spite of the high practical relevance of multigraph formulations (Lai et al., 2016; Ben Ticha et al., 2017), current learning-based methods are incapable of handling them for two main reasons. Firstly, many state-of-the-art neural solvers rely on transformers to encode the problem instance.
While these work well in the Euclidean setting (Kool et al., 2018) and, with some modifications, on asymmetric, directed graphs (Kwon et al., 2021), they lack the capability to encode multigraph structures. Secondly, and more importantly, planning routes in multigraphs requires selecting both the node order and the edges to traverse, making current decoding strategies unsuitable. In this work, we aim to bridge the gap between learning-based methods for routing and the accurate network representations given by multigraphs. We focus on the Multi-Objective (MO) setting, as several competing objectives naturally translate to several competing edges between each node pair. Nevertheless, our methods are general and can easily be extended to single-objective settings. Our code will be released publicly upon paper acceptance. To our knowledge, we present the first neural solvers designed for such structures.
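The abstract's second model prunes the multigraph to a simple graph before routing. The paper's pruning strategy is learned; as a hedged illustration of why pruning is natural in the multi-objective setting, the sketch below keeps only the Pareto-nondominated parallel edges between one node pair, assuming each edge carries a (travel time, distance) attribute tuple. The function name and data layout are illustrative, not from the paper.

```python
def pareto_prune(parallel_edges):
    """Keep the Pareto-nondominated edges between one node pair.

    parallel_edges: list of (time, distance) tuples, one per parallel edge.
    An edge is dropped if some other parallel edge is at least as good in
    every attribute and strictly better in at least one (i.e., dominates it).
    """
    kept = []
    for i, edge in enumerate(parallel_edges):
        dominated = any(
            all(o <= v for o, v in zip(other, edge)) and other != edge
            for j, other in enumerate(parallel_edges)
            if j != i
        )
        if not dominated:
            kept.append(edge)
    return kept
```

A dominated edge can never appear on a Pareto-optimal tour, so discarding it is lossless; a learned pruning strategy, by contrast, may also trade away some nondominated edges for scalability.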



On the existence of EFX allocations in multigraphs

Sgouritsa, Alkmini, Sotiriou, Minas Marios

arXiv.org Artificial Intelligence

We study the problem of "fairly" dividing indivisible goods among several agents that have valuation set functions over the sets of goods. As fair we consider the allocations that are envy-free up to any good (EFX), i.e., no agent envies any proper subset of the goods given to any other agent. The existence or not of EFX allocations is a major open problem in Fair Division, and there are only positive results for special cases. [George Christodoulou, Amos Fiat, Elias Koutsoupias, Alkmini Sgouritsa 2023] introduced a restriction on the agents' valuations according to a graph structure: the vertices correspond to agents and the edges to goods, and each vertex/agent has zero marginal value (or, in other words, is indifferent) for the edges/goods that are not adjacent to them. The existence of EFX allocations has been shown for simple graphs with general monotone valuations [George Christodoulou, Amos Fiat, Elias Koutsoupias, Alkmini Sgouritsa 2023], and for multigraphs with restricted additive valuations [Alireza Kaviani, Masoud Seddighin, Amir Mohammad Shahrezaei 2024]. In this work, we push the state of the art further and show that EFX allocations always exist in multigraphs with general monotone valuations if any of the following three conditions holds: either (a) the multigraph is bipartite, or (b) each agent has at most $\lceil \frac{n}{4} \rceil - 1$ neighbors, where $n$ is the total number of agents, or (c) the shortest cycle with non-parallel edges has length at least 6.


Multigraph Message Passing with Bi-Directional Multi-Edge Aggregations

Bilgi, H. Çağrı, Chen, Lydia Y., Atasu, Kubilay

arXiv.org Artificial Intelligence

Graph Neural Networks (GNNs) have seen significant advances in recent years, yet their application to multigraphs, where parallel edges exist between the same pair of nodes, remains under-explored. Standard GNNs, designed for simple graphs, compute node representations by combining all connected edges at once, without distinguishing between edges from different neighbors. There are some GNN architectures proposed specifically for multigraphs, yet these architectures perform only node-level aggregation in their message passing layers, which limits their expressive power. Furthermore, these approaches either lack permutation equivariance when a strict total edge ordering is absent, or fail to preserve the topological structure of the multigraph. To address all these shortcomings, we propose MEGA-GNN, a unified framework for message passing on multigraphs that can effectively perform diverse graph learning tasks. Our approach introduces a two-stage aggregation process in the message passing layers: first, parallel edges are aggregated, followed by a node-level aggregation of messages from distinct neighbors. We show that MEGA-GNN is not only permutation equivariant but also universal given a strict total ordering on the edges. Experiments show that MEGA-GNN significantly outperforms state-of-the-art solutions by up to 13% on Anti-Money Laundering datasets and is on par with their accuracy on real-world phishing classification datasets in terms of minority class F1 score. Graph Neural Networks (GNNs) (Xu et al. (2019); Gilmer et al. (2017); Veličković et al. (2018); Corso et al. (2020); Hamilton et al. (2017)) have become Swiss Army knives for learning on graph-structured data. However, their widespread adoption has primarily focused on simple graphs, where only a single edge can connect a given pair of nodes.
This simplification overlooks a crucial aspect of many real-world scenarios, where multigraphs, graphs that feature parallel edges between the same pair of nodes, are common. For instance, financial transaction networks, communication networks and transportation systems are often modeled as multigraphs, allowing multiple different interactions between the same two nodes.
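The two-stage aggregation described in the abstract can be sketched in plain Python. This is a minimal illustration assuming simple mean- and sum-pooling; the actual MEGA-GNN layers interleave these stages with learned transformations, which are omitted here.

```python
from collections import defaultdict

def two_stage_aggregate(node, edges):
    """Two-stage message aggregation for one target node.

    edges: list of (src, dst, feature_vector) triples; parallel edges are
    simply repeated (src, dst) pairs with different features.
    """
    # Group incoming edge features by source neighbor, so parallel edges
    # from the same neighbor are handled together rather than all at once.
    per_neighbor = defaultdict(list)
    for src, dst, feat in edges:
        if dst == node:
            per_neighbor[src].append(feat)
    # Stage 1: aggregate parallel edges from each neighbor (mean-pooling).
    neighbor_msgs = [
        [sum(vals) / len(feats) for vals in zip(*feats)]
        for feats in per_neighbor.values()
    ]
    # Stage 2: aggregate across distinct neighbors (sum-pooling).
    return [sum(vals) for vals in zip(*neighbor_msgs)]
```

Because stage 1 pools over an unordered set of parallel edges and stage 2 over an unordered set of neighbors, the result is invariant to edge and neighbor orderings, which is the permutation-equivariance property the abstract highlights.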


Leveraging Fixed-Parameter Tractability for Robot Inspection Planning

Mizutani, Yosuke, Salomao, Daniel Coimbra, Crane, Alex, Bentert, Matthias, Drange, Pål Grønås, Reidl, Felix, Kuntz, Alan, Sullivan, Blair D.

arXiv.org Artificial Intelligence

Autonomous robotic inspection, where a robot moves through its environment and inspects points of interest, has applications in industrial settings, structural health monitoring, and medicine. Planning the paths for a robot to safely and efficiently perform such an inspection is an extremely difficult algorithmic challenge. In this work we consider an abstraction of the inspection planning problem which we term Graph Inspection. We give two exact algorithms for this problem, using dynamic programming and integer linear programming. We analyze the performance of these methods, and present multiple approaches to achieve scalability. We demonstrate significant improvement both in path weight and inspection coverage over a state-of-the-art approach on two robotics tasks in simulation, a bridge inspection task by a UAV and a surgical inspection task using a medical robot.


Tensor cumulants for statistical inference on invariant distributions

Kunisky, Dmitriy, Moore, Cristopher, Wein, Alexander S.

arXiv.org Machine Learning

Many problems in high-dimensional statistics appear to have a statistical-computational gap: a range of values of the signal-to-noise ratio where inference is information-theoretically possible, but (conjecturally) computationally intractable. A canonical such problem is Tensor PCA, where we observe a tensor $Y$ consisting of a rank-one signal plus Gaussian noise. Multiple lines of work suggest that Tensor PCA becomes computationally hard at a critical value of the signal's magnitude. In particular, below this transition, no low-degree polynomial algorithm can detect the signal with high probability; conversely, various spectral algorithms are known to succeed above this transition. We unify and extend this work by considering tensor networks, orthogonally invariant polynomials where multiple copies of $Y$ are "contracted" to produce scalars, vectors, matrices, or other tensors. We define a new set of objects, tensor cumulants, which provide an explicit, near-orthogonal basis for invariant polynomials of a given degree. This basis lets us unify and strengthen previous results on low-degree hardness, giving a combinatorial explanation of the hardness transition and of a continuum of subexponential-time algorithms that work below it, and proving tight lower bounds against low-degree polynomials for recovering rather than just detecting the signal. It also lets us analyze a new problem of distinguishing between different tensor ensembles, such as Wigner and Wishart tensors, establishing a sharp computational threshold and giving evidence of a new statistical-computational gap in the Central Limit Theorem for random tensors. Finally, we believe these cumulants are valuable mathematical objects in their own right: they generalize the free cumulants of free probability theory from matrices to tensors, and share many of their properties, including additivity under additive free convolution.


Edge-exchangeable graphs and sparsity

Neural Information Processing Systems

Many popular network models rely on the assumption of (vertex) exchangeability, in which the distribution of the graph is invariant to relabelings of the vertices. However, the Aldous-Hoover theorem guarantees that these graphs are dense or empty with probability one, whereas many real-world graphs are sparse. We present an alternative notion of exchangeability for random graphs, which we call edge exchangeability, in which the distribution of a graph sequence is invariant to the order of the edges. We demonstrate that edge-exchangeable models, unlike models that are traditionally vertex exchangeable, can exhibit sparsity. To do so, we outline a general framework for graph generative models; by contrast to the pioneering work of Caron and Fox [12], models within our framework are stationary across steps of the graph sequence. In particular, our model grows the graph by instantiating more latent atoms of a single random measure as the dataset size increases, rather than adding new atoms to the measure.
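As a hedged illustration of the growth mechanism described above, the sampler below draws each edge's two endpoints i.i.d. from a fixed heavy-tailed distribution over latent vertex atoms; a vertex "appears" only when its atom is first instantiated by an edge. The power-law weights and truncation to a finite atom pool are assumptions for the sketch, not the paper's construction, which uses a general random measure.

```python
import random

def sample_edge_sequence(n_edges, alpha=1.5, n_atoms=10000, seed=0):
    """Sample an edge-exchangeable multigraph as a list of endpoint pairs.

    Endpoints are drawn i.i.d. from weights proportional to k**(-alpha)
    over latent atoms 1..n_atoms, so the edge sequence's distribution is
    invariant to reordering the edges.
    """
    rng = random.Random(seed)
    weights = [k ** -alpha for k in range(1, n_atoms + 1)]
    total = sum(weights)

    def draw_atom():
        # Inverse-CDF sampling over the (unnormalized) atom weights.
        u, acc = rng.random() * total, 0.0
        for k, w in enumerate(weights, start=1):
            acc += w
            if acc >= u:
                return k
        return n_atoms

    return [(draw_atom(), draw_atom()) for _ in range(n_edges)]
```

Heavy-tailed atom weights mean many atoms are touched only a few times, so the number of instantiated vertices keeps growing with the number of edges; this is the route by which edge-exchangeable models can exhibit sparsity that vertex-exchangeable models cannot.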


Simple Multigraph Convolution Networks

Wu, Danyang, Shen, Xinjie, Lu, Jitao, Xu, Jin, Nie, Feiping

arXiv.org Artificial Intelligence

Existing multigraph convolution methods either ignore the cross-view interaction among multiple graphs or induce extremely high computational cost due to standard cross-view polynomial operators. To alleviate this problem, this paper proposes Simple Multigraph Convolution Networks (SMGCN), which first extract consistent cross-view topology from multigraphs, including edge-level and subgraph-level topology, and then perform polynomial expansion based on the raw multigraphs and consistent topologies. In theory, SMGCN utilizes the consistent topologies in polynomial expansion rather than standard cross-view polynomial expansion, which performs credible cross-view spatial message-passing, follows the spectral convolution paradigm, and effectively reduces the complexity of standard polynomial expansion. Experimental results demonstrate that SMGCN achieves state-of-the-art performance on the ACM and DBLP multigraph benchmark datasets.
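The two steps named in the abstract can be sketched at a high level. The snippet below is an assumed, simplified reading: "consistent edge-level topology" is stood in for by the intersection of the views' edge sets, and the polynomial expansion is an unweighted truncated sum of adjacency powers; the paper's actual consistency extraction, subgraph-level topology, and learned filter weights are all omitted.

```python
import numpy as np

def consistent_topology(adjacencies):
    """Hypothetical edge-level consistency: keep edges present in every view."""
    consistent = np.ones_like(adjacencies[0], dtype=float)
    for adj in adjacencies:
        consistent = consistent * (adj > 0)
    return consistent

def polynomial_filter(adjacency, features, order=2):
    """Truncated polynomial expansion: sum_{k=0..order} A^k X (weights omitted)."""
    out = np.zeros_like(features, dtype=float)
    power = np.eye(adjacency.shape[0])
    for _ in range(order + 1):
        out = out + power @ features
        power = power @ adjacency
    return out
```

Running the polynomial filter once over a single consistent topology, rather than over every cross-view product of the views' adjacency matrices, is what keeps the expansion's cost low in this reading.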