Can message-passing GNN approximate triangular factorizations of sparse matrices?
Trifonov, Vladislav, Muravleva, Ekaterina, Oseledets, Ivan
–arXiv.org Artificial Intelligence
This work studies Graph Neural Networks (GNNs) for learning sparse matrix preconditioners. While recent works have shown promising results using GNNs to predict incomplete factorizations, we demonstrate that the local nature of message passing creates inherent barriers to capturing the non-local dependencies required for optimal preconditioning. We introduce a new benchmark dataset of matrices where good sparse preconditioners exist but require non-local computations, constructed using both synthetic examples and real-world matrices. Our experimental results show that current GNN architectures struggle to approximate these preconditioners, suggesting the need for new architectural approaches beyond traditional message-passing networks. We provide theoretical analysis and empirical evidence to explain these limitations, with implications for the broader use of GNNs in numerical linear algebra.

Specifically, we show that there exist classes of matrices, starting from simple ones such as tridiagonal matrices arising from discretization of PDEs, where optimal sparse preconditioners exist but exhibit non-local dependencies: changing a single entry in A can significantly affect all entries in L. This means that message-passing GNNs, having a limited receptive field, cannot represent such non-local mappings. To address these limitations, we introduce a new benchmark dataset of matrices where optimal sparse preconditioners are known to exist but require non-local computations. We construct this dataset using both synthetic examples and real-world matrices from the SuiteSparse collection. For synthetic benchmarks, we carefully design tridiagonal matrices where the Cholesky factors depend non-locally on the matrix elements by leveraging properties of rank-1 semiseparable matrices. For real-world problems, we explicitly compute so-called K-optimal preconditioners based on the inverse matrix with sparsity patterns matching …
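The non-locality the abstract describes — a single entry of A influencing entries of the Cholesky factor L far away — can be seen directly in a small NumPy sketch. This is a hypothetical illustration, not the paper's benchmark construction: it uses a plain 1D-Laplacian tridiagonal matrix, where the perturbation's effect on L decays with distance (the paper's rank-1 semiseparable designs make such effects large rather than decaying).

```python
import numpy as np

# Hypothetical example (not from the paper's dataset): an SPD tridiagonal
# matrix from the 1D Laplacian stencil [-1, 2, -1].
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L = np.linalg.cholesky(A)  # lower-triangular factor, A = L @ L.T

# Perturb a single diagonal entry in the top-left corner.
A2 = A.copy()
A2[0, 0] += 1.0
L2 = np.linalg.cholesky(A2)

# The Cholesky recurrence threads the change through every subsequent
# pivot, so even the bottom-right entry of the factor moves: the map
# A -> L is non-local, which a fixed-depth message-passing GNN cannot
# reproduce for nodes outside its receptive field.
delta = abs(L2[n - 1, n - 1] - L[n - 1, n - 1])
print(delta > 0.0)
```

For this matrix the perturbation's footprint shrinks along the diagonal but never vanishes; the paper's point is that for suitably constructed matrices it stays significant everywhere.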
Feb-3-2025