Universal Invariant and Equivariant Graph Neural Networks

Keriven, Nicolas, Peyré, Gabriel

Neural Information Processing Systems

Graph Neural Networks (GNNs) come in many flavors, but they should always be either invariant (permuting the nodes of the input graph does not change the output) or equivariant (permuting the input permutes the output accordingly). In this paper, we consider a specific class of invariant and equivariant networks, for which we prove new universality theorems. More precisely, we consider networks with a single hidden layer, obtained by summing channels formed by applying an equivariant linear operator, a pointwise non-linearity, and either an invariant or an equivariant linear output layer. Recently, Maron et al. (2019) showed that universal invariant GNNs can be obtained by allowing higher-order tensorization inside the network. As a first contribution, we propose an alternative proof of this result, which relies on the Stone-Weierstrass theorem for algebras of real-valued functions. Our main contribution is then an extension of this result to the equivariant case, which arises in many practical applications but has been less studied theoretically. The proof relies on a new generalized Stone-Weierstrass theorem for algebras of equivariant functions, which is of independent interest. Additionally, unlike many previous works that consider a fixed number of nodes, our results show that a GNN defined by a single set of parameters can uniformly approximate functions defined on graphs of varying size.
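To make the architecture in this abstract concrete, here is a minimal NumPy sketch of such a one-hidden-layer invariant network. This is not the authors' code: the equivariant operators below are a small hand-picked subset of the full linear-equivariant basis characterized by Maron et al. (2019), without higher-order tensorization, and all names and shapes are illustrative assumptions.

```python
import numpy as np

def equivariant_ops(A):
    """A few permutation-equivariant linear maps on an n x n matrix A:
    each map L satisfies L(P A P^T) = P L(A) P^T for any permutation P.
    (A small subset of the basis characterized by Maron et al., 2019.)"""
    n = A.shape[0]
    return np.stack([
        A,                                                       # identity
        A.T,                                                     # transpose
        np.repeat(A.sum(axis=1, keepdims=True), n, axis=1) / n,  # broadcast row sums
        np.full((n, n), A.sum() / n**2),                         # broadcast total sum
        np.diag(np.diag(A)),                                     # keep only the diagonal
    ])

def invariant_gnn(A, W, b, v):
    """Single hidden layer: equivariant linear maps, pointwise non-linearity,
    then an invariant linear readout (sum of all entries, weighted per channel)."""
    feats = equivariant_ops(A)                                   # (5, n, n)
    hidden = np.tanh(np.einsum('ck,kij->cij', W, feats) + b[:, None, None])
    return float(v @ hidden.sum(axis=(1, 2)))                    # permutation-invariant

# Sanity check: the output is unchanged under a random relabeling of the nodes.
rng = np.random.default_rng(0)
n, C = 6, 4
A = rng.random((n, n))
W, b, v = rng.standard_normal((C, 5)), rng.standard_normal(C), rng.standard_normal(C)
P = np.eye(n)[rng.permutation(n)]
assert np.isclose(invariant_gnn(A, W, b, v), invariant_gnn(P @ A @ P.T, W, b, v))
```

For the equivariant case the paper also covers, one would replace the final summation with an equivariant linear output layer, so that the network returns a node-indexed output that permutes along with the input graph.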


Reviews: Universal Invariant and Equivariant Graph Neural Networks

Neural Information Processing Systems

This paper uses the framework of the Stone-Weierstrass theorem to prove universal approximation of equivariant/invariant functions with respect to permutation groups by a family of single-hidden-layer graph neural networks. Reviewers agreed that this work is technically sound and offers a complementary perspective on known universal approximation results under (discrete) symmetries. Some reviewers were skeptical about the significance of this work, in the sense that it may feel incremental with respect to Maron et al. (2019), which established universal approximation in the invariant case. After discussions with the reviewers and based on the author feedback, the AC ultimately considers that the positive aspects contributed by this work outweigh its shortcomings, and therefore recommends acceptance.


Reviews: Universal Invariant and Equivariant Graph Neural Networks

Neural Information Processing Systems

This paper proves the universality of invariant and equivariant graph neural networks using the Stone-Weierstrass theorem. The invariance and equivariance properties of graph neural networks have seldom been formalized in the existing literature, and this paper lays a solid theoretical foundation for the universality of invariant and equivariant graph neural networks defined by a single set of parameters. However, unless the graph neural network proposed in Equation 1 covers a large number of existing graph neural networks, its superiority over other graph neural networks needs to be validated with empirical studies such as graph classification. Lacking either such a generalization of existing graph neural networks or experimental studies, the contribution of this paper is, at this stage, of limited significance to the graph deep learning community. Other aspects of this paper are summarized below. Quality: the quality of this paper largely depends on the correctness of the proofs of the proposed theorems.


Relational Action Bases: Formalization, Effective Safety Verification, and Invariants (Extended Version)

Ghilardi, Silvio, Gianola, Alessandro, Montali, Marco, Rivkin, Andrey

arXiv.org Artificial Intelligence

Modeling and verification of dynamic systems operating over a relational representation of states are increasingly investigated problems in AI, Business Process Management, and Database Theory. To make these systems amenable to verification, either the amount of information stored in each relational state must be bounded, or restrictions must be imposed on the preconditions and effects of actions. We introduce the general framework of relational action bases (RABs), which generalizes existing models by lifting both of these restrictions: unbounded relational states can be evolved through actions that can quantify both existentially and universally over the data, and that can exploit numerical datatypes with arithmetic predicates. We then study parameterized safety of RABs via (approximated) SMT-based backward search, singling out essential meta-properties of the resulting procedure, and showing how it can be realized by an off-the-shelf combination of existing verification modules of the state-of-the-art MCMT model checker. We demonstrate the effectiveness of this approach on a benchmark of data-aware business processes. Finally, we show how universal invariants can be exploited to make this procedure fully correct.
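The backbone of the verification procedure sketched in this abstract is a generic SMT-based backward-reachability loop with a fixpoint test. The toy sketch below illustrates that loop with Z3's Python bindings on an invented one-integer transition system; the real MCMT procedure operates on unbounded relational states with quantified formulas, so everything here (variables, action, bounds) is an illustrative assumption.

```python
from z3 import Ints, Exists, Solver, And, Or, Not, substitute, sat

x, xp = Ints('x xp')                # current- and next-state variable
init = x == 0                       # initial states
trans = And(x >= 0, xp == x + 2)    # toy action: add 2, enabled for x >= 0
unsafe = x == 5                     # states to prove unreachable

def pre(frontier):
    """Symbolic pre-image: states with some successor inside the frontier."""
    return Exists([xp], And(trans, substitute(frontier, (x, xp))))

def backward_search(max_iters=20):
    reach = unsafe                          # backward-reachable region so far
    for _ in range(max_iters):
        s = Solver()
        s.add(And(init, reach))
        if s.check() == sat:                # an initial state reaches 'unsafe'
            return "unsafe"
        frontier = pre(reach)
        s = Solver()
        s.add(And(frontier, Not(reach)))
        if s.check() != sat:                # fixpoint: no new backward states
            return "safe"
        reach = Or(reach, frontier)
    return "unknown"

print(backward_search())  # 'safe': only even values are reachable from x = 0
```

The two exits correspond to the intersection test with the initial states ("unsafe") and to the pre-image fixpoint ("safe"); the universal invariants studied in the paper are what the authors use to make this kind of procedure fully correct rather than merely approximated.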

