Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation

Ryali, Chaitanya, Reddy, Gautam, Yu, Angela J.

Neural Information Processing Systems

Understanding how humans and animals learn about statistical regularities in stable and volatile environments, and utilize these regularities to make predictions and decisions, is an important problem in neuroscience and psychology. Using a Bayesian modeling framework, specifically the Dynamic Belief Model (DBM), it has previously been shown that humans tend to make the *default* assumption that environmental statistics undergo abrupt, unsignaled changes, even when environmental statistics are actually stable. Because exact Bayesian inference in this setting, an example of switching state space models, is computationally intense, a number of approximately Bayesian and heuristic algorithms have been proposed to account for learning/prediction in the brain. Here, we examine a neurally plausible algorithm, a special case of leaky integration dynamics we denote as EXP (for exponential filtering), that is significantly simpler than all previously suggested algorithms except for the delta-learning rule, and which far outperforms the delta rule in approximating Bayesian prediction performance. We derive the theoretical relationship between DBM and EXP, and show that EXP gains computational efficiency by foregoing the representation of inferential uncertainty (as does the delta rule), but that it nevertheless achieves near-Bayesian performance due to its ability to incorporate a persistent prior influence unique to DBM and absent from the other algorithms. Furthermore, we show that EXP is comparable to DBM but better than all other models in reproducing human behavior in a visual search task, suggesting that human learning and prediction also incorporates an element of persistent prior. More broadly, our work demonstrates that when observations are information-poor, detecting changes or modulating the learning rate is both *difficult* and (thus) *unnecessary* for making Bayes-optimal predictions.
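As a hedged illustration of the EXP scheme described above, here is a minimal leaky-integration (exponential-filtering) predictor with an added persistent-prior term. The function name, the specific mixing form, and all coefficient choices are assumptions for illustration; the paper's exact correspondence between DBM and EXP is derived analytically and may differ in detail.

```python
import numpy as np

def exp_filter(observations, alpha, prior, prior_weight):
    """Sketch of EXP-style prediction: a leaky integrator blended with a
    persistent prior. Coefficients and mixing form are illustrative, not
    the paper's exact derivation."""
    p = prior                     # running leaky-integrated estimate
    predictions = []
    for x in observations:
        # Prediction before seeing x: mix the persistent prior back in.
        predictions.append(prior_weight * prior + (1 - prior_weight) * p)
        # Leaky integration (exponential filtering) toward the observation.
        p = (1 - alpha) * p + alpha * x
    return np.array(predictions)

# Example on a short binary sequence.
preds = exp_filter([1.0, 0.0, 1.0], alpha=0.5, prior=0.5, prior_weight=0.2)
```

Unlike the plain delta rule (the `prior_weight=0` case), the persistent-prior term keeps every prediction anchored toward the prior, which is the behavioral signature the paper attributes to DBM.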


Provable wavelet-based neural approximation

Hur, Youngmi, Lim, Hyojae, Lim, Mikyoung

arXiv.org Machine Learning

Abstract (April 24, 2025). In this paper, we develop a wavelet-based theoretical framework for analyzing the universal approximation capabilities of neural networks over a wide range of activation functions. Leveraging wavelet frame theory on spaces of homogeneous type, we derive sufficient conditions on activation functions to ensure that the associated neural network approximates any function in the given space, along with an error estimate. These sufficient conditions accommodate a variety of smooth activation functions, including those that exhibit oscillatory behavior. Furthermore, by considering the L2-distance between smooth and non-smooth activation functions, we establish a generalized approximation result that is applicable to non-smooth activations, with the error explicitly controlled by this distance. This provides increased flexibility in the design of network architectures.

1 Introduction. Neural networks have long been recognized for their remarkable ability to approximate a wide range of functions, enabling state-of-the-art achievements across machine learning and artificial intelligence, including image processing, natural language processing, and scientific computing (see, for example, [13, 19] and references therein). Various activation functions, such as ReLU, Sigmoid, Tanh, and oscillatory functions, have been explored to further enhance network performance and adaptability. The versatility of neural networks originates from the structural flexibility of architectures that combine affine transformations with nonlinear activation functions. In addition, classical universal approximation theorems [5, 12, 16] provide a theoretical basis for this flexibility by guaranteeing that, under suitable conditions, neural networks can approximate any continuous function on a bounded domain, underscoring their representational power. These seminal results have been extended in various directions, including radial basis function (RBF) networks [22, 25], non-polynomial activations [20], approximation of functions and their derivatives [15, 21], the influence of network depth [9], approximation error bounds [1], convolutional neural networks (CNNs) [32], and recurrent neural networks (RNNs) [27]. As neural network architectures continue to evolve and diversify in practice, their theoretical foundations, beyond those provided by classical approximation theorems, have attracted increasing attention.
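The universal approximation property discussed above can be illustrated numerically. The sketch below fits only the output layer of a one-hidden-layer tanh network (random, fixed inner weights) to a smooth target by least squares; all names and parameter choices are illustrative assumptions, and this is a generic demonstration, not the paper's wavelet-based construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth function on [0, 1].
xs = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * xs)

# One-hidden-layer network with random fixed inner weights and tanh
# activation; only the linear output layer is fit by least squares.
n_hidden = 100
w = rng.normal(scale=10.0, size=n_hidden)
b = rng.uniform(-10.0, 10.0, size=n_hidden)
features = np.tanh(np.outer(xs, w) + b)          # shape (200, n_hidden)
coef, *_ = np.linalg.lstsq(features, target, rcond=None)
approx = features @ coef

max_err = np.max(np.abs(approx - target))
print(max_err)
```

Even with random inner weights, a modest number of nonlinear units drives the sup-norm error on the grid down to a small value, which is the flavor of guarantee the approximation theorems formalize.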


Neural Approximation of Graph Topological Features

Neural Information Processing Systems

Topological features based on persistent homology capture high-order structural information and can be used to augment graph neural network methods. However, computing extended persistent homology summaries remains slow for large and dense graphs and can be a serious bottleneck for the learning pipeline. Inspired by recent success in neural algorithmic reasoning, we propose a novel graph neural network to estimate extended persistence diagrams (EPDs) on graphs efficiently. Our model is built on algorithmic insights and benefits from better supervision and closer alignment with the EPD computation algorithm. Our method is also efficient; on large and dense graphs, we accelerate the computation by nearly 100 times.
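Extended persistence itself is involved, but the combinatorial cost the abstract alludes to can be sensed from the simpler ordinary 0-dimensional case, sketched below with a union-find structure and the elder rule. This is a minimal illustration under simplifying assumptions, not the EPD algorithm the paper aligns with; extended persistence additionally sweeps superlevel sets to pair essential features.

```python
def zero_dim_persistence(values, edges):
    """Ordinary 0-dimensional sublevel-set persistence of a vertex-filtered
    graph. values[i] is the filtration value of vertex i; edges are (u, v)
    pairs. Returns finite (birth, death) pairs; the oldest component never
    dies and is omitted. Illustrative sketch, not extended persistence."""
    n = len(values)
    parent = list(range(n))

    def find(x):
        # Find the component root with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    pairs = []
    # Process edges in order of filtration value max(values[u], values[v]).
    for u, v in sorted(edges, key=lambda e: max(values[e[0]], values[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        # Invariant: each root is its component's earliest-born vertex.
        # Elder rule: the younger component (larger birth value) dies.
        if values[ru] > values[rv]:
            ru, rv = rv, ru
        pairs.append((values[rv], max(values[u], values[v])))
        parent[rv] = ru
    return pairs

# Two components born at values 0.0 and 0.5 merge through vertex 1.
pairs = zero_dim_persistence([0.0, 1.0, 0.5], [(0, 1), (1, 2)])
```

Each edge triggers a union-find merge, so exact computation must touch every edge in filtration order, which is part of why a learned surrogate pays off on large, dense graphs.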


Neural approximation of Wasserstein distance via a universal architecture for symmetric and factorwise group invariant functions

Neural Information Processing Systems

Learning distance functions between complex objects, such as the Wasserstein distance to compare point sets, is a common goal in machine learning applications. However, functions on such complex objects (e.g., point sets and graphs) are often required to be invariant to a wide variety of group actions. Therefore, continuous and symmetric *product* functions (such as distance functions) on such complex objects must also be invariant to the *product* of such group actions. We call these functions symmetric and factor-wise group invariant functions (or SFGI functions in short). In this paper, we first present a general neural network architecture for approximating SFGI functions. The main contribution of this paper combines this general NN with a sketching idea to develop a specific and efficient neural network which can approximate the p-th Wasserstein distance between point sets. Very importantly, the required model complexity is *independent* of the sizes of the input point sets.
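For context, the ground-truth quantity being approximated, the p-th Wasserstein distance between equal-size point sets with uniform weights, reduces to an optimal assignment; a minimal SciPy-based sketch is below. The function name and uniform-weight, equal-size assumptions are illustrative. Note the output is invariant to permuting the points within either set, the kind of factor-wise group action discussed above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_p(A, B, p=2):
    """Exact p-th Wasserstein distance between two equal-size point sets
    with uniform weights, via optimal assignment. Illustrative sketch of
    the ground truth a neural surrogate would be trained against."""
    assert len(A) == len(B)
    # Pairwise p-th-power Euclidean costs, shape (n, n).
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) ** p
    # Hungarian-style optimal matching between the two sets.
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean() ** (1.0 / p)
```

Exact assignment scales superquadratically in the set size, which motivates a learned approximation whose model complexity does not grow with the input sets.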


Reviews: Demystifying excessively volatile human learning: A Bayesian persistent prior and a neural approximation

Neural Information Processing Systems

This paper builds on a successful line of research from Yu and colleagues on change point detection. The paper presents some interesting theoretical results linking the Bayes-optimal solution to computationally efficient and neurally plausible approximations. The paper also presents a cursory analysis of empirical data using the approximations. The paper is well-written and technically rigorous, and the theoretical analysis yields a number of important insights.


How accurate are neural approximations of complex network dynamics?

Vasiliauskaite, Vaiva, Antulov-Fantulin, Nino

arXiv.org Machine Learning

Data-driven approximations of ordinary differential equations offer a promising alternative to classical methods of discovering a dynamical system model, particularly in complex systems lacking explicit first principles. This paper focuses on a complex system whose dynamics are described by a system of such equations, coupled through a complex network. Numerous real-world systems, including financial, social, and neural systems, belong to this class of dynamical models. We propose essential elements for approximating these dynamical systems using neural networks, including necessary biases and an appropriate neural architecture. Emphasizing the differences from static supervised learning, we advocate for evaluating generalization beyond classical assumptions of statistical learning theory. To estimate confidence in prediction during inference time, we introduce a dedicated null model. By studying various complex network dynamics, we demonstrate that the neural approximations of dynamics generalize across complex network structures, sizes, and statistical properties of inputs. Our comprehensive framework enables accurate and reliable deep learning approximations of high-dimensional, nonlinear dynamical systems.
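A minimal sketch of the class of systems described above: coupled ODEs on a network, dx_i/dt = f(x_i) + sum_j A[i,j] g(x_i, x_j), integrated with forward Euler. The particular f, g, step size, and function names are illustrative assumptions, not the paper's setup; a neural approximation would replace f and g with learned components.

```python
import numpy as np

def simulate(A, x0, f, g, dt=0.01, steps=1000):
    """Forward-Euler integration of networked dynamics
    dx_i/dt = f(x_i) + sum_j A[i, j] * g(x_i, x_j),
    where A is a weighted adjacency matrix, f the self-dynamics, and g
    the pairwise coupling. Illustrative placeholders throughout."""
    x = np.asarray(x0, dtype=float).copy()
    traj = [x.copy()]
    for _ in range(steps):
        # g broadcast to an (n, n) matrix of pairwise coupling terms.
        coupling = (A * g(x[:, None], x[None, :])).sum(axis=1)
        x = x + dt * (f(x) + coupling)
        traj.append(x.copy())
    return np.array(traj)

# Example: pure diffusive coupling on two nodes relaxes toward consensus.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
traj = simulate(A, x0=[0.0, 1.0],
                f=lambda x: 0.0 * x,
                g=lambda xi, xj: xj - xi)
```

With diffusive coupling and no self-dynamics, the two states converge to their common mean, a simple check that the coupling term is wired correctly.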


Look Back When Surprised: Stabilizing Reverse Experience Replay for Neural Approximation

#artificialintelligence

Experience replay methods, which are an essential part of reinforcement learning (RL) algorithms, are designed to mitigate spurious correlations and biases while learning from temporally dependent data. Roughly speaking, these methods allow us to draw batched data from a large buffer such that these temporal correlations do not hinder the performance of descent algorithms. In this experimental work, we consider the recently developed and theoretically rigorous reverse experience replay (RER), which has been shown to remove such spurious biases in simplified theoretical settings. We combine RER with optimistic experience replay (OER) to obtain RER++, which is stable under neural function approximation. We show via experiments that this has a better performance than techniques like prioritized experience replay (PER) on various tasks, with a significantly smaller computational complexity. It is well known in the RL literature that choosing examples greedily with the largest TD error (as in OER) or forming mini-batches with consecutive data points (as in RER) leads to poor performance. However, our method, which combines these techniques, works very well.
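A minimal sketch of the reverse-replay idea, assuming a plain list buffer: sample a random contiguous window of transitions and return it most-recent-first, so updates propagate backward along the trajectory. The class and method names are hypothetical, and the TD-error-based optimistic component that RER++ adds is not shown.

```python
import random

class ReverseReplayBuffer:
    """Sketch of reverse experience replay (RER). Stores transitions in
    arrival order; sampling returns a contiguous window reversed. Names
    and interface are illustrative, not the article's implementation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []

    def push(self, transition):
        self.buffer.append(transition)
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)   # drop the oldest transition

    def sample(self, window):
        # Pick a random contiguous span, then reverse it so the learner
        # sees transitions from most recent to oldest.
        start = random.randrange(len(self.buffer) - window + 1)
        return list(reversed(self.buffer[start:start + window]))

buf = ReverseReplayBuffer(capacity=100)
for t in range(10):
    buf.push(t)
batch = buf.sample(window=4)
```

Processing a trajectory segment in reverse lets value information flow from later states back to earlier ones within a single mini-batch, which is the mechanism the article's theoretical results build on.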

