Kernel Interpolation with Sparse Grids

Neural Information Processing Systems

Structured kernel interpolation (SKI) accelerates Gaussian process (GP) inference by interpolating the kernel covariance function using a dense grid of inducing points, whose corresponding kernel matrix is highly structured and thus amenable to fast linear algebra. Unfortunately, SKI scales poorly in the dimension of the input points, since the dense grid size grows exponentially with the dimension. To mitigate this issue, we propose the use of sparse grids within the SKI framework. These grids enable accurate interpolation, but with a number of points that grows more slowly with dimension. We contribute a novel nearly linear time matrix-vector multiplication algorithm for the sparse grid kernel matrix. We also describe how sparse grids can be combined with an efficient interpolation scheme based on simplicial complexes. With these modifications, we demonstrate that SKI can be scaled to higher dimensions while maintaining accuracy, on both synthetic and real datasets.
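The scaling contrast at the heart of this abstract is easy to illustrate with a toy point count (a minimal sketch under assumed conventions, not the paper's construction): a dense level-l grid on [0, 1]^d has (2^l + 1)^d points, while a classical Smolyak-style sparse grid keeps only tensor products of nested 1D grids whose levels sum to at most l.

```python
from itertools import product

def dense_grid_size(level, dim):
    # Dense grid: (2**level + 1) points per axis, so exponential in dim.
    return (2**level + 1) ** dim

def sparse_grid_points(level, dim):
    # Smolyak-style sparse grid on [0, 1]^dim built from nested dyadic
    # 1D grids; keep only tensor products whose 1D levels sum <= level.
    def nodes_1d(l):
        if l == 0:
            return (0.5,)  # single midpoint at the coarsest level
        n = 2**l + 1
        return tuple(i / (n - 1) for i in range(n))

    points = set()
    for levels in product(range(level + 1), repeat=dim):
        if sum(levels) <= level:
            points.update(product(*(nodes_1d(l) for l in levels)))
    return points
```

For level 4 in three dimensions, the sparse grid retains a small fraction of the 17^3 = 4913 dense-grid points; that shrinking point set is what the paper's fast matrix-vector multiplication operates on.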



Higher Order Approximation Rates for ReLU CNNs in Korobov Spaces

Li, Yuwen, Zhang, Guozhi

arXiv.org Artificial Intelligence

This paper investigates the $L_p$ approximation error for higher order Korobov functions using deep convolutional neural networks (CNNs) with ReLU activation. For target functions having a mixed derivative of order m+1 in each direction, we improve the classical second-order approximation rate to an (m+1)-th order rate (modulo a logarithmic factor) in terms of the depth of the CNNs. The key ingredient in our analysis is the approximate representation of high-order sparse grid basis functions by CNNs. The results suggest that the higher order expressivity of CNNs does not severely suffer from the curse of dimensionality.


Gaussian Processes Sampling with Sparse Grids under Additive Schwarz Preconditioner

Chen, Haoyuan, Tuo, Rui

arXiv.org Machine Learning

Gaussian processes (GPs) are widely used in non-parametric Bayesian modeling and play an important role in various statistical and machine learning applications. In a variety of uncertainty quantification tasks, generating random sample paths of GPs is of interest. Since GP sampling requires generating high-dimensional Gaussian random vectors, it is computationally challenging if a direct method, such as the Cholesky decomposition, is used. In this paper, we propose a scalable algorithm for sampling random realizations of the prior and posterior of GP models. The proposed algorithm leverages an inducing points approximation with sparse grids, as well as additive Schwarz preconditioners, which reduce the computational complexity and ensure fast convergence. We demonstrate the efficacy and accuracy of the proposed method through a series of experiments and comparisons with other recent works.
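A loose illustration of the preconditioning idea follows: block-Jacobi, i.e. non-overlapping additive Schwarz, applied inside a preconditioned conjugate gradient solve of a generic kernel system. The kernel choice, jitter, block size, and solver loop are assumptions for the sketch, not the paper's algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=0.5):
    # Squared-exponential kernel matrix between point sets X and Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * lengthscale**2))

def block_jacobi(K, block_size):
    # Non-overlapping additive Schwarz (block-Jacobi): pre-invert the
    # diagonal blocks of K and apply their block-wise solves to a residual.
    blocks = []
    for start in range(0, K.shape[0], block_size):
        end = min(start + block_size, K.shape[0])
        blocks.append((start, end, np.linalg.inv(K[start:end, start:end])))

    def apply(r):
        z = np.empty_like(r)
        for start, end, inv_block in blocks:
            z[start:end] = inv_block @ r[start:end]
        return z
    return apply

def pcg(A, b, precond, tol=1e-8, maxiter=500):
    # Preconditioned conjugate gradients for a symmetric positive
    # definite matrix A; precond maps a residual to M^{-1} r.
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Sampling schemes of this flavor repeatedly solve systems of the form K v = b; a Schwarz-type preconditioner keeps the iteration count of each solve small as the inducing-point set grows.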


Graph-Informed Neural Networks for Sparse Grid-Based Discontinuity Detectors

Della Santa, Francesco, Pieraccini, Sandra

arXiv.org Artificial Intelligence

Detecting discontinuity interfaces of discontinuous functions is a challenging task with significant implications across various scientific and engineering applications. Identifying these interfaces is particularly critical for functions with a high-dimensional domain, as their discontinuities can significantly influence the behavior of numerical methods and simulations; for example, within the realm of uncertainty quantification, where the smoothness of the target function plays a fundamental role in the use of stochastic collocation methods. Specifically, the knowledge of discontinuity interfaces enables the partitioning of the function domain into regions of smoothness, a crucial factor in improving the performance of numerical methods (e.g., see [17]). Other examples of discontinuity detection applications include signal processing, nonlinear partial differential equation (PDE) simulations, investigations of phase transitions in physical systems [14], and change-point analyses in geology or biology, to name a few [30]. The central objective of most discontinuity detection methods is to identify the position of discontinuities in the function domain using function evaluations on sets of points. Over the last few decades, progress has been made in discontinuity detection, leading to the development of various algorithms. Notable works, such as [3, 2, 16, 35], have introduced significant contributions in this field. In particular, [3] introduced a polynomial annihilation edge detection method designed for piecewise smooth functions with low-dimensional domains (n ≤ 2). This method identifies discontinuous interfaces by reconstructing jump functions based on a set of function evaluations.
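The evaluation-based detection objective can be caricatured in one dimension (a crude thresholded-difference sketch, not the polynomial annihilation method cited above; the grid size and threshold factor are arbitrary assumptions):

```python
import numpy as np

def detect_jumps(f, a, b, n=200, factor=10.0):
    # Evaluate f on a uniform grid and flag cells whose first difference
    # is far larger than the typical (median) difference; return the
    # midpoints of the flagged cells as estimated jump locations.
    x = np.linspace(a, b, n)
    d = np.abs(np.diff(f(x)))
    threshold = factor * np.median(d)
    return [(x[i] + x[i + 1]) / 2 for i in np.nonzero(d > threshold)[0]]
```

For example, detect_jumps(lambda x: np.sin(x) + (x > 0.5), 0.0, 1.0) flags a single cell near x = 0.5; real detectors replace the naive difference with jump-function reconstructions that remain reliable in higher dimensions and under sparse sampling.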


Accurate Data-Driven Surrogates of Dynamical Systems for Forward Propagation of Uncertainty

De, Saibal, Jones, Reese E., Kolla, Hemanth

arXiv.org Artificial Intelligence

Stochastic collocation (SC) is a well-known non-intrusive method of constructing surrogate models for uncertainty quantification. In dynamical systems, SC is especially suited for full-field uncertainty propagation that characterizes the distributions of the high-dimensional primary solution fields of a model with stochastic input parameters. However, due to the highly nonlinear nature of the parameter-to-solution map in even the simplest dynamical systems, the constructed SC surrogates are often inaccurate. This work presents an alternative approach, where we apply the SC approximation over the dynamics of the model, rather than the solution. By combining the data-driven sparse identification of nonlinear dynamics (SINDy) framework with SC, we construct dynamics surrogates and integrate them through time to construct the surrogate solutions. We demonstrate that the SC-over-dynamics framework leads to smaller errors, both in terms of the approximated system trajectories and the model state distributions, when compared against full-field SC applied to the solutions directly.
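A toy version of the SC-over-dynamics idea (an illustrative reduction, not the paper's method: a linear parametric ODE, Lagrange interpolation of the right-hand side over three assumed collocation nodes, and forward Euler time stepping):

```python
import numpy as np

# Toy parametric ODE dx/dt = -theta * x with x(0) = 1 and theta in [1, 2].
# SC-over-dynamics: interpolate the *dynamics* across collocation nodes
# in theta, then integrate the interpolated dynamics through time.

NODES = np.array([1.0, 1.5, 2.0])  # assumed collocation nodes in theta

def lagrange_weights(nodes, theta):
    # Naive Lagrange basis evaluation at a query parameter theta.
    w = np.ones(len(nodes))
    for i, ni in enumerate(nodes):
        for j, nj in enumerate(nodes):
            if i != j:
                w[i] *= (theta - nj) / (ni - nj)
    return w

def surrogate_dynamics(x, theta):
    # Dynamics at a new theta = Lagrange combination of the nodal dynamics.
    return np.dot(lagrange_weights(NODES, theta), [-n * x for n in NODES])

def integrate(theta, x0=1.0, dt=1e-3, T=1.0):
    # Forward Euler on the surrogate dynamics.
    x = x0
    for _ in range(round(T / dt)):
        x += dt * surrogate_dynamics(x, theta)
    return x
```

Because these toy dynamics are linear in theta, the dynamics surrogate is exact and integrate(1.3) tracks the true solution exp(-1.3 t) up to the Euler stepping error; the abstract's point is that interpolating dynamics (there learned via SINDy) behaves better than interpolating the strongly nonlinear parameter-to-solution map directly.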


Exploiting Sparsity in Automotive Radar Object Detection Networks

Lippke, Marius, Quach, Maurice, Braun, Sascha, Köhler, Daniel, Ulrich, Michael, Bischoff, Bastian, Tan, Wei Yap

arXiv.org Artificial Intelligence

Having precise perception of the environment is crucial for ensuring the secure and reliable functioning of autonomous driving systems. Radar object detection networks are one fundamental part of such systems. CNN-based object detectors have shown good performance in this context, but they require large compute resources. This paper investigates sparse convolutional object detection networks, which combine powerful grid-based detection with low compute requirements. We investigate radar-specific challenges and propose sparse kernel point pillars (SKPP) and dual voxel point convolutions (DVPC) as remedies for the grid rendering and sparse backbone architectures. We evaluate our SKPP-DPVCN architecture on nuScenes, where it outperforms the baseline by 5.89% and the previous state of the art by 4.19% in Car AP4.0. Moreover, SKPP-DPVCN reduces the average scale error (ASE) by 21.41% over the baseline.