spectral density



Deep learning estimation of the spectral density of functional time series on large domains

Mohammadi, Neda, Sarkar, Soham, Kokoszka, Piotr

arXiv.org Machine Learning

We derive an estimator of the spectral density of a functional time series that is the output of a multilayer perceptron neural network. The estimator is motivated by difficulties with the computation of existing spectral density estimators for time series of functions defined on very large grids that arise, for example, in climate computer models and medical scans. Existing estimators use autocovariance kernels represented as large $G \times G$ matrices, where $G$ is the number of grid points on which the functions are evaluated. In many recent applications, functions are defined on 2D and 3D domains, and $G$ can be of the order $G \sim 10^5$, making the evaluation of the autocovariance kernels computationally intensive or even impossible. We use the theory of spectral functional principal components to derive our deep learning estimator and prove that it is a universal approximator to the spectral density under general assumptions. Our estimator can be trained without computing the autocovariance kernels, and it can be parallelized to provide estimates much faster than existing approaches. We validate its performance by simulations and an application to fMRI images.
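To see the scale of the bottleneck this abstract describes, here is a minimal NumPy sketch (not the authors' estimator; the synthetic series and grid size are placeholder assumptions) that forms a lag-$h$ autocovariance kernel the classical way and extrapolates its memory footprint to $G \sim 10^5$:

```python
import numpy as np

# Minimal sketch, assuming a synthetic functional time series observed on a
# grid. The classical approach stores each lag-h autocovariance kernel as a
# G x G matrix, which is what becomes infeasible on large 2D/3D grids.

rng = np.random.default_rng(0)
T, G = 200, 500                        # small grid so the demo runs; real G ~ 1e5
X = rng.standard_normal((T, G))        # placeholder series: T curves on G grid points

def autocov_kernel(X, h):
    """Empirical lag-h autocovariance kernel C_h(s, t) as a G x G matrix."""
    Xc = X - X.mean(axis=0)
    return Xc[h:].T @ Xc[:len(Xc) - h] / len(Xc)

C1 = autocov_kernel(X, 1)              # feasible at G = 500
bytes_per_kernel = (1e5 ** 2) * 8      # one float64 G x G kernel at G = 1e5
print(f"one G x G kernel at G = 1e5: {bytes_per_kernel / 1e9:.0f} GB")  # ~80 GB
```

At $G \sim 10^5$ a single kernel already needs on the order of 80 GB, and spectral density estimation requires many lags, which is the computation the proposed MLP estimator avoids.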


Analysis of one-hidden-layer neural networks via the resolvent method

Neural Information Processing Systems

In this work, we investigate the asymptotic spectral density of the random feature matrix $M = Y Y^*$ with $Y = f(WX)$ generated by a single-hidden-layer neural network, where $W$ and $X$ are random rectangular matrices with i.i.d. entries.
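The object studied here is easy to simulate. Below is a hedged NumPy illustration of the empirical spectral density of $M = Y Y^* / m$ with $Y = f(WX)$; the dimensions, the choice $f = \tanh$, and the $1/\sqrt{n_0}$ scaling are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch: empirical spectrum of the random feature Gram matrix M = Y Y^T / m,
# Y = f(W X), with W and X having i.i.d. standard Gaussian entries.
rng = np.random.default_rng(0)
n0, n1, m = 1000, 1000, 2000             # input dim, hidden width, sample count

W = rng.standard_normal((n1, n0))        # random first-layer weights
X = rng.standard_normal((n0, m))         # random inputs
Y = np.tanh(W @ X / np.sqrt(n0))         # hidden-layer features (f = tanh assumed)
M = Y @ Y.T / m                          # random feature Gram matrix

eigs = np.linalg.eigvalsh(M)             # eigenvalues give the empirical spectral density
hist, edges = np.histogram(eigs, bins=60, density=True)
print(f"eigenvalue range: [{eigs.min():.3f}, {eigs.max():.3f}]")
```

A histogram of `eigs` approximates the limiting spectral density whose Stieltjes transform the resolvent method characterizes as $n_0, n_1, m \to \infty$ at fixed ratios.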




Nonlinear random matrix theory for deep learning

Jeffrey Pennington, Pratik Worah

Neural Information Processing Systems

The list of successful applications of deep learning is growing at a staggering rate. Image recognition (Krizhevsky et al., 2012), audio synthesis (Oord et al., 2016), translation (Wu et al., 2016), and speech recognition (Hinton et al., 2012) are just a few of the recent achievements.


Scalable Lévy Process Priors for Spectral Kernel Learning

Phillip A. Jang, Andrew Loeb, Matthew Davidow, Andrew G. Wilson

Neural Information Processing Systems

Gaussian processes are rich distributions over functions, with generalization properties determined by a kernel function. When used for long-range extrapolation, predictions are particularly sensitive to the choice of kernel parameters. It is therefore critical to account for kernel uncertainty in our predictive distributions. We propose a distribution over kernels formed by modelling a spectral mixture density with a Lévy process. The resulting distribution has support for all stationary covariances, including the popular RBF, periodic, and Matérn kernels, combined with inductive biases which enable automatic and data-efficient learning, long-range extrapolation, and state-of-the-art predictive performance. The proposed model also presents an approach to spectral regularization, as the Lévy process introduces a sparsity-inducing prior over mixture components, allowing automatic selection of model order and pruning of extraneous components. We exploit the algebraic structure of the proposed process for $O(n)$ training and $O(1)$ predictions. We perform extrapolations with reasonable uncertainty estimates on several benchmarks, and show that the proposed model can recover flexible ground-truth covariances and is robust to errors in initialization.
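For context, here is a minimal sketch of the spectral mixture kernel (Wilson and Adams, 2013) that this construction builds on: a mixture of Gaussians over spectral frequencies maps, via the inverse Fourier transform, to a stationary covariance. The Lévy process in the paper acts as a sparsity-inducing prior over these mixture components; the weights, means, and variances below are arbitrary examples, and the sketch omits the Lévy prior and the $O(n)$ inference machinery entirely:

```python
import numpy as np

def spectral_mixture_kernel(tau, weights, means, variances):
    """k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi tau mu_q),
    the stationary kernel whose spectral density is a Gaussian mixture."""
    tau = np.asarray(tau, dtype=float)[..., None]   # broadcast over Q components
    return np.sum(
        weights
        * np.exp(-2.0 * np.pi**2 * tau**2 * variances)
        * np.cos(2.0 * np.pi * tau * means),
        axis=-1,
    )

# Two arbitrary components: a slow trend and a faster periodic pattern.
w  = np.array([1.0, 0.5])    # mixture weights
mu = np.array([0.1, 1.0])    # spectral means (frequencies)
v  = np.array([0.05, 0.2])   # spectral variances (inverse length scales)

tau = np.linspace(0.0, 5.0, 6)
print(spectral_mixture_kernel(tau, w, mu, v))   # kernel values at lags tau
```

Because any stationary covariance is the Fourier transform of some spectral density, placing a flexible prior over the mixture, as the Lévy process does, yields the support over all stationary covariances claimed in the abstract.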