surrogate density


A Neural Network Based on First Principles

Baggenstoss, Paul M

arXiv.org Machine Learning

In this paper, a neural network is derived from first principles, assuming only that each layer begins with a linear dimension-reducing transformation. The approach appeals to the principle of Maximum Entropy (MaxEnt) to find the posterior distribution of the input data of each layer, conditioned on the layer output variables. This posterior has a well-defined mean, the conditional mean estimator, which is computed using a type of neural network with theoretically derived activation functions similar to sigmoid, softplus, and ReLU; this implicitly provides a theoretical justification for their use. A theorem giving the conditional distribution and conditional mean estimator under the MaxEnt prior is proposed, unifying results for special cases. Combining layers results in an auto-encoder with a conventional feed-forward analysis network and a type of linear Bayesian belief network in the reconstruction path.
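To make the layer structure concrete, below is a minimal NumPy sketch of a layer of the general shape the abstract describes: a linear dimension-reducing transform followed by a softplus-like activation in the analysis path, and a linear reconstruction path. The pairing of a pseudo-inverse with an inverted activation is our illustrative stand-in for the conditional mean estimator, not the paper's actual MaxEnt derivation.

```python
# Illustrative sketch (not the paper's exact construction): an auto-encoder
# layer whose analysis path is a linear dimension-reducing transform followed
# by a softplus activation, with a linear reconstruction path, echoing the
# feed-forward / linear-belief-network structure the abstract describes.
import numpy as np

rng = np.random.default_rng(0)

def softplus(a):
    # numerically stable softplus: log(1 + exp(a))
    return np.logaddexp(0.0, a)

class MaxEntStyleLayer:
    def __init__(self, n_in, n_out):
        assert n_out < n_in, "each layer is dimension-reducing"
        self.W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
        self.b = np.zeros(n_out)

    def analyze(self, x):
        # linear dimension reduction, then the activation
        return softplus(self.W @ x + self.b)

    def reconstruct(self, z):
        # illustrative stand-in for the conditional mean estimator:
        # invert the softplus, then take the least-norm linear preimage
        a = np.log(np.expm1(z))  # inverse of softplus (valid since z > 0)
        return np.linalg.pinv(self.W) @ (a - self.b)

layer = MaxEntStyleLayer(n_in=16, n_out=4)
x = rng.standard_normal(16)
z = layer.analyze(x)
x_hat = layer.reconstruct(z)
print(z.shape, x_hat.shape)  # (4,) (16,)
```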


Some Insights About the Small Ball Probability Factorization for Hilbert Random Elements

Bongiorno, Enea, Goia, Aldo

arXiv.org Machine Learning

Asymptotic factorizations for the small-ball probability (SmBP) of a Hilbert-valued random element $X$ are rigorously established and discussed. In particular, given the first $d$ principal components (PCs) and as the radius $\varepsilon$ of the ball tends to zero, the SmBP is asymptotically proportional to (a) the joint density of the first $d$ PCs, (b) the volume of the $d$-dimensional ball with radius $\varepsilon$, and (c) a correction factor weighting the use of a truncated version of the process expansion. Moreover, under suitable assumptions on the spectrum of the covariance operator of $X$, and as $d$ diverges to infinity while $\varepsilon$ vanishes, some simplifications occur. In particular, the SmBP factorizes asymptotically as the product of the joint density of the first $d$ PCs and a pure volume parameter. All the provided factorizations allow one to define a surrogate intensity of the SmBP that, in some cases, leads to a genuine intensity. To operationalize the stated results, a non-parametric estimator for the surrogate intensity is introduced, and it is proved that using estimated PCs instead of the true ones does not affect the rate of convergence. Finally, as an illustration, simulations in controlled frameworks are provided.
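For concreteness, the first factorization can be written schematically as below (a sketch in our own notation: $x_1,\dots,x_d$ denote the scores of the center $x$ on the first $d$ PCs, $f_d$ their joint density, and $C_d$ the correction factor; the precise form of $C_d$ and the conditions on the covariance spectrum are given in the paper):

$$
\mathbb{P}\bigl(\lVert X - x \rVert \le \varepsilon\bigr) \;\sim\; f_d(x_1,\dots,x_d)\,\frac{\pi^{d/2}\,\varepsilon^{d}}{\Gamma\!\left(\tfrac{d}{2}+1\right)}\,C_d(x,\varepsilon), \qquad \varepsilon \to 0,
$$

where the middle factor is the volume of the $d$-dimensional ball of radius $\varepsilon$. In the regime where $d \to \infty$ as $\varepsilon \to 0$, the correction factor is absorbed into a pure volume parameter, leaving only the joint PC density times a volume term.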


The functional mean-shift algorithm for mode hunting and clustering in infinite dimensions

Ciollaro, Mattia, Genovese, Christopher, Lei, Jing, Wasserman, Larry

arXiv.org Machine Learning

We introduce the functional mean-shift algorithm, an iterative algorithm for estimating the local modes of a surrogate density from functional data. We show that the algorithm can be used for cluster analysis of functional data. We propose a bootstrap-based test for the significance of the estimated local modes of the surrogate density. We present two applications of our methodology. In the first, we demonstrate how the functional mean-shift algorithm can be used to perform spike sorting, i.e., to cluster neural activity curves. In the second, we use the functional mean-shift algorithm to distinguish between original and fake signatures.
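As a concrete illustration of the iteration, here is a minimal sketch assuming curves observed on a common grid, with the $L^2$ norm approximated by a Riemann sum; the Gaussian kernel, bandwidth, and stopping rule are our illustrative choices, not prescribed by the paper.

```python
# Minimal sketch of functional mean-shift for curves sampled on a shared grid.
import numpy as np

def l2_dist(f, g, dt):
    # discretized L2 distance between two curves on a common grid
    return np.sqrt(np.sum((f - g) ** 2) * dt)

def mean_shift_step(f, curves, h, dt):
    # kernel-weighted average of the sample curves around f (Gaussian kernel)
    d = np.array([l2_dist(f, g, dt) for g in curves])
    w = np.exp(-0.5 * (d / h) ** 2)
    return (w[:, None] * curves).sum(axis=0) / w.sum()

def functional_mean_shift(f0, curves, h, dt, tol=1e-6, max_iter=200):
    # iterate the mean-shift map until the trajectory settles at a local mode
    f = f0.copy()
    for _ in range(max_iter):
        f_new = mean_shift_step(f, curves, h, dt)
        if l2_dist(f, f_new, dt) < tol:
            break
        f = f_new
    return f

# toy data: two groups of noisy curves, yielding two local modes
t = np.linspace(0, 1, 50)
dt = t[1] - t[0]
rng = np.random.default_rng(1)
curves = np.vstack(
    [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(50) for _ in range(20)]
    + [np.cos(2 * np.pi * t) + 0.1 * rng.standard_normal(50) for _ in range(20)]
)
modes = np.array([functional_mean_shift(c, curves, h=0.5, dt=dt) for c in curves])
```

Curves whose mean-shift trajectories converge to the same limit are assigned to the same cluster, which is how the abstract's cluster analysis of functional data proceeds.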