Kernel Bayes' rule
Fukumizu, Kenji, Song, Le, Gretton, Arthur
Kernel methods have long provided powerful tools for generalizing linear statistical approaches to nonlinear settings, through an embedding of the sample into a high-dimensional feature space, namely a reproducing kernel Hilbert space (RKHS) [18, 28]. Examples include support vector machines, kernel PCA, and kernel CCA, among others. In these cases, data are mapped via a canonical feature map to a reproducing kernel Hilbert space (of high or even infinite dimension), in which the linear operations that define the algorithms are implemented. The inner product between feature mappings need never be computed explicitly, but is given by a positive definite kernel function unique to the RKHS: this permits efficient computation without the need to deal explicitly with the feature representation. The mappings of individual points to a feature space may be generalized to mappings of probability measures [e.g. 3, Chapter 4]. We call such mappings the kernel means of the underlying random variables.
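The following is a minimal sketch (not from the paper) of the idea described above: a sample is embedded into an RKHS through its empirical kernel mean, and inner products between such embeddings are computed using only the kernel function, never the explicit feature representation. The Gaussian kernel and its bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gram matrix with entries k(x_i, y_j) = exp(-||x_i - y_j||^2 / (2 sigma^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def kernel_mean_inner_product(X, Y, sigma=1.0):
    """Inner product of empirical kernel means mu_P = (1/n) sum_i k(., x_i)
    and mu_Q = (1/m) sum_j k(., y_j), which reduces to the average of the
    Gram matrix: (1/nm) sum_ij k(x_i, y_j). No explicit feature map needed."""
    return gaussian_kernel(X, Y, sigma).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(100, 2))   # sample from distribution P
Y = rng.normal(0.5, 1.0, size=(100, 2))   # sample from distribution Q
print(kernel_mean_inner_product(X, Y))
```

Because the feature space enters only through kernel evaluations, the same computation applies whether the RKHS is finite- or infinite-dimensional.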
Sep-28-2011