Learning in RKHM: a $C^*$-Algebraic Twist for Kernel Machines
Yuka Hashimoto, Masahiro Ikeda, Hachem Kadri
arXiv.org Artificial Intelligence
Supervised learning in reproducing kernel Hilbert spaces (RKHSs) has been actively investigated since the early 1990s (Murphy, 2012; Christmann & Steinwart, 2008; Shawe-Taylor & Cristianini, 2004; Schölkopf & Smola, 2002; Boser et al., 1992). The notion of reproducing kernels as dot products in Hilbert spaces was first brought to the field of machine learning by Aizerman et al. (1964), while the theoretical foundation of reproducing kernels and their Hilbert spaces dates back to at least Aronszajn (1950). By virtue of the representer theorem (Schölkopf et al., 2001), we can compute the solution of an infinite-dimensional minimization problem in an RKHS from finitely many given samples. In addition to standard RKHSs, vector-valued RKHSs (vvRKHSs) have also been proposed for supervised learning and used in analyzing vector-valued data (Micchelli & Pontil, 2005; Álvarez et al., 2012; Kadri et al., 2016; Minh et al., 2016; Brouard et al., 2016; Laforgue et al., 2020; Huusari & Kadri, 2021). Generalization bounds for supervised problems in RKHSs and vvRKHSs have also been derived (Mohri et al., 2018; Caponnetto & De Vito, 2007; Audiffren & Kadri, 2013; Huusari & Kadri, 2021).
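The practical consequence of the representer theorem mentioned above can be illustrated with ordinary kernel ridge regression: although the minimization runs over an infinite-dimensional RKHS, the solution is a finite linear combination of kernel sections at the training points. The following is a minimal sketch under assumed choices (Gaussian kernel, toy sine-regression data); the function names and parameters are illustrative, not from the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of X and Y.
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def fit_kernel_ridge(X, y, lam=1e-2, gamma=1.0):
    # By the representer theorem, the RKHS minimizer has the form
    # f(x) = sum_i alpha_i k(x_i, x); the coefficients solve the
    # finite linear system (K + lam * n * I) alpha = y.
    K = rbf_kernel(X, X, gamma)
    n = len(X)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    # Evaluate f at new points via the same finite expansion.
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy example: learn y = sin(x) from 50 random samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0])
alpha = fit_kernel_ridge(X, y, lam=1e-3, gamma=0.5)
X_test = np.linspace(-3, 3, 20)[:, None]
y_hat = predict(X, alpha, X_test, gamma=0.5)
err = np.max(np.abs(y_hat - np.sin(X_test[:, 0])))
```

The point of the sketch is that `alpha` has only as many entries as there are training samples, so the infinite-dimensional problem never has to be represented explicitly.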
Nov-12-2022