Reproducing Kernel Hilbert Space
Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces
Optimal experimental design seeks to determine the most informative allocation of experiments to infer an unknown statistical quantity. In this work, we investigate optimal design of experiments for {\em estimation of linear functionals in reproducing kernel Hilbert spaces (RKHSs)}. This problem has been extensively studied in the linear regression setting under an estimability condition, which allows estimating parameters without bias. We generalize this framework to RKHSs, and allow for the linear functional to be only approximately inferred, i.e., with a fixed bias.
Distributed Learning of Conditional Quantiles in the Reproducing Kernel Hilbert Space
We study distributed learning of nonparametric conditional quantiles with Tikhonov regularization in a reproducing kernel Hilbert space (RKHS). Although distributed parametric quantile regression has been investigated in several existing works, the nonparametric quantile setting poses different challenges and remains unexplored. The difficulty lies in the absence of the explicit bias-variance decomposition that is available in regularized least squares regression. For the simple divide-and-conquer approach, which partitions the data set into multiple parts and then takes an arithmetic average of the individual outputs, we establish risk bounds using a novel second-order empirical process for the quantile risk.
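The divide-and-conquer scheme described in this abstract is straightforward to prototype. Below is a minimal numpy sketch, assuming a Gaussian kernel, subgradient descent on the pinball loss with a Tikhonov penalty, and illustrative hyperparameters; none of these choices come from the paper itself.

```python
import numpy as np

def gauss_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row sets A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_quantile(X, y, tau=0.5, lam=1e-2, lr=0.1, steps=500):
    # Kernel quantile regression: f(x) = sum_i alpha_i k(x_i, x),
    # trained by subgradient descent on the pinball (check) loss
    # plus the Tikhonov penalty lam * alpha' K alpha.
    K = gauss_kernel(X, X)
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(steps):
        r = y - K @ alpha
        # subgradient of the pinball loss with respect to f
        g = np.where(r > 0, -tau, 1 - tau)
        alpha -= lr * (K @ g / n + 2 * lam * (K @ alpha))
    return alpha

def dc_quantile_predict(X, y, X_test, m=4, tau=0.5):
    # divide-and-conquer: fit on m disjoint parts, average the outputs
    parts = np.array_split(np.arange(len(y)), m)
    preds = []
    for idx in parts:
        a = fit_kernel_quantile(X[idx], y[idx], tau=tau)
        preds.append(gauss_kernel(X_test, X[idx]) @ a)
    return np.mean(preds, axis=0)
```

The averaging step is exactly the arithmetic mean of the individual local estimates; the local solver here is a deliberately simple stand-in for whatever regularized quantile solver one prefers.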
Reliable Estimation of KL Divergence using a Discriminator in Reproducing Kernel Hilbert Space
Estimating Kullback-Leibler (KL) divergence from samples of two distributions is essential in many machine learning problems. Variational methods using neural network discriminators have been proposed to achieve this task in a scalable manner. However, most of these methods suffer from high fluctuations (variance) in their estimates and instability in training. In this paper, we examine this issue from the perspectives of statistical learning theory and function-space complexity to understand why it happens and how to solve it. We argue that these pathologies are caused by a lack of control over the complexity of the discriminator function and can be mitigated by controlling it. To achieve this objective, we 1) present a novel construction of the discriminator in a Reproducing Kernel Hilbert Space (RKHS), 2) theoretically relate the error probability bound of the KL estimates to the complexity of the discriminator in the RKHS, 3) present a scalable way to control the complexity (RKHS norm) of the discriminator for reliable estimation of KL divergence, and 4) prove the consistency of the proposed estimator. In three different applications of KL divergence -- estimation of KL, estimation of mutual information, and Variational Bayes -- we show that by controlling the complexity as developed in the theory, we are able to reduce the variance of the KL estimates and stabilize training.
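A toy version of an RKHS discriminator for KL estimation can be written directly: the sketch below maximizes the standard Donsker-Varadhan lower bound KL(P||Q) >= E_P[f] - log E_Q[e^f] over functions f(x) = sum_i alpha_i k(z_i, x), with the RKHS norm controlled by a penalty lam * alpha' K alpha. The kernel, center selection, step size, and optimizer are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian kernel matrix for 1-D samples
    return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)

def kl_dv_rkhs(xp, xq, n_centers=50, lam=1e-3, lr=0.05, steps=500):
    # Donsker-Varadhan estimate of KL(P||Q) with an RKHS discriminator
    # f(x) = sum_i alpha_i k(z_i, x) over a small set of centers z,
    # penalized by lam * alpha' K alpha (a proxy for the RKHS norm).
    z = np.concatenate([xp[: n_centers // 2], xq[: n_centers // 2]])
    Kp, Kq, K = rbf(xp, z), rbf(xq, z), rbf(z, z)
    alpha = np.zeros(len(z))
    for _ in range(steps):
        fq = Kq @ alpha
        m = fq.max()
        w = np.exp(fq - m)
        w /= w.sum()                                  # softmax weights on Q
        grad = Kp.mean(0) - Kq.T @ w - 2 * lam * (K @ alpha)
        alpha += lr * grad                            # gradient ascent
    fp, fq = Kp @ alpha, Kq @ alpha
    m = fq.max()
    return fp.mean() - (np.log(np.mean(np.exp(fq - m))) + m)
```

The objective is concave in alpha, so plain gradient ascent with a small step suffices for this sketch; shrinking lam loosens the complexity control and, as the abstract argues, tends to increase the variance of the estimate.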
Learning Dynamical Systems via Koopman Operator Regression in Reproducing Kernel Hilbert Spaces
We study a class of dynamical systems modelled as stationary Markov chains that admit an invariant distribution via the corresponding transfer, or Koopman, operator. While data-driven algorithms to reconstruct such operators are well known, their relationship with statistical learning is largely unexplored. We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system. We consider the restriction of this operator to a reproducing kernel Hilbert space and introduce a notion of risk, from which different estimators naturally arise. We link the risk with the estimation of the spectral decomposition of the Koopman operator. These observations motivate a reduced-rank operator regression (RRR) estimator. We derive learning bounds for the proposed estimator that hold in both the i.i.d. and non-i.i.d. settings.
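To make the setting concrete, here is a minimal kernel ridge estimator of the Koopman operator from a single trajectory, a plain baseline rather than the paper's reduced-rank RRR estimator, assuming scalar states and a Gaussian kernel with illustrative hyperparameters.

```python
import numpy as np

def rbf(a, b, gamma=20.0):
    # Gaussian kernel matrix for 1-D (scalar) states
    return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

def fit_koopman(x, lam=1e-6):
    """Estimate the Koopman operator from a scalar trajectory x_0..x_T.

    Returns a one-step predictor (the operator applied to the identity
    observable, via kernel ridge regression of x_{t+1} on x_t) and the
    eigenvalues of the empirical Koopman matrix in the data basis,
    which approximate the operator's spectral decomposition."""
    X, Y = x[:-1], x[1:]
    K = rbf(X, X) + lam * np.eye(len(X))
    coef = np.linalg.solve(K, Y)           # kernel ridge weights
    M = np.linalg.solve(K, rbf(X, Y))      # Koopman matrix, data basis
    predict = lambda xn: rbf(xn, X) @ coef
    return predict, np.linalg.eigvals(M)
```

The eigenvalues of M are the standard kernel-EDMD approximation to the Koopman spectrum; the abstract's point is that framing this regression through a risk functional singles out better estimators, such as the reduced-rank one.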
Notes on Kernel Methods in Machine Learning
Pérez-Rosero, Diego Armando, Salazar-Dubois, Danna Valentina, Lugo-Rojas, Juan Camilo, Álvarez-Meza, Andrés Marino, Castellanos-Dominguez, Germán
These notes provide a self-contained introduction to kernel methods and their geometric foundations in machine learning. Starting from the construction of Hilbert spaces, we develop the theory of positive definite kernels, reproducing kernel Hilbert spaces (RKHS), and Hilbert-Schmidt operators, emphasizing their role in statistical estimation and representation of probability measures. Classical concepts such as covariance, regression, and information measures are revisited through the lens of Hilbert space geometry. We also introduce kernel density estimation, kernel embeddings of distributions, and the Maximum Mean Discrepancy (MMD). The exposition is designed to serve as a foundation for more advanced topics, including Gaussian processes, kernel Bayesian inference, and functional analytic approaches to modern machine learning.
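Of the quantities surveyed in these notes, the Maximum Mean Discrepancy is the easiest to compute from samples. The sketch below implements the standard unbiased U-statistic estimator of MMD^2 (Gretton et al.) with a Gaussian kernel; the bandwidth is an illustrative choice.

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    # Gaussian kernel matrix for 1-D samples
    return np.exp(-gamma * (A[:, None] - B[None, :]) ** 2)

def mmd2_unbiased(x, y, gamma=0.5):
    # Unbiased estimate of MMD^2(P, Q) = ||mu_P - mu_Q||_H^2, the squared
    # RKHS distance between the kernel mean embeddings of P and Q.
    # Within-sample terms exclude the diagonal (U-statistic), so the
    # estimate can be slightly negative when P = Q.
    Kxx, Kyy, Kxy = rbf(x, x, gamma), rbf(y, y, gamma), rbf(x, y, gamma)
    n, m = len(x), len(y)
    t_xx = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    t_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return t_xx + t_yy - 2 * Kxy.mean()
```

For two samples from the same distribution the estimate concentrates around zero, while a mean shift produces a clearly positive value, which is what makes MMD usable as a two-sample test statistic.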
Learning with Invariance via Linear Functionals on Reproducing Kernel Hilbert Space
Incorporating invariance information is important for many learning problems. To exploit invariances, most existing methods resort to approximations that either lead to expensive optimization problems such as semi-definite programming, or rely on separation oracles to retain tractability. Some methods further limit the space of functions and settle for non-convex models. In this paper, we propose a framework for learning in reproducing kernel Hilbert spaces (RKHS) using local invariances that explicitly characterize the behavior of the target function around data instances. These invariances are \emph{compactly} encoded as linear functionals whose value are penalized by some loss function. Based on a representer theorem that we establish, our formulation can be efficiently optimized via a convex program. For the representer theorem to hold, the linear functionals are required to be bounded in the RKHS, and we show that this is true for a variety of commonly used RKHS and invariances. Experiments on learning with unlabeled data and transform invariances show that the proposed method yields better or similar results compared with the state of the art.
STRIDE: Subset-Free Functional Decomposition for XAI in Tabular Settings
Most explainable AI (XAI) frameworks are limited in their expressiveness, summarizing complex feature effects as single scalar values ϕ_i. This approach answers "what" features are important but fails to reveal "how" they interact. Furthermore, methods that attempt to capture interactions, like those based on Shapley values, often face an exponential computational cost. We present STRIDE, a scalable framework that addresses both limitations by reframing explanation as a subset-enumeration-free, orthogonal "functional decomposition" in a Reproducing Kernel Hilbert Space (RKHS). In the tabular setups we study, STRIDE analytically computes functional components f_S(x_S) via a recursive kernel-centering procedure. The approach is model-agnostic and theoretically grounded, with results on orthogonality and L^2 convergence. On tabular benchmarks (10 datasets, median over 10 seeds), STRIDE attains a 3.0 times median speedup over TreeSHAP and a mean R^2 = 0.93 for reconstruction. We also introduce "component surgery", a diagnostic that isolates a learned interaction and quantifies its contribution; on California Housing, removing a single interaction shifts test R^2 from 0.019 to 0.027.
Kernel VICReg for Self-Supervised Learning in Reproducing Kernel Hilbert Space
Sepanj, M. Hadi, Ghojogh, Benyamin, Fieguth, Paul
Self-supervised learning (SSL) has emerged as a powerful paradigm for representation learning by optimizing geometric objectives -- such as invariance to augmentations, variance preservation, and feature decorrelation -- without requiring labels. However, most existing methods operate in Euclidean space, limiting their ability to capture nonlinear dependencies and geometric structures. In this work, we propose Kernel VICReg, a novel self-supervised learning framework that lifts the VICReg objective into a Reproducing Kernel Hilbert Space (RKHS). By kernelizing each term of the loss -- variance, invariance, and covariance -- we obtain a general formulation that operates on double-centered kernel matrices and Hilbert-Schmidt norms, enabling nonlinear feature learning without explicit mappings. We demonstrate that Kernel VICReg not only avoids representational collapse but also improves performance on tasks with complex or small-scale data. Empirical evaluations across MNIST, CIFAR-10, STL-10, TinyImageNet, and ImageNet100 show consistent gains over Euclidean VICReg, with particularly strong improvements on datasets where nonlinear structures are prominent. UMAP visualizations further confirm that kernel-based embeddings exhibit better isometry and class separation. Our results suggest that kernelizing SSL objectives is a promising direction for bridging classical kernel methods with modern representation learning.
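The two building blocks this abstract mentions, double-centered kernel matrices and Hilbert-Schmidt norms, are standard and easy to compute. The sketch below shows both: centering a Gram matrix in the RKHS, and the biased HSIC-style estimate of the squared HS norm of the cross-covariance between two views. This is illustrative machinery, not the full Kernel VICReg loss.

```python
import numpy as np

def center(K):
    # Double-centering: H K H with H = I - (1/n) 11', i.e. the Gram
    # matrix of the feature vectors after subtracting their mean
    # element in the RKHS.
    n = len(K)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def hs_norm_cross_cov(K1, K2):
    # Squared Hilbert-Schmidt norm of the empirical cross-covariance
    # operator between two views (the biased HSIC estimator):
    # ||C_12||_HS^2 ~= trace(HK1H HK2H) / (n - 1)^2
    n = len(K1)
    return np.trace(center(K1) @ center(K2)) / (n - 1) ** 2
```

A kernelized covariance or invariance term can then be expressed through these quantities on the Gram matrices of the two augmented batches, without ever forming explicit feature maps.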
Supplementary Material: Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces
A Estimability results
In A.1, we show a consequence of Def. 1 that is used in the proofs. We can apply Theorem ?? to get C. We show the relation between our condition in Def. 1, Pukelsheim's condition, and estimability; this definition is sometimes used as a restatement of the estimability property. Definition 4 (Projected data). Lemma 2. The assumption in Definition 4 implies the assumption in Definition 1. This section includes proofs for the concentration results presented in the main text. Z is as in Def. 2. The term above is the so-called self-normalized noise, which can be handled by the techniques of de la Peña et al. (2009), popularized by Abbasi-Yadkori et al. (2011). From here on, the proof is generic.