Learning with Operator-valued Kernels in Reproducing Kernel Krein Spaces
Operator-valued kernels have shown promise in supervised learning problems with functional inputs and functional outputs. The crucial (and possibly restrictive) assumption of positive definiteness of operator-valued kernels has been instrumental in developing efficient algorithms. In this work, we consider operator-valued kernels that are not necessarily positive definite. To tackle the indefiniteness of operator-valued kernels, we harness the machinery of Reproducing Kernel Krein Spaces (RKKS) of function-valued functions. A representer theorem is established, which yields a suitable loss stabilization problem for supervised learning with function-valued inputs and outputs. An analysis of the generalization properties of the proposed framework is given. An iterative Operator-based Minimum Residual (OpMINRES) algorithm is proposed for solving the loss stabilization problem. Experiments with indefinite operator-valued kernels on synthetic and real data sets demonstrate the utility of the proposed approach.
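To make the indefiniteness concrete, here is a minimal scalar-valued toy sketch (not the paper's operator-valued setting or its OpMINRES algorithm): it builds the Gram matrix of a classic indefinite kernel, the negative distance kernel, checks that the matrix has eigenvalues of both signs, and solves a regularized linear system with MINRES, which, unlike conjugate gradients, handles symmetric indefinite matrices. The kernel choice, the regularization, and all variable names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, size=40))

# Negative distance kernel: a classic indefinite kernel.  Its Gram matrix
# has zero trace but nonzero off-diagonal entries, so it necessarily has
# both positive and negative eigenvalues -- it induces a Krein space
# (RKKS) rather than a Hilbert space (RKHS).
K = -np.abs(x[:, None] - x[None, :])

eig = np.linalg.eigvalsh(K)
print(eig.min() < 0 < eig.max())  # True: the Gram matrix is indefinite

# Toy analogue of solving the stabilization problem: a regularized
# symmetric-indefinite linear system solved with MINRES (the scalar
# counterpart of the role OpMINRES plays for operator systems).
y = np.sin(3 * x)          # synthetic targets
lam = 1e-3                 # small regularization shift (assumed)
c, info = minres(K + lam * np.eye(len(x)), y)
pred = K @ c               # fitted values at the training inputs
```

MINRES is the natural Krylov solver here precisely because positive definiteness cannot be assumed; `info == 0` signals convergence.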
Review for NeurIPS paper: Learning with Operator-valued Kernels in Reproducing Kernel Krein Spaces
Summary and Contributions:

Post-rebuttal comments: Thank you for the comments. I am happy with the response and would recommend including the paragraph (stabilization vs. ERM) from the rebuttal in the final version of the paper. It might be interesting as an open problem for future work.

Operator-valued kernels provide a theoretical framework for modelling learning problems that map functions to functions. A potential shortcoming of this framework is the requirement that kernels be positive definite.