Kernel-Target Alignment


Quantum Adversarial Learning for Kernel Methods

Montalbano, Giuseppe, Banchi, Leonardo

arXiv.org Artificial Intelligence

We show that hybrid quantum classifiers based on quantum kernel methods and support vector machines are vulnerable to adversarial attacks: small engineered perturbations of the input data can deceive the classifier into predicting the wrong result. Nonetheless, we also show that simple defence strategies based on data augmentation with a few crafted perturbations can make the classifier robust against new attacks. Our results find applications in security-critical learning problems and in mitigating the effect of some forms of quantum noise, since the attacker can also be understood as part of the surrounding environment.
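
The attack-and-defence loop the abstract describes can be illustrated classically. Below is a minimal Python sketch, assuming an RBF-kernel SVM as a stand-in for the quantum kernel and a finite-difference, FGSM-style perturbation; the dataset, step size, and attack are illustrative choices, not the authors' construction.

```python
# Sketch: adversarial perturbations against a kernel SVM, then a
# data-augmentation defence. The classical RBF kernel stands in for a
# quantum kernel; the attack is an FGSM-like finite-difference step,
# not the authors' exact method.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
clf = SVC(kernel="rbf", gamma=2.0).fit(X, y)

def attack(model, x, label, eps=0.3, h=1e-4):
    """Move x by eps along the sign of the numerical gradient of the
    decision function, in the direction that contradicts the true label."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h
        f_plus = model.decision_function((x + dx).reshape(1, -1))[0]
        f_minus = model.decision_function((x - dx).reshape(1, -1))[0]
        grad[i] = (f_plus - f_minus) / (2 * h)
    direction = 1.0 if label == 0 else -1.0  # push f(x) toward the wrong class
    return x + eps * direction * np.sign(grad)

X_adv = np.array([attack(clf, x, t) for x, t in zip(X, y)])
print("accuracy on clean data: ", clf.score(X, y))
print("accuracy under attack:  ", clf.score(X_adv, y))

# Defence from the abstract: augment the training set with a few crafted
# perturbations and retrain.
X_aug = np.vstack([X, X_adv[:40]])
y_aug = np.concatenate([y, y[:40]])
robust = SVC(kernel="rbf", gamma=2.0).fit(X_aug, y_aug)
X_adv2 = np.array([attack(robust, x, t) for x, t in zip(X, y)])
print("robust model, new attack:", robust.score(X_adv2, y))
```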


A Framework for an Assessment of the Kernel-target Alignment in Tree Ensemble Kernel Learning

Feng, Dai, Baumgartner, Richard

arXiv.org Machine Learning

Kernels ensuing from tree ensembles such as random forests (RF) or gradient boosted trees (GBT), when used for kernel learning, have been shown to be competitive with their respective tree ensembles (particularly in higher-dimensional scenarios). On the other hand, it has also been shown that the performance of kernel algorithms depends on the degree of kernel-target alignment. However, kernel-target alignment for kernel learning based on tree ensembles has not been investigated, and filling this gap is the main goal of our work. Using the eigenanalysis of the kernel matrix, we demonstrate that for continuous targets good performance of tree-based kernel learning is associated with strong kernel-target alignment. Moreover, we show that well-performing tree-ensemble-based kernels are characterized by strong target-aligned components that are expressed through scalar products between the eigenvectors of the kernel matrix and the target. This suggests that when tree-ensemble-based kernel learning is successful, relevant information for the supervised problem is concentrated near a lower-dimensional manifold spanned by the target-aligned components. Persistence of the strong target-aligned components in tree-ensemble-based kernels is further supported by sensitivity analysis via landmark learning. In addition to a comprehensive simulation study, we also provide experimental results from several real-life data sets that are in line with the simulations.
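
As a rough illustration of the eigenanalysis described above, the sketch below builds the common random-forest proximity kernel (the fraction of trees in which two samples share a leaf) and measures how much of the target lies on the leading eigencomponents; the dataset and the proximity construction are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: tree-ensemble kernel and its target-aligned eigencomponents.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import RandomForestRegressor

X, y = make_friedman1(n_samples=300, n_features=10, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Proximity kernel: K[i, j] = fraction of trees in which samples i and j
# fall into the same leaf.
leaves = rf.apply(X)  # (n_samples, n_trees) array of leaf indices
K = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)

# Eigenanalysis: squared scalar products between eigenvectors of K and the
# (centred, unit-norm) target -- the "target-aligned components".
evals, evecs = np.linalg.eigh(K)  # eigenvalues in ascending order
yc = y - y.mean()
yc /= np.linalg.norm(yc)
align = (evecs.T @ yc) ** 2       # sums to 1 over all components

# Strong kernel-target alignment shows up as most of the target's energy
# sitting on a few leading (largest-eigenvalue) components.
top = np.argsort(evals)[::-1][:10]
print("target energy on top-10 components:", align[top].sum())
```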


Training Quantum Embedding Kernels on Near-Term Quantum Computers

Hubregtsen, Thomas, Wierichs, David, Gil-Fuster, Elies, Derks, Peter-Jan H. S., Faehrmann, Paul K., Meyer, Johannes Jakob

arXiv.org Artificial Intelligence

Kernel methods are a cornerstone of classical machine learning. The idea of using quantum computers to compute kernels has recently attracted attention. Quantum embedding kernels (QEKs), constructed by embedding data into the Hilbert space of a quantum computer, are a particular quantum kernel technique that allows one to gather insights into learning problems and is particularly suitable for noisy intermediate-scale quantum devices. In this work, we first provide an accessible introduction to quantum embedding kernels and then analyze the practical issues arising when realizing them on a noisy near-term quantum computer. We focus on quantum embedding kernels with variational parameters. These variational parameters are optimized for a given dataset by increasing the kernel-target alignment, a heuristic connected to the achievable classification accuracy. We further show under which conditions noise from device imperfections influences the predicted kernel, and provide a strategy, tailored to quantum embedding kernels, to mitigate these detrimental effects. We also address the influence of finite sampling and derive bounds that provide guarantees on the quality of the kernel matrix. We illustrate our findings by numerical experiments and tests on actual hardware.
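
A toy version of this kernel-target-alignment training loop can be simulated classically. The sketch below embeds data with a hypothetical one-parameter, two-qubit feature map, computes the overlap kernel |<phi(x)|phi(x')>|^2, and selects the variational parameter by maximizing alignment; the circuit and the grid search stand in for the paper's ansatz and optimizer.

```python
# Toy quantum embedding kernel: a two-qubit feature map simulated as a
# statevector, with one variational parameter chosen by maximizing
# kernel-target alignment. Illustrative only -- not the paper's ansatz
# or a hardware implementation.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def ry(t):
    """Single-qubit Y rotation."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def embed(x, theta):
    """|phi(x)> = CNOT (RY(theta*x0) tensor RY(theta*x1)) |00>."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0
    return CNOT @ (np.kron(ry(theta * x[0]), ry(theta * x[1])) @ state)

def kernel_matrix(X, theta):
    states = np.array([embed(x, theta) for x in X])
    return np.abs(states @ states.conj().T) ** 2  # |<phi(x)|phi(x')>|^2

def target_alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F), labels in {-1,+1}."""
    yyT = np.outer(y, y)
    return np.sum(K * yyT) / (np.linalg.norm(K) * np.linalg.norm(yyT))

rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, size=(40, 2))
y = np.where(X[:, 0] > X[:, 1], 1.0, -1.0)  # toy labels

# "Train" the embedding: keep the theta with the best alignment.
thetas = np.linspace(0.1, 3.0, 30)
scores = [target_alignment(kernel_matrix(X, t), y) for t in thetas]
print(f"best theta = {thetas[int(np.argmax(scores))]:.2f}, "
      f"alignment = {max(scores):.3f}")
```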


On Kernel-Target Alignment

Cristianini, Nello, Shawe-Taylor, John, Elisseeff, André, Kandola, Jaz S.

Neural Information Processing Systems

We introduce the notion of kernel-alignment, a measure of similarity between two kernel functions or between a kernel and a target function. This quantity captures the degree of agreement between a kernel and a given learning task, and has very natural interpretations in machine learning, leading also to simple algorithms for model selection and learning. We analyse its theoretical properties, proving that it is sharply concentrated around its expected value, and we discuss its relation with other standard measures of performance. Finally we describe some of the algorithms that can be obtained within this framework, giving experimental results showing that adapting the kernel to improve alignment on the labelled data significantly increases the alignment on the test set, giving improved classification accuracy. Hence, the approach provides a principled method of performing transduction.
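
Concretely, the alignment between two kernel matrices is A(K1, K2) = <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F), and the kernel-target alignment takes K2 = y y^T for labels y in {-1, +1}. Below is a minimal sketch of the model-selection use mentioned in the abstract; the RBF kernel and toy data are illustrative, not from the paper.

```python
# Kernel-target alignment used for model selection: pick the kernel
# hyperparameter whose Gram matrix best aligns with y y^T.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

def alignment(K1, K2):
    """A(K1, K2) = <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F)."""
    return np.sum(K1 * K2) / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

X, y01 = make_classification(n_samples=200, n_features=5, random_state=0)
y = 2.0 * y01 - 1.0          # map labels {0, 1} -> {-1, +1}
yyT = np.outer(y, y)         # the target kernel

for gamma in [0.01, 0.1, 1.0, 10.0]:
    K = rbf_kernel(X, gamma=gamma)
    print(f"gamma={gamma:<5}: alignment={alignment(K, yyT):.3f}")
```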

