Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions
Kernel methods augmented with random features give scalable algorithms for learning from big data. However, it has been computationally hard to sample random features according to a probability distribution that is optimized for the data, so as to minimize the number of features required to achieve learning to a desired accuracy. Here, we develop a quantum algorithm for sampling from this optimized distribution over features, in runtime O(D) that is linear in the dimension D of the input data. Our algorithm achieves an exponential speedup in D compared to any known classical algorithm for this sampling task. In contrast to existing quantum machine learning algorithms, our algorithm circumvents sparsity and low-rank assumptions and thus has wide applicability. We also show that the sampled features can be combined with regression by stochastic gradient descent to achieve the learning without canceling out our exponential speedup. Our algorithm based on sampling optimized random features leads to an accelerated framework for machine learning that takes advantage of quantum computers.
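As a classical point of reference for the pipeline this abstract describes, below is a minimal sketch of regression with random Fourier features fitted by stochastic gradient descent. All names, sizes, and learning rates are illustrative assumptions, not the paper's construction; in particular, the frequencies here are drawn from the standard Gaussian spectral density, which is exactly the non-optimized sampling step that the paper's quantum algorithm replaces with sampling from the data-optimized distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: n points in D dimensions, scalar targets.
D, n, M = 5, 200, 100                # input dimension, sample count, feature count
X = rng.normal(size=(n, D))
y = np.sin(X.sum(axis=1))

# Random Fourier features for the Gaussian (RBF) kernel: frequencies drawn
# from the kernel's spectral density (a standard Gaussian).  This is the
# *non-optimized* baseline; the paper's quantum algorithm instead samples
# frequencies from a data-optimized distribution, provably reducing the
# number M of features needed for a given accuracy.
W = rng.normal(size=(M, D))                 # random frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=M)   # random phases
phi = lambda x: np.sqrt(2.0 / M) * np.cos(x @ W.T + b)

# Linear regression on the feature map, fitted by plain SGD on squared loss.
theta = np.zeros(M)
lr = 0.1
for epoch in range(50):
    for i in rng.permutation(n):
        f = phi(X[i:i + 1])[0]
        theta -= lr * (f @ theta - y[i]) * f

train_mse = np.mean((phi(X) @ theta - y) ** 2)
```

With optimized features, a comparable accuracy would be reachable with a much smaller M; here M is simply chosen large enough for the plain Gaussian baseline to fit the toy data.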
Review for NeurIPS paper: Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions
Weaknesses: I think the main weakness of this paper is that the writing is too dense to parse, while the content in the main body is not sufficient to examine the correctness of Theorems 1 and 2. To be more specific, Theorems 1 and 2 are both technical results, especially Theorem 1. From my understanding of Section 3.2 and the relevant parts of the appendices, the authors use quantum RAM together with the quantum singular value transformation (QSVT) algorithm to achieve the speedup for sampling the features via the quantum Fourier transform. I understand that NeurIPS submissions have a page limit, but I think all the main steps should at least be highlighted. In particular, I feel that discussion is needed of:
- What quantum RAM do we need for the task?
- Why can we use the QSVT without the sparsity or low-rank assumption?
On the other hand, I am not totally sure how Section 2.3 (discretized representation of real numbers) and Section 2.4 (assumptions on data in the discretized representation) help with the overall story; I can foresee their usage, but the paper could probably shrink this space and expand on the proofs of Theorems 1 and 2. These are the two main technical results, yet they are only given on the last page (page 8) without a comprehensive discussion. A practical solution is to fill in more details between Lines 251-272.
Review for NeurIPS paper: Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions
The paper gives a quantum ML algorithm for kernel methods with random Fourier features whose runtime is linear in the input dimension, without requiring sparsity or low-rank assumptions. This is a much-improved resubmission from a different venue, and it avoids relying on quantum RAM. That it requires only standard input access and makes no assumptions on the data is very nice. Some of the reviewers were initially not convinced by the classical techniques (random Fourier features), but after discussion they ultimately agreed that this is a significant contribution to quantum ML. However, a few clarity issues will need to be addressed in the final version.
Exponential Error Convergence in Data Classification with Optimized Random Features: Acceleration by Quantum Machine Learning
Random features are a central technique for scalable learning algorithms based on kernel methods. Recent work has shown that an algorithm for machine learning by quantum computer, quantum machine learning (QML), can exponentially speed up the sampling of optimized random features, without imposing the restrictive sparsity and low-rankness assumptions on matrices that have limited the applicability of conventional QML algorithms; this QML algorithm makes it possible to significantly reduce, and provably minimize, the number of features required for regression tasks. However, a major interest in the field of QML is how widely the advantages of quantum computation can be exploited, not only in regression tasks. We here construct a QML algorithm for a classification task accelerated by optimized random features. We prove that the QML algorithm for sampling optimized random features, combined with stochastic gradient descent (SGD), can achieve the state-of-the-art exponential speed of convergence in reducing classification error under a low-noise condition; at the same time, our algorithm with optimized random features can exploit the significant reduction in the required number of features to accelerate each SGD iteration and the evaluation of the resulting classifier. These results reveal a promising application of QML to a significant acceleration of the leading kernel-based classification algorithm, without sacrificing its applicability to a practical class of data sets or its exponential error-convergence speed.
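To make the classical setting of this abstract concrete, here is a minimal sketch of classification with random Fourier features trained by hinge-loss SGD. All names, sizes, and learning rates are illustrative assumptions, not the paper's construction; the frequencies are plain Gaussian samples, whereas the paper's point is that QML-sampled *optimized* features would attain the same error with far fewer features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary classification with labels in {-1, +1}.
D, n, M = 4, 300, 80                 # input dimension, sample count, feature count
X = rng.normal(size=(n, D))
y = np.sign(X[:, 0] + 0.3 * X[:, 1])

# Random Fourier features with Gaussian-sampled frequencies (the plain,
# non-optimized baseline for the RBF kernel).
W = rng.normal(size=(M, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=M)
phi = lambda x: np.sqrt(2.0 / M) * np.cos(x @ W.T + b)

# Hinge-loss SGD on the feature map: update only when the margin is violated.
theta = np.zeros(M)
lr = 0.5
for epoch in range(30):
    for i in rng.permutation(n):
        f = phi(X[i:i + 1])[0]
        if y[i] * (f @ theta) < 1.0:
            theta += lr * y[i] * f

train_acc = np.mean(np.sign(phi(X) @ theta) == y)
```

The low-noise (margin) condition in the abstract corresponds, roughly, to the labels here being a deterministic function of the inputs; it is under such conditions that the paper proves exponential convergence of the classification error.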
Fast Quantum Algorithm for Learning with Optimized Random Features
Hayata Yamasaki, Sathyawageeswar Subramanian, Sho Sonoda, Masato Koashi