Review for NeurIPS paper: Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions
–Neural Information Processing Systems
Weaknesses: I think the main weakness of this paper is that the writing is too dense to parse, while the main body does not contain enough content to examine the correctness of Theorems 1 and 2. To be more specific, Theorems 1 and 2 are both highly technical results, especially Theorem 1. From my understanding of Section 3.2 and the relevant parts of the appendices, the authors use quantum RAM together with the quantum singular value transformation (QSVT) algorithm to achieve the speedup for sampling from the optimized features via the quantum Fourier transform. I understand that NeurIPS submissions have page limitations, but all the key steps should at least be highlighted. In particular, I feel that discussion is needed of:

- What quantum RAM do we need for the task?
- Why can we use the QSVT without the sparsity or low-rank assumption?

On the other hand, I am not entirely sure how Section 2.3 (discretized representation of real numbers) and Section 2.4 (assumption on data in discretized representation) help with the overall story. I can foresee their usage, but the paper could probably shrink this space and elaborate more on the proofs of Theorems 1 and 2. These are the two main technical results, but they are only given on the last page (Page 8) without a comprehensive discussion. A practical solution is to fill in more details between Lines 251-272.
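For context on the task being sped up: the classical baseline is sampling random Fourier features to approximate a kernel. The sketch below is an illustrative classical analogue only, not the paper's quantum algorithm; it draws features from the Gaussian kernel's spectral measure, whereas the paper samples from an optimized, data-dependent feature distribution (the step accelerated via QSVT and the quantum Fourier transform). The function name and parameters here are my own assumptions for illustration.

```python
import numpy as np

def random_fourier_features(X, n_features, sigma=1.0, seed=None):
    """Classical random Fourier features approximating a Gaussian kernel.

    Illustrative baseline only: frequencies are drawn i.i.d. from the
    kernel's spectral measure (a Gaussian here), not from the optimized
    distribution that the paper's quantum algorithm samples from.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies from the Gaussian kernel's Fourier (spectral) measure.
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    # Random phase offsets, uniform on [0, 2*pi).
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # Feature map z(x) with E[z(x) . z(y)] ~= k(x, y).
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Usage: feature inner products approximate the kernel matrix.
X = np.random.default_rng(0).normal(size=(5, 3))
Z = random_fourier_features(X, n_features=2000, sigma=1.0, seed=1)
K_approx = Z @ Z.T  # approximates the 5x5 Gaussian kernel matrix
```

The quantum speedup claimed in Theorems 1 and 2 concerns replacing the i.i.d. spectral sampling above with sampling from an optimized distribution, which is classically expensive without sparsity or low-rank structure.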
Jan-27-2025, 01:18:53 GMT