Prospects for quantum advantage in machine learning from the representability of functions

Masot-Llima, Sergi, Gil-Fuster, Elies, Bravo-Prieto, Carlos, Eisert, Jens, Guaita, Tommaso

arXiv.org Machine Learning

Quantum machine learning (QML) is recognized as a promising approach to harness quantum computing for learning tasks [1-3]. As with all quantum algorithms, a central question is whether QML holds potential for quantum advantage [4-7] over classical computing. The counter-narrative to quantum advantage is dequantization, where upon close inspection certain quantum algorithms yield no benefit over classical counterparts, as one can classically solve the task at hand. Dequantization of quantum algorithms for machine learning, in particular, has seen a surge of interest in recent years, leaving few claims of quantum advantage unchallenged [8-12]. While QML models for classical data can be studied from several perspectives, significant theoretical developments have emerged from investigating the function families that parametrized quantum circuits (PQCs) can give rise to [8, 10, 13-16]. Characterizing the functional forms arising from PQCs allows us to delineate the boundaries of quantum learning and guide the search for advantage.
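The functional forms arising from PQCs can be made concrete with a toy example. A known characterization of data-encoding circuits with Pauli-rotation encodings is that the model output is a truncated Fourier (trigonometric) series in the input. The sketch below uses a hypothetical single-qubit circuit (not one from the paper) and plain NumPy to verify this numerically: the circuit's expectation value is reproduced exactly by a degree-1 trigonometric polynomial fitted through only three points.

```python
import numpy as np

# Hypothetical single-qubit PQC with one data-encoding gate:
# U(x) = RY(b) RZ(x) RY(a). The model f(x) = <0| U(x)^† Z U(x) |0>
# is then a degree-1 trigonometric polynomial in x.
def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

Z = np.diag([1.0, -1.0])

def model(x, a, b):
    psi = ry(b) @ rz(x) @ ry(a) @ np.array([1.0, 0.0])
    return float(np.real(psi.conj() @ Z @ psi))

# Fit f(x) = c0 + c1*cos(x) + c2*sin(x) through three sample points,
# then check the fit reproduces the circuit at an unseen input.
a, b = 0.7, -1.3
xs = np.array([0.0, 1.0, 2.0])
A = np.stack([np.ones(3), np.cos(xs), np.sin(xs)], axis=1)
c = np.linalg.solve(A, np.array([model(x, a, b) for x in xs]))
x_test = 4.2
approx = c @ np.array([1.0, np.cos(x_test), np.sin(x_test)])
```

Because the function family is this small, three evaluations determine the model everywhere, which is precisely the kind of structural fact that dequantization arguments exploit.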


Adaptive Dataset Quantization: A New Direction for Dataset Pruning

Yu, Chenyue, Yu, Jianyu

arXiv.org Artificial Intelligence

This paper addresses the storage and communication costs of large-scale datasets on resource-constrained edge devices by proposing a novel dataset quantization approach that reduces intra-sample redundancy. Unlike traditional dataset pruning and distillation methods, which target inter-sample redundancy, the proposed method compresses each image by reducing redundant or less informative content within samples while preserving essential features. It first applies linear symmetric quantization to obtain an initial quantization range and scale for each sample. Then, an adaptive quantization allocation algorithm assigns different quantization ratios to samples with varying precision requirements while maintaining a constant total compression ratio. The main contributions include: (1) being the first to use limited bits to represent datasets for storage reduction; (2) introducing a dataset-level quantization algorithm with adaptive ratio allocation; and (3) validating the method's effectiveness through extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K. Results show that the method maintains model training performance while achieving significant dataset compression, outperforming traditional quantization and dataset pruning baselines under the same compression ratios.
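The first step the abstract describes, linear symmetric quantization of an individual sample, can be sketched as follows. This is a generic NumPy illustration of the standard technique under assumed parameters (the function names, the 4-bit width, and the random "image" are ours, not the paper's):

```python
import numpy as np

def symmetric_quantize(x, bits):
    """Linearly quantize x to signed integers over a symmetric range."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax  # maps [-max|x|, max|x|] onto [-qmax, qmax]
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original values."""
    return q.astype(np.float32) * scale

# Hypothetical sample: an 8x8 "image" with values in [-1, 1].
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(8, 8)).astype(np.float32)
q, s = symmetric_quantize(x, bits=4)
x_hat = dequantize(q, s)
```

With a per-sample scale like this, the reconstruction error is bounded by half a quantization step; the paper's adaptive allocation then varies `bits` across samples subject to a fixed total budget.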


20c86a628232a67e7bd46f76fba7ce12-AuthorFeedback.pdf

Neural Information Processing Systems

We thank the reviewers for the valuable feedback. We address the questions below and will revise our paper accordingly. On CIFAR-10, MaCow is 7.3 times slower than Glow, but much faster than Emerging Convolution and MAF, whose slowdown factors are 360 and 600, respectively. We see that the generation time increases linearly with the image resolution. Convolutional Flow [Hoogeboom et al., 2019] is basically a linear transformation with masked convolutional kernels; Emerging Convolution [Hoogeboom et al., 2019] obtained a 0.02 bits/dim improvement over MaCow, which adopts additive coupling layers.


Neural Bayesian Filtering

Solinas, Christopher, Haluska, Radovan, Sychrovsky, David, Timbers, Finbarr, Bard, Nolan, Buro, Michael, Schmid, Martin, Sturtevant, Nathan R., Bowling, Michael

arXiv.org Machine Learning

As an example, consider the problem of tracking an autonomous robot with an unknown starting position in a d × d grid (Figure 1). Suppose the agent's policy is known, and an observer sees that the agent moved a step without colliding with a wall. This information indicates how the observer should update their beliefs about the agent's position. Tracking these belief states can be challenging when they are either continuous or too large to enumerate (Solinas et al., 2023)--even when the agent's policy and the environment dynamics are known. A common approach frames belief state modeling as a Bayesian filtering problem in which a posterior is maintained and updated with each new observation. Classical Bayesian filters, such as the Kalman filter (Kalman, 1960) and its nonlinear variants (e.g., the Extended and Unscented Kalman filters (Sorenson, 1985; Julier & Uhlmann, 2004)), assume that the underlying distributions are unimodal and approximately Gaussian. While computationally efficient, this limits their applicability in settings that do not satisfy these assumptions.
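The grid-tracking example admits an exact discrete Bayes filter when d is small. The sketch below (our illustration, with an assumed policy of "always move right" and the convention that stepping into the right wall counts as a collision) shows one correction-plus-prediction update of the belief after observing a collision-free move:

```python
import numpy as np

d = 5
# Unknown starting position: uniform prior over the d x d grid.
belief = np.full((d, d), 1.0 / (d * d))

def update_on_rightward_move(belief):
    """One Bayes filter step for the observation 'moved right, no collision'.

    Correction: cells in the last column would have collided, so the
    likelihood of the observation is 0 there and 1 elsewhere.
    Prediction: the surviving mass shifts one cell to the right.
    """
    likelihood = np.ones_like(belief)
    likelihood[:, -1] = 0.0
    posterior = belief * likelihood
    posterior /= posterior.sum()          # renormalize after correction
    shifted = np.zeros_like(posterior)
    shifted[:, 1:] = posterior[:, :-1]    # deterministic rightward motion
    return shifted

belief = update_on_rightward_move(belief)
```

The full posterior here is just a d × d array, which is exactly what becomes intractable as the state space grows continuous or exponentially large, motivating learned filters.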