dequantization
Prospects for quantum advantage in machine learning from the representability of functions
Sergi Masot-Llima, Elies Gil-Fuster, Carlos Bravo-Prieto, Jens Eisert, Tommaso Guaita
Quantum machine learning (QML) is recognized as a promising approach to harnessing quantum computing for learning tasks [1-3]. As with all quantum algorithms, a central question is whether QML holds potential for quantum advantage [4-7] over classical computing. The counter-narrative to quantum advantage is dequantization: upon close inspection, certain quantum algorithms turn out to offer no benefit over classical counterparts, because the task at hand can be solved classically. Dequantization of quantum algorithms for machine learning, in particular, has seen a surge of interest in recent years, leaving few claims of quantum advantage unchallenged [8-12]. While QML models for classical data can be studied from several perspectives, significant theoretical developments have emerged from investigating the function families that parametrized quantum circuits (PQCs) can give rise to [8, 10, 13-16]. Characterizing the functional forms arising from PQCs allows us to delineate the boundaries of quantum learning and guide the search for advantage.
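Characterizing PQC function families has a concrete flavor: for standard Pauli-rotation data encoding, the functions realizable by a data-reuploading circuit are truncated Fourier series in the input, with the number of encoding layers bounding the accessible frequencies. The following minimal single-qubit NumPy sketch illustrates this general fact; the circuit structure and parameter names are illustrative and not taken from the paper:

```python
import numpy as np

def rz(theta):
    # Single-qubit Z-rotation gate RZ(theta)
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def ry(theta):
    # Single-qubit Y-rotation gate RY(theta)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def pqc_output(x, params):
    """f(x) = <0| U(x)^dag Z U(x) |0> for a data-reuploading circuit
    U(x) = RY(w_L) RZ(x) ... RY(w_1) RZ(x) RY(w_0), with L = len(params) - 1
    encoding layers. f is then a Fourier series with frequencies in {-L,...,L}.
    """
    psi = np.array([1.0, 0.0], dtype=complex)   # start in |0>
    psi = ry(params[0]) @ psi
    for theta in params[1:]:
        psi = rz(x) @ psi                       # data-encoding layer
        psi = ry(theta) @ psi                   # trainable layer
    Z = np.diag([1.0, -1.0])
    return np.real(np.conj(psi) @ (Z @ psi))    # Pauli-Z expectation value

# Check the spectral claim: sample f on a uniform grid and inspect its DFT.
params = [0.3, 1.1, -0.7, 0.5]                  # L = 3 encoding layers
xs = 2 * np.pi * np.arange(16) / 16
vals = np.array([pqc_output(x, params) for x in xs])
spectrum = np.fft.fft(vals) / 16
# Frequencies with |omega| > L carry (numerically) zero weight.
print(np.max(np.abs(spectrum[4:13])))
```

Dequantization arguments of the kind referenced above often start from exactly this observation: when the accessible frequency spectrum is small, the same function class can be fit by a classical (e.g. kernel or Fourier-feature) model.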
Adaptive Dataset Quantization: A New Direction for Dataset Pruning
This paper addresses the storage and communication costs of large-scale datasets on resource-constrained edge devices by proposing a novel dataset quantization approach that reduces intra-sample redundancy. Unlike traditional dataset pruning and distillation methods, which target inter-sample redundancy, the proposed method compresses each image by reducing redundant or less informative content within samples while preserving essential features. It first applies linear symmetric quantization to obtain an initial quantization range and scale for each sample. An adaptive quantization allocation algorithm then assigns different quantization ratios to samples with varying precision requirements, while keeping the total compression ratio constant. The main contributions are: (1) being the first to represent datasets with limited bits for storage reduction; (2) introducing a dataset-level quantization algorithm with adaptive ratio allocation; and (3) validating the method's effectiveness through extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K. Results show that the method maintains model training performance while achieving significant dataset compression, outperforming traditional quantization and dataset pruning baselines at the same compression ratios.
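The two steps described above can be sketched in NumPy. The per-sample linear symmetric quantization follows the standard textbook form; the variance-based bit allocation rule below is a hypothetical stand-in for the paper's adaptive allocation algorithm, whose details are not specified in the abstract:

```python
import numpy as np

def symmetric_quantize(x, num_bits):
    """Linear symmetric quantization of one sample (e.g., an image array).

    The scale is derived from the sample's max absolute value, so the
    representable range is symmetric around zero.
    """
    qmax = 2 ** (num_bits - 1) - 1                      # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(x))) / qmax, 1e-12) # per-sample scale
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    # Reconstruct an approximation of the original sample.
    return q.astype(np.float32) * scale

def allocate_bits(samples, avg_bits=4, min_bits=2, max_bits=8):
    """Hypothetical adaptive allocation: give higher-variance samples more
    bits while keeping the average bit budget (and thus the total
    compression ratio) roughly fixed.
    """
    variances = np.array([np.var(s) for s in samples])
    ranks = np.argsort(np.argsort(variances))           # 0 = lowest variance
    bits = min_bits + ranks * (max_bits - min_bits) / max(len(samples) - 1, 1)
    bits = bits - bits.mean() + avg_bits                # recenter on the budget
    return np.clip(np.round(bits), min_bits, max_bits).astype(int)
```

A round trip through `symmetric_quantize` and `dequantize` bounds the per-pixel error by half the scale, which is the property that lets a fixed bit budget trade reconstruction error against compression ratio.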