Dequantization



MaCow: Masked Convolutional Generative Flow

Xuezhe Ma, Xiang Kong, Shanghang Zhang, Eduard Hovy

Neural Information Processing Systems

Unsupervised learning of probabilistic models is a central yet challenging problem. Deep generative models have shown promising results in modeling complex distributions such as natural images (Radford et al., 2015), audio (Van Den Oord et al., 2016), and text (Bowman et al., 2015).






Prospects for quantum advantage in machine learning from the representability of functions

Masot-Llima, Sergi, Gil-Fuster, Elies, Bravo-Prieto, Carlos, Eisert, Jens, Guaita, Tommaso

arXiv.org Machine Learning

Quantum machine learning (QML) is recognized as a promising approach to harness quantum computing for learning tasks [1-3]. As with all quantum algorithms, a central question is whether QML holds potential for quantum advantage [4-7] over classical computing. The counter-narrative to quantum advantage is dequantization, where upon close inspection certain quantum algorithms yield no benefit over classical counterparts, as one can classically solve the task at hand. Dequantization of quantum algorithms for machine learning, in particular, has seen a surge of interest in recent years, leaving few claims of quantum advantage unchallenged [8-12]. While QML models for classical data can be studied from several perspectives, significant theoretical developments have emerged from investigating the function families that parametrized quantum circuits (PQCs) can give rise to [8, 10, 13-16]. Characterizing the functional forms arising from PQCs allows us to delineate the boundaries of quantum learning and guide the search for advantage.
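The functional forms arising from PQCs mentioned above can be made concrete: for a circuit that encodes a scalar input x via a Pauli rotation, the measured expectation value is a truncated Fourier series in x. The sketch below (not from the paper; circuit structure and parameter values are illustrative assumptions) builds a minimal one-qubit PQC and verifies numerically that its output is exactly a degree-1 Fourier series.

```python
import numpy as np

# Single-qubit gates: identity, Pauli-Z, and rotation gates
Z = np.diag([1.0, -1.0])

def rx(t):
    """Rotation about the X axis by angle t."""
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def rz(t):
    """Rotation about the Z axis by angle t."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def pqc_output(x, theta1=0.7, theta2=1.3):
    """<Z> of a minimal PQC: trainable Rx, data-encoding Rz(x), trainable Rx.
    theta1, theta2 are arbitrary illustrative parameter values."""
    psi = rx(theta2) @ rz(x) @ rx(theta1) @ np.array([1.0, 0.0])
    return np.real(psi.conj() @ Z @ psi)

# Fit f(x) = a0 + a1*cos(x) + b1*sin(x); a near-zero residual confirms
# that the PQC output lies in this degree-1 Fourier family.
xs = np.linspace(0, 2 * np.pi, 200, endpoint=False)
fx = np.array([pqc_output(x) for x in xs])
A = np.column_stack([np.ones_like(xs), np.cos(xs), np.sin(xs)])
coef, *_ = np.linalg.lstsq(A, fx, rcond=None)
max_residual = np.max(np.abs(A @ coef - fx))
```

Because the data-encoding generator (Z/2) has eigenvalue gap 1, only the frequencies {-1, 0, 1} can appear; richer encodings enlarge the accessible frequency spectrum, which is one lens through which such characterizations delineate quantum learning models.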


Adaptive Dataset Quantization: A New Direction for Dataset Pruning

Yu, Chenyue, Yu, Jianyu

arXiv.org Artificial Intelligence

This paper addresses the challenges of storage and communication costs for large-scale datasets in resource-constrained edge devices by proposing a novel dataset quantization approach to reduce intra-sample redundancy. Unlike traditional dataset pruning and distillation methods that focus on inter-sample redundancy, the proposed method compresses each image by reducing redundant or less informative content within samples while preserving essential features. It first applies linear symmetric quantization to obtain an initial quantization range and scale for each sample. Then, an adaptive quantization allocation algorithm is introduced to distribute different quantization ratios for samples with varying precision requirements, maintaining a constant total compression ratio. The main contributions include: (1) being the first to use limited bits to represent datasets for storage reduction; (2) introducing a dataset-level quantization algorithm with adaptive ratio allocation; and (3) validating the method's effectiveness through extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K. Results show that the method maintains model training performance while achieving significant dataset compression, outperforming traditional quantization and dataset pruning baselines under the same compression ratios.
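The two stages described in the abstract can be illustrated with a short sketch. The code below is not the paper's implementation: the range/scale initialization and the variance-based bit allocation are hypothetical stand-ins, shown only to make the idea of per-sample linear symmetric quantization with a fixed average bit budget concrete.

```python
import numpy as np

def symmetric_quantize(sample, n_bits):
    """Linear symmetric quantization of one sample to signed n_bits integers.
    The scale is set from the sample's max magnitude (an assumed initialization)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.max(np.abs(sample)) / qmax          # symmetric range [-max|x|, max|x|]
    q = np.clip(np.round(sample / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct an approximate float sample from its quantized form."""
    return q.astype(np.float32) * scale

def allocate_bits(samples, avg_bits=4):
    """Hypothetical adaptive allocation: higher-variance samples get an extra
    bit, lower-variance samples give one up, keeping the total budget fixed."""
    stds = np.array([s.std() for s in samples])
    order = np.argsort(-stds)                      # indices from high to low variance
    bits = np.full(len(samples), avg_bits)
    half = len(samples) // 2
    bits[order[:half]] += 1
    bits[order[half:]] -= 1
    return bits

# Toy "dataset": four small images with different dynamic ranges
rng = np.random.default_rng(0)
imgs = [rng.normal(0, s, (8, 8)).astype(np.float32) for s in (0.1, 1.0, 0.5, 2.0)]
bits = allocate_bits(imgs)                         # average stays at 4 bits/sample
recon = [dequantize(*symmetric_quantize(x, b)) for x, b in zip(imgs, bits)]
```

With symmetric quantization the per-element reconstruction error is bounded by half a quantization step (scale / 2), so giving high-variance samples more bits trades precision where it matters while the overall compression ratio stays constant, mirroring the allocation goal stated in the abstract.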