DynaQuant: Compressing Deep Learning Training Checkpoints via Dynamic Quantization
Agrawal, Amey, Reddy, Sameer, Bhattamishra, Satwik, Nookala, Venkata Prabhakara Sarath, Vashishth, Vidushi, Rong, Kexin, Tumanov, Alexey
With the increase in the scale of Deep Learning (DL) training workloads in terms of compute resources and time consumption, the likelihood of encountering in-training failures rises substantially, leading to lost work and resource wastage. Such failures are typically mitigated by a checkpointing mechanism, which comes at the cost of storage and network bandwidth overhead. State-of-the-art approaches involve lossy model compression mechanisms, which induce a tradeoff between the resulting model quality (accuracy) and compression ratio. Delta compression is then used to further reduce the overhead by only storing the difference between consecutive checkpoints. We make a key enabling observation that the sensitivity of model weights to compression varies during training, and that different weights benefit from different quantization levels (ranging from retaining full precision to pruning). We propose (1) a non-uniform quantization scheme that leverages this variation, (2) an efficient search mechanism that dynamically finds the best quantization configurations, and (3) a quantization-aware delta compression mechanism that rearranges weights to minimize checkpoint differences, thereby maximizing compression. We instantiate these contributions in DynaQuant, a framework for DL workload checkpoint compression. Our experiments show that DynaQuant consistently achieves a better tradeoff between accuracy and compression ratio compared to prior works, enabling a compression ratio of up to 39x and withstanding up to 10 restores with negligible accuracy impact for fault-tolerant training. DynaQuant achieves at least an order of magnitude reduction in checkpoint storage overhead for training failure recovery as well as transfer learning use cases without any loss of accuracy.
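The core idea of contribution (1), assigning different bit widths to weights according to their sensitivity, can be sketched as follows. This is a minimal illustrative sketch, not DynaQuant's actual scheme: the sensitivity criterion (a median split), the bit widths, and all function names are our own assumptions.

```python
import numpy as np

def quantize(w, bits):
    # Uniform quantizer over the tensor's value range at the given bit width.
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((w - lo) / scale).astype(np.int32)
    return codes, lo, scale

def mixed_precision_quantize(w, sensitivity, bits_low=4, bits_high=8):
    # Sensitive weights keep more bits; the rest are quantized coarsely.
    # A median split on 'sensitivity' stands in for the paper's dynamic
    # search over quantization configurations (assumption, not their method).
    mask = sensitivity > np.median(sensitivity)
    out = np.empty_like(w, dtype=np.float64)
    for bits, sel in ((bits_high, mask), (bits_low, ~mask)):
        codes, lo, scale = quantize(w[sel], bits)
        out[sel] = codes * scale + lo  # dequantized reconstruction
    return out
```

Delta compression would then operate on the integer codes of consecutive checkpoints, where small weight drift yields mostly-zero differences that compress well.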
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
Deep Generative Models for Distribution-Preserving Lossy Compression
Tschannen, Michael, Agustsson, Eirikur, Lucic, Mario
We propose and study the problem of distribution-preserving lossy compression. Motivated by recent advances in extreme image compression, which make it possible to maintain artifact-free reconstructions even at very low bitrates, we propose to optimize the rate-distortion tradeoff under the constraint that the reconstructed samples follow the distribution of the training data. The resulting compression system recovers both ends of the spectrum: at zero bitrate it learns a generative model of the data, while at high enough bitrates it achieves perfect reconstruction. We study several methods to approximately solve the proposed optimization problem, including a novel combination of Wasserstein GAN and Wasserstein Autoencoder, and present an extensive theoretical and empirical characterization of the proposed compression systems. Papers published at the Neural Information Processing Systems Conference.
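The constrained formulation described above can be summarized schematically; the notation below is our own shorthand for an encoder $E$, decoder $D$, and distortion measure $d$, not taken from the paper.

```latex
% Rate-distortion under a distribution-matching constraint,
% with reconstruction \hat{X} = D(E(X)):
\min_{E,\,D}\ \mathbb{E}\big[d(X, \hat{X})\big]
\quad \text{s.t.} \quad p_{\hat{X}} = p_X,
% relaxed in practice by penalizing a divergence,
% e.g. a Wasserstein distance, between the two distributions:
\min_{E,\,D}\ \mathbb{E}\big[d(X, \hat{X})\big]
  + \lambda\, W\big(p_X,\, p_{\hat{X}}\big).
```

At $\lambda \to \infty$ (or zero rate) the objective reduces to distribution matching, i.e. a generative model; with enough rate the distortion term alone drives reconstructions toward the inputs, matching the two ends of the spectrum noted in the abstract.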