Robustly overfitting latents for flexible neural image compression
Neural Information Processing Systems
Neural image compression has made a great deal of progress. State-of-the-art models are based on variational autoencoders and outperform classical codecs. Neural compression models learn to encode an image into a quantized latent representation that can be transmitted efficiently to the decoder, which reconstructs the image from the quantized latent. While these models have proven successful in practice, they yield sub-optimal results due to imperfect optimization and limited encoder and decoder capacity. Recent work shows how stochastic Gumbel annealing (SGA) can be used to refine the latents of pre-trained neural image compression models.
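The SGA refinement mentioned above replaces hard rounding of the latents with a stochastic relaxation that stays differentiable, so the latents can be tuned by gradient descent and then annealed toward integers. A minimal NumPy sketch of one such Gumbel-softmax rounding step follows; the specific logits and annealing schedule are illustrative assumptions and may differ from the cited work:

```python
import numpy as np

def sga_round(y, tau, rng):
    """Stochastically round latents y toward floor/ceil via a Gumbel-softmax.

    At high temperature tau the output is a soft mix of the two neighbouring
    integers (differentiable in y); as tau is annealed toward 0 the samples
    harden, approaching ordinary stochastic rounding.
    """
    lo = np.floor(y)
    hi = lo + 1.0
    # Fractional part, kept away from {0, 1} so arctanh stays finite.
    r = np.clip(y - lo, 1e-6, 1.0 - 1e-6)
    # Two-way logits favouring the nearer integer (arctanh sharpens the
    # preference as y approaches an integer).
    logits = np.stack([-np.arctanh(r), -np.arctanh(1.0 - r)], axis=-1)
    # Gumbel-softmax relaxation of the binary rounding choice.
    g = rng.gumbel(size=logits.shape)
    z = (logits + g) / tau
    z -= z.max(axis=-1, keepdims=True)  # numerically stable softmax
    w = np.exp(z)
    w /= w.sum(axis=-1, keepdims=True)
    return w[..., 0] * lo + w[..., 1] * hi

rng = np.random.default_rng(0)
y = np.array([0.2, 1.7, -0.4])
for tau in (1.0, 0.1, 0.01):
    print(tau, sga_round(y, tau, rng))
```

In latent refinement, a step like this sits inside a loop that minimizes a rate-distortion loss over the latents of a frozen encoder/decoder, with `tau` decayed over iterations.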
Jun-2-2025, 14:12:33 GMT
- Country:
- Europe (0.14)
- North America > United States (0.14)
- Genre:
- Research Report > Experimental Study (0.93)