Turbo-DDCM: Fast and Flexible Zero-Shot Diffusion-Based Image Compression
Vaisman, Amit | Ohayon, Guy | Manor, Hila | Elad, Michael | Michaeli, Tomer
While zero-shot diffusion-based compression methods have seen significant progress in recent years, they remain notoriously slow and computationally demanding. This paper presents an efficient zero-shot diffusion-based compression method that runs substantially faster than existing methods while maintaining performance on par with state-of-the-art techniques. Our method builds upon the recently proposed Denoising Diffusion Codebook Models (DDCM) compression scheme. Specifically, DDCM compresses an image by sequentially choosing the diffusion noise vectors from reproducible random codebooks, guiding the denoiser's output to reconstruct the target image. We modify this framework with Turbo-DDCM, which efficiently combines a large number of noise vectors at each denoising step, thereby significantly reducing the number of required denoising operations. This modification is also coupled with an improved encoding protocol. Furthermore, we introduce two flexible variants of Turbo-DDCM: a priority-aware variant that prioritizes user-specified regions, and a distortion-controlled variant that compresses an image to a target PSNR rather than a target BPP. Comprehensive experiments position Turbo-DDCM as a compelling, practical, and flexible image compression scheme.
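The codebook-selection idea in the abstract can be pictured with a small matching-pursuit-style sketch. This is a simplification, not the paper's method: the real scheme folds the selected noise vectors into the diffusion update and transmits only their indices, whereas here least-squares coefficients are used so the toy approximation is well defined. All names and sizes below are made up for illustration.

```python
import numpy as np

def turbo_step(residual, codebook, k=8):
    """Greedily pick k codebook noise vectors that jointly approximate
    the residual -- a toy stand-in for combining many noise vectors in
    a single denoising step."""
    indices = []
    approx = np.zeros_like(residual)
    for _ in range(k):
        r = residual - approx
        # score each codebook entry by normalized correlation with what
        # is still unexplained
        scores = np.abs(codebook @ r) / np.linalg.norm(codebook, axis=1)
        i = int(np.argmax(scores))
        v = codebook[i]
        # least-squares coefficient (illustration only; the actual
        # scheme stores indices, not coefficients)
        approx = approx + (v @ r) / (v @ v) * v
        indices.append(i)
    return indices, approx

# Reproducible random codebook: the decoder can regenerate it from the
# shared seed, so only the selected indices need to be transmitted.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 64))
residual = rng.standard_normal(64)

idx, approx = turbo_step(residual, codebook, k=8)
```

Because the codebook is derived from a shared seed, the bitstream consists only of the chosen indices, which is what makes the scheme a compression method rather than a generic sparse approximation.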
Quantifying Conflicts for Spatial and Temporal Information
Condotta, Jean-François (Centre National de la Recherche Scientifique (CNRS) and Université d'Artois) | Raddaoui, Badran (University of Poitiers) | Salhi, Yakoub (Centre National de la Recherche Scientifique (CNRS) and Université d'Artois)
This paper tackles the problem of evaluating the degree of inconsistency in qualitative spatial and temporal reasoning. We first introduce postulates that a formal framework for measuring inconsistency in this context should satisfy. Then, we provide two inconsistency measures that can be useful in various AI applications. The first is based on the number of constraints that must be relaxed to obtain a consistent qualitative constraint network. The second is based on variable restrictions: it is defined as the minimum number of variables that must be ignored to recover consistency. We show that our proposed measures satisfy the required postulates along with other desirable properties. Finally, we discuss the impact of our inconsistency measures on belief merging in qualitative reasoning.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Constraint-Based Reasoning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Qualitative Reasoning (0.86)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Belief Revision (0.86)
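The first measure above (minimum number of constraints to relax) can be sketched on the simple point algebra with relations {<, =, >}. This is a brute-force toy, not the paper's formulation; the function and variable names are illustrative. For point networks, only the relative order of variables matters, so consistency can be checked by trying integer assignments with ties allowed.

```python
from itertools import combinations, product

REL = {'<': lambda a, b: a < b,
       '=': lambda a, b: a == b,
       '>': lambda a, b: a > b}

def consistent(constraints, variables):
    """Point-algebra consistency: a network is consistent iff some
    assignment of integers 0..n-1 (ties allowed) satisfies it."""
    n = len(variables)
    idx = {v: i for i, v in enumerate(variables)}
    for vals in product(range(n), repeat=n):
        if all(REL[r](vals[idx[x]], vals[idx[y]]) for x, r, y in constraints):
            return True
    return False

def inconsistency_degree(constraints, variables):
    """Minimum number of constraints to drop so that the remaining
    network is consistent (0 means already consistent)."""
    for k in range(len(constraints) + 1):
        for kept in combinations(constraints, len(constraints) - k):
            if consistent(list(kept), variables):
                return k
    return len(constraints)

# x < y, y < z, z < x is a 3-cycle: dropping any single constraint
# restores consistency, so the degree is 1.
net = [('x', '<', 'y'), ('y', '<', 'z'), ('z', '<', 'x')]
print(inconsistency_degree(net, ['x', 'y', 'z']))  # 1
```

The variable-based measure from the abstract would be analogous: instead of dropping constraints, one searches for the smallest set of variables whose removal (together with all constraints mentioning them) leaves a consistent network.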