Prompt-based Consistent Video Colorization

Dani, Silvia, Uricchio, Tiberio, Seidenari, Lorenzo

arXiv.org Artificial Intelligence

Existing video colorization methods struggle with temporal flickering or demand extensive manual input. We propose a novel approach automating high-fidelity video colorization using rich semantic guidance derived from language and segmentation. We employ a language-conditioned diffusion model to colorize grayscale frames. Guidance is provided via automatically generated object masks and textual prompts; our primary automatic method uses a generic prompt, achieving state-of-the-art results without specific color input. Temporal stability is achieved by warping color information from previous frames using optical flow (RAFT); a correction step detects and fixes inconsistencies introduced by warping. Evaluations on standard benchmarks (DAVIS30, VIDEVO20) show our method achieves state-of-the-art performance in colorization accuracy (PSNR) and visual realism (Colorfulness, CDC), demonstrating the efficacy of automated prompt-based guidance for consistent video colorization.
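The temporal-stability step described in the abstract (warping chroma from the previous frame along optical flow, then correcting pixels where the warp fails) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses nearest-neighbour warping with a precomputed dense flow field (the paper uses RAFT), and the function names, the luminance-disagreement test, and the `tol` threshold are all assumptions.

```python
import numpy as np

def warp_colors(prev_ab, flow):
    """Warp the previous frame's chroma (a, b) channels along a dense flow
    field. prev_ab: (H, W, 2) chroma; flow: (H, W, 2) per-pixel (dx, dy)
    displacements from the previous frame to the current one."""
    H, W, _ = prev_ab.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Backward warp: each current pixel pulls color from its flow source.
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, W - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, H - 1)
    return prev_ab[src_y, src_x]

def correct_inconsistencies(warped_ab, fresh_ab, gray_prev_warped,
                            gray_curr, tol=10.0):
    """Where the warped luminance disagrees with the current grayscale frame
    (occlusion or flow error), fall back to the freshly colorized chroma."""
    bad = np.abs(gray_prev_warped - gray_curr) > tol
    out = warped_ab.copy()
    out[bad] = fresh_ab[bad]
    return out
```

In practice the fallback chroma (`fresh_ab`) would come from the language-conditioned diffusion colorizer run on the current frame, so corrections stay semantically consistent with the prompt.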


Uncolorable Examples: Preventing Unauthorized AI Colorization via Perception-Aware Chroma-Restrictive Perturbation

Nii, Yuki, Waseda, Futa, Chang, Ching-Chun, Echizen, Isao

arXiv.org Artificial Intelligence

AI-based colorization has shown remarkable capability in generating realistic color images from grayscale inputs. However, it poses risks of copyright infringement -- for example, the unauthorized colorization and resale of monochrome manga and films. Despite these concerns, no effective method currently exists to prevent such misuse. To address this, we introduce the first defensive paradigm, Uncolorable Examples, which embed imperceptible perturbations into grayscale images to invalidate unauthorized colorization. To ensure real-world applicability, we establish four criteria: effectiveness, imperceptibility, transferability, and robustness. Our method, Perception-Aware Chroma-Restrictive Perturbation (PAChroma), generates Uncolorable Examples that meet these four criteria by optimizing imperceptible perturbations with a Laplacian filter to preserve perceptual quality, and applying diverse input transformations during optimization to enhance transferability across models and robustness against common post-processing (e.g., compression). Experiments on ImageNet and Danbooru datasets demonstrate that PAChroma effectively degrades colorization quality while maintaining the visual appearance. This work marks the first step toward protecting visual content from illegitimate AI colorization, paving the way for copyright-aware defenses in generative media.
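The imperceptibility mechanism the abstract mentions (a Laplacian filter used to keep perturbations perceptually hidden) can be illustrated with a toy sketch. This is not PAChroma itself: the actual method optimizes the perturbation against colorization models, while the code below only shows the masking idea, namely weighting a perturbation by local Laplacian magnitude so it concentrates in textured regions where it is hard to see. All function names and the `budget` clipping are illustrative assumptions.

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbour Laplacian with zero padding at the borders."""
    out = -4.0 * img
    out[1:, :] += img[:-1, :]
    out[:-1, :] += img[1:, :]
    out[:, 1:] += img[:, :-1]
    out[:, :-1] += img[:, 1:]
    return out

def perceptual_mask(gray, eps=1e-6):
    """Edge-weighted mask in [0, 1]: large where texture hides changes."""
    mag = np.abs(laplacian(gray))
    return mag / (mag.max() + eps)

def apply_perturbation(gray, delta, budget=8.0):
    """Scale a raw perturbation by the mask and clip to a pixel budget."""
    pert = np.clip(delta * perceptual_mask(gray), -budget, budget)
    return np.clip(gray + pert, 0.0, 255.0)
```

In the full method, `delta` would be the quantity being optimized to maximally degrade a colorizer's output, with random input transformations applied during optimization to gain transferability and robustness to post-processing such as compression.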




Supplementary Material: L-CAD: Language-based Colorization with Any-level Descriptions using Diffusion Priors

Neural Information Processing Systems

To demonstrate the effectiveness of our proposed luminance-guided image compression, semantic-aligned latent representation, and instance-aware sampling strategy (details in Sec. ), we present additional qualitative results. We demonstrate our generalization capability by showing more colorization results on legacy black-and-white photos in Figure 1, where results are presented sequentially from left to right using descriptions at the complete, partial, and scarce levels.