AudioToken: Adaptation of Text-Conditioned Diffusion Models for Audio-to-Image Generation

Guy Yariv, Itai Gat, Lior Wolf, Yossi Adi, Idan Schwartz

arXiv.org Artificial Intelligence 

In recent years, image generation has shown a great leap in performance, in which diffusion models play a central role. Although such models generate high-quality images, they are mainly conditioned on textual descriptions. This begs the question: how can we adapt such models to be conditioned on other modalities? In this paper, we propose a novel method that utilizes latent diffusion models trained for text-to-image generation to generate images conditioned on audio recordings. Using a pre-trained audio encoding model, the proposed method encodes audio into a new token, which can be considered an adaptation layer between the audio and text representations. Such a modeling paradigm requires a small number of trainable parameters, making the proposed approach appealing for lightweight optimization. Results suggest the proposed method is superior to the evaluated baseline methods, considering objective and subjective metrics. Code and samples are available at: https://pages.cs.huji.ac.il/adiyoss-lab/AudioToken

Figure 1: Generated images (right) and input spectrograms (left) from the proposed method.
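To make the adaptation-layer idea concrete, here is a minimal sketch of the core mechanism the abstract describes: a small trainable projection that maps a frozen audio encoder's output into the word-embedding space of a frozen text-conditioned diffusion model, yielding a single "audio token". The dimensions, MLP shape, and mean-pooling are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class AudioTokenEmbedder(nn.Module):
    """Trainable adapter from audio-encoder features to a text-space token.

    Only this module is optimized; the pre-trained audio encoder and the
    text-to-image diffusion model remain frozen, which keeps the number of
    trainable parameters small, as the abstract notes.
    """

    def __init__(self, audio_dim: int = 768, text_embed_dim: int = 768):
        super().__init__()
        # Hypothetical two-layer MLP projection; the paper's exact embedder
        # architecture may differ.
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, audio_dim),
            nn.GELU(),
            nn.Linear(audio_dim, text_embed_dim),
        )

    def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
        # audio_features: (batch, frames, audio_dim) from the frozen encoder.
        pooled = audio_features.mean(dim=1)  # simple temporal pooling (assumed)
        return self.proj(pooled)             # (batch, text_embed_dim)


# Usage sketch: the resulting audio token would be spliced into the prompt's
# token embeddings (e.g., in place of a placeholder token) before running the
# frozen denoiser of the latent diffusion model.
embedder = AudioTokenEmbedder()
audio_features = torch.randn(1, 50, 768)  # stand-in for encoder output
audio_token = embedder(audio_features)    # shape: (1, 768)
```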
