Zhong, Zhi
Cross-Modal Learning for Music-to-Music-Video Description Generation
Mao, Zhuoyuan, Zhao, Mengjie, Wu, Qiyu, Zhong, Zhi, Liao, Wei-Hsiang, Wakaki, Hiromi, Mitsufuji, Yuki
Music-to-music-video generation is a challenging task due to the intrinsic differences between the music and video modalities. The advent of powerful text-to-video diffusion models has opened a promising pathway for music-video (MV) generation by first addressing the music-to-MV description task and subsequently leveraging these models for video generation. In this study, we focus on the MV description generation task and propose a comprehensive pipeline encompassing training data construction and multimodal model fine-tuning. We fine-tune existing pre-trained multimodal models on our newly constructed music-to-MV description dataset based on the Music4All dataset, which integrates both musical and visual information. Our experimental results demonstrate that music representations can be effectively mapped to the textual domain, enabling the generation of meaningful MV descriptions directly from music inputs. We also identify key components in the dataset construction pipeline that critically impact the quality of MV descriptions and highlight specific musical attributes that warrant greater focus for improved MV description generation.
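As an illustration of how music representations might be mapped into a text decoder's input space for MV-description generation, here is a minimal sketch; the adapter module, dimensions, and pooling scheme are hypothetical and not the paper's implementation.

```python
import torch
import torch.nn as nn

class MusicToTextAdapter(nn.Module):
    """Projects frame-level music embeddings into the token-embedding
    space of a text decoder (illustrative dimensions only)."""

    def __init__(self, music_dim=512, text_dim=768, num_prefix_tokens=16):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(music_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )
        self.num_prefix_tokens = num_prefix_tokens

    def forward(self, music_feats):            # (B, T, music_dim)
        # Pool the music features into a fixed number of "prefix" tokens.
        pooled = nn.functional.adaptive_avg_pool1d(
            music_feats.transpose(1, 2), self.num_prefix_tokens
        ).transpose(1, 2)                      # (B, P, music_dim)
        return self.proj(pooled)               # (B, P, text_dim)

# Usage: the projected prefix would be prepended to the caption embeddings and
# the decoder fine-tuned with a standard next-token cross-entropy loss.
adapter = MusicToTextAdapter()
music_feats = torch.randn(2, 1000, 512)        # dummy music-encoder output
prefix = adapter(music_feats)                  # (2, 16, 768)
```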
OpenMU: Your Swiss Army Knife for Music Understanding
Zhao, Mengjie, Zhong, Zhi, Mao, Zhuoyuan, Yang, Shiqi, Liao, Wei-Hsiang, Takahashi, Shusuke, Wakaki, Hiromi, Mitsufuji, Yuki
We present OpenMU-Bench, a large-scale benchmark suite for addressing the data scarcity issue in training multimodal language models to understand music. To construct OpenMU-Bench, we leveraged existing datasets and bootstrapped new annotations. OpenMU-Bench also broadens the scope of music understanding by including lyrics understanding and music tool usage. Using OpenMU-Bench, we trained our music understanding model, OpenMU, with extensive ablations, demonstrating that OpenMU outperforms baseline models such as MU-Llama. Both OpenMU and OpenMU-Bench are open-sourced to facilitate future research in music understanding and to enhance creative music production efficiency.
Music Foundation Model as Generic Booster for Music Downstream Tasks
Liao, Wei-Hsiang, Takida, Yuhta, Ikemiya, Yukara, Zhong, Zhi, Lai, Chieh-Hsin, Fabbro, Giorgio, Shimada, Kazuki, Toyama, Keisuke, Cheuk, Kinwai, Martínez-Ramírez, Marco A., Takahashi, Shusuke, Uhlich, Stefan, Akama, Taketo, Choi, Woosung, Koyama, Yuichiro, Mitsufuji, Yuki
We demonstrate the efficacy of using intermediate representations from a single foundation model to enhance various music downstream tasks. We introduce SoniDo, a music foundation model (MFM) designed to extract hierarchical features from target music samples. By leveraging hierarchical intermediate features, SoniDo constrains the information granularity, leading to improved performance across various downstream tasks including both understanding and generative tasks. We specifically evaluated this approach on representative tasks such as music tagging, music transcription, music source separation, and music mixing. Our results reveal that the features extracted from foundation models provide valuable enhancements in training downstream task models. This highlights the capability of using features extracted from music foundation models as a booster for downstream tasks. Our approach not only benefits existing task-specific models but also supports music downstream tasks constrained by data scarcity. This paves the way for more effective and accessible music processing solutions.
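A minimal sketch of the general idea of tapping hierarchical intermediate features from a frozen pre-trained backbone and feeding them to a lightweight downstream head; the toy convolutional backbone and layer choices stand in for SoniDo and are purely illustrative.

```python
import torch
import torch.nn as nn

# Generic sketch (not SoniDo itself): tap intermediate activations of a frozen
# pre-trained backbone with forward hooks and feed them to a downstream head.
backbone = nn.Sequential(              # stand-in for a frozen music foundation model
    nn.Conv1d(1, 32, 9, stride=4), nn.ReLU(),
    nn.Conv1d(32, 64, 9, stride=4), nn.ReLU(),
    nn.Conv1d(64, 128, 9, stride=4), nn.ReLU(),
).eval()

features = {}
def tap(name):
    def hook(_module, _inp, out):
        features[name] = out.detach()
    return hook

for i in (1, 3, 5):                    # tap after each ReLU (hierarchical levels)
    backbone[i].register_forward_hook(tap(f"level_{i}"))

with torch.no_grad():
    backbone(torch.randn(4, 1, 16000))  # dummy 1-second mono audio batch

# Pool each level over time and concatenate into one feature vector per clip,
# which a light task-specific head (e.g., a tagging classifier) can consume.
pooled = [f.mean(dim=-1) for f in features.values()]
clip_feats = torch.cat(pooled, dim=1)        # (4, 32 + 64 + 128)
tagger = nn.Linear(clip_feats.shape[1], 50)  # e.g., 50 music tags
logits = tagger(clip_feats)
```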
VRVQ: Variable Bitrate Residual Vector Quantization for Audio Compression
Chae, Yunkee, Choi, Woosung, Takida, Yuhta, Koo, Junghyun, Ikemiya, Yukara, Zhong, Zhi, Cheuk, Kin Wai, Martínez-Ramírez, Marco A., Lee, Kyogu, Liao, Wei-Hsiang, Mitsufuji, Yuki
Recent state-of-the-art neural audio compression models have progressively adopted residual vector quantization (RVQ). Despite this success, these models employ a fixed number of codebooks per frame, which can be suboptimal in terms of the rate-distortion tradeoff, particularly for simple input audio such as silence. To address this limitation, we propose variable bitrate RVQ (VRVQ) for audio codecs, which allows for more efficient coding by adapting the number of codebooks used per frame. Furthermore, we propose a gradient estimation method for the non-differentiable masking operation that transforms the importance map into a binary importance mask, improving model training via a straight-through estimator. We demonstrate that the proposed training framework achieves superior results compared to the baseline method and shows further improvement when applied to the current state-of-the-art codec.
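The straight-through masking idea can be sketched as follows, assuming a per-frame importance map in [0, 1] and a simple monotone scheme for deciding how many RVQ codebooks to keep; the thresholding and scaling are illustrative, not the paper's exact formulation.

```python
import torch

def ste_codebook_mask(importance, num_codebooks):
    """Straight-through mask from a per-frame importance map (illustrative).

    importance: (B, T) values in [0, 1] predicted by the encoder.
    Returns a (B, num_codebooks, T) binary mask selecting how many RVQ
    stages are kept for each frame; gradients flow through the soft scores.
    """
    # Soft "keep" score for each codebook level: level k is kept when the
    # scaled importance exceeds k (a simple monotone scheme for this sketch).
    levels = torch.arange(num_codebooks, dtype=importance.dtype).view(1, -1, 1)
    soft = torch.sigmoid(10.0 * (importance.unsqueeze(1) * num_codebooks - levels))
    hard = (soft > 0.5).to(soft.dtype)
    # Straight-through: forward uses the hard mask, backward uses the soft one.
    return hard + soft - soft.detach()

importance = torch.rand(2, 100, requires_grad=True)   # dummy importance map
mask = ste_codebook_mask(importance, num_codebooks=8) # (2, 8, 100)
# The mask would gate the residual quantizer outputs, so simple frames can use
# fewer codebooks; mask.mean() approximates the effective bitrate.
mask.sum().backward()                                  # gradients reach `importance`
```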
SoundCTM: Uniting Score-based and Consistency Models for Text-to-Sound Generation
Saito, Koichi, Kim, Dongjun, Shibuya, Takashi, Lai, Chieh-Hsin, Zhong, Zhi, Takida, Yuhta, Mitsufuji, Yuki
Sound content is an indispensable element for multimedia works such as video games, music, and films. Recent high-quality diffusion-based sound generation models can serve as valuable tools for creators. However, despite producing high-quality sounds, these models often suffer from slow inference speeds. This drawback burdens creators, who typically refine their sounds through trial and error to align them with their artistic intentions. To address this issue, we introduce Sound Consistency Trajectory Models (SoundCTM). Our model enables flexible transitioning between high-quality 1-step sound generation and superior sound quality through multi-step generation. This allows creators to initially control sounds with 1-step samples before refining them through multi-step generation. While CTM fundamentally achieves flexible 1-step and multi-step generation, its impressive performance heavily depends on an additional pretrained feature extractor and an adversarial loss, which are expensive to train and not always available in other domains. Thus, we reframe CTM's training framework and introduce a novel feature distance, utilizing the teacher network for the distillation loss. Additionally, while distilling classifier-free guided trajectories, we train conditional and unconditional student models simultaneously and interpolate between them during inference. We also propose training-free controllable frameworks for SoundCTM, leveraging its flexible sampling capability. SoundCTM achieves both promising 1-step and multi-step real-time sound generation without using any extra off-the-shelf networks. Furthermore, we demonstrate SoundCTM's capability of controllable sound generation in a training-free manner. Our code, pretrained models, and audio samples are available at https://github.com/sony/soundctm.
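The inference-time interpolation between the conditional and unconditional students can be sketched in the spirit of classifier-free guidance; the tiny stand-in networks and weighting below are illustrative, and the actual implementation is in the linked repository.

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the distilled conditional / unconditional students.
class TinyStudent(nn.Module):
    def __init__(self, dim=64, cond_dim=32):
        super().__init__()
        self.net = nn.Linear(dim + cond_dim, dim)

    def forward(self, x_t, cond):
        return self.net(torch.cat([x_t, cond], dim=-1))

cond_student = TinyStudent()
uncond_student = TinyStudent()

def guided_sample(x_t, text_cond, null_cond, weight=3.0):
    """Classifier-free-guidance-style interpolation of the two students'
    outputs at inference time (the weighting scheme here is a sketch)."""
    out_c = cond_student(x_t, text_cond)
    out_u = uncond_student(x_t, null_cond)
    return out_u + weight * (out_c - out_u)

x_t = torch.randn(1, 64)            # dummy latent at the current step
text_cond = torch.randn(1, 32)      # dummy text-conditioning embedding
null_cond = torch.zeros(1, 32)      # "empty prompt" embedding
x_next = guided_sample(x_t, text_cond, null_cond)
```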
Visual Echoes: A Simple Unified Transformer for Audio-Visual Generation
Yang, Shiqi, Zhong, Zhi, Zhao, Mengjie, Takahashi, Shusuke, Ishii, Masato, Shibuya, Takashi, Mitsufuji, Yuki
In recent years, with their realistic generation results and a wide range of personalized applications, diffusion-based generative models have gained substantial attention in both the visual and audio generation areas. Compared to the considerable advancements in text2image and text2audio generation, research on audio2visual and visual2audio generation has been relatively slow. Recent audio-visual generation methods usually resort to huge large language models or composable diffusion models. Instead of designing another giant model for audio-visual generation, in this paper we take a step back and show that a simple and lightweight generative transformer, which has not been fully investigated in multi-modal generation, can achieve excellent results on image2audio generation. The transformer operates in the discrete audio and visual Vector-Quantized GAN space and is trained in a mask-denoising manner. After training, classifier-free guidance can be deployed off the shelf, achieving better performance without any extra training or modification. Since the transformer model is modality-symmetric, it can also be directly deployed for audio2image generation and co-generation. In our experiments, we show that our simple method surpasses recent image2audio generation methods. Generated audio samples can be found at this link.
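A minimal sketch of one mask-denoising training step on discrete tokens, assuming placeholder VQ vocabulary sizes and a small transformer rather than the paper's model.

```python
import torch
import torch.nn as nn

# Placeholder sizes for the discrete VQ-GAN token spaces (illustrative only).
IMG_VOCAB, AUD_VOCAB, MASK_ID = 1024, 1024, 1024 + 1024  # shared embedding table
embed = nn.Embedding(IMG_VOCAB + AUD_VOCAB + 1, 256)
transformer = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True), num_layers=2
)
to_logits = nn.Linear(256, AUD_VOCAB)

def mask_denoise_step(img_tokens, aud_tokens, mask_ratio=0.5):
    """One training step: mask a random subset of audio tokens and predict them
    from the image tokens plus the unmasked audio tokens."""
    aud_ids = aud_tokens + IMG_VOCAB                    # offset into shared table
    mask = torch.rand_like(aud_tokens, dtype=torch.float) < mask_ratio
    corrupted = torch.where(mask, torch.full_like(aud_ids, MASK_ID), aud_ids)
    seq = torch.cat([img_tokens, corrupted], dim=1)
    hidden = transformer(embed(seq))
    logits = to_logits(hidden[:, img_tokens.shape[1]:])  # audio positions only
    return nn.functional.cross_entropy(logits[mask], aud_tokens[mask])

img_tokens = torch.randint(0, IMG_VOCAB, (2, 64))        # dummy image VQ codes
aud_tokens = torch.randint(0, AUD_VOCAB, (2, 128))       # dummy audio VQ codes
loss = mask_denoise_step(img_tokens, aud_tokens)
loss.backward()
```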
On the Language Encoder of Contrastive Cross-modal Models
Zhao, Mengjie, Ono, Junya, Zhong, Zhi, Lai, Chieh-Hsin, Takida, Yuhta, Murata, Naoki, Liao, Wei-Hsiang, Shibuya, Takashi, Wakaki, Hiromi, Mitsufuji, Yuki
Contrastive cross-modal models such as CLIP and CLAP aid various vision-language (VL) and audio-language (AL) tasks. However, there has been limited investigation of, and improvement in, their language encoder, the central component for encoding natural language descriptions of images/audio into vector representations. We extensively evaluate how unsupervised and supervised sentence embedding training affect language encoder quality and cross-modal task performance. In VL pretraining, we found that sentence embedding training improves language encoder quality and aids cross-modal tasks, improving contrastive VL models such as CyCLIP. In contrast, AL pretraining benefits less from sentence embedding training, which may result from the limited amount of pretraining data. We analyze the representation spaces to understand the strengths of sentence embedding training, and find that it improves text-space uniformity at the cost of decreased cross-modal alignment.
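The text-space uniformity and cross-modal alignment analysis can be illustrated with the commonly used alignment and uniformity metrics on the hypersphere; the paper's exact analysis protocol may differ.

```python
import torch

def alignment(x, y, alpha=2):
    """Average distance between paired (e.g., text/image) L2-normalized embeddings."""
    return (x - y).norm(dim=1).pow(alpha).mean()

def uniformity(x, t=2):
    """Log of the average Gaussian potential between embeddings; lower means
    the embeddings are spread more uniformly on the hypersphere."""
    return torch.pdist(x).pow(2).mul(-t).exp().mean().log()

# Dummy L2-normalized text and image embeddings for illustration.
text = torch.nn.functional.normalize(torch.randn(256, 512), dim=1)
image = torch.nn.functional.normalize(torch.randn(256, 512), dim=1)
print(alignment(text, image).item())   # cross-modal alignment (lower is better)
print(uniformity(text).item())         # text-space uniformity (lower is better)
```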
Diffusion-Based Speech Enhancement with Joint Generative and Predictive Decoders
Shi, Hao, Shimada, Kazuki, Hirano, Masato, Shibuya, Takashi, Koyama, Yuichiro, Zhong, Zhi, Takahashi, Shusuke, Kawahara, Tatsuya, Mitsufuji, Yuki
Diffusion-based speech enhancement (SE) has been investigated recently, but its decoding is very time-consuming. One solution is to initialize the decoding process with the enhanced feature estimated by a predictive SE system. However, this two-stage method ignores the complementarity between predictive and diffusion SE. In this paper, we propose a unified system that integrates the two SE modules. The system encodes both generative and predictive information and then applies both generative and predictive decoders, whose outputs are fused. Specifically, the two SE modules are fused at the first and final diffusion steps: the first-step fusion initializes the diffusion process with the predictive SE output to improve convergence, and the final-step fusion combines the two complementary SE outputs to improve SE performance. Experiments on the Voice-Bank dataset show that diffusion score estimation can benefit from the predictive information and speed up decoding.
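The two fusion points can be sketched abstractly as below, with stand-in predictive and score models and a toy reverse update; this is a hedged illustration of the idea, not the paper's sampler.

```python
import torch

def fused_enhance(noisy_spec, predictive_se, score_model, num_steps=30,
                  noise_scale=0.5, final_weight=0.5):
    """Illustrative fusion of predictive and diffusion SE (not the paper's exact code).

    First-step fusion: start the reverse process from the predictive estimate
    plus a small amount of noise instead of from pure noise.
    Final-step fusion: blend the diffusion output with the predictive estimate.
    """
    pred = predictive_se(noisy_spec)                    # predictive enhancement
    x = pred + noise_scale * torch.randn_like(pred)     # first-step fusion
    for step in reversed(range(num_steps)):
        t = torch.full((x.shape[0],), step / num_steps)
        x = x + (1.0 / num_steps) * score_model(x, noisy_spec, t)  # toy reverse update
    return final_weight * x + (1.0 - final_weight) * pred          # final-step fusion

# Stand-in modules so the sketch runs end to end.
predictive_se = lambda y: 0.8 * y
score_model = lambda x, y, t: (y - x)
enhanced = fused_enhance(torch.randn(2, 257, 100), predictive_se, score_model)
```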
Combining Physically-Based Modeling and Deep Learning for Fusing GRACE Satellite Data: Can We Learn from Mismatch?
Sun, Alexander Y., Scanlon, Bridget R., Zhang, Zizhan, Walling, David, Bhanja, Soumendra N., Mukherjee, Abhijit, Zhong, Zhi
Global hydrological and land surface models are increasingly used for tracking terrestrial total water storage (TWS) dynamics, but the utility of existing models is hampered by conceptual and/or data uncertainties related to various underrepresented and unrepresented processes, such as groundwater storage. The Gravity Recovery and Climate Experiment (GRACE) satellite mission provided a valuable independent data source for tracking TWS at regional and continental scales. Strong interest exists in fusing GRACE data into global hydrological models to improve their predictive performance. Here we develop and apply deep convolutional neural network (CNN) models to learn the spatiotemporal patterns of mismatch between TWS anomalies (TWSA) derived from GRACE and those simulated by NOAH, a widely used land surface model. Once trained, our CNN models can be used to correct the NOAH-simulated TWSA without requiring GRACE data, potentially filling the data gap between GRACE and its follow-on mission, GRACE-FO. Our methodology is demonstrated over India, which has experienced significant groundwater depletion in recent decades that is nevertheless not captured by the NOAH model. Results show that the CNN models significantly improve the match with GRACE TWSA, achieving a country-average correlation coefficient of 0.94 and a Nash-Sutcliffe efficiency of 0.87, representing 14% and 52% improvements, respectively, over the original NOAH TWSA. At the local scale, the learned mismatch pattern correlates well with observed in situ groundwater storage anomaly data for most parts of India, suggesting that deep learning models effectively compensate for the missing groundwater component in NOAH for this study region.
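The mismatch-learning idea can be sketched as a small CNN trained to predict the GRACE-minus-NOAH residual from NOAH-simulated TWSA grids, with the corrected field obtained by adding the predicted residual back; the architecture and grid sizes below are illustrative only.

```python
import torch
import torch.nn as nn

# Illustrative mismatch-learning setup: a small CNN maps NOAH-simulated TWSA
# grids to the GRACE-minus-NOAH residual; the corrected field is NOAH + residual.
class MismatchCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, noah_twsa):            # (B, 1, H, W) monthly grids
        return self.net(noah_twsa)           # predicted mismatch (B, 1, H, W)

model = MismatchCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

noah = torch.randn(8, 1, 64, 64)              # dummy NOAH TWSA grids
grace = torch.randn(8, 1, 64, 64)             # dummy GRACE TWSA grids
for _ in range(3):                            # toy training loop
    mismatch = model(noah)
    loss = nn.functional.mse_loss(mismatch, grace - noah)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

corrected = noah + model(noah)                # corrected TWSA without needing GRACE
```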