Goto

Collaborating Authors

 Kao, David


Very Attentive Tacotron: Robust and Unbounded Length Generalization in Autoregressive Transformer-Based Text-to-Speech

arXiv.org Artificial Intelligence

Autoregressive (AR) Transformer-based sequence models are known to have difficulty generalizing to sequences longer than those seen during training. When applied to text-to-speech (TTS), these models tend to drop or repeat words or produce erratic output, especially for longer utterances. In this paper, we introduce enhancements aimed at AR Transformer-based encoder-decoder TTS systems that address these robustness and length generalization issues. Our approach uses an alignment mechanism to provide cross-attention operations with relative location information. The associated alignment position is learned as a latent property of the model via backprop and requires no external alignment information during training. While the approach is tailored to the monotonic nature of TTS input-output alignment, it is still able to benefit from the flexible modeling power of interleaved multi-head self- and cross-attention operations. A system incorporating these improvements, which we call Very Attentive Tacotron, matches the naturalness and expressiveness of a baseline T5-based TTS system, while eliminating problems with repeated or dropped words and enabling generalization to any practical utterance length.
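To make the idea of location-informed cross-attention concrete, the sketch below shows a single-head cross-attention layer whose logits are biased by the distance between a monotonically advancing, learned alignment position and each encoder position. This is an illustrative reading of the abstract, not the paper's actual architecture; the class and parameter names (AlignedCrossAttention, delta_proj, sigma) are hypothetical.

```python
# Hedged sketch: cross-attention with a relative-location bias driven by a
# learned, monotonically increasing alignment position (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignedCrossAttention(nn.Module):
    def __init__(self, d_model: int, sigma: float = 5.0):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Predicts a non-negative increment of the alignment position per decoder
        # step, so the position advances monotonically over the input text.
        self.delta_proj = nn.Linear(d_model, 1)
        self.sigma = sigma  # width of the location bias window

    def forward(self, dec_states: torch.Tensor, enc_states: torch.Tensor) -> torch.Tensor:
        # dec_states: [B, T_dec, D], enc_states: [B, T_enc, D]
        B, T_dec, D = dec_states.shape
        T_enc = enc_states.size(1)

        # Alignment position = cumulative sum of softplus increments (learned via backprop).
        delta = F.softplus(self.delta_proj(dec_states)).squeeze(-1)   # [B, T_dec]
        position = torch.cumsum(delta, dim=1)                          # [B, T_dec]

        # Relative location bias: penalize encoder tokens far from the alignment position.
        enc_idx = torch.arange(T_enc, device=dec_states.device).float()
        dist = position.unsqueeze(-1) - enc_idx.view(1, 1, T_enc)      # [B, T_dec, T_enc]
        location_bias = -(dist ** 2) / (2.0 * self.sigma ** 2)

        q = self.q_proj(dec_states)
        k = self.k_proj(enc_states)
        v = self.v_proj(enc_states)
        logits = q @ k.transpose(1, 2) / D ** 0.5 + location_bias
        return F.softmax(logits, dim=-1) @ v
```

In a full encoder-decoder stack, a layer like this could be interleaved with ordinary multi-head self-attention, which is roughly how the abstract describes combining the alignment mechanism with the flexibility of standard attention.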


Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

arXiv.org Artificial Intelligence

In this report, we introduce the Gemini 1.5 family of models, representing the next generation of highly compute-efficient multimodal models capable of recalling and reasoning over fine-grained information from millions of tokens of context, including multiple long documents and hours of video and audio. The family includes two new models: (1) an updated Gemini 1.5 Pro, which exceeds the February version on the great majority of capabilities and benchmarks; (2) Gemini 1.5 Flash, a more lightweight variant designed for efficiency with minimal regression in quality. Gemini 1.5 models achieve near-perfect recall on long-context retrieval tasks across modalities, improve the state-of-the-art in long-document QA, long-video QA and long-context ASR, and match or surpass Gemini 1.0 Ultra's state-of-the-art performance across a broad set of benchmarks. Studying the limits of Gemini 1.5's long-context ability, we find continued improvement in next-token prediction and near-perfect retrieval (>99%) up to at least 10M tokens, a generational leap over existing models such as Claude 3.0 (200k) and GPT-4 Turbo (128k). Finally, we highlight real-world use cases, such as Gemini 1.5 collaborating with professionals on completing their tasks, achieving 26 to 75% time savings across 10 different job categories, as well as surprising new capabilities of large language models at the frontier; when given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person who learned from the same content.
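Long-context retrieval claims like the ones above are commonly measured with "needle in a haystack" probes: a short fact is hidden at a random depth in a long context and the model is asked to recall it. The toy harness below illustrates the general shape of such a test; it is not Google's evaluation code, and query_model is a placeholder for whatever long-context model is under test.

```python
# Toy "needle in a haystack" retrieval check (illustrative, not an official benchmark).
import random

def make_haystack(num_tokens: int, needle: str, seed: int = 0) -> str:
    """Build a long filler context with one needle sentence hidden at a random depth."""
    rng = random.Random(seed)
    filler = ["The", "quick", "brown", "fox", "jumps", "over", "the", "lazy", "dog."]
    words = [rng.choice(filler) for _ in range(num_tokens)]
    words.insert(rng.randrange(num_tokens), needle)
    return " ".join(words)

def recall_at_length(query_model, num_tokens: int, trials: int = 20) -> float:
    """Fraction of trials in which the model surfaces the hidden fact."""
    hits = 0
    for t in range(trials):
        secret = f"The magic number is {1000 + t}."
        context = make_haystack(num_tokens, needle=secret, seed=t)
        answer = query_model(context + "\n\nWhat is the magic number?")
        hits += str(1000 + t) in answer
    return hits / trials
```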


Learning the joint distribution of two sequences using little or no paired data

arXiv.org Artificial Intelligence

We present a noisy channel generative model of two sequences, for example text and speech, which enables uncovering the association between the two modalities when limited paired data is available. To address the intractability of the exact model under a realistic data setup, we propose a variational inference approximation. To train this variational model with categorical data, we propose a KL encoder loss approach which has connections to the wake-sleep algorithm. Identifying the joint or conditional distributions by only observing unpaired samples from the marginals is only possible under certain conditions in the data distribution and we discuss under what type of conditional independence assumptions that might ...

A classical ASR approach treats the process of generating speech as a noisy channel. In this framing, text is drawn from some distribution and statistically transformed into speech audio; the speech recognition task is then to invert this generative model to infer the text most likely to have given rise to a given speech waveform. This generative model of speech was historically successful (Baker, 1975; Jelinek, 1976; Rabiner, 1989), but has been superseded in modern discriminative systems by directly modeling the conditional distribution of text, given speech (Graves et al., 2006; Amodei et al., 2016). The direct approach has the advantage of allowing limited modeling power to be solely devoted to the task of interest, whereas the generative one can be extremely sensitive to faulty assumptions in the speech audio model despite the fact that this is not the primary object of interest. However the generative approach allows learning in a principled way from untranscribed speech audio, something fundamentally impossible in the direct approach.
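The training scheme described above combines a supervised term on whatever paired data exists with wake- and sleep-style terms on unpaired data. The fragment below is one hedged reading of that setup, with the generative model p(x)p(y|x) and the variational encoder q(x|y) passed in as log-probability and sampling callables; the function names and loss structure are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of wake-sleep-style training for a noisy channel model p(x) p(y|x)
# of two sequences (x = text-like symbols, y = speech-like symbols) with a
# variational encoder q(x|y). Callables are assumed to return torch tensors.
import torch

def wake_step(prior_logp, channel_logp, encoder_sample, y_unpaired) -> torch.Tensor:
    """Update the generative model: explain unpaired y through a sampled x ~ q(x|y)."""
    x = encoder_sample(y_unpaired)  # discrete sample; no gradient flows to q here
    return -(prior_logp(x) + channel_logp(y_unpaired, x)).mean()

def sleep_step(prior_sample, channel_sample, encoder_logp) -> torch.Tensor:
    """Update the encoder (the "KL encoder" term): dream up (x, y) pairs from the
    generative model and fit q(x|y) to recover x from y."""
    x = prior_sample()
    y = channel_sample(x)
    return -encoder_logp(x, y).mean()

def paired_step(prior_logp, channel_logp, x_paired, y_paired) -> torch.Tensor:
    """Supervised term on the limited paired data that is available."""
    return -(prior_logp(x_paired) + channel_logp(y_paired, x_paired)).mean()
```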


Non-saturating GAN training as divergence minimization

arXiv.org Machine Learning

Non-saturating generative adversarial network (GAN) training is widely used and has continued to obtain groundbreaking results. However, so far this approach has lacked strong theoretical justification, in contrast to alternatives such as f-GANs and Wasserstein GANs, which are motivated in terms of approximate divergence minimization. In this paper we show that non-saturating GAN training does in fact approximately minimize a particular f-divergence. We develop general theoretical tools to compare and classify f-divergences and use these to show that the new f-divergence is qualitatively similar to reverse KL. These results help to explain the high sample quality but poor diversity often observed empirically when using this scheme.
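For reference, the difference between the saturating and non-saturating generator objectives can be written in a few lines. The snippet below is a generic formulation with a logit-valued discriminator, not code from the paper.

```python
# Standard GAN losses in logit form (generic sketch, not from the paper's codebase).
import torch
import torch.nn.functional as F

def discriminator_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    # Binary cross-entropy objective for D on logits.
    return (F.softplus(-real_logits) + F.softplus(fake_logits)).mean()

def generator_loss_saturating(fake_logits: torch.Tensor) -> torch.Tensor:
    # Original minimax objective: minimize E[log(1 - sigmoid(D(G(z))))] = E[-softplus(logits)].
    # Its gradient vanishes when D confidently rejects the fakes, hence "saturating".
    return (-F.softplus(fake_logits)).mean()

def generator_loss_non_saturating(fake_logits: torch.Tensor) -> torch.Tensor:
    # Non-saturating alternative: minimize E[-log sigmoid(D(G(z)))] = E[softplus(-logits)].
    # This is the scheme the paper interprets as approximately minimizing an
    # f-divergence that behaves qualitatively like reverse KL.
    return F.softplus(-fake_logits).mean()
```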