
Collaborating Authors

 Wang, Gary


Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context

arXiv.org Artificial Intelligence

In this report, we introduce the Gemini 1.5 family of models, representing the next generation of highly compute-efficient multimodal models capable of recalling and reasoning over fine-grained information from millions of tokens of context, including multiple long documents and hours of video and audio. The family includes two new models: (1) an updated Gemini 1.5 Pro, which exceeds the February version on the great majority of capabilities and benchmarks; and (2) Gemini 1.5 Flash, a more lightweight variant designed for efficiency with minimal regression in quality. Gemini 1.5 models achieve near-perfect recall on long-context retrieval tasks across modalities, improve the state of the art in long-document QA, long-video QA, and long-context ASR, and match or surpass Gemini 1.0 Ultra's state-of-the-art performance across a broad set of benchmarks. Studying the limits of Gemini 1.5's long-context ability, we find continued improvement in next-token prediction and near-perfect retrieval (>99%) up to at least 10M tokens, a generational leap over existing models such as Claude 3.0 (200k) and GPT-4 Turbo (128k). Finally, we highlight real-world use cases, such as Gemini 1.5 collaborating with professionals on completing their tasks, achieving 26 to 75% time savings across 10 different job categories, as well as surprising new capabilities of large language models at the frontier: when given a grammar manual for Kalamang, a language with fewer than 200 speakers worldwide, the model learns to translate English to Kalamang at a similar level to a person who learned from the same content.
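
The long-context retrieval results above are of the kind typically measured with a "needle in a haystack" probe. The following is a minimal, hypothetical sketch of such a probe, not the report's actual evaluation: `call_model` is a placeholder for whatever model API is being tested, and the prompt format and filler text are invented for illustration.

```python
# Minimal sketch of a "needle in a haystack" long-context retrieval probe,
# the style of test behind claims like ">99% retrieval up to 10M tokens".
# `call_model` is a hypothetical placeholder, not a real Gemini API call.
import random

def build_haystack(needle: str, filler: str, total_chars: int, depth: float) -> str:
    """Embed `needle` at a relative `depth` (0.0-1.0) inside repeated filler text."""
    body = (filler * (total_chars // len(filler) + 1))[:total_chars]
    pos = int(depth * len(body))
    return body[:pos] + "\n" + needle + "\n" + body[pos:]

def run_probe(call_model, n_trials: int = 20, total_chars: int = 500_000) -> float:
    hits = 0
    for _ in range(n_trials):
        secret = f"The magic number is {random.randint(1000, 9999)}."
        prompt = build_haystack(secret, "Lorem ipsum dolor sit amet. ", total_chars,
                                depth=random.random())
        prompt += "\n\nWhat is the magic number mentioned above?"
        answer = call_model(prompt)          # hypothetical model call
        hits += secret.split()[-1].rstrip(".") in answer
    return hits / n_trials                   # retrieval accuracy
```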


ASTRA: Aligning Speech and Text Representations for ASR without Sampling

arXiv.org Artificial Intelligence

This paper introduces ASTRA, a novel method for improving Automatic Speech Recognition (ASR) through text injection. Unlike prevailing techniques, ASTRA eliminates the need for sampling to match sequence lengths between the speech and text modalities, instead leveraging the alignments learned inherently within CTC/RNNT models. This approach offers two advantages: it avoids the potential misalignment between speech and text features that can arise from upsampling, and it removes the need for the model to accurately predict the duration of sub-word tokens. This novel formulation of modality (length) matching as a weighted RNNT objective matches the performance of state-of-the-art duration-based methods on the FLEURS benchmark, while opening up other avenues of research in speech processing.
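
As a rough illustration of the length-matching idea, and not the authors' implementation, the sketch below uses a soft frame-to-token alignment matrix of the kind a CTC/RNNT model learns to bring text embeddings to speech-frame length without any upsampling or duration prediction; all tensors here are random stand-ins.

```python
# Hypothetical sketch: align text-token embeddings to speech frames using a
# soft alignment (e.g., per-frame token posteriors from a CTC/RNNT model),
# instead of upsampling tokens with a predicted duration model.
import numpy as np

rng = np.random.default_rng(0)
T, U, D = 50, 8, 16            # speech frames, text tokens, feature dim

speech_feats = rng.normal(size=(T, D))      # encoder outputs for the utterance
token_embs   = rng.normal(size=(U, D))      # embeddings of the transcript tokens

# Per-frame token posteriors (T x U); in ASTRA-style training these would come
# from the alignment the CTC/RNNT model has already learned; here they are random.
logits = rng.normal(size=(T, U))
align  = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Project token embeddings onto the frame axis with the soft alignment:
# each frame gets a convex combination of token embeddings, so the text
# "stream" matches the speech length without any sampling or duration model.
frame_aligned_text = align @ token_embs     # (T, D)

# A simple modality-consistency objective between the two streams.
consistency_loss = np.mean((speech_feats - frame_aligned_text) ** 2)
print(f"consistency loss: {consistency_loss:.3f}")
```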


High-precision Voice Search Query Correction via Retrievable Speech-text Embeddings

arXiv.org Artificial Intelligence

Automatic speech recognition (ASR) systems can suffer from poor recall for various reasons, such as noisy audio or a lack of sufficient training data. Previous work has shown that recall can be improved by retrieving rewrite candidates from a large database of likely, contextually relevant alternatives to the hypothesis text, using nearest-neighbor search over embeddings of the ASR hypothesis text to be corrected and of the candidate corrections. However, ASR-hypothesis-based retrieval can yield poor precision if the textual hypotheses are too phonetically dissimilar to the true transcript. In this paper, we eliminate the hypothesis-audio mismatch problem by querying the correction database directly with embeddings derived from the utterance audio; the embeddings of the utterance audio and of the candidate corrections are produced by multimodal speech-text embedding networks trained to place the embedding of an utterance's audio close to the embedding of its corresponding textual transcript. After locating an appropriate correction candidate using nearest-neighbor search, we score the candidate with its speech-text embedding distance before adding it to the original n-best list. We show a relative word error rate (WER) reduction of 6% on utterances whose transcripts appear in the candidate set, without increasing WER on general utterances.
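
A minimal sketch of the retrieval-and-rescoring step described above, assuming hypothetical `embed_audio` and `embed_text` functions from a joint speech-text embedding model; the distance threshold is an invented illustrative parameter, not a value from the paper.

```python
# Hedged sketch: embed the utterance audio with a joint speech-text embedding
# model, find the nearest candidate correction by cosine distance, and append
# it to the n-best list (scored by that distance) if it is close enough.
import numpy as np

def cosine_dist(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return 1.0 - b @ a

def correct_nbest(audio, nbest, candidates, embed_audio, embed_text,
                  max_dist=0.25):
    """nbest: list of (hypothesis_text, score). Returns a possibly extended list."""
    query = embed_audio(audio)                                  # (D,)
    cand_embs = np.stack([embed_text(c) for c in candidates])   # (N, D)
    dists = cosine_dist(query, cand_embs)
    best = int(np.argmin(dists))
    texts = [h for h, _ in nbest]
    if dists[best] <= max_dist and candidates[best] not in texts:
        # Use the speech-text embedding distance as the new candidate's score;
        # downstream rescoring decides its final rank in the n-best list.
        nbest = nbest + [(candidates[best], float(dists[best]))]
    return nbest
```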


Google USM: Scaling Automatic Speech Recognition Beyond 100 Languages

arXiv.org Artificial Intelligence

We introduce the Universal Speech Model (USM), a single large model that performs automatic speech recognition (ASR) across 100+ languages. This is achieved by pre-training the encoder of the model on a large unlabeled multilingual dataset of 12 million (M) hours spanning over 300 languages, and fine-tuning on a smaller labeled dataset. We use multilingual pre-training with random-projection quantization and speech-text modality matching to achieve state-of-the-art performance on downstream multilingual ASR and speech-to-text translation tasks. We also demonstrate that despite using a labeled training set 1/7th the size of that used for the Whisper model [1], our model exhibits comparable or better performance on both in-domain and out-of-domain speech recognition tasks across many languages.
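
The random-projection quantization mentioned above can be sketched roughly as follows: a frozen random projection and a frozen random codebook map each speech frame to a discrete label that a masked-prediction pretraining loss can target. The dimensions and codebook size below are illustrative assumptions, not USM's actual configuration.

```python
# Sketch of random-projection quantization (in the spirit of BEST-RQ):
# a frozen random projection plus a frozen random codebook turn speech
# features into discrete labels for a masked-prediction objective.
import numpy as np

rng = np.random.default_rng(0)
T, D, P, V = 100, 80, 16, 512          # frames, feature dim, proj dim, codebook size

frames   = rng.normal(size=(T, D))     # e.g. log-mel features
proj     = rng.normal(size=(D, P))     # frozen random projection
codebook = rng.normal(size=(V, P))     # frozen random codebook
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

z = frames @ proj                      # project each frame
z /= np.linalg.norm(z, axis=1, keepdims=True)

# Each frame's label is its nearest codebook entry on the unit sphere.
labels = np.argmax(z @ codebook.T, axis=1)     # (T,) discrete pretraining targets
print(labels[:10])
```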


Using Text Injection to Improve Recognition of Personal Identifiers in Speech

arXiv.org Artificial Intelligence

Accurate recognition of specific categories, such as persons' names, dates, or other identifiers, is critical in many Automatic Speech Recognition (ASR) applications. Because these categories represent personal information, ethical use of this data, including its collection, transcription, training, and evaluation, demands special care. One way of ensuring the security and privacy of individuals is to redact or eliminate Personally Identifiable Information (PII) from collection altogether. However, this results in ASR models that tend to have lower recognition accuracy for these categories. We use text injection to improve the recognition of PII categories by including fake textual substitutes of PII in the training data. We demonstrate substantial improvements to the recall of names and dates in medical notes while also improving overall WER. For alphanumeric digit sequences, we show improvements to Character Error Rate and Sentence Accuracy.
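
A hedged sketch of what generating fake PII substitutes for text injection could look like; the templates, name list, and formats below are invented for illustration and are not the paper's actual data pipeline.

```python
# Illustrative sketch: templates filled with synthetic names, dates, and
# alphanumeric IDs stand in for real PII, so text injection can still expose
# the model to these categories without using any real personal data.
import random
import string

FAKE_NAMES = ["Alex Rivera", "Jordan Lee", "Sam Okafor", "Priya Nair"]
TEMPLATES = [
    "patient {name} was seen on {date}",
    "please confirm member id {alnum} for {name}",
    "follow up scheduled for {date}",
]

def fake_date() -> str:
    return (f"{random.choice(['january', 'march', 'july', 'october'])} "
            f"{random.randint(1, 28)} {random.randint(1990, 2024)}")

def fake_alnum(n: int = 8) -> str:
    # Space-separated characters, as an alphanumeric sequence might be spoken.
    return " ".join(random.choice(string.ascii_lowercase + string.digits)
                    for _ in range(n))

def make_injection_text(k: int) -> list[str]:
    out = []
    for _ in range(k):
        template = random.choice(TEMPLATES)
        out.append(template.format(name=random.choice(FAKE_NAMES),
                                   date=fake_date(), alnum=fake_alnum()))
    return out

print(make_injection_text(3))
```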


Understanding Shared Speech-Text Representations

arXiv.org Artificial Intelligence

Recently, a number of approaches to train speech models by incorporating text into end-to-end models have been developed, with Maestro advancing state-of-the-art automatic speech recognition (ASR) and Speech Translation (ST) performance. In this work, we expand our understanding of the resulting shared speech-text representations in two directions. First, we evaluate the ability to transfer information from one domain to another through the joint representation (Section 4), examining the limits of speech-free domain adaptation; we find that a corpus-specific duration model for speech-text alignment is the most important component for learning a shared representation, and we explore which components of the text encoder are robust across corpora and which are sensitive. Second, we investigate the modal representations from the speech and text encoders (Section 5), inspecting the cross-modal consistency loss as a signal of robustness, and the ability of this loss term to generalize across corpora, through T-SNE visualization of activations and a retrieval probe task.
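
A small sketch of a retrieval probe in the spirit of the one mentioned above: each speech-encoder embedding queries the set of text-encoder embeddings, and recall@1 measures how often it retrieves its own transcript. The embeddings here are random stand-ins for real encoder activations.

```python
# Sketch of a cross-modal retrieval probe: for each utterance, check whether
# its speech embedding retrieves the embedding of its own transcript.
import numpy as np

rng = np.random.default_rng(0)
N, D = 200, 64
speech_emb = rng.normal(size=(N, D))
text_emb   = speech_emb + 0.1 * rng.normal(size=(N, D))   # noisy "paired" texts

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

sim = normalize(speech_emb) @ normalize(text_emb).T        # (N, N) cosine sims
recall_at_1 = float(np.mean(np.argmax(sim, axis=1) == np.arange(N)))
print(f"retrieval probe recall@1: {recall_at_1:.2f}")
```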


Virtuoso: Massive Multilingual Speech-Text Joint Semi-Supervised Learning for Text-To-Speech

arXiv.org Artificial Intelligence

Although various approaches to massively multilingual self/semi-supervised learning have been attempted for speech recognition tasks, they have not been fully explored for multilingual speech generation tasks. This paper proposes Virtuoso, a massively multilingual speech-text joint semi-supervised pretraining framework for text-to-speech synthesis (TTS) models, based on self-supervised and semi-supervised learning. Existing multilingual TTS typically supports tens of languages, which are a small fraction of the thousands of languages in the world; one difficulty in scaling multilingual TTS to hundreds of languages is collecting high-quality speech-text paired data in low-resource languages. Virtuoso extends Maestro [6], a speech-text joint pretraining framework for automatic speech recognition (ASR), to speech generation tasks. To train a TTS model from various types of speech and text data, different training schemes are designed to handle supervised (paired TTS and ASR data) and unsupervised (untranscribed speech and unspoken text) datasets, which will allow the model to scale to hundreds of languages.
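
As a rough sketch (not Virtuoso's actual training loop), routing heterogeneous data through different training schemes could look like the following; the batch fields and loss names are hypothetical.

```python
# Illustrative sketch of the data-routing idea: each batch type gets its own
# training scheme, so paired, speech-only, and text-only data can all be used.
# The loss functions are placeholders, not Virtuoso's actual objectives.
def training_step(batch, losses):
    if batch["kind"] == "paired":            # TTS / ASR data with transcripts
        return losses["supervised"](batch["speech"], batch["text"])
    if batch["kind"] == "speech_only":       # untranscribed speech
        return losses["speech_ssl"](batch["speech"])
    if batch["kind"] == "text_only":         # unspoken text
        return losses["text_injection"](batch["text"])
    raise ValueError(f"unknown batch kind: {batch['kind']}")
```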


Modular Hybrid Autoregressive Transducer

arXiv.org Artificial Intelligence

Text-only adaptation of a transducer model remains challenging for end-to-end speech recognition, since the transducer has no clearly separated acoustic model (AM), language model (LM), or blank model. In this work, we propose a modular hybrid autoregressive transducer (MHAT) that has structurally separated label and blank decoders to predict label and blank distributions, respectively, along with a shared acoustic encoder. The encoder and label decoder outputs are directly projected to AM and internal LM scores and then added to compute label posteriors. We train MHAT with an internal LM loss and a HAT loss to ensure that its internal LM becomes a standalone neural LM that can be effectively adapted to text. Moreover, text adaptation of MHAT enables much better LM fusion than internal LM subtraction-based methods. On Google's large-scale production data, a multi-domain MHAT adapted with 100B sentences achieves relative WER reductions of up to 12.4% without LM fusion and 21.5% with LM fusion, compared to a HAT model trained on 400K hours of speech.
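
The score combination described above can be sketched as follows: the blank decoder yields a HAT-style blank probability, and the acoustic-model and internal-LM score projections are added before the label softmax. The shapes and projections below are random illustrative stand-ins, not the production model.

```python
# Hedged numpy sketch of the MHAT-style score combination: a blank probability
# from the blank decoder, and label posteriors from the sum of acoustic-model
# (encoder) and internal-LM (label decoder) score projections.
import numpy as np

rng = np.random.default_rng(0)
V, D = 1000, 256                      # vocabulary size, model dimension

enc_t   = rng.normal(size=D)          # acoustic encoder output at frame t
label_u = rng.normal(size=D)          # label decoder state after u labels
blank_u = rng.normal(size=D)          # blank decoder state

W_am, W_ilm = rng.normal(size=(D, V)), rng.normal(size=(D, V))
w_blank = rng.normal(size=D)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

am_scores  = enc_t @ W_am             # acoustic-model scores   (V,)
ilm_scores = label_u @ W_ilm          # internal-LM scores      (V,)
p_blank    = 1.0 / (1.0 + np.exp(-(blank_u @ w_blank)))   # HAT-style blank prob

# Label posteriors: AM and ILM scores are added, renormalized over labels, and
# scaled by the non-blank mass, keeping the ILM a standalone, adaptable LM.
p_labels = (1.0 - p_blank) * softmax(am_scores + ilm_scores)
print(p_blank, p_labels.sum())        # p_labels sums to 1 - p_blank
```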