SpeechCLIP: Integrating Speech with Pre-Trained Vision and Language Model
Yi-Jen Shih, Hsuan-Fu Wang, Heng-Jui Chang, Layne Berry, Hung-yi Lee, David Harwath
–arXiv.org Artificial Intelligence
Data-driven speech processing models usually perform well given large amounts of text supervision, but collecting transcribed speech data is costly. We therefore propose SpeechCLIP, a novel framework that bridges speech and text through images to enhance speech models without transcriptions. We leverage the state-of-the-art pre-trained HuBERT and CLIP models, aligning them via paired images and spoken captions with minimal fine-tuning. SpeechCLIP outperforms the prior state-of-the-art on image-speech retrieval and performs zero-shot speech-text retrieval without direct supervision from transcriptions. Moreover, SpeechCLIP can directly retrieve semantically related keywords from speech.
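The alignment described above follows CLIP's contrastive recipe: embeddings of a spoken caption (from a HuBERT-based speech branch) and of the paired image (from the frozen CLIP image encoder) are pulled together, while mismatched pairs in the batch are pushed apart. A minimal NumPy sketch of such a symmetric InfoNCE objective is below; the function name, temperature value, and embedding shapes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def contrastive_alignment_loss(speech_emb, image_emb, temperature=0.07):
    """CLIP-style symmetric InfoNCE loss over paired embeddings.

    speech_emb, image_emb: (batch, dim) arrays, e.g. pooled HuBERT features
    and frozen CLIP image features. Matched pairs share a row index.
    (Hypothetical sketch; not the authors' exact training code.)
    """
    # L2-normalize so the dot product is cosine similarity
    s = speech_emb / np.linalg.norm(speech_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = s @ v.T / temperature        # (batch, batch) similarity matrix
    labels = np.arange(len(s))            # matched pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)          # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(lg)), labels].mean()

    # Symmetric loss: speech-to-image and image-to-speech directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With perfectly aligned embeddings the loss approaches zero; for random embeddings it sits near log(batch size), which is what makes it usable as a retrieval-oriented training signal.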
Oct-25-2022
- Country:
- Asia
- Middle East > Qatar
- Taiwan (0.04)
- North America > United States
- New Jersey > Middlesex County
- Piscataway (0.04)
- Texas > Travis County
- Austin (0.04)
- South America > Chile
- Genre:
- Research Report (0.64)
- Technology:
- Information Technology > Artificial Intelligence
- Machine Learning (1.00)
- Natural Language (1.00)
- Speech > Speech Recognition (0.49)