Huynh, Jessica
The Pipeline System of ASR and NLU with MLM-based Data Augmentation toward STOP Low-resource Challenge
Futami, Hayato, Huynh, Jessica, Arora, Siddhant, Wu, Shih-Lun, Kashiwagi, Yosuke, Peng, Yifan, Yan, Brian, Tsunoo, Emiru, Watanabe, Shinji
This paper describes our system for the low-resource domain adaptation track (Track 3) of the Spoken Language Understanding Grand Challenge, which is part of the ICASSP Signal Processing Grand Challenge 2023. For this track, we adopt a pipeline approach of ASR and NLU. For ASR, we fine-tune Whisper for each domain with upsampling. For NLU, we fine-tune BART on all the Track 3 data and then on the low-resource domain data. We apply masked LM (MLM)-based data augmentation, in which some input tokens and their corresponding target labels are replaced using an MLM. We also apply a retrieval-based approach, in which the model input is augmented with similar training samples. As a result, we achieved exact match (EM) accuracies of 63.3/75.0 (average: 69.15) for the reminder/weather domains and won first place in the challenge.
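A minimal sketch of the MLM-based token replacement idea described above, assuming roberta-base as the masked LM and a 15% replacement rate (both assumptions, not taken from the paper); the paper also updates the corresponding target labels of the semantic parse, which this input-side sketch omits.

```python
# Sketch of MLM-based data augmentation: randomly mask input tokens and
# replace them with the masked LM's top prediction.
import random
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base")
MASK = fill_mask.tokenizer.mask_token  # "<mask>" for RoBERTa


def augment(sentence: str, replace_prob: float = 0.15) -> str:
    """Replace a random subset of whitespace tokens with MLM predictions."""
    words = sentence.split()
    for i in range(len(words)):
        if random.random() < replace_prob:
            masked = " ".join(words[:i] + [MASK] + words[i + 1:])
            # Take the top-scoring MLM prediction as the replacement token.
            words[i] = fill_mask(masked)[0]["token_str"].strip()
    return " ".join(words)


if __name__ == "__main__":
    print(augment("set a reminder to water the plants tomorrow morning"))
```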
A Study on the Integration of Pipeline and E2E SLU systems for Spoken Semantic Parsing toward STOP Quality Challenge
Arora, Siddhant, Futami, Hayato, Wu, Shih-Lun, Huynh, Jessica, Peng, Yifan, Kashiwagi, Yosuke, Tsunoo, Emiru, Yan, Brian, Watanabe, Shinji
Recently, there have been efforts to introduce new benchmark tasks for spoken language understanding (SLU), such as semantic parsing. In this paper, we describe our spoken semantic parsing system for the quality track (Track 1) of the Spoken Language Understanding Grand Challenge, which is part of the ICASSP Signal Processing Grand Challenge 2023. We experiment with both end-to-end and pipeline systems for this task. Strong automatic speech recognition (ASR) models like Whisper and pretrained language models (LMs) like BART are used inside our SLU framework to boost performance. We also investigate output-level combination of various models, reaching an exact match accuracy of 80.8, which won first place in the challenge.
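A minimal sketch of the pipeline SLU setup described above: Whisper for ASR followed by a BART-style seq2seq model for semantic parsing. The checkpoint name "my-bart-semantic-parser" is hypothetical (a fine-tuned parser is assumed), "openai/whisper-small" is an assumed size, and the paper's output-level model combination is not reproduced here.

```python
# Sketch of a two-stage pipeline: speech -> transcript -> semantic parse.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
parser = pipeline("text2text-generation", model="my-bart-semantic-parser")


def spoken_semantic_parse(audio_path: str) -> str:
    """Transcribe the utterance, then map the transcript to a parse string."""
    transcript = asr(audio_path)["text"]
    parse = parser(transcript, max_new_tokens=128)[0]["generated_text"]
    return parse


if __name__ == "__main__":
    print(spoken_semantic_parse("utterance.wav"))
```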
Understanding the Effectiveness of Very Large Language Models on Dialog Evaluation
Huynh, Jessica, Jiao, Cathy, Gupta, Prakhar, Mehri, Shikib, Bajaj, Payal, Chaudhary, Vishrav, Eskenazi, Maxine
In recent years, language models such as GPT-3 [5] have grown larger, and their performance on downstream natural language processing (NLP) tasks has significantly improved in low-resource settings where only a few instances per task are available (few-shot). The larger these models are, the better they tend to perform on tasks such as language generation and evaluation [39]. They can generate coherent, fluent, and interesting responses. However, they can also produce responses that are repetitive and unengaging [29], in addition to being hard to control. Dialog evaluation is the task of assessing the quality of responses generated by dialog models with respect to properties like those mentioned above. However, a significant impediment to open-domain dialog generation research is the lack of meaningful automatic metrics for open-domain dialog evaluation. Standard language generation metrics have been shown to be ineffective for dialog evaluation [11], in large part because a given conversation context can be followed by multiple valid responses.
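A minimal sketch of few-shot prompting for dialog response quality scoring, the setting the paper studies. The in-context examples and the 1-5 rating scale are illustrative assumptions, not taken from the paper; the resulting prompt would be sent to a large language model such as GPT-3.

```python
# Sketch of building a few-shot prompt for dialog evaluation.
FEW_SHOT_EXAMPLES = [
    ("How was your weekend?", "It was great, I went hiking with friends.", 5),
    ("How was your weekend?", "Weekend weekend weekend.", 1),
]


def build_prompt(context: str, response: str) -> str:
    """Assemble a few-shot prompt that asks for a 1-5 quality rating."""
    lines = ["Rate the quality of each response on a scale from 1 to 5.\n"]
    for ctx, resp, score in FEW_SHOT_EXAMPLES:
        lines.append(f"Context: {ctx}\nResponse: {resp}\nRating: {score}\n")
    lines.append(f"Context: {context}\nResponse: {response}\nRating:")
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_prompt("What did you do today?", "I baked some bread this afternoon."))
```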
The DialPort tools
Huynh, Jessica, Mehri, Shikib, Jiao, Cathy, Eskenazi, Maxine
Static datasets are ineffective for both evaluation and optimization. The Alexa Prize challenge (Ram et al., 2018; Khatri et al., 2018) allows university teams to build socialbots that are assessed in interactive settings with Alexa users. This has led to the creation of the DialPort Portal, which facilitates the collection of flexible and evolving data as well as interactive assessment.
SAPPHIRE: Approaches for Enhanced Concept-to-Text Generation
Feng, Steven Y., Huynh, Jessica, Narisetty, Chaitanya, Hovy, Eduard, Gangal, Varun
We motivate and propose a suite of simple but effective improvements for concept-to-text generation called SAPPHIRE: Set Augmentation and Post-hoc PHrase Infilling and REcombination. We demonstrate their effectiveness on generative commonsense reasoning, a.k.a. the CommonGen task, through experiments using both BART and T5 models. Through extensive automatic and human evaluation, we show that SAPPHIRE noticeably improves model performance. An in-depth qualitative analysis illustrates that SAPPHIRE effectively addresses many issues of the baseline model generations, including lack of commonsense, insufficient specificity, and poor fluency.
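A minimal sketch of the concept-to-text (CommonGen-style) setup that SAPPHIRE builds on: a seq2seq model maps a concept set to a sentence. The checkpoint "my-t5-commongen" and the input prompt format are hypothetical assumptions; SAPPHIRE's set augmentation and post-hoc phrase infilling/recombination steps are not shown.

```python
# Sketch of concept-to-text generation with a fine-tuned seq2seq model.
from transformers import pipeline

generator = pipeline("text2text-generation", model="my-t5-commongen")


def concepts_to_text(concepts: list[str]) -> str:
    """Generate a sentence that uses all of the given concepts."""
    prompt = "generate a sentence with: " + " ".join(concepts)
    return generator(prompt, max_new_tokens=32)[0]["generated_text"]


if __name__ == "__main__":
    print(concepts_to_text(["dog", "frisbee", "catch", "throw"]))
```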