Dai, Luke
Alexa, play with robot: Introducing the First Alexa Prize SimBot Challenge on Embodied AI
Hangjie Shi, Leslie Ball, Govind Thattai, Desheng Zhang, Lucy Hu, Qiaozi Gao, Suhaila Shakiah, Xiaofeng Gao, Aishwarya Padmakumar, Bofei Yang, Cadence Chung, Dinakar Guthy, Gaurav Sukhatme, Karthika Arumugam, Matthew Wen, Osman Ipek, Patrick Lange, Rohan Khanna, Shreyas Pansare, Vasu Sharma, Chao Zhang, Cris Flagg, Daniel Pressel, Lavina Vaz, Luke Dai, Prasoon Goyal, Sattvik Sahai, Shaohua Liu, Yao Lu, Anna Gottardi, Shui Hu, Yang Liu, Dilek Hakkani-Tur, Kate Bland, Heather Rocker, James Jeun, Yadunandana Rao, Michael Johnston, Akshaya Iyengar, Arindam Mandal, Prem Natarajan, Reza Ghanadan
The Alexa Prize program has empowered numerous university students to explore, experiment, and showcase their talents in building conversational agents through challenges like the SocialBot Grand Challenge and the TaskBot Challenge. As conversational agents increasingly appear in multimodal and embodied contexts, it is important to explore the affordances of conversational interaction augmented with computer vision and physical embodiment. This paper provides an overview of the SimBot Challenge, a new challenge in which university teams compete to build robot assistants that complete tasks in a simulated physical environment, and which included both online and offline phases. We describe the infrastructure and support provided to the teams, including Alexa Arena, the simulated environment, and the ML toolkit supplied to accelerate their development of vision and language models. We summarize the approaches the participating teams took to overcome research challenges and extract key lessons learned. Finally, we provide an analysis of the performance of the competing SimBots during the competition.
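As an illustration of the kind of embodied-agent setup the challenge targets, below is a minimal Python sketch of an observe-act loop against a simulated environment. The SimulatedEnv interface, the action strings, and the plan_actions helper are hypothetical placeholders for exposition only; they are not the actual Alexa Arena API or any team's system.

```python
# Hypothetical observe-act loop for a SimBot-style embodied agent.
# SimulatedEnv, its methods, and the action vocabulary are placeholders,
# not the real Alexa Arena interface.
from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    image: bytes          # egocentric camera frame from the simulator
    instruction: str      # natural-language instruction from the user


class SimulatedEnv:
    """Stand-in for a simulated physical environment (assumed interface)."""

    def reset(self) -> Observation:
        return Observation(image=b"", instruction="Put the mug on the desk.")

    def step(self, action: str) -> tuple:
        # Returns (next observation, task-complete flag); trivially ends here.
        return Observation(image=b"", instruction=""), True


def plan_actions(obs: Observation) -> List[str]:
    """Placeholder for the vision-and-language model that grounds the
    instruction in the scene and emits low-level robot actions."""
    return ["goto mug", "pickup mug", "goto desk", "place mug"]


def run_episode(env: SimulatedEnv) -> None:
    obs = env.reset()
    for action in plan_actions(obs):
        obs, done = env.step(action)
        if done:
            break


if __name__ == "__main__":
    run_episode(SimulatedEnv())
```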
Improving Open-Domain Dialogue Evaluation with a Causal Inference Model
Cat P. Le, Luke Dai, Michael Johnston, Yang Liu, Marilyn Walker, Reza Ghanadan
Effective evaluation methods remain a significant challenge for research on open-domain conversational dialogue systems. Explicit satisfaction ratings can be elicited from users, but users often do not provide ratings when asked, and those they give can be highly subjective. Post-hoc ratings by experts are an alternative, but these can be both expensive and complex to collect. Here, we explore automated methods for predicting both expert and user ratings of open-domain dialogues. We compare four approaches. First, we train a baseline model using an end-to-end transformer to predict ratings directly from the raw dialogue text. The other three methods are variants of a two-stage approach in which we first extract interpretable features at the turn level that capture, among other aspects, user dialogue behaviors indicating contradiction, repetition, disinterest, compliments, or criticism. We project these features to the dialogue level and train a dialogue-level MLP regression model, a dialogue-level LSTM, and a novel causal inference model called counterfactual-LSTM (CF-LSTM) to predict ratings. The proposed CF-LSTM is a sequential model over turn-level features that predicts ratings using multiple regressors selected according to hypotheses derived from those features. As a causal inference model, CF-LSTM aims to learn the underlying causes of a specific event, such as a low rating. We also bin the user ratings and perform classification experiments with all four models. In evaluation experiments on conversational data from the Alexa Prize SocialBot, we show that the CF-LSTM achieves the best performance on both dialogue rating prediction and rating classification.
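To make the two-stage architecture concrete, here is a minimal PyTorch sketch of a sequential model over turn-level features with multiple rating regressors, one of which is selected per dialogue by a hypothesis index, loosely following the CF-LSTM description above. All class and parameter names, feature dimensions, and the hypothesis-routing rule are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a turn-level-feature LSTM with multiple rating
# regressors routed by a per-dialogue hypothesis index. Names, dimensions,
# and the routing rule are assumptions, not the published CF-LSTM code.
import torch
import torch.nn as nn


class TurnFeatureLSTMRater(nn.Module):
    """Encodes a sequence of interpretable turn-level features and predicts a
    dialogue rating with one of several regression heads, chosen per dialogue
    by a hypothesis index derived from the same turn-level features."""

    def __init__(self, feat_dim: int = 16, hidden: int = 64, n_hypotheses: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        # One regressor per hypothesis (e.g. "repetition-heavy dialogue").
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
             for _ in range(n_hypotheses)]
        )

    def forward(self, turn_feats: torch.Tensor, hypothesis: torch.Tensor) -> torch.Tensor:
        # turn_feats: (batch, n_turns, feat_dim); hypothesis: (batch,) long indices.
        _, (h_n, _) = self.lstm(turn_feats)
        summary = h_n[-1]                                    # (batch, hidden)
        all_preds = torch.stack(
            [head(summary).squeeze(-1) for head in self.heads], dim=1
        )                                                    # (batch, n_hypotheses)
        # Route each dialogue to the regressor matching its hypothesis.
        return all_preds.gather(1, hypothesis.unsqueeze(1)).squeeze(1)


if __name__ == "__main__":
    model = TurnFeatureLSTMRater()
    feats = torch.randn(4, 10, 16)          # 4 dialogues, 10 turns, 16 features each
    hyps = torch.randint(0, 3, (4,))        # e.g. derived from contradiction/repetition flags
    print(model(feats, hyps).shape)         # torch.Size([4])
```

The dialogue-level MLP and LSTM baselines described in the abstract would correspond to a single regression head over pooled or LSTM-encoded turn features; the routing step is what distinguishes the counterfactual variant in this sketch.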