Muppala, Vaishnavi
ECLAIR: Enhanced Clarification for Interactive Responses
Murzaku, John, Liu, Zifan, Tanjim, Md Mehrab, Muppala, Vaishnavi, Chen, Xiang, Li, Yunyao
We present ECLAIR (Enhanced CLArification for Interactive Responses), a novel unified, end-to-end framework for interactive disambiguation in enterprise AI assistants. ECLAIR generates clarification questions for ambiguous user queries and resolves the ambiguity based on the user's response. We introduce a generalized architecture capable of integrating ambiguity information from multiple downstream agents, enhancing context-awareness in ambiguity resolution and allowing enterprise-specific definitions of agents. We further define agents within our system that provide domain-specific grounding information. We conduct experiments comparing ECLAIR to few-shot prompting techniques and demonstrate ECLAIR's superior performance in both clarification question generation and ambiguity resolution.
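The abstract only names the framework's components, so the sketch below is a hypothetical illustration of the described flow: downstream agents contribute grounding information, a clarification question is generated for an ambiguous query, and the user's answer is folded back in. The agent name, the `Disambiguator` class, and all logic are assumptions, not ECLAIR's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Disambiguator:
    # Each registered agent maps a query to domain-specific grounding info;
    # enterprises can plug in their own agents (hypothetical interface).
    agents: Dict[str, Callable[[str], dict]] = field(default_factory=dict)

    def register_agent(self, name: str, fn: Callable[[str], dict]) -> None:
        self.agents[name] = fn

    def clarify(self, query: str) -> str:
        # Collect candidate interpretations from every downstream agent
        # and surface them as a single clarification question.
        candidates = [info for agent in self.agents.values()
                      for info in agent(query).get("interpretations", [])]
        return f"Did you mean: {', '.join(candidates)}?"

    def resolve(self, query: str, user_response: str) -> str:
        # Fold the user's answer back into the original query.
        return f"{query} ({user_response})"

d = Disambiguator()
d.register_agent("analytics",
                 lambda q: {"interpretations": ["last 7 days", "last 7 weeks"]})
question = d.clarify("show performance for the last 7")
resolved = d.resolve("show performance for the last 7", "last 7 days")
```

A real deployment would back `clarify` with a trained generator rather than string templates; the point here is only the agent-registry shape that lets ambiguity signals come from multiple sources.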
Exploring Rewriting Approaches for Different Conversational Tasks
Tanjim, Md Mehrab, Rossi, Ryan A., Rimer, Mike, Chen, Xiang, Kim, Sungchul, Muppala, Vaishnavi, Yu, Tong, Hu, Zhengmian, Sinha, Ritwik, Zhang, Wei, Burhanuddin, Iftikhar Ahamath, Dernoncourt, Franck
Conversational assistants often require a question rewriting algorithm that leverages a subset of past interactions to provide a more meaningful (accurate) answer to the user's question or request. However, the exact rewriting approach often depends on the use case and the application-specific tasks supported by the conversational assistant, among other constraints. In this paper, we systematically investigate two different approaches, denoted as rewriting and fusion, on two fundamentally different generation tasks: a text-to-text generation task and a multimodal generative task that takes text as input and generates a visualization or data table answering the user's question. Our results indicate that the best rewriting or fusion approach depends strongly on the underlying use case and generative task. In particular, we find that for a conversational question-answering assistant, the query rewriting approach performs best, whereas for a data analysis assistant that generates visualizations and data tables based on the user's conversation, the fusion approach works best. Notably, we explore two datasets for the data analysis assistant use case, covering short and long conversations, and find that query fusion always performs better there, whereas the query rewrite approach performs best for conversational text-based question answering.
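The rewriting/fusion contrast described above can be illustrated with a minimal sketch: rewriting produces a single self-contained query, while fusion keeps the original query and passes selected past turns along as extra context for the downstream generator. The turn-selection heuristic and the naive pronoun substitution below are assumptions for illustration, not the paper's actual methods.

```python
import re

def rewrite(history: list, query: str) -> str:
    # Rewriting: produce one self-contained query. A real system would use
    # a trained rewriter; here we naively substitute the most recently
    # mentioned entity for the pronoun "it".
    last_entity = history[-1] if history else ""
    return re.sub(r"\bit\b", last_entity, query)

def fuse(history: list, query: str) -> str:
    # Fusion: keep the query verbatim and append recent turns as context,
    # deferring interpretation to the generator (e.g., a chart generator).
    context = " | ".join(history[-2:])
    return f"context: {context} :: question: {query}"

history = ["Adobe Analytics"]
rewritten = rewrite(history, "How do I enable it?")
fused = fuse(history, "How do I enable it?")
```

The design trade-off the paper measures falls out of this shape: rewriting commits to one interpretation early (good for text QA), while fusion preserves raw context for tasks like visualization generation, where the generator itself can exploit the full conversation.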
Detecting Ambiguities to Guide Query Rewrite for Robust Conversations in Enterprise AI Assistants
Tanjim, Md Mehrab, Chen, Xiang, Bursztyn, Victor S., Bhattacharya, Uttaran, Mai, Tung, Muppala, Vaishnavi, Maharaj, Akash, Mitra, Saayan, Koh, Eunyee, Li, Yunyao, Russell, Ken
Multi-turn conversations with an Enterprise AI Assistant can be challenging due to conversational dependencies in questions, leading to ambiguities and errors. To address this, we propose an NLU-NLG framework for ambiguity detection and resolution through automatic query reformulation, and introduce a new task called "Ambiguity-guided Query Rewrite." To detect ambiguities, we develop a taxonomy based on real user conversational logs and draw insights from it to design rules and extract features for a classifier, which yields superior performance in detecting ambiguous queries, outperforming LLM-based baselines. Furthermore, coupling the query rewrite module with our ambiguity-detection classifier shows that this end-to-end framework can effectively mitigate ambiguities without risking unnecessary insertion of unwanted phrases into clear queries, improving the overall performance of the AI Assistant. Given its significance, the framework has been deployed in a real-world application, the Adobe Experience Platform AI Assistant.