Detecting Ambiguities to Guide Query Rewrite for Robust Conversations in Enterprise AI Assistants

Md Mehrab Tanjim, Xiang Chen, Victor S. Bursztyn, Uttaran Bhattacharya, Tung Mai, Vaishnavi Muppala, Akash Maharaj, Saayan Mitra, Eunyee Koh, Yunyao Li, Ken Russell

arXiv.org Artificial Intelligence 

Multi-turn conversations with an Enterprise AI Assistant can be challenging because questions often depend on earlier conversational context, leading to ambiguities and errors. To address this, we propose an NLU-NLG framework for ambiguity detection and resolution through automatic query reformulation, and introduce a new task called "Ambiguity-guided Query Rewrite." To detect ambiguities, we develop a taxonomy grounded in real user conversational logs and draw on its insights to design rules and extract features for a classifier that outperforms LLM-based baselines at detecting ambiguous queries. Furthermore, coupling the query rewrite module with our ambiguity-detecting classifier shows that this end-to-end framework can effectively mitigate ambiguities without inserting unwanted phrases into queries that are already clear, improving the overall performance of the AI Assistant. Owing to its practical impact, the framework has been deployed in a real-world application, the Adobe Experience Platform AI Assistant.
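
As a rough illustration of the ambiguity-guided gating described in the abstract, the sketch below shows one way the two stages could be wired together: an NLU classifier decides whether a query is ambiguous, and only ambiguous queries are passed to an NLG rewriter. The classifier and rewriter objects and their methods are hypothetical placeholders, not the paper's actual implementation.

    # Minimal sketch of ambiguity-guided query rewrite (hypothetical components).

    def ambiguity_guided_rewrite(query, history, classifier, rewriter):
        """Rewrite `query` only when the NLU classifier flags it as ambiguous."""
        # NLU step: a rule/feature-based classifier decides whether the query
        # can be understood without the conversation history.
        if classifier.is_ambiguous(query, history):
            # NLG step: a rewriter resolves references against the history,
            # producing a self-contained query.
            return rewriter.rewrite(query, history)
        # Clear queries pass through untouched, avoiding unwanted insertions.
        return query

Gating the rewriter in this way reflects the paper's finding that rewriting every query risks degrading queries that were already clear.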