Acharya, Saurav
Leveraging LLMs to Enable Natural Language Search on Go-to-market Platforms
Yao, Jesse, Acharya, Saurav, Parida, Priyaranjan, Attipalli, Srinivas, Dasdan, Ali
Enterprise search requires users to have complex knowledge of queries, configurations, and metadata, making it difficult for them to access information as needed. Most go-to-market (GTM) platforms offer advanced search, an interface that lets users filter queries by various fields using categories or keywords, but this has historically proven exceedingly cumbersome, as users are faced with hundreds of options, fields, and buttons. Querying in natural language has therefore long been desirable, a goal made far more attainable by Large Language Models (LLMs). In this paper, we implement and evaluate a solution for the Zoominfo product for sellers, which prompts the LLM with natural language and produces search fields through entity extraction that are then converted into a search query. These intermediary search fields offer numerous advantages for each query, including the elimination of syntax errors, simpler ground truths, and an intuitive format for the LLM to interpret. We paired this pipeline with several advanced prompt engineering strategies, featuring an intricate system message, few-shot prompting, chain-of-thought (CoT) reasoning, and execution refinement. Furthermore, we manually created ground truth for 500+ natural language queries, enabling the supervised fine-tuning of Llama-3-8B-Instruct and the introduction of sophisticated numerical metrics. Comprehensive experiments with closed, open-source, and fine-tuned LLMs were conducted using exact, Jaccard, cosine, and semantic similarity on individual search entities to demonstrate the efficacy of our approach. Overall, the most accurate closed model achieved an average accuracy of 97% per query, with only one field scoring under 90%; the fine-tuned models produced comparable results.
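The abstract does not include code; the following is a minimal sketch of how the described pipeline might be wired up, assuming a hypothetical few-shot prompt, illustrative field names (job_title, industry, location) that are not taken from the paper, and a per-field Jaccard metric as one of the similarity measures mentioned.

```python
import json

# Hypothetical system message: the LLM is asked to emit structured search
# fields (an intermediary representation) rather than a raw search query.
# Field names here are illustrative only, not ZoomInfo's actual schema.
SYSTEM_MESSAGE = (
    "You extract go-to-market search fields from a user's natural language "
    "request. Respond with JSON containing only these keys: "
    "job_title, industry, location."
)

# One few-shot example pair, mirroring the few-shot prompting described above.
FEW_SHOT = [
    {"role": "user",
     "content": "Find VPs of marketing at software companies in Boston"},
    {"role": "assistant",
     "content": json.dumps({"job_title": ["VP of Marketing"],
                            "industry": ["Software"],
                            "location": ["Boston"]})},
]

def build_messages(query: str) -> list[dict]:
    """Assemble system message + few-shot examples + the new user query."""
    return [{"role": "system", "content": SYSTEM_MESSAGE},
            *FEW_SHOT,
            {"role": "user", "content": query}]

def jaccard(predicted: set[str], gold: set[str]) -> float:
    """Per-field Jaccard similarity between extracted and ground-truth entities."""
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)

# Example evaluation against a hand-labeled ground-truth record.
predicted = {"job_title": {"vp of marketing"}, "industry": {"software"},
             "location": {"boston"}}
gold = {"job_title": {"vp of marketing"}, "industry": {"software", "saas"},
        "location": {"boston"}}
scores = {field: jaccard(predicted[field], gold[field]) for field in gold}
print(scores)  # e.g. {'job_title': 1.0, 'industry': 0.5, 'location': 1.0}
```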
Towards Situated Open World Reference Resolution
Williams, Tom (Tufts University) | Schreitter, Stephanie (Austrian Research Institute for Artificial Intelligence) | Acharya, Saurav (Tufts University) | Scheutz, Matthias (Tufts University)
Natural language dialogue provides the opportunity for truly natural human-robot interaction. A robot participating in natural language dialogue must identify or create new representations for referenced entities if it is to discuss, reason about, or perform actions involving those entities, a capability known as reference resolution. In previous work we presented algorithms for resolving references occurring in definite noun phrases. In this paper we propose an algorithm for resolving references in a wider array of linguistic forms, using the Givenness Hierarchy.
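The abstract does not detail the algorithm; the sketch below only illustrates the general Givenness Hierarchy idea of letting a referring form license a cognitive status and searching candidate referents from the most restrictive tier outward. The form-to-status mapping and memory structures are simplified placeholders, not the paper's method.

```python
# Referring forms and the minimum cognitive status each licenses
# (simplified from the Givenness Hierarchy literature; "N" stands for a noun).
FORM_TO_STATUS = {
    "it": "in_focus",
    "this N": "activated",
    "that N": "familiar",
    "the N": "uniquely_identifiable",
    "a N": "type_identifiable",
}

# Tiers ordered from most to least restrictive; a form's status permits
# searching its own tier plus all more restrictive tiers above it.
STATUS_ORDER = ["in_focus", "activated", "familiar",
                "uniquely_identifiable", "type_identifiable"]

def resolve(form: str, description: set[str], memory: dict[str, list[dict]]):
    """Return the first entity whose properties satisfy the description,
    searching only the memory tiers permitted by the referring form."""
    status = FORM_TO_STATUS.get(form, "type_identifiable")
    allowed = STATUS_ORDER[: STATUS_ORDER.index(status) + 1]
    for tier in allowed:
        for entity in memory.get(tier, []):
            if description <= entity["properties"]:
                return entity
    return None  # no match: a new representation could be created instead

# Tiny example: "that red box" resolves to a familiar entity.
memory = {
    "in_focus": [{"id": "ball1", "properties": {"ball", "blue"}}],
    "familiar": [{"id": "box7", "properties": {"box", "red"}}],
}
print(resolve("that N", {"box", "red"}, memory))
# {'id': 'box7', 'properties': {'box', 'red'}}
```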