Paraphrase and Aggregate with Large Language Models for Minimizing Intent Classification Errors
Vikas Yadav, Zheng Tang, Vijay Srinivasan
arXiv.org Artificial Intelligence
Large language models (LLMs) have received more spotlight for generative tasks such as question answering, dialogue, summarization, etc. (Peng et al., 2023; Beeching et al., 2023). We argue that key NLP tasks such as intent classification are widely utilized in real-world dialogue systems and thus should also be given high emphasis when evaluating LLMs, considering their proven capability to solve a wide range of NLP tasks (Beeching et al., 2023). In this work, we focus on studying LLMs for large intent classification tasks with two intent classification datasets: CLINC (Larson et al., 2019), which has 150 classes, and Banking (Casanueva et al., 2020), which has 77 classes.

… label. Hence, as an alternative solution, we propose a (p)araphrasing and (ag)gregating approach (PAG) to fix LLM errors on the intent classification task, where the input query is paraphrased to perform intent classification on its multiple variations. Our approach is inspired by observations that user queries are often unclear and, when rephrased, improve downstream systems (Brabant et al., 2022). PAG-LLM leverages the versatility of LLMs to perform three tasks: paraphrasing, intent classification, and aggregation. We first generate N paraphrases of the input query, then generate classification predictions for the original query and its N paraphrases …
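The paraphrase-then-classify-then-aggregate pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `paraphrase` and `classify` callables stand in for LLM prompts, and aggregation is shown as simple majority voting over the N+1 predictions, which is one plausible aggregation choice (the abstract states that the LLM itself also performs the aggregation step).

```python
from collections import Counter
from typing import Callable, List

def pag_classify(
    query: str,
    paraphrase: Callable[[str, int], str],
    classify: Callable[[str], str],
    n: int = 3,
) -> str:
    """Paraphrase-and-aggregate (PAG) sketch: classify the original query
    and N paraphrased variants, then aggregate the N+1 predicted labels."""
    # Step 1: generate N paraphrases of the input query.
    variants: List[str] = [query] + [paraphrase(query, i) for i in range(n)]
    # Step 2: predict an intent label for the original query and each paraphrase.
    predictions = [classify(v) for v in variants]
    # Step 3: aggregate. Majority vote is an assumption made for this sketch;
    # the paper describes using an LLM for the aggregation task as well.
    label, _count = Counter(predictions).most_common(1)[0]
    return label

# Toy stand-ins for the LLM calls (hypothetical; a real system would prompt an LLM).
def toy_paraphrase(query: str, i: int) -> str:
    return f"{query} (rephrasing {i})"

def toy_classify(text: str) -> str:
    return "balance_inquiry" if "balance" in text else "unknown"

print(pag_classify("what is my balance", toy_paraphrase, toy_classify, n=3))
```

With these toy stand-ins, all four variants keep the word "balance", so every prediction agrees and the vote returns `balance_inquiry`; the aggregation step matters precisely when the original query is unclear and some paraphrases flip the prediction.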
Jun-24-2024