Effective Proxy for Human Labeling: Ensemble Disagreement Scores in Large Language Models for Industrial NLP
Wei Du, Laksh Advani, Yashmeet Gambhir, Daniel J. Perry, Prashant Shiralkar, Zhengzheng Xing, Aaron Colak
arXiv.org Artificial Intelligence
…natural language processing (NLP) tasks using the latest generative pretrained models such as GPT (OpenAI, 2023; Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), and many others (Touvron et al., 2023; Bai et al., 2022; Penedo et al., 2023; Taori et al., 2023). This new generation of models opens up many new possibilities, including competitive performance in zero-shot and few-shot settings for tasks that have typically been modeled in a supervised setting (OpenAI, 2023). More established language models (BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), XLM-RoBERTa (Conneau et al., 2020b), etc.) provide a strong balance of inference cost and task performance for such systems. This broad class of large language models (LLMs) used for complex supervised NLP…

More recently, Fu et al. (2023) create a meta-model responsible for predicting the accuracy of the LLM, using the model's confidence scores as features. Methods from the computer vision (CV) domain for assessing unlabeled data more generally have, for example, proposed the average threshold confidence method, which learns a threshold over the model's confidence and predicts accuracy as the fraction of unlabeled examples exceeding that threshold (Garg et al., 2022), or iteratively learned an ensemble of models to identify misclassified data points and performed self-training to improve the ensemble with the identified points (Chen et al., 2021). However, the metrics and hyperparameters in previous works are specific to classification tasks and cannot easily be extended to more complex tasks.
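The average threshold confidence idea summarized above admits a compact illustration. The following is a minimal sketch in Python with NumPy, not the reference implementation of Garg et al. (2022): the function names, the candidate-threshold grid over sorted validation confidences, and the 0/1 correctness-indicator inputs are all illustrative assumptions.

```python
import numpy as np

def fit_atc_threshold(val_confidences, val_correct):
    """Learn a confidence threshold t on labeled validation data so that the
    fraction of validation examples with confidence >= t matches the observed
    validation accuracy (the core of average threshold confidence)."""
    val_confidences = np.asarray(val_confidences, dtype=float)
    target_accuracy = float(np.mean(val_correct))
    # Candidate thresholds: the sorted validation confidences themselves.
    # For threshold sorted_conf[i], the fraction of examples at or above it
    # is (n - i) / n; pick the candidate whose fraction is closest to the
    # target accuracy.
    sorted_conf = np.sort(val_confidences)
    n = len(sorted_conf)
    fractions = (n - np.arange(n)) / n
    best = int(np.argmin(np.abs(fractions - target_accuracy)))
    return float(sorted_conf[best])

def estimate_accuracy(unlabeled_confidences, threshold):
    """Predicted accuracy on unlabeled data: the fraction of examples whose
    model confidence meets or exceeds the learned threshold."""
    conf = np.asarray(unlabeled_confidences, dtype=float)
    return float(np.mean(conf >= threshold))

if __name__ == "__main__":
    # Toy example: validation accuracy is 3/5 = 0.6, so the learned
    # threshold should admit roughly 60% of the validation confidences.
    t = fit_atc_threshold([0.95, 0.80, 0.60, 0.99, 0.70], [1, 1, 0, 1, 0])
    print(t)                                         # 0.80
    print(estimate_accuracy([0.90, 0.50, 0.85], t))  # ~0.67
```

Note that this sketch assumes a single scalar confidence per example, which is exactly the restriction the passage above points out: such metrics transfer naturally to classification but not to more complex structured tasks.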
Nov-19-2023