Enhancing Trust in Large Language Models with Uncertainty-Aware Fine-Tuning
Ranganath Krishnan, Piyush Khanna, Omesh Tickoo
Large language models (LLMs) have revolutionized the field of natural language processing with their impressive reasoning and question-answering capabilities. However, these models are sometimes prone to generating credible-sounding but incorrect information, a phenomenon known as LLM hallucination. Reliable uncertainty estimation in LLMs is essential for fostering trust in their generated responses and serves as a critical tool for detecting and preventing erroneous or hallucinated outputs. To achieve reliable and well-calibrated uncertainty quantification in open-ended and free-form natural language generation, we propose an uncertainty-aware fine-tuning approach for LLMs. This approach enhances the model's ability to provide reliable uncertainty estimates without compromising accuracy, thereby guiding it to produce more trustworthy responses. We introduce a novel uncertainty-aware causal language modeling loss function, grounded in the principles of decision theory. Through rigorous evaluation on multiple free-form question-answering datasets and models, we demonstrate that our uncertainty-aware fine-tuning approach yields better-calibrated uncertainty estimates in natural language generation tasks than fine-tuning with the standard causal language modeling loss. Furthermore, the experimental results show that the proposed method significantly improves the model's ability to detect hallucinations and identify out-of-domain prompts.

Large Language Models (LLMs) have shown remarkable success in various natural language processing tasks (Touvron et al., 2023; Gemma et al., 2024; Achiam et al., 2023) and are increasingly becoming ubiquitous in a variety of domains for their decision-making and reasoning abilities (Eigner & Händler, 2024). However, their real-world deployment, particularly in high-stakes and safety-critical applications, is hindered by challenges such as hallucinations and out-of-domain prompts, which can lead to the generation of erroneous or nonsensical outputs. Hallucinations, often described as plausible-sounding but incorrect or unfaithful model generations (Ji et al., 2023), present a crucial challenge for developing trustworthy systems, especially in critical domains such as medicine (Ahmad et al., 2023) and law (Magesh et al., 2024). The ability to recognize out-of-domain prompts and to acknowledge the limits of a model's knowledge base paves the way for building safe AI systems (Amodei et al., 2016). Uncertainty quantification (UQ) in LLMs plays a pivotal role in understanding what the model knows and does not know, and it remains an active area of research for free-form natural language generation (NLG) (Kadavath et al., 2022; Kuhn et al., 2023; Lin et al., 2024).
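The abstract does not spell out the decision-theoretic loss itself, so the following is only a minimal sketch of what an uncertainty-aware causal language modeling objective could look like: standard next-token cross-entropy combined with an entropy-based confidence penalty that discourages overconfident predictive distributions. The function name `uncertainty_aware_clm_loss` and the regularizer weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_clm_loss(logits, labels, lam=0.1, ignore_index=-100):
    """Illustrative uncertainty-aware objective (not the paper's exact loss):
    next-token cross-entropy plus a penalty on overconfident (low-entropy)
    predictive distributions, which tends to improve calibration.

    logits: (batch, seq_len, vocab) raw model outputs
    labels: (batch, seq_len) target token ids; ignore_index marks prompt/padding
    lam:    hypothetical weight of the uncertainty regularizer
    """
    # Shift so position t predicts token t+1, as in standard causal LM training.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()

    vocab_size = shift_logits.size(-1)
    flat_logits = shift_logits.view(-1, vocab_size)
    flat_labels = shift_labels.view(-1)

    # Standard causal language modeling term.
    ce = F.cross_entropy(flat_logits, flat_labels, ignore_index=ignore_index)

    # Predictive entropy per token position, averaged over valid targets.
    log_probs = F.log_softmax(flat_logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    valid = (flat_labels != ignore_index).float()
    mean_entropy = (entropy * valid).sum() / valid.sum().clamp(min=1.0)

    # Subtracting entropy penalizes overconfidence: the model is nudged to keep
    # probability mass spread out rather than committing sharply to wrong tokens.
    return ce - lam * mean_entropy
```

In a fine-tuning setup, a loss of this kind could, for example, replace the default objective by overriding `compute_loss` in a Hugging Face `Trainer`; the loss actually proposed in the paper is derived from decision theory and may differ substantially from this sketch.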
arXiv.org Artificial Intelligence
Dec-3-2024