LACIE: Listener-Aware Finetuning for Calibration in Large Language Models

Neural Information Processing Systems 

When answering questions, large language models (LLMs) can convey not only an answer but also a level of confidence that the answer is correct. This includes explicit markers of confidence (e.g.