Evaluating Prompt Engineering Techniques for Accuracy and Confidence Elicitation in Medical LLMs
Nariman Naderi, Zahra Atf, Peter R. Lewis, Aref Mahjoub far, Seyed Amir Ahmad Safavi-Naini, Ali Soroush
–arXiv.org Artificial Intelligence
This paper investigates how prompt engineering techniques affect both accuracy and the reliability of elicited confidence in Large Language Models (LLMs) applied to medical contexts. Using a stratified dataset of Persian board exam questions across multiple specialties, we evaluated five LLMs (GPT-4o, o3-mini, Llama-3.3-70b, Llama-3.1-8b, and DeepSeek-v3) across 156 configurations. These configurations varied in temperature setting (0.3, 0.7, 1.0), prompt style (Chain-of-Thought, Few-Shot, Emotional, Expert Mimicry), and confidence scale (1-10, 1-100). We used AUC-ROC, Brier Score, and Expected Calibration Error (ECE) to evaluate how well stated confidence aligned with actual performance. Chain-of-Thought prompts improved accuracy but also led to overconfidence, highlighting the need for calibration. Emotional prompting inflated confidence further, risking poor clinical decisions. Smaller models such as Llama-3.1-8b underperformed across all metrics, while proprietary models achieved higher accuracy but still lacked calibrated confidence. These results suggest that prompt engineering must address both accuracy and uncertainty to be effective in high-stakes medical tasks.
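The calibration metrics named in the abstract follow standard definitions. The sketch below is a minimal illustration, not code from the paper: it assumes binary correctness labels per question and confidences elicited on a 1-10 scale rescaled to [0, 1]; all function and variable names are illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def brier_score(confidences, correct):
    """Mean squared gap between stated confidence and 0/1 correctness."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return float(np.mean((confidences - correct) ** 2))

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: per-bin |accuracy - mean confidence|, weighted by bin population."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Assign each prediction to a confidence bin over [0, 1].
    bin_ids = np.minimum((confidences * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # mask.mean() = fraction of samples in this bin
    return float(ece)

# Illustrative inputs: confidences reported on a 1-10 scale, rescaled to [0, 1],
# and 1/0 labels indicating whether each answer matched the exam key.
raw_scores = [9, 7, 10, 4, 8, 6]
correct = [1, 1, 1, 0, 0, 1]
conf = [s / 10 for s in raw_scores]

print("Brier:", brier_score(conf, correct))
print("ECE:  ", expected_calibration_error(conf, correct, n_bins=5))
print("AUROC:", roc_auc_score(correct, conf))  # does higher confidence separate right from wrong?
```

Brier Score penalizes squared confidence error on each item, ECE averages the gap between confidence and accuracy across bins, and AUC-ROC measures whether higher confidence discriminates correct from incorrect answers.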
Jun-3-2025