Evaluating Causal Explanation in Medical Reports with LLM-Based and Human-Aligned Metrics
arXiv.org Artificial Intelligence
This study investigates how accurately different evaluation metrics capture the quality of causal explanations in automatically generated diagnostic reports. We compare six metrics (BERTScore, Cosine Similarity, BioSentVec, GPT-White, GPT-Black, and expert qualitative assessment) across two input types: observation-based and multiple-choice-based report generation. Two weighting strategies are applied: one reflecting task-specific priorities and one assigning equal weights to all metrics. Our results show that GPT-Black demonstrates the strongest discriminative power in identifying logically coherent and clinically valid causal narratives. GPT-White also aligns well with expert evaluations, whereas similarity-based metrics diverge from clinical reasoning quality. These findings highlight the impact of metric selection and weighting on evaluation outcomes, supporting the use of LLM-based evaluation for tasks requiring interpretability and causal reasoning.
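The two weighting strategies described in the abstract can be sketched as a simple weighted aggregation of per-metric scores. The metric scores and the specific weight values below are illustrative assumptions, not figures from the paper:

```python
# Hypothetical sketch: combining per-metric scores under two weighting schemes.
# Scores and weights are illustrative placeholders, not values from the study.

def weighted_score(scores, weights):
    """Weighted average of metric scores; weights are normalized to sum to 1."""
    total = sum(weights.values())
    return sum(scores[m] * (w / total) for m, w in weights.items())

scores = {
    "BERTScore": 0.82,
    "CosineSimilarity": 0.75,
    "BioSentVec": 0.70,
    "GPT-White": 0.88,
    "GPT-Black": 0.91,
}

# Task-specific weighting: emphasize the LLM-based judgments of causal reasoning.
task_weights = {"BERTScore": 1, "CosineSimilarity": 1, "BioSentVec": 1,
                "GPT-White": 2, "GPT-Black": 3}

# Equal weighting: every metric contributes the same.
equal_weights = {m: 1 for m in scores}

print(weighted_score(scores, task_weights))   # emphasizes GPT-based metrics
print(weighted_score(scores, equal_weights))  # plain average over all metrics
```

Under the task-specific weights, high GPT-Black and GPT-White scores pull the aggregate up relative to the equal-weight average, which is one way a weighting choice can change how systems rank against each other.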
Jun-24-2025
- Genre:
- Research Report > New Finding (0.87)
- Industry:
- Health & Medicine
- Diagnostic Medicine > Imaging (0.48)
- Health Care Technology > Medical Record (0.41)