Can LLM-Generated Textual Explanations Enhance Model Classification Performance? An Empirical Study
Mahdi Dhaini, Juraj Vladika, Ege Erdogan, Zineb Attaoui, Gjergji Kasneci
arXiv.org Artificial Intelligence
In the rapidly evolving field of Explainable Natural Language Processing (NLP), textual explanations, i.e., human-like rationales, are pivotal for explaining model predictions and for enriching datasets with interpretable labels. Traditional approaches rely on human annotation, which is costly, labor-intensive, and impedes scalability. In this work, we present an automated framework that leverages multiple state-of-the-art large language models (LLMs) to generate high-quality textual explanations. We rigorously assess the quality of these LLM-generated explanations using a comprehensive suite of Natural Language Generation (NLG) metrics. Furthermore, we investigate the downstream impact of these explanations on the performance of pre-trained language models (PLMs) and LLMs on natural language inference tasks across two diverse benchmark datasets. Our experiments demonstrate that automatically generated explanations are highly competitive with human-annotated explanations in improving model performance. Our findings point to scalable, automated LLM-based explanation generation as a promising avenue for extending NLP datasets and enhancing model performance.
Nov-12-2025
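
The core idea of such a pipeline, prompting an LLM to produce a free-text rationale for each labeled example and then feeding that rationale to a downstream classifier, can be sketched as follows. This is a minimal illustration rather than the authors' exact setup: the model name, prompt wording, and input format below are assumptions made for the example.

```python
# Minimal sketch (not the paper's exact pipeline): prompt an LLM for a short
# rationale on an NLI example, then append it to the classifier input.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_explanation(premise: str, hypothesis: str, label: str) -> str:
    """Ask an LLM for a brief rationale explaining why `label` holds for the pair."""
    prompt = (
        f"Premise: {premise}\n"
        f"Hypothesis: {hypothesis}\n"
        f"Gold label: {label}\n"
        "In one or two sentences, explain why this label is correct."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the paper evaluates several state-of-the-art LLMs
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,
    )
    return response.choices[0].message.content.strip()

# Example: augment one NLI instance with its generated rationale before fine-tuning a PLM.
premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."
explanation = generate_explanation(premise, hypothesis, "entailment")
augmented_input = f"{premise} [SEP] {hypothesis} [SEP] {explanation}"
```

In a setup like this, the generated rationale is simply concatenated with the premise and hypothesis so the downstream model can condition on it during fine-tuning; the abstract's finding is that such automatically generated rationales rival human-written ones in the resulting performance gains.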