PLEX: Perturbation-free Local Explanations for LLM-Based Text Classification
Rahulamathavan, Yogachandran, Farooq, Misbah, De Silva, Varuna
arXiv.org Artificial Intelligence
Large Language Models (LLMs) excel in text classification, but their complexity hinders interpretability, making it difficult to understand the reasoning behind their predictions. Explainable AI (XAI) methods like LIME and SHAP offer local explanations by identifying influential words, but they rely on computationally expensive perturbations. These methods typically generate thousands of perturbed sentences and perform inference on each, incurring a substantial computational burden, especially with LLMs. To address this, we propose Perturbation-free Local Explanation (PLEX), a novel method that leverages the contextual embeddings extracted from the LLM and a "Siamese network"-style neural network trained to align with feature importance scores. This one-off training eliminates the need for subsequent perturbations, enabling efficient explanations for any new sentence. We demonstrate PLEX's effectiveness on four classification tasks (sentiment, fake news, fake COVID-19 news, and depression), showing more than 92% agreement with LIME and SHAP. Our evaluation using a "stress test" reveals that PLEX accurately identifies influential words, leading to a similar decline in classification accuracy as observed with LIME and SHAP when these words are removed. Notably, in some cases, PLEX demonstrates superior performance in capturing the impact of key features. PLEX dramatically accelerates explanation, reducing time and computational overhead by two and four orders of magnitude, respectively. This work offers a promising solution for explainable LLM-based text classification.

Large language models (LLMs) have significantly advanced text classification, achieving state-of-the-art results in tasks like emotion recognition, sentiment analysis, topic categorization, and spam detection [1]. Powered by transformer architectures with millions or billions of parameters, they effectively capture complex linguistic patterns.
However, the very complexity that enables their high performance also renders their internal workings opaque and difficult to interpret.
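The abstract's core idea — train a scorer once on LLM embeddings against LIME/SHAP importance targets, then explain any new sentence with a single forward pass — can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual architecture: PLEX uses a Siamese-style neural network, whereas here a plain least-squares linear probe stands in, and all embeddings and importance scores are randomly generated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: contextual embeddings for T tokens in each of N training
# sentences (stand-ins for embeddings extracted from a frozen LLM), paired with
# per-token importance scores (stand-ins for one-off LIME/SHAP targets).
N, T, D = 200, 8, 16
embeddings = rng.normal(size=(N, T, D))
true_w = rng.normal(size=D)
importances = embeddings @ true_w   # synthetic "LIME/SHAP" targets

# One-off training: fit a token-level scorer mapping embedding -> importance.
# (PLEX trains a Siamese-style network; a linear probe is the simplest stand-in.)
X = embeddings.reshape(-1, D)
y = importances.reshape(-1)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Explaining a new sentence then needs no perturbations at all:
# a single pass over its token embeddings yields one score per token.
new_sentence = rng.normal(size=(T, D))
token_scores = new_sentence @ w
print(token_scores.shape)  # (8,)
```

The contrast with LIME/SHAP is in the last step: instead of thousands of perturbed variants of the sentence each requiring an LLM inference, explanation cost collapses to one forward pass through the trained scorer.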
Jul-16-2025