Integration of Explainable AI Techniques with Large Language Models for Enhanced Interpretability for Sentiment Analysis
Thogesan, Thivya, Nugaliyadde, Anupiya, Wong, Kok Wai
Interpretability remains a key difficulty in sentiment analysis with Large Language Models (LLMs), particularly in high-stakes applications where it is crucial to understand the rationale behind model predictions. This research addresses the problem by introducing a technique that applies SHAP (Shapley Additive Explanations) to LLMs decomposed into components such as the embedding layer, encoder, decoder, and attention layer, providing layer-by-layer insight into sentiment prediction. By breaking LLMs into these parts, the approach offers a clearer view of how the model interprets and categorises sentiment. The method is evaluated on the Stanford Sentiment Treebank (SST-2) dataset, which shows how different sentences affect different layers. Experimental evaluations demonstrate the effectiveness of layer-wise SHAP analysis in clarifying sentiment-specific token attributions, offering a notable improvement over current whole-model explainability techniques. These results highlight how the proposed approach could improve the reliability and transparency of LLM-based sentiment analysis in critical applications.
arXiv.org Artificial Intelligence
Mar-14-2025
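As a rough illustration of the kind of SHAP attribution the abstract describes, the sketch below estimates token-level Shapley values for an SST-2 sentiment classifier via Monte-Carlo permutation sampling, treating [MASK]-substitution as token removal. The checkpoint name (distilbert-base-uncased-finetuned-sst-2-english), the sampling budget, and the masking scheme are illustrative assumptions; the sketch attributes the whole model's output, not the per-component decomposition (embedding, attention, encoder, decoder) that the paper proposes.

```python
# Minimal sketch: Monte-Carlo permutation estimate of per-token Shapley values
# for a sentiment classifier. A coalition's value is the positive-class
# probability when only the coalition's tokens are kept and the rest are
# replaced with [MASK]. Checkpoint and sampling budget are assumptions.
import random
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed stand-in model
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT).eval()


def positive_prob(input_ids: torch.Tensor, attention_mask: torch.Tensor) -> float:
    """Probability of the positive class (index 1 for this checkpoint)."""
    with torch.no_grad():
        logits = model(input_ids=input_ids, attention_mask=attention_mask).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()


def shapley_attributions(text: str, n_permutations: int = 50):
    """Estimate each content token's Shapley value for the positive class."""
    enc = tokenizer(text, return_tensors="pt")
    ids = enc["input_ids"][0].tolist()
    special = {i for i, t in enumerate(ids) if t in tokenizer.all_special_ids}
    content = [i for i in range(len(ids)) if i not in special]

    def value(present: set) -> float:
        # Keep tokens in `present`; mask every other content token.
        masked = enc["input_ids"].clone()
        for pos in content:
            if pos not in present:
                masked[0, pos] = tokenizer.mask_token_id
        return positive_prob(masked, enc["attention_mask"])

    contrib = {i: 0.0 for i in content}
    for _ in range(n_permutations):
        order = random.sample(content, len(content))  # random token ordering
        present: set = set()
        prev = value(present)
        for pos in order:
            present.add(pos)
            cur = value(present)
            contrib[pos] += cur - prev  # marginal contribution of this token
            prev = cur

    tokens = tokenizer.convert_ids_to_tokens(ids)
    return [(tokens[i], contrib[i] / n_permutations) for i in content]


if __name__ == "__main__":
    for token, score in shapley_attributions("The film is sharp, funny, and quietly moving."):
        print(f"{token:>12s}  {score:+.4f}")
```

In this estimator a token's score is its average marginal change to the positive-class probability across random orderings; a layer-wise variant in the spirit of the paper would repeat the same accounting against intermediate representations rather than input tokens.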