Triadic Fusion of Cognitive, Functional, and Causal Dimensions for Explainable LLMs: The TAXAL Framework
Herrera-Poyatos, David, Peláez-González, Carlos, Zuheros, Cristina, Tejedor, Virilo, Montes, Rosana, Herrera, Francisco
arXiv.org Artificial Intelligence
Large Language Models (LLMs) such as GPT-5, Gemini, Claude, and LLaMA have become foundational tools in artificial intelligence (AI), achieving state-of-the-art performance in summarization, translation, reasoning, and dialogue. However, as LLMs are increasingly integrated into high-risk decision-making in domains such as healthcare, law, and education, their lack of transparency raises urgent concerns for safety, accountability, and public trust [12]. The scale and complexity of these models, spanning billions of parameters trained on opaque corpora, make their internal reasoning fundamentally inscrutable. This opacity creates barriers to responsible adoption, as users often lack meaningful ways to understand or challenge outputs. Without stakeholder-sensitive explanations, systems risk overtrust, misinterpretation, or outright rejection [11]. Explainable AI (XAI) for LLMs has therefore evolved beyond technical introspection [6]. The goal is not only to expose internal mechanisms but also to support human interaction, trust calibration, and decision assurance. As model behavior becomes more emergent and unpredictable [10], explanation systems must serve cognitive, functional, and ethical purposes simultaneously [7].
Sep-8-2025
- Country:
- Asia > Middle East
- UAE > Abu Dhabi Emirate > Abu Dhabi (0.04)
- Europe > Spain
- Andalusia > Granada Province > Granada (0.04)
- Genre:
- Research Report (1.00)
- Industry:
- Education > Educational Setting (0.67)
- Government (0.93)
- Health & Medicine
- Diagnostic Medicine (1.00)
- Therapeutic Area > Cardiology/Vascular Diseases (0.93)
- Law (1.00)