Triadic Fusion of Cognitive, Functional, and Causal Dimensions for Explainable LLMs: The TAXAL Framework

Herrera-Poyatos, David, Peláez-González, Carlos, Zuheros, Cristina, Tejedor, Virilo, Montes, Rosana, Herrera, Francisco

arXiv.org Artificial Intelligence

Large Language Models (LLMs) such as GPT-5, GEMINI, Claude, and LLaMA have become foundational tools in artificial intelligence (AI), achieving state-of-the-art performance in summarization, translation, reasoning, and dialogue. However, as LLMs are increasingly integrated into high-risk decision-making in domains such as healthcare, law, and education, their lack of transparency raises urgent concerns for safety, accountability, and public trust [12]. The scale and complexity of these models, spanning billions of parameters trained on opaque corpora, make their internal reasoning fundamentally inscrutable. This opacity creates barriers to responsible adoption, as users often lack meaningful ways to understand or challenge outputs. Without stakeholder-sensitive explanations, systems risk overtrust, misinterpretation, or outright rejection [11]. Explainable AI (XAI) for LLMs has therefore evolved beyond technical introspection [6]. The goal is not only to expose internal mechanisms but also to support human interaction, trust calibration, and decision assurance. As model behavior becomes more emergent and unpredictable [10], explanation systems must serve cognitive, functional, and ethical purposes simultaneously [7].