LLM Driven Processes to Foster Explainable AI
arXiv.org Artificial Intelligence
We present a modular, explainable LLM-agent pipeline for decision support that externalizes reasoning into auditable artifacts. The system instantiates three frameworks: Vester's Sensitivity Model (factor set, signed impact matrix, systemic roles, feedback loops); normal-form games (strategies, payoff matrix, equilibria); and sequential games (role-conditioned agents, tree construction, backward induction), with swappable modules at every step. LLM components (default: GPT-5) are paired with deterministic analyzers for equilibria and matrix-based role classification, yielding traceable intermediates rather than opaque outputs. In a real-world logistics case (100 runs), mean factor alignment with a human baseline was 55.5% over 26 factors and 62.9% on the transport-core subset; role agreement over matches was 57%. An LLM judge using an eight-criterion rubric (max 100) scored runs on par with a reconstructed human baseline. Configurable LLM pipelines can thus mimic expert workflows with transparent, inspectable steps.
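To make the "deterministic analyzer" idea concrete, here is a minimal sketch of matrix-based role classification in the spirit of Vester's Sensitivity Model: each factor's active sum (row) and passive sum (column) are computed from the signed impact matrix and split at the medians to assign systemic roles. The function name and the median-split threshold are illustrative assumptions, not the paper's actual implementation.

```python
def classify_roles(impact, names):
    """Classify factors as active / reactive / critical / buffering
    from a signed impact matrix, where impact[i][j] is the influence
    of factor i on factor j (diagonal ignored)."""
    n = len(impact)
    # Active sum: how strongly a factor drives the system (its row).
    active = [sum(abs(impact[i][j]) for j in range(n) if j != i)
              for i in range(n)]
    # Passive sum: how strongly it is driven (its column).
    passive = [sum(abs(impact[i][j]) for i in range(n) if i != j)
               for j in range(n)]
    # Median split is an assumed, simple thresholding rule.
    med = lambda xs: sorted(xs)[len(xs) // 2]
    ma, mp = med(active), med(passive)
    roles = {}
    for i, name in enumerate(names):
        hi_a, hi_p = active[i] >= ma, passive[i] >= mp
        roles[name] = ("critical" if hi_a and hi_p else
                       "active" if hi_a else
                       "reactive" if hi_p else
                       "buffering")
    return roles


# Toy usage: A drives B and C, B drives C, C drives nothing.
roles = classify_roles([[0, 3, 2],
                        [0, 0, 1],
                        [0, 0, 0]], ["A", "B", "C"])
```

Because the classification is pure arithmetic over the LLM-produced matrix, every role assignment can be traced back to specific matrix entries, which is the auditability property the abstract emphasizes.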
Nov-11-2025