Enabling Ethical AI: A case study in using Ontological Context for Justified Agentic AI Decisions

McGee, Liam; Harvey, James; Cull, Lucy; Vermeulen, Andreas; Visscher, Bart-Floris; Sharan, Malvika

arXiv.org Artificial Intelligence 

Agentic AI systems (software agents with autonomy, decision-making ability, and adaptability) are increasingly used to execute complex tasks on behalf of organisations. Most such systems rely on Large Language Models (LLMs), whose broad semantic capabilities enable powerful language processing but lack explicit, institution-specific grounding. In enterprises, data rarely comes with an inspectable semantic layer, and constructing one typically requires labour-intensive "data archaeology": cleaning, modelling, and curating knowledge into ontologies, taxonomies, and other formal structures. At the same time, explainability methods such as saliency maps expose an "interpretability gap": they highlight what the model attends to but not why, leaving decision processes opaque. In this preprint, we present a case study developed by Kaiasm and Avantra AI through their work with The Turing Way Practitioners Hub, a forum established under the InnovateUK BridgeAI programme. The study describes a collaborative human-AI approach to building an inspectable semantic layer for Agentic AI. AI agents first propose candidate knowledge structures from diverse data sources; domain experts then validate, correct, and extend these structures, with their feedback used to improve subsequent models. We show how this process captures tacit institutional knowledge, improves response quality and efficiency, and mitigates institutional amnesia. We argue for a shift from post-hoc explanation to justifiable Agentic AI, where decisions are grounded in explicit, inspectable evidence and reasoning accessible to both experts and non-specialists.
