A taxonomy of explanations to support Explainability-by-Design
Niko Tsakalakis, Sophie Stalla-Bourdillon, Trung Dong Huynh, Luc Moreau
arXiv.org Artificial Intelligence
As automated decision-making solutions are increasingly applied to all aspects of everyday life, the capability to generate meaningful explanations for a variety of stakeholders (e.g., decision-makers, recipients of decisions, auditors, regulators) becomes crucial. In this paper, we present a taxonomy of explanations that was developed as part of a holistic 'Explainability-by-Design' approach for the purposes of the project PLEAD. The taxonomy was built with a view to producing explanations for a wide range of requirements stemming from a variety of regulatory frameworks or policies set at the organizational level, either to translate high-level compliance requirements or to meet business needs. The taxonomy comprises nine dimensions. It is used as a stand-alone classifier of explanations conceived as detective controls, in order to aid supportive automated compliance strategies. A machine-readable format of the taxonomy is provided in the form of a light ontology, and the benefits of starting the Explainability-by-Design journey with such a taxonomy are demonstrated through a series of examples.
Nov-14-2024