Explanation Ontology: A Model of Explanations for User-Centered AI

arXiv.org Artificial Intelligence

Explainability has been a goal for Artificial Intelligence (AI) systems since their conception, with the need for explainability growing as more complex AI models are increasingly used in critical, high-stakes settings such as healthcare. Explanations have often been added to an AI system in a non-principled, post-hoc manner. With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration, mapping end-user needs to specific explanation types and the system's AI capabilities. We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes involved in the process, and the range of different literature-derived explanation types. We indicate how the ontology can support user requirements for explanations in the healthcare domain. We evaluate our ontology with a set of competency questions geared towards a system designer who might use our ontology to decide which explanation types to include, given a combination of users' needs and a system's capabilities, both in system design settings and in real-time operations. Through the use of this ontology, system designers will be able to make informed choices about which explanations AI systems can and should provide.
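
To give a flavor of how a competency question such as "which explanation types can this system provide, given its capabilities?" might be posed against such an ontology, here is a minimal Python sketch using rdflib. The namespace and the class and property names (ExplanationType, requiresCapability, hasCapability) are assumptions made for illustration, not the identifiers published in the ontology itself.

    # Illustrative sketch: query an explanation-ontology-style graph for the
    # explanation types a system can support, given its available AI capabilities.
    # All IRIs and terms below are hypothetical placeholders.
    from rdflib import Graph, Namespace, RDF

    EO = Namespace("http://example.org/explanation-ontology#")  # hypothetical namespace

    g = Graph()
    g.bind("eo", EO)

    # Toy facts: a contrastive explanation requires the ability to compare
    # alternatives, and the system at hand happens to have that capability.
    g.add((EO.ContrastiveExplanation, RDF.type, EO.ExplanationType))
    g.add((EO.ContrastiveExplanation, EO.requiresCapability, EO.AlternativeComparison))
    g.add((EO.MySystem, EO.hasCapability, EO.AlternativeComparison))

    # Competency-question-style query: which explanation types are supported?
    results = g.query("""
        PREFIX eo: <http://example.org/explanation-ontology#>
        SELECT ?explanationType WHERE {
            ?explanationType a eo:ExplanationType ;
                             eo:requiresCapability ?capability .
            eo:MySystem eo:hasCapability ?capability .
        }
    """)
    for row in results:
        print(row.explanationType)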


Explanation Ontology in Action: A Clinical Use-Case

arXiv.org Artificial Intelligence

We addressed the lack of a semantic representation for user-centric explanations and the different explanation types in our Explanation Ontology (https://purl.org/heals/eo). Such a representation is increasingly necessary as explainability has become an important problem in Artificial Intelligence, with the emergence of complex methods and an uptake in high-precision and user-facing settings. In this submission, we provide step-by-step guidance for system designers to utilize our ontology, introduced in our resource track paper, to plan for and model explanations during the design of their Artificial Intelligence systems. We also provide a detailed example of applying this guidance in a clinical setting.
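
As a rough illustration of the planning step this guidance describes, the sketch below records a single design-time decision, namely that a clinical decision support system should offer a contrastive explanation for a clinician's drug-choice question, as RDF via rdflib. The vocabulary used here (ContrastiveExplanation, addressesUser, addressesQuestion, basedOnKnowledge) is a readability-oriented placeholder, not the ontology's actual terms.

    # Illustrative sketch: one design-time explanation decision for a clinical system.
    # All identifiers below are hypothetical and only mirror the ontology's intent.
    from rdflib import Graph, Namespace, RDF, Literal

    EO = Namespace("http://example.org/explanation-ontology#")   # hypothetical namespace
    EX = Namespace("http://example.org/clinical-use-case#")      # hypothetical namespace

    g = Graph()
    g.bind("eo", EO)
    g.bind("ex", EX)

    g.add((EX.drugChoiceExplanation, RDF.type, EO.ContrastiveExplanation))
    g.add((EX.drugChoiceExplanation, EO.addressesUser, EX.Clinician))
    g.add((EX.drugChoiceExplanation, EO.addressesQuestion,
           Literal("Why was drug A recommended over drug B for this patient?")))
    g.add((EX.drugChoiceExplanation, EO.basedOnKnowledge, EX.ClinicalPracticeGuideline))

    print(g.serialize(format="turtle"))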


Foundations of Explainable Knowledge-Enabled Systems

arXiv.org Artificial Intelligence

Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed; however, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems of their time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end users and decision makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying the gaps that must be addressed to make explanations user- and context-focused, we propose new definitions for explanations and for explainable knowledge-enabled systems.


Directions for Explainable Knowledge-Enabled Systems

arXiv.org Artificial Intelligence

Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of sophisticated machine learning techniques, explainability has become more critical. Researchers have increasingly been investigating and tackling explainability with a user-centric focus, expecting explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields, and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's Artificial Intelligence applications. We define each type and provide an example question that would motivate the need for that style of explanation. We believe this set of explanation types will help future system designers generate and prioritize requirements, and further help them generate explanations that are better aligned to users' and situational needs.
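
To sketch how such a catalog of explanation types and motivating questions could feed into requirements work, a small Python mapping is shown below. The entries are illustrative examples in the spirit of this taxonomy, not the chapter's exact list or wording.

    # Illustrative requirements aid: a few explanation types paired with the kind of
    # user question that motivates each. Entries are examples, not the definitive set.
    EXPLANATION_TYPES = {
        "contrastive":    "Why this outcome rather than the alternative?",
        "counterfactual": "What would have to change to get a different outcome?",
        "trace-based":    "Which reasoning steps led to this outcome?",
        "case-based":     "Which similar past cases support this outcome?",
        "contextual":     "What about the current situation makes this outcome appropriate?",
    }

    # Print the catalog as a simple checklist for requirements elicitation.
    for explanation_type, motivating_question in EXPLANATION_TYPES.items():
        print(f"{explanation_type}: {motivating_question}")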


Conceptual Modeling of Explainable Recommender Systems: An Ontological Formalization to Guide Their Design and Development

Journal of Artificial Intelligence Research

With the increasing importance of e-commerce and the immense variety of products, users need help deciding which ones are the most interesting to them. This is one of the main goals of recommender systems. However, users' trust may be compromised if they do not understand how or why a recommendation was produced. Here, explanations are essential to improve user confidence in recommender systems and to make recommendations useful. Incorporating explanation capabilities into recommender systems is not an easy task, as their success depends on several aspects, such as the explanation's goal, the user's expectations, the available knowledge, and the presentation method. Therefore, this work proposes a conceptual model to alleviate this problem by defining the requirements of explanations for recommender systems. Our goal is to provide a model that guides the development of effective explanations for recommender systems, ensuring they are correctly designed and suited to users' needs. Although earlier explanation taxonomies underpin this work, our model includes new concepts not considered in previous works. Moreover, we make a novel contribution regarding the formalization of this model as an ontology that can be integrated into the development of proper explanations for recommender systems.
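
To illustrate the kind of formalization described here, the sketch below captures one explanation requirement for a product recommender along the aspects named above (goal, user expectation, available knowledge, presentation method), again via rdflib. The namespace and property names are invented for the example and do not reproduce the authors' ontology.

    # Illustrative sketch: one explanation requirement for a product recommender,
    # expressed along the aspects named in the abstract. All identifiers are
    # hypothetical placeholders, not the paper's ontology vocabulary.
    from rdflib import Graph, Namespace, RDF, Literal

    REX = Namespace("http://example.org/recsys-explanation#")  # hypothetical namespace

    g = Graph()
    g.bind("rex", REX)

    requirement = REX.trustBuildingExplanation
    g.add((requirement, RDF.type, REX.ExplanationRequirement))
    g.add((requirement, REX.hasGoal, REX.IncreaseUserTrust))
    g.add((requirement, REX.hasUserExpectation,
           Literal("Understand why this product was recommended to me")))
    g.add((requirement, REX.usesKnowledge, REX.PurchaseHistory))
    g.add((requirement, REX.hasPresentationMethod, REX.NaturalLanguageText))

    print(g.serialize(format="turtle"))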