Semantic Modeling for Food Recommendation Explanations

arXiv.org Artificial Intelligence

With the increased use of AI methods to provide recommendations in the health domain, specifically in the food and dietary recommendation space, there is a corresponding need for explainability of those recommendations. Such explanations benefit users of recommendation systems by giving them justifications for following the system's suggestions. We present the Food Explanation Ontology (FEO), which provides a formalism for modeling explanations of food-related recommendations to users. FEO models food recommendations, using concepts from the explanation domain to create responses to user questions about recommendations they receive from AI systems such as personalized knowledge-base question answering systems. FEO uses a modular, extensible structure that supports a variety of explanation types while preserving the semantic details needed to accurately represent explanations of food recommendations. To evaluate the ontology, we used a set of competency questions derived from explanation types in the literature that are relevant to food recommendations. Our motivation for FEO is to empower users to make decisions about their health fully equipped with an understanding of the AI recommender system, by providing the reasoning behind its recommendations in the form of explanations that address their questions.
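To make the modeling idea concrete, the sketch below shows how an explanation for a single food recommendation might be expressed as RDF triples with rdflib. The namespace and the term names used here (FoodRecommendation, explains, answersQuestion, isBasedOn) are illustrative placeholders and assumptions for this sketch, not the published FEO vocabulary.

```python
# Illustrative sketch only: the namespace and term names below are
# placeholders, not the actual FEO vocabulary released with the paper.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

FEO = Namespace("http://example.org/feo#")   # hypothetical namespace
EX = Namespace("http://example.org/data#")

g = Graph()
g.bind("feo", FEO)
g.bind("ex", EX)

# A recommendation produced by an AI system for a specific user.
g.add((EX.rec1, RDF.type, FEO.FoodRecommendation))
g.add((EX.rec1, FEO.recommends, Literal("low-sodium lentil soup")))

# An explanation that links the recommendation to the user's question
# and to the piece of domain knowledge that justifies it.
g.add((EX.expl1, RDF.type, FEO.Explanation))
g.add((EX.expl1, FEO.explains, EX.rec1))
g.add((EX.expl1, FEO.answersQuestion,
       Literal("Why was this dish recommended to me?")))
g.add((EX.expl1, FEO.isBasedOn, EX.lowSodiumGuideline))

print(g.serialize(format="turtle"))
```

A competency question such as "Why was this dish recommended?" would then correspond to a query over triples of this shape, retrieving the explanation and the knowledge it is based on.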


Towards Tractable and Practical ABox Abduction over Inconsistent Description Logic Ontologies

AAAI Conferences

ABox abduction plays an important role in reasoning over description logic (DL) ontologies. However, it does not work with inconsistent DL ontologies. To tackle this problem while achieving tractability, we generalize ABox abduction from the classical semantics to an inconsistency-tolerant semantics, namely the Intersection ABox Repair (IAR) semantics, and propose the notion of IAR-explanations in inconsistent DL ontologies. We show that computing all minimal IAR-explanations is tractable in data complexity for first-order rewritable ontologies. However, the computational method may still not be practical due to a possibly large number of minimal IAR-explanations. Hence we propose to use preference information to reduce the number of explanations to be computed.
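As a rough sketch of the semantics involved (the paper's formal definitions may differ in detail), for an ontology with TBox \(\mathcal{T}\) and ABox \(\mathcal{A}\):

```latex
% Repairs: maximal subsets of the ABox that are consistent with the TBox.
\mathrm{Rep}(\mathcal{T}, \mathcal{A}) =
  \{\, \mathcal{A}' \subseteq \mathcal{A} \mid
     \mathcal{T} \cup \mathcal{A}' \text{ is consistent and }
     \mathcal{A}' \text{ is } \subseteq\text{-maximal} \,\}

% IAR entailment: queries are evaluated over the intersection of all repairs.
\mathcal{T} \cup \mathcal{A} \models_{\mathrm{IAR}} q
  \;\iff\;
  \mathcal{T} \cup \bigcap_{\mathcal{A}' \in \mathrm{Rep}(\mathcal{T}, \mathcal{A})} \mathcal{A}' \models q

% An IAR-explanation for an observation q, informally: a set E of ABox
% assertions whose addition makes q IAR-entailed,
%   \mathcal{T} \cup (\mathcal{A} \cup \mathcal{E}) \models_{\mathrm{IAR}} q,
% subject to minimality and admissibility conditions as specified in the paper.
```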


Explanation Ontology in Action: A Clinical Use-Case

arXiv.org Artificial Intelligence

We addressed the lack of a semantic representation for user-centric explanations and different explanation types with our Explanation Ontology (https://purl.org/heals/eo). Such a representation is increasingly necessary as explainability has become an important problem in Artificial Intelligence with the emergence of complex methods and their uptake in high-precision, user-facing settings. In this submission, we provide step-by-step guidance for system designers to use our ontology, introduced in our resource track paper, to plan and model explanations during the design of their Artificial Intelligence systems. We also provide a detailed example of applying this guidance in a clinical setting.
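For readers who want to inspect the ontology directly, a minimal sketch with rdflib is shown below. It assumes the PURL dereferences to an RDF/OWL serialization that rdflib can parse; if it serves an HTML landing page instead, point parse() at a locally downloaded copy of the ontology file.

```python
# Minimal sketch, assuming https://purl.org/heals/eo resolves to an
# RDF/OWL document; otherwise parse a locally downloaded copy instead.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("https://purl.org/heals/eo")  # rdflib infers the format from the response

# Enumerate the named classes and their labels to get an overview of the
# explanation types and supporting concepts the ontology defines.
for cls in g.subjects(RDF.type, OWL.Class):
    print(cls, "-", g.value(cls, RDFS.label))
```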


Providing Explanations for Recommendations in Reciprocal Environments

arXiv.org Artificial Intelligence

Automated platforms which support users in finding a mutually beneficial match, such as online dating and job recruitment sites, are becoming increasingly popular. These platforms often include recommender systems that assist users in finding a suitable match. While recommender systems which provide explanations for their recommendations have shown many benefits, explanation methods have yet to be adapted and tested in recommending suitable matches. In this paper, we introduce and evaluate the use of "reciprocal explanations" -- explanations which provide reasoning as to why both parties are expected to benefit from the match. Through an extensive empirical evaluation on both simulated and real-world dating platforms with 287 human participants, we find that when accepting a recommendation involves a significant cost (e.g., monetary or emotional), reciprocal explanations outperform standard explanation methods, which consider the recommendation receiver alone. However, contrary to what one may expect, when the cost of accepting a recommendation is negligible, reciprocal explanations are less effective than traditional explanation methods.


Conceptual Modeling of Explainable Recommender Systems: An Ontological Formalization to Guide Their Design and Development

Journal of Artificial Intelligence Research

With the increasing importance of e-commerce and the immense variety of products, users need help deciding which ones are most interesting to them. This is one of the main goals of recommender systems. However, users' trust may be compromised if they do not understand how or why a recommendation was produced. Here, explanations are essential to improve user confidence in recommender systems and to make recommendations useful. Incorporating explanation capabilities into recommender systems is not an easy task, as their success depends on several aspects such as the explanation's goal, the user's expectations, the knowledge available, and the presentation method. Therefore, this work proposes a conceptual model that addresses this problem by defining the requirements of explanations for recommender systems. Our goal is to provide a model that guides the development of effective explanations for recommender systems, so that they are correctly designed and suited to the user's needs. Although earlier explanation taxonomies underpin this work, our model includes new concepts not considered in previous works. Moreover, we make a novel contribution by formalizing this model as an ontology that can be integrated into the development of proper explanations for recommender systems.