
Explainable artificial intelligence (XAI)

#artificialintelligence

Explainable artificial intelligence (XAI) is a set of methods and processes that allow human users to comprehend and trust the results and output of machine learning algorithms. Explainable AI describes an AI model, its expected impact, and its potential biases. It helps characterize model accuracy, fairness, transparency, and outcomes in AI-powered decision making. Explainable AI is crucial for an organization to build trust and confidence when deploying AI models, and it also helps the organization take a responsible approach to AI development.


The How of Explainable AI: Explainable Modelling

#artificialintelligence

Achieving explainable modelling is sometimes treated as synonymous with restricting the choice of AI model to a specific family of models that are considered inherently explainable. We will review this family of AI models. However, our discussion goes well beyond the conventional explainable model families and covers more recent and novel approaches such as joint prediction and explanation, hybrid models, and more. Ideally, we can avoid the black-box problem from the outset by developing a model that is explainable by design. The traditional way to achieve this is to adopt a model from a specific family that is considered inherently explainable, as illustrated in the sketch below.
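
As a minimal, hedged illustration of this explainable-by-design route (the model family, library, and dataset here are assumed examples, not ones prescribed by the text), a shallow decision tree can be trained and its learned rules printed directly, so the model's own structure serves as the explanation:

```python
# A minimal sketch of "explainable by design": pick a model family whose
# decision logic can be read directly (here, a shallow decision tree).
# scikit-learn and the Iris data are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The learned rules themselves are the explanation; no post-hoc method is needed.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Restricting model choice this way trades some flexibility for transparency, which is exactly the tension the more recent approaches mentioned above (joint prediction and explanation, hybrid models) try to soften.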


An Inherently Explainable Model for Video Activity Interpretation

AAAI Conferences

The ability of artificial intelligence systems to offer explanations for their decisions is central to building user confidence and structuring smart human-machine interactions. Understanding the rationale behind such a system's output helps in taking an informed action based on a model's prediction. In this paper, we introduce a novel framework that integrates Grenander's pattern theory structures to produce inherently explainable, symbolic representations for video activity interpretation. These representations provide semantically coherent, rich interpretations of video activity using connected structures of detected (grounded) concepts, such as objects and actions, that are bound by semantics through background concepts not directly observed, i.e., contextualization cues. We use contextualization cues to establish semantic relationships among entities hypothesized directly from the video signal, such as possible object and action labels, and to infer a deeper interpretation of events than what can be directly sensed. We demonstrate the viability of this idea on video data, primarily from the cooking domain, by introducing a dialog model that uses these interpretations as its source of knowledge to generate explanations grounded in both the video data and the semantic connections between concepts.
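
To make the idea of grounded concepts bound by unobserved contextualization cues concrete, here is a purely hypothetical sketch (not the authors' implementation; the concept names, edge labels, and use of networkx are assumptions for illustration) of such a connected symbolic structure in the cooking domain:

```python
# Hypothetical illustration of an interpretation graph: grounded concepts
# detected from video, linked through an unobserved contextualization cue.
import networkx as nx

interpretation = nx.Graph()

# Grounded concepts hypothesized directly from the video signal.
interpretation.add_node("knife", kind="object", grounded=True)
interpretation.add_node("tomato", kind="object", grounded=True)
interpretation.add_node("slice", kind="action", grounded=True)

# A contextualization cue: a background concept never observed directly,
# which binds the grounded concepts into a semantically coherent whole.
interpretation.add_node("prepare_salad", kind="context", grounded=False)
for concept in ("knife", "tomato", "slice"):
    interpretation.add_edge("prepare_salad", concept, relation="supports")

# The connected structure is the symbolic, explainable representation;
# an explanation can be produced by walking edges from actions to context.
print(list(interpretation.edges(data=True)))
```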


Explainable Machine Learning with LIME and H2O in R

#artificialintelligence

Welcome to this hands-on, guided introduction to Explainable Machine Learning with LIME and H2O in R. By the end of this project, you will be able to use the LIME and H2O packages in R for automatic and interpretable machine learning, build classification models quickly with H2O AutoML, and explain and interpret model predictions using LIME. Machine learning (ML) models such as Random Forests, Gradient Boosted Machines, Neural Networks, and Stacked Ensembles are often considered black boxes; their flexibility, however, makes them more accurate at predicting non-linear phenomena. Experts agree that this higher accuracy often comes at the price of interpretability, which is critical to business adoption, trust, and regulatory oversight (e.g., the GDPR and its right to explanation). As more industries, from healthcare to banking, adopt ML models, their predictions are increasingly used to justify healthcare costs and to approve or deny loans.
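
The course itself works in R with H2O and the lime package; purely as a hedged sketch of the same workflow, the snippet below uses the Python port of LIME with a scikit-learn random forest standing in for an H2O AutoML model (the dataset, model, and parameter choices are illustrative assumptions):

```python
# Sketch: explain one prediction of a black-box classifier with LIME
# (Python lime package; a scikit-learn model stands in for H2O AutoML).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Train the "black-box" model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Build a tabular explainer from the training distribution.
explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain a single prediction with a local, sparse linear surrogate.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

The same local-surrogate idea underlies the R workflow: fit an accurate but opaque model, then ask LIME for a handful of weighted features that account for one prediction at a time.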