Interview with Dr. Alain Briançon, Chief Technology Officer and VP Data Science of Cerebri AI

#artificialintelligence

Dr. Alain Briançon is the Chief Technology Officer and VP Data Science of Cerebri AI, an AI platform company that helps enterprises better understand their customers' needs through data. Its technology is applied across a wide range of industries, including automotive, financial services, telecommunications, and travel. In his own words: "I am an engineer by training. I have an MIT PhD in Electrical Engineering. I am an avid history, opera, and movie buff."


Customers’ Retention Requires an Explainability Feature in Machine Learning Systems They Use

AAAI Conferences

We formulate the question of how important an explainability feature is for customers of machine learning (ML) systems. We analyze the state of the art and the limitations of explainable and unexplainable ML. To quantitatively estimate the share of customers who request explainability from companies employing ML systems, we analyze customer complaints. We build a natural language (NL) classifier that detects a request to explain, in implicit or explicit form, and evaluate it on a set of 800 complaints. Applying the classifier, we discover that a quarter of customers demand explainability from a company when something goes wrong with a product or service and has to be communicated properly by the company. We conclude that an explainability feature is more important than recognition accuracy for most customers.
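
The abstract does not give implementation details of the complaint classifier. As a rough illustration of the general idea, the sketch below trains a bag-of-words text classifier to flag complaints that ask for an explanation; the tiny labeled examples and the pipeline choices (TF-IDF plus logistic regression) are assumptions for illustration, not the authors' actual method.

```python
# Minimal sketch (not the paper's method): a text classifier that flags
# customer complaints containing an implicit or explicit request to explain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled complaints: 1 = asks for an explanation, 0 = does not.
complaints = [
    "Why was my loan application rejected? Nobody can tell me the reason.",
    "Please explain how this late fee was calculated.",
    "My package arrived two days late.",
    "The mobile app crashes when I open my account page.",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(complaints, labels)

# Score new complaints; the paper applied this kind of classifier to a set
# of 800 complaints to estimate how many customers demand explanations.
new_complaints = ["I want to know why my claim was denied.",
                  "The delivery driver was very polite."]
print(clf.predict(new_complaints))  # e.g. [1 0]
```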


Introducing AI Explainability 360 IBM Research Blog

#artificialintelligence

The toolkit has been engineered with a common interface for all of the different ways of explaining (not an easy feat) and is extensible to accelerate innovation by the community advancing AI explainability. We are open-sourcing it to help create a community of practice for data scientists, policymakers, and the general public who need to understand how algorithmic decision making affects them. AI Explainability 360 differs from other open source explainability offerings [1] through the diversity of its methods, its focus on educating a variety of stakeholders, and its extensibility via a common framework. Moreover, it interoperates with AI Fairness 360 and Adversarial Robustness 360, two other open-source toolboxes from IBM Research released in 2018, to support the development of holistic trustworthy machine learning pipelines. The initial release contains eight algorithms recently created by IBM Research, and also includes metrics from the community that serve as quantitative proxies for the quality of explanations. Beyond the initial release, we encourage contributions of other algorithms from the broader research community.
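
The common interface mentioned above is the key design point: many explanation methods share one calling contract so they can be swapped without changing caller code. A minimal sketch of what such a shared explainer interface could look like is shown below; the class and method names are assumptions for illustration and are not the actual AI Explainability 360 API.

```python
# Illustrative sketch of a common explainer interface (NOT the actual
# AI Explainability 360 API): every explanation method implements the same
# fit/explain contract so callers can swap methods without changing code.
from abc import ABC, abstractmethod
from typing import Any, Dict, Sequence


class Explainer(ABC):
    """Shared contract for local explanation methods."""

    @abstractmethod
    def fit(self, model: Any, background_data: Sequence) -> "Explainer":
        """Prepare the explainer for a given model and reference data."""

    @abstractmethod
    def explain_instance(self, instance: Sequence) -> Dict[str, float]:
        """Return per-feature contributions for one prediction."""


class MeanDifferenceExplainer(Explainer):
    """Toy method: attribute a prediction to how far each feature sits
    from the background mean (a stand-in for a real algorithm)."""

    def fit(self, model, background_data):
        self.model = model
        n = len(background_data)
        self.means = [sum(row[i] for row in background_data) / n
                      for i in range(len(background_data[0]))]
        return self

    def explain_instance(self, instance):
        return {f"feature_{i}": value - mean
                for i, (value, mean) in enumerate(zip(instance, self.means))}


# Usage: any explainer conforming to the interface can be dropped in.
explainer = MeanDifferenceExplainer().fit(model=None,
                                          background_data=[[0.0, 1.0], [2.0, 3.0]])
print(explainer.explain_instance([1.5, 1.0]))  # {'feature_0': 0.5, 'feature_1': -1.0}
```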


Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

arXiv.org Artificial Intelligence

In recent years, Artificial Intelligence (AI) has achieved notable momentum that may deliver the best of expectations across many application sectors. For this to occur, the entire community stands before the barrier of explainability, an inherent problem of sub-symbolic AI techniques (e.g., ensembles or Deep Neural Networks) that was not present in the previous wave of AI hype. Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias due to its lack of interpretability.
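
The explainability barrier described above concerns sub-symbolic models such as ensembles and deep networks. As a concrete, minimal illustration of one common post-hoc remedy (not a technique proposed by the survey), the sketch below fits a random forest and reports permutation feature importances; the synthetic data and model choice are assumptions for illustration.

```python
# Minimal sketch: post-hoc explanation of a sub-symbolic model (an ensemble)
# via permutation feature importance. Illustrative only; the synthetic data
# and the choice of method are assumptions, not content from the survey.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 4))
# Only the first two features drive the label; the rest are noise.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops, giving a global view of feature relevance.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: {mean:.3f} +/- {std:.3f}")
```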


Explainable Machine Learning in Deployment

arXiv.org Artificial Intelligence

Explainable machine learning seeks to provide various stakeholders with insights into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Recent advances in this line of work, however, have gone without surveys of how organizations use these techniques in practice. This study explores how organizations view and use explainability for stakeholder consumption. We find that the majority of deployments are not for end users affected by the model but for machine learning engineers, who use explainability to debug the model itself. There is thus a gap between explainability in practice and the goal of public transparency, since explanations primarily serve internal stakeholders rather than external ones. Our study synthesizes the limitations of current explainability techniques that hamper their use by end users. To facilitate end user interaction, we develop a framework for establishing clear goals for explainability, including a focus on normative desiderata.
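
Counterfactual explanations, one of the techniques named above, can be illustrated with a very small sketch: given a model and a rejected instance, search for a minimal feature change that flips the prediction. The greedy search, the toy credit-scoring model, and the feature names below are assumptions for illustration, not a method from the study.

```python
# Minimal sketch of a counterfactual explanation (illustrative, not from the
# study): find a small change to the features that flips a model's decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical credit-scoring data: [income_k, debt_ratio]; 1 = approved.
X = np.array([[60, 0.2], [80, 0.1], [30, 0.6], [25, 0.7], [50, 0.4], [40, 0.5]])
y = np.array([1, 1, 0, 0, 1, 0])
model = LogisticRegression().fit(X, y)

def counterfactual(model, x, steps=(5.0, 0.05), max_steps=100):
    """Greedily nudge one feature at a time until the prediction flips."""
    target = 1 - model.predict([x])[0]
    best = np.array(x, dtype=float)
    for _ in range(max_steps):
        candidates = []
        for i, step in enumerate(steps):
            for delta in (-step, step):
                cand = best.copy()
                cand[i] += delta
                # Rank candidates by how close they get to the target class.
                candidates.append((model.predict_proba([cand])[0][target], cand))
        _, best = max(candidates, key=lambda t: t[0])
        if model.predict([best])[0] == target:
            return best
    return None

x = np.array([35.0, 0.55])            # currently rejected applicant
print("original prediction:", model.predict([x])[0])
print("counterfactual:", counterfactual(model, x))
```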