Dr. Alain Briançon is the Chief Technology Officer and VP of Data Science at Cerebri AI, an AI platform company that helps enterprises better understand their customers' needs through data. Its technology is applied across industries such as automotive, financial services, telecommunications, and travel. In his own words: "I am an engineer by training, with a PhD in Electrical Engineering from MIT. I am an avid history, opera, and movie buff."
How do we balance the potential benefits of deep learning with the need for explainability? People distrust artificial intelligence, and in some ways this makes sense. In the drive to create the best-performing AI models, many organizations have prioritized complexity over explainability and trust. As the world becomes more dependent on algorithms for making a wide range of decisions, technology and business leaders will be tasked with explaining how a model selected its outcome. Transparency is an essential requirement for generating trust and AI adoption.
We ask how important explainability is to customers of machine learning (ML) systems. We analyze the state of the art and the limitations of explainable and unexplainable ML. To quantitatively estimate the share of customers who request explainability from companies employing ML systems, we analyze customer complaints. We build a natural language (NL) classifier that detects a request to explain, in implicit or explicit form, and evaluate it on a set of 800 complaints. Applying the classifier, we discover that a quarter of customers demand an explanation from a company when something goes wrong with a product or service and expect it to be communicated properly. We conclude that, for most customers, explainability is more important than recognition accuracy.
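The abstract does not specify how its classifier works, but the idea of detecting explicit or implicit requests for an explanation can be sketched with a minimal pattern-based detector. The patterns and function names below are illustrative assumptions, not the authors' actual system:

```python
import re

# Hypothetical phrase patterns (illustrative only) that signal a customer
# asking "why" something happened, explicitly or implicitly.
EXPLICIT_PATTERNS = [
    r"\bwhy\b",
    r"\bexplain\b",
    r"\bexplanation\b",
    r"\bno (?:one|body) (?:could|can|would) tell me\b",
]
IMPLICIT_PATTERNS = [
    r"\bwithout (?:any )?(?:notice|warning|reason)\b",
    r"\bfor no reason\b",
    r"\bmakes no sense\b",
]

def requests_explanation(complaint: str) -> bool:
    """Return True if the complaint appears to demand an explanation."""
    text = complaint.lower()
    return any(re.search(p, text)
               for p in EXPLICIT_PATTERNS + IMPLICIT_PATTERNS)

complaints = [
    "My application was denied and nobody could tell me why.",
    "The card fee increased without any notice.",
    "I would like to update my mailing address.",
]
# The first two complaints are flagged; the third is not.
flagged = [c for c in complaints if requests_explanation(c)]
```

A production classifier would likely use a learned model rather than hand-written patterns, but the sketch shows the core task: separating complaints that merely report a problem from those that also demand an explanation.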
Data science is the current powerhouse for organizations, turning mountains of data into actionable business insights that impact every part of the business, including customer experience, revenue, operations, risk management, and other functions. Data science has the potential to dramatically accelerate digital transformation initiatives, delivering greater performance and advantages over the competition. However, not all data science platforms and methodologies are created equal. The ability to use data science to make predictions and decisions that optimize business outcomes requires transparency and accountability. Several underlying factors matter, such as trust, confidence in the prediction, and understanding how the technology works, but fundamentally it comes down to whether the platform uses a black-box or white-box model approach.
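The white-box idea can be made concrete with a toy linear scoring model, where every prediction decomposes exactly into per-feature contributions readable from the model's own parameters. The features and weights below are invented for illustration:

```python
# Invented weights for a hypothetical customer-scoring model.
weights = {"tenure_years": 0.8, "late_payments": -1.5, "monthly_usage": 0.3}
bias = 0.5

def score(features: dict) -> float:
    """Linear score: bias plus weighted sum of features."""
    return bias + sum(weights[k] * v for k, v in features.items())

def explain(features: dict) -> dict:
    """Per-feature contribution to the score -- the 'white-box' property."""
    return {k: weights[k] * v for k, v in features.items()}

customer = {"tenure_years": 4, "late_payments": 2, "monthly_usage": 10}
# score = 0.5 + 3.2 - 3.0 + 3.0 = 3.7, and explain() shows that
# late_payments subtracts 3.0 from it.
```

A black-box model (e.g. a deep ensemble) may score more accurately, but it offers no such direct decomposition, which is exactly the trade-off the paragraph above describes.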
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, and Francisco Herrera
In recent years, Artificial Intelligence (AI) has achieved notable momentum that may deliver the best of expectations across many application sectors. For this to occur, the entire community stands in front of the barrier of explainability, an inherent problem of the sub-symbolic AI techniques (e.g., ensembles or Deep Neural Networks) that were not present in the previous hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including a second taxonomy built for Deep Learning methods. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, and to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors without prior bias against its lack of interpretability.
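One family of methods the XAI literature covers is model-agnostic post-hoc explanation. A simple instance is permutation importance: shuffle one feature's values, breaking its link to the label, and measure how much the model's accuracy drops. The toy model and data below are stand-ins chosen for illustration, not drawn from the survey:

```python
import random

def black_box_model(x):
    # Stand-in "black box": the label actually depends only on feature 0.
    return 1 if x[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop when `feature` is shuffled across the dataset."""
    rng = random.Random(seed)
    col = [x[feature] for x in X]
    rng.shuffle(col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, col):
        row[feature] = v
    return accuracy(model, X, y) - accuracy(model, X_perm, y)

rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]
# Shuffling feature 0 hurts accuracy; shuffling feature 1 does not,
# revealing which feature the black box relies on.
```

The technique treats the model purely as a function, which is why it applies to the sub-symbolic models (ensembles, deep networks) the abstract identifies as the source of the explainability barrier.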