A 'Glut' of Innovation Spotted in Data Science and ML Platforms

#artificialintelligence

These are heady days in data science and machine learning (DSML), according to Gartner, which has identified a "glut" of innovation occurring in the market for DSML platforms. From established companies chasing AutoML or model governance to startups focusing on MLOps or explainable AI, a plethora of vendors are simultaneously moving in all directions with their products as they seek to differentiate themselves amid a very diverse audience. "The DSML market is simultaneously more vibrant and messier than ever," a gaggle of Gartner analysts led by Peter Krensky wrote in the Magic Quadrant for DSML Platforms, which was published earlier this month. "The definitions and parameters of data science and data scientists continue to evolve, and the market is dramatically different from how it was in 2014, when we published the first Magic Quadrant on it." The 2021 Magic Quadrant for DSML is heavily weighted toward companies on the right-hand side of the chart, which, as anybody familiar with Gartner's quadrant-based assessment method knows, represents "completeness of vision."


What Are Explainable AI Principles

#artificialintelligence

Explainable AI (XAI) principles are a set of guidelines for the fundamental properties that explainable AI systems should adopt. Explainable AI seeks to explain the way that AI systems work. NIST proposes four such principles: Explanation, Meaningful, Explanation Accuracy, and Knowledge Limits. These four principles capture a variety of disciplines that contribute to explainable AI, including computer science, engineering and psychology. The four principles apply individually, so the presence of one does not imply that the others will be present. NIST suggests that each principle should be evaluated in its own right.


White Paper Machine Learning in Certified Systems

arXiv.org Artificial Intelligence

Machine Learning (ML) seems to be one of the most promising solutions for partially or completely automating some of the complex tasks currently performed by humans, such as driving vehicles or recognizing speech. It is also an opportunity to implement and embed new capabilities that are out of reach of classical implementation techniques. However, ML techniques introduce new potential risks, and have therefore only been applied in systems where their benefits are considered worth the increase in risk. In practice, ML techniques raise multiple challenges that could prevent their use in systems subject to certification constraints. But what are the actual challenges? Can they be overcome by selecting appropriate ML techniques, or by adopting new engineering or certification practices? These are some of the questions addressed by the ML Certification 3 Workgroup (WG) set up by the Institut de Recherche Technologique Saint Exupéry de Toulouse (IRT), as part of the DEEL Project.


A Comparative Approach to Explainable Artificial Intelligence Methods in Application to High-Dimensional Electronic Health Records: Examining the Usability of XAI

arXiv.org Artificial Intelligence

Explainable Artificial Intelligence (XAI) is a rising field in AI. It aims to produce the kind of trust that, for human subjects, is established through communicative means; Machine Learning (ML) algorithms cannot produce this on their own, which illustrates the need for an extra layer of support around the model's output. Challenges arise in the medical field when human subjects are involved: entrusting a machine with decisions that affect a person's wellbeing poses an ethical conundrum, leaving trust as the basis on which the human expert accepts the machine's decision. The aim of this paper is to apply XAI methods to demonstrate the usability of explainable architectures as a tertiary layer for the medical domain, supporting ML predictions and human-expert opinion. XAI methods produce visualizations of each feature's contribution to a given model's output at both a local and a global level. The work in this paper uses XAI to determine feature importance for high-dimensional, data-driven questions and to inform domain experts of identifiable trends, comparing model-agnostic methods applied to ML algorithms. Performance metrics for a glass-box method are also provided as a comparison against black-box capability on tabular data. Future work will aim to produce a user study that uses metrics to evaluate human-expert usability of, and opinion on, the given models.
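
The comparison this abstract describes, applying model-agnostic attributions to both glass-box and black-box models on tabular data, can be sketched with standard tooling. The snippet below is a minimal illustration under assumptions of its own rather than the paper's actual pipeline: the public breast-cancer dataset stands in for high-dimensional EHR data, a logistic regression plays the glass-box model, a gradient-boosted ensemble plays the black box, and permutation importance serves as the model-agnostic global explanation.

```python
# Minimal sketch, not the paper's pipeline: compare a glass-box and a
# black-box model on tabular data and explain both with the same
# model-agnostic technique (permutation importance).
from sklearn.datasets import load_breast_cancer          # stand-in for EHR data
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glass-box: a linear model whose coefficients are directly interpretable.
glass_box = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
glass_box.fit(X_train, y_train)

# Black-box: a boosted ensemble with no directly readable decision logic.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The same model-agnostic global explanation applies to both models,
# so their feature rankings (and accuracy) can be compared side by side.
for name, model in [("glass-box", glass_box), ("black-box", black_box)]:
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    top = result.importances_mean.argsort()[::-1][:5]
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
    for i in top:
        print(f"  {X.columns[i]}: {result.importances_mean[i]:.4f}")
```

Local, per-patient attributions (for example SHAP values) would follow the same pattern, with the explanation computed for a single row rather than averaged over the whole test set.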


Counterfactuals and Causability in Explainable Artificial Intelligence: Theory, Algorithms, and Applications

arXiv.org Artificial Intelligence

There has been a growing interest in model-agnostic methods that can make deep learning models more transparent and explainable to a user. Some researchers recently argued that for a machine to achieve a certain degree of human-level explainability, this machine needs to provide causally understandable explanations to humans, also known as causability. A specific class of algorithms that have the potential to provide causability are counterfactuals. This paper presents an in-depth systematic review of the diverse existing body of literature on counterfactuals and causability for explainable artificial intelligence. We performed an LDA topic modelling analysis under a PRISMA framework to find the most relevant literature articles. This analysis resulted in a novel taxonomy that considers the grounding theories of the surveyed algorithms, together with their underlying properties and applications in real-world data. This research suggests that current model-agnostic counterfactual algorithms for explainable AI are not grounded on a causal theoretical formalism and, consequently, cannot promote causability to a human decision-maker. Our findings suggest that the explanations derived from major algorithms in the literature provide spurious correlations rather than cause-and-effect relationships, leading to sub-optimal, erroneous or even biased explanations. This paper also advances the literature with new directions and challenges for promoting causability in model-agnostic approaches for explainable artificial intelligence.
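
Counterfactual explanations answer the question "what minimal change to this input would have flipped the model's prediction?". The sketch below, whose dataset, model, and hill-climbing strategy are chosen purely for illustration, shows the flavour of a naive model-agnostic counterfactual search; notably, it encodes no causal structure at all, which is precisely the shortcoming the survey identifies in existing algorithms.

```python
# Naive, non-causal counterfactual search (illustrative assumptions only):
# perturb an instance until the classifier's prediction flips, while trying
# to stay close to the original input.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def counterfactual(x, target, n_iter=2000, step=0.05, seed=0):
    """Hill-climb toward the target class with small, per-feature-scaled moves."""
    rng = np.random.default_rng(seed)
    scale = X.std(axis=0)              # perturb each feature on its own scale
    best = x.copy()
    for _ in range(n_iter):
        candidate = best + rng.normal(0.0, step, size=x.shape) * scale
        p_best = model.predict_proba(best.reshape(1, -1))[0, target]
        p_cand = model.predict_proba(candidate.reshape(1, -1))[0, target]
        if p_cand > p_best:            # keep moves that raise the target-class probability
            best = candidate
        if model.predict(best.reshape(1, -1))[0] == target:
            break                      # prediction flipped: stop early
    return best

x0 = X[0]
original_class = model.predict(x0.reshape(1, -1))[0]
cf = counterfactual(x0, target=1 - original_class)
changed = np.argsort(np.abs(cf - x0))[::-1][:3]
print("original class:", original_class,
      "counterfactual class:", model.predict(cf.reshape(1, -1))[0])
print("features changed most:", changed, (cf - x0)[changed])
```

Because the search only consults the model's predictions, the features it ends up changing may track correlations rather than causes, which is the kind of spurious explanation the paper warns about.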


Temenos demystifies artificial intelligence, helping banks fight the black box effect

#artificialintelligence

The banking software company is teaming up with Canadian Western Bank (CWB) to provide its new Temenos Virtual COO solution to small and medium-sized businesses (SMBs). The product is built on top of Temenos' omnichannel digital banking platform and utilizes explainable AI (XAI) and analytics to support financial decision-making at SMBs. By aggregating banking and business data, SMBs are able to assess their current and projected financial health through the use of XAI-powered models that simulate different business scenarios. Banks could utilize XAI technology to rectify the black box problem associated with traditional AI models used in banking. While XAI is a powerful tool for generating financial insights, banks should use it to complement their existing interactions with customers, not replace them.


An introduction to Explainable Artificial Intelligence or xAI

#artificialintelligence

A few years ago, when I was still working for IBM, I managed an AI project for a bank. During the final phase, my team and I went to the steering committee to present the results. Proud as the project leader, I showed that the model had achieved 98 percent accuracy in detecting fraudulent transactions. I could see general panic in my manager's eyes when I explained that we had used an artificial neural network, that it worked with a system of synapses and weight adjustments, and that, although it was very effective, there was no way to understand its logic objectively. Accurate as it was, that raw explanation put the project's continuation in doubt unless we could provide a fuller explanation that the senior executives could understand and trust.
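
One common response to exactly this situation is a post-hoc global surrogate: a small, readable model trained to mimic the black box, presented to stakeholders together with a measure of how faithfully it tracks the original. The sketch below is a hypothetical reconstruction rather than the bank project itself; it uses synthetic, imbalanced "fraud-like" data and scikit-learn stand-ins (an MLPClassifier as the opaque network, a depth-3 decision tree as the surrogate).

```python
# Hedged sketch: approximate an opaque neural network with a shallow
# decision tree (a global surrogate) and report how faithful the surrogate is.
# The data is synthetic and the feature names are placeholders.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Imbalanced binary task standing in for fraud detection (about 3% positives).
X, y = make_classification(n_samples=5000, n_features=6, n_informative=4,
                           weights=[0.97, 0.03], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The black box: accurate, but its weights are not an explanation.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("black-box accuracy:", round(accuracy_score(y_test, net.predict(X_test)), 3))

# Global surrogate: a small tree trained to imitate the network's predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, net.predict(X_train))
fidelity = accuracy_score(net.predict(X_test), surrogate.predict(X_test))
print("surrogate fidelity to the network:", round(fidelity, 3))

# A rule list a steering committee can actually read.
feature_names = [f"feature_{i}" for i in range(X.shape[1])]   # placeholder names
print(export_text(surrogate, feature_names=feature_names))
```

The fidelity score matters as much as the rules themselves: a surrogate that mimics the network poorly explains very little, however readable it looks.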


Reasons, Values, Stakeholders: A Philosophical Framework for Explainable Artificial Intelligence

arXiv.org Artificial Intelligence

The societal and ethical implications of the use of opaque artificial intelligence systems for consequential decisions, such as welfare allocation and criminal justice, have generated a lively debate among multiple stakeholder groups, including computer scientists, ethicists, social scientists, policy makers, and end users. However, the lack of a common language or a multi-dimensional framework to appropriately bridge the technical, epistemic, and normative aspects of this debate prevents the discussion from being as productive as it could be. Drawing on the philosophical literature on the nature and value of explanations, this paper offers a multi-faceted framework that brings more conceptual precision to the present debate by (1) identifying the types of explanations that are most pertinent to artificial intelligence predictions, (2) recognizing the relevance and importance of social and ethical values for the evaluation of these explanations, and (3) demonstrating the importance of these explanations for incorporating a diversified approach to improving the design of truthful algorithmic ecosystems. The proposed philosophical framework thus lays the groundwork for establishing a pertinent connection between the technical and ethical aspects of artificial intelligence systems.


How explainable artificial intelligence can help humans innovate

AIHub

The field of artificial intelligence (AI) has created computers that can drive cars, synthesize chemical compounds, fold proteins and detect high-energy particles at a superhuman level. However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation. Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.