Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

arXiv.org Artificial Intelligence

In recent years, Artificial Intelligence (AI) has gained notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors. For this to occur, the entire community stands in front of the barrier of explainability, a problem inherent to the sub-symbolic techniques of the current wave of AI (e.g., ensembles or Deep Neural Networks) that was not present in its previous hype. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, together with an outlook on what is yet to be achieved. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. These prospects lead us toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material that stimulates future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias due to its lack of interpretability.
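The abstract does not spell out any particular post-hoc technique, but the kind of model-agnostic explanation such a taxonomy covers can be illustrated with a minimal sketch: permutation feature importance ranks the inputs an opaque ensemble relies on by shuffling one feature at a time and measuring the resulting drop in accuracy. The use of scikit-learn, a random forest, and the Iris dataset below are assumptions for illustration, not material from the surveyed paper.

```python
# Minimal sketch of a model-agnostic post-hoc explanation:
# permutation feature importance over a sub-symbolic (ensemble) model.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An ensemble is the kind of opaque model the abstract refers to.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on held-out data and record the mean accuracy drop.
result = permutation_importance(model, X_te, y_te, n_repeats=30, random_state=0)
for name, mean, std in zip(load_iris().feature_names,
                           result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

The ranking this prints is exactly the sort of audience-oriented artifact the proposed definition of explainability is concerned with: a summary of model behavior aimed at a stakeholder rather than at the model itself.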


Ethical Artificial Intelligence

arXiv.org Artificial Intelligence

This book-length article combines several peer-reviewed papers and new material to analyze the issues of ethical artificial intelligence (AI). The behavior of future AI systems can be described by mathematical equations, which are adapted to analyze possible unintended AI behaviors and ways that AI designs can avoid them. This article makes the case for utility-maximizing agents and for avoiding infinite sets in agent definitions. It shows how to avoid agent self-delusion using model-based utility functions and how to avoid agents that corrupt their reward generators (sometimes called "perverse instantiation") using utility functions that evaluate outcomes at one point in time from the perspective of humans at a different point in time. It argues that agents can avoid unintended instrumental actions (sometimes called "basic AI drives" or "instrumental goals") by accurately learning human values. This article defines a self-modeling agent framework and shows how it can avoid problems of resource limits, of being predicted by other agents, and of inconsistency between the agent's utility function and its definition (one version of this problem is sometimes called "motivated value selection"). This article also discusses how future AI will differ from current AI, the politics of AI, and the ultimate use of AI to help understand the nature of the universe and our place in it.
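The argument for model-based utility functions can be made concrete with a toy sketch: the agent below scores candidate actions by the utility of the world states its internal model predicts, rather than by a raw reward signal, so tampering with its own observations would not change the quantity being maximized. Everything here (the GridWorld environment, utility_of_state, the one-step lookahead) is a hypothetical illustration of the general idea, not the framework defined in the article.

```python
# Toy utility-maximizing agent with a model-based utility function.
# Utility is computed on predicted world states, not on a reward channel.

class GridWorld:
    """Tiny deterministic environment: the agent walks a 1-D line toward a goal."""
    def __init__(self, size=10, goal=9, start=0):
        self.size, self.goal, self.state = size, goal, start

    def step(self, state, action):
        # action is -1 (left) or +1 (right); this doubles as the agent's world model.
        return max(0, min(self.size - 1, state + action))

def utility_of_state(state, goal):
    # Utility is defined over modeled world states (distance to the goal),
    # so "wireheading" a sensor or reward generator would not raise it.
    return -abs(goal - state)

def choose_action(model, state, actions=(-1, +1)):
    # One-step lookahead: pick the action whose predicted outcome
    # has the highest utility under the internal model.
    return max(actions, key=lambda a: utility_of_state(model.step(state, a), model.goal))

env = GridWorld()
state = env.state
for t in range(12):
    state = env.step(state, choose_action(env, state))
print("final modeled state:", state, "utility:", utility_of_state(state, env.goal))
```

In this sketch the evaluation target and the environment dynamics coincide; the article's point is that keeping utility anchored to the modeled world, rather than to the agent's own perceptions, is what blocks self-delusion.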


Sleep apnea survey: 53 percent of drivers pay out of pocket

#artificialintelligence

A survey released by the American Transportation Research Institute says more than half of truck drivers who have been referred to a sleep study have incurred some or all of the test costs. The survey, which was released Thursday, May 26, includes data from more than 800 commercial drivers to help quantify the costs and impact on truck drivers as they address diagnosis and a potential treatment regimen for obstructive sleep apnea. On March 10, the Federal Motor Carrier Safety Administration and the Federal Railroad Administration published an advance notice of proposed rulemaking about a possible regulation regarding sleep apnea. Specifically, the agencies requested comment on the costs and benefits of requiring motor carrier and rail transportation workers who exhibit multiple risk factors for sleep apnea to undergo evaluation and treatment by a healthcare professional with expertise in sleep disorders. The agencies have held three public listening sessions on the issue.


$1.4 million in oxycodone found hidden in woman's car at Otay Mesa border

Los Angeles Times

A 22-year-old woman was arrested Wednesday on suspicion of trying to smuggle $1.4 million worth of addictive painkillers across the Otay Mesa border crossing. The 47,340 tablets found in a hidden compartment under the woman's car represent the largest seizure of oxycodone along the U.S.-Mexico border in at least five years, the U.S. Attorney's Office said. The woman, Adriana Morfin-Paniagua, a U.S. citizen living in Tijuana, was charged with importing a controlled substance. According to a federal complaint, a Customs and Border Protection officer smelled a silicone-like odor when Morfin-Paniagua drove up to the Otay Mesa Port of Entry around 9 a.m. The officer used a mirror to look under the 1999 Honda Accord and spotted packages in a makeshift compartment, according to the complaint.


Florida airport shooter blames government 'mind control,' Islamic State chatrooms

The Japan Times

FORT LAUDERDALE, FLORIDA – The man suspected of fatally shooting five people and wounding six others at a Florida airport initially told investigators he was under government mind control and later claimed to be inspired by Islamic State websites and chatrooms, authorities said at a hearing Tuesday. FBI Agent Michael Ferlazzo also confirmed that the 9mm Walther handgun used in the Jan. 6 shooting rampage at Fort Lauderdale-Hollywood International Airport is the same weapon that Anchorage, Alaska, police seized and later returned to 26-year-old Esteban Santiago last year. Ferlazzo testified at a bond hearing that Santiago said after the shooting that his mind was under some kind of government control. Later in the interview he claimed to have been inspired by Islamic State-related chatrooms and websites, although it is not clear whether the FBI has been able to corroborate any terror-related claims. U.S. Magistrate Judge Lurana Snow set a Jan. 30 arraignment hearing for Santiago to enter a formal plea.