Explanation & Argumentation


Explainable Artificial Intelligence

#artificialintelligence

Explainable AI – especially explainable machine learning – will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners. The XAI program will focus the development of multiple systems on addressing challenge problems in two areas: (1) machine learning problems to classify events of interest in heterogeneous, multimedia data; and (2) machine learning problems to construct decision policies for an autonomous system performing a variety of simulated missions. These two challenge problem areas were chosen to represent the intersection of two important machine learning approaches (classification and reinforcement learning) and two important operational problem areas for the Department of Defense (intelligence analysis and autonomous systems). At the end of the program, the final delivery will be a toolkit library consisting of machine learning and human-computer interface software modules that could be used to develop future explainable AI systems.


Racist artificial intelligence? Maybe not, if computers explain their 'thinking'

#artificialintelligence

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their "thinking." "In fact, it can get much worse where if the AI agents are part of a loop where they're making decisions, even the future data, the biases get reinforced," he added. Researchers hope that, by seeing the thought process of the computers, they can make sure AI doesn't pick up any gender or racial biases that humans have. But Singh says understanding the decision process is critical for future use, particularly in cases where AI is making decisions, such as approving loan applications.
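The feedback loop Singh warns about is easy to see in miniature. The toy sketch below is not from the article – the numbers and the loan-approval framing are illustrative assumptions – but it shows how a small historical bias, once a decision threshold is applied and the model is retrained on its own decisions, hardens into total exclusion:

```python
# Toy bias feedback loop (illustrative assumptions only): two groups are
# equally qualified, but historical approval rates differ slightly. Each
# round, decisions are made from past data, then the "model" is retrained
# on the labels its own decisions generated.
history = {"a": 0.55, "b": 0.45}   # slightly biased historical approval rates
THRESHOLD = 0.5

for round_ in range(3):
    decisions = {g: history[g] >= THRESHOLD for g in history}
    print(f"round {round_}: approve {decisions}")
    # Decision-generated labels: approved groups accumulate positive
    # examples, rejected groups almost none, so the gap widens and locks in.
    history = {g: 0.9 if decisions[g] else 0.1 for g in history}
```

After a single round, the ten-point gap in the data becomes an absolute approve/reject split – exactly the reinforcement Singh describes.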


Google's research chief questions value of 'Explainable AI'

#artificialintelligence

Despite being used to make life-altering decisions from medical diagnoses to loan limits, the inner workings of various machine learning architectures – including deep learning, neural networks and probabilistic graphical models – are incredibly complex and increasingly opaque. Just as humans work to make sense of and explain their actions after the fact, a similar method could be adopted in AI, Norvig explained. "So we might end up being in the same place with machine learning where we train one system to get an answer and then we train another system to say – given the input of this first system, now it's your job to generate an explanation." Besides, Norvig added yesterday: "Explanations alone aren't enough, we need other ways of monitoring the decision making process."
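Norvig's two-system idea resembles what practitioners call a global surrogate: fit an interpretable model to the black box's outputs rather than to the ground truth, so the surrogate's structure serves as the explanation. A minimal sketch under that reading (synthetic data; this is one common interpretation, not Norvig's own specification):

```python
# Minimal global-surrogate sketch: a second, interpretable system is
# trained to reproduce the first system's answers, so its rules describe
# the black box's behaviour rather than the ground truth.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))   # learn the model's answers, not the labels

print(export_text(surrogate))  # a human-readable account of the black box
```

Norvig's caveat applies here too: the surrogate's story is faithful only where it agrees with the black box, which is why he argues explanations alone aren't enough.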


The Next Big Disruptive Trend in Business... Explainable AI - Disruption

#artificialintelligence

That's why some of the smartest AI researchers in the industry are now hot on the trail of finding new ways to make machines understandable for humans. How a driverless car treats, say, a goat that wanders into the road has massive real-world implications – causing the car to stop, slow down or maybe even speed up. In a best-case scenario, researchers would be able to get the driverless car to explain its actions later – to lay out the exact steps and decision-making process that led it to act the way it did. Or, consider the use of AI-powered machines to help Wall Street firms trade stocks and other financial instruments.


Artificial Intelligence Owes You an Explanation

Slate

One of the most prominent moves in the direction of the right to an explanation comes from the European Union. In 2016, the European Parliament and the Council of the European Union adopted the General Data Protection Regulation – a new data protection regime that promises to usher in major changes to how companies handle the personal data they gather about EU-based consumers. Dig into its text and you'll find several new rules directly responding to the question of how artificial intelligence technologies, like Amazon's Alexa, should be allowed to access and use personal data. Among the most noteworthy: when companies collect personal data related to their consumers, they are required to inform individuals whether "automated decision-making, including profiling" is involved in processing that data and provide them with "meaningful information about the logic involved" in that processing.


Oracle quietly researching 'Explainable AI'

#artificialintelligence

Artificial intelligence systems that can explain their decision-making process in human terms are now the subject of intense research by software and cloud vendor Oracle, the company's senior vice-president of data-driven applications revealed to Computerworld yesterday. The aim of XAI research – which is being carried out by the likes of the Defense Advanced Research Projects Agency (DARPA), an agency of the US Department of Defense – is to give machine-learning systems the ability to explain their rationale, characterise their strengths and weaknesses, and convey an understanding of how they will behave in the future in a way that is understandable and useful to end users. "Explanatory AI is actually something we're actually looking at and trying to work on," Jack Berkowitz, Oracle vice-president of products, data science and adaptive intelligence, told Computerworld. Part of the problem with AI systems, Berkowitz added, lies in how well users understand them.


Algorithms are Black Boxes, That is Why We Need Explainable AI

#artificialintelligence

Data governance and ethics have always been important, and a few years ago I developed ethical guidelines for organisations to follow if they want to get started with big data. Organisations should use a variety of long-term- and short-term-focused data sources, as well as give algorithms both soft goals and hard goals, to create a stable algorithm. As such, the objective of XAI is to ensure that an algorithm can explain the rationale behind its decisions and the strengths or weaknesses of those decisions. Once that is known, the algorithm can be changed by adding additional (soft) goals and different data sources to improve its decision-making capabilities.
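A minimal sketch of how such a goal structure might look in code (the scoring function, field names, and weights are all invented for illustration): hard goals act as infeasibility constraints, while soft goals are weighted terms that can be retuned without rewriting the decision logic.

```python
# Illustrative hard-goal/soft-goal scoring sketch (all names and weights
# are assumptions, not from the article).
def score(candidate, weights):
    # Hard goal: a non-negotiable constraint; violating it makes the
    # option infeasible regardless of its other merits.
    if candidate["risk"] > 0.8:          # e.g., a regulatory risk ceiling
        return float("-inf")
    # Soft goals: weighted objectives traded off against each other.
    return (weights["profit"] * candidate["profit"]
            - weights["risk"] * candidate["risk"]
            - weights["disparity"] * candidate["disparity"])

weights = {"profit": 1.0, "risk": 0.5, "disparity": 2.0}
options = [
    {"profit": 1.2, "risk": 0.3, "disparity": 0.10},
    {"profit": 1.5, "risk": 0.9, "disparity": 0.01},  # breaks the hard goal
]
print(max(options, key=lambda c: score(c, weights)))
```

Adding a new soft goal then means adding one weighted term, which keeps the trade-offs explicit and therefore explainable.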


Capital One Pursues 'Explainable AI' to Guard Against Bias in Models

#artificialintelligence

The effort aims to better understand how a machine-learning model comes to a logical conclusion.


Complex AI Systems Explain Their Actions - Future of Life Institute

#artificialintelligence

Veloso's CoBots are capable of autonomous localization and navigation in the Gates-Hillman Center using WiFi, LIDAR, and/or a Kinect sensor (yes, the same type used for video games). Currently, Veloso's research focuses on getting the robots to generate these explanations in plain language. These sorts of corrections could be programmed into the code, but Veloso believes that "trustability" in AI systems will benefit from our ability to dialogue with, query, and correct their autonomy. In the future, when we have more and more AI systems that are able to perceive the world, make decisions, and support human decision-making, the ability to engage in these types of conversations will be essential.
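At its simplest, verbalizing a robot's behaviour means turning logged decision factors into sentences a user can read and query. A toy sketch of that idea (this is not Veloso's implementation; the log schema and values are invented):

```python
# Toy sketch: render a robot's logged decision factors as plain-language
# explanations. The log fields are invented for illustration.
log = [
    {"action": "took corridor B", "reason": "corridor A was blocked",
     "sensor": "LIDAR", "confidence": 0.92},
    {"action": "slowed to 0.3 m/s", "reason": "localization confidence dropped",
     "sensor": "WiFi", "confidence": 0.61},
]

def explain(entries):
    # One sentence per logged decision: action, cause, and evidence.
    for e in entries:
        yield (f"I {e['action']} because {e['reason']} "
               f"(source: {e['sensor']}, confidence {e['confidence']:.0%}).")

for sentence in explain(log):
    print(sentence)
```

The dialogue Veloso envisions goes further – users could query and correct these factors – but even a readable trace is a step toward "trustability".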


Explainable Artificial Intelligence (XAI) Darpa Funding

#artificialintelligence

To gain intuition into a model's reasoning is to gain understanding and trust – transparency. When you strike a nail with a hammer, it's pretty predictable what might happen: the nail could get hit, the hammer could miss, or, very rarely, the hammer's head may fly off the handle. When you replace the hammer with a black box that works correctly 99.999% of the time but, for the remaining 0.001%, does something completely unpredictable, there's a volatility problem, because that unpredictable event may have unacceptable consequences. I think explainable AI could help with intuitive and more fine-grained risk analysis, and that's certainly a good thing in high-stakes applications such as defense.
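The arithmetic behind that worry is simple: a vanishingly rare failure still dominates expected cost when its consequences are severe enough. A back-of-the-envelope sketch (the failure rate comes from the passage above; the costs and the gating assumption are invented):

```python
# Expected-cost sketch: the passage's 0.001% unpredictable-failure rate
# paired with an invented catastrophic cost for a high-stakes task.
p_fail = 1e-5                      # 0.001% of uses go unpredictably wrong
cost_fail = 10_000_000.0           # assumed cost of one such failure

print(f"expected cost per use: {p_fail * cost_fail:,.2f}")            # 100.00

# If explanations reveal *when* the model is likely to fail, risky cases
# can be routed to a human. Assume (optimistically) 95% are caught:
print(f"with explanation-based gating: {p_fail * 0.05 * cost_fail:,.2f}")
```

The point is not the particular numbers but that explanations make this kind of fine-grained exposure estimate possible at all.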