'Explainable Artificial Intelligence': Cracking open the black box of AI

#artificialintelligence

At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant. "It is very clever, it can do some amazing things but it needs a lot of hand holding still. AI is almost like a toddler. They can do some pretty cool things, sometimes they can cause a fair bit of trouble," said AWS' chief architect in his day-two keynote at the company's summit in Sydney. Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and can ask the child why.


Inside DARPA's effort to create explainable artificial intelligence

#artificialintelligence

Since its founding, the Defense Advanced Research Projects Agency (DARPA) has been a hub of innovation. While created as the research arm of the Department of Defense, DARPA has played an important role in some of the technologies that have become (or will become) fundamental to modern human societies. In the 1960s and 1970s, DARPA (then known as ARPA) created ARPANET, the computer network that became the precursor to the internet. In 2003, DARPA launched CALO, a project that ushered in the era of Siri and other voice-enabled assistants. In 2004, DARPA launched the Grand Challenge, a competition that set the stage for current developments and advances in self-driving cars. In 2013, DARPA launched the Brain Initiative, an ambitious project that brings together universities, tech companies and neuroscientists to discover how the brain works and to develop technologies that enable the human brain to interact with the digital world. Among DARPA's many exciting projects is Explainable Artificial Intelligence (XAI), an initiative launched in 2016 aimed at solving one of the principal challenges of deep learning and neural networks, the subset of AI that is becoming increasingly prominent in many different sectors.


Holding Artificial Intelligence Accountable

#artificialintelligence

The irony is not lost on Kate Saenko. Now that humans have programmed computers to learn, they want to know exactly what the computers have learned and how they make decisions after their learning process is complete. To do that, Saenko, a Boston University College of Arts & Sciences associate professor of computer science, turned to humans, asking them to look at dozens of pictures depicting steps the computer may have taken on its road to a decision and to identify its most likely path. The humans gave Saenko answers that made sense, but there was a problem: they made sense to humans, and humans, Saenko knew, have biases. In fact, humans don't even understand how they themselves make decisions.


Different XAI for Different HRI

AAAI Conferences

Artificial Intelligence (AI) has become more widespread in critical decision making at all levels of robotics, along with demands that agents also explain to us humans why they do what they do. This has driven renewed interest in Explainable Artificial Intelligence (XAI). Much work exists on the Human-Robot Interaction (HRI) challenges of creating and presenting explanations to different human users in different applications, but matching this work up with the AI and Machine Learning (ML) techniques that can provide the underlying explanatory information remains a challenge. In this short paper, we present a categorisation of explanations that captures both the XAI requirements of various users and applications and the XAI capabilities of various underlying AI and ML techniques.


"Why Did You Do That?" Explainable Intelligent Robots

AAAI Conferences

As autonomous intelligent systems become more widespread, society is beginning to ask: "What are the machines up to?" Various forms of artificial intelligence control our latest cars, load balance components of our power grids, dictate much of the movement in our stock markets and help doctors diagnose and treat our ailments. As these systems become increasingly able to learn and model more complex phenomena, the ability of human users to understand the reasoning behind their decisions often decreases. It becomes very difficult to ensure that the robot will perform properly and that errors can be corrected. In this paper, we outline a variety of techniques for generating the underlying knowledge required for explainable artificial intelligence, ranging from early work in expert systems through to systems based on Behavioural Cloning. These are techniques that may be used to build intelligent robots that explain their decisions and justify their actions. We then illustrate how decision trees are particularly well suited to generating these kinds of explanations. We also discuss how additional explanations can be obtained, beyond simply the structure of the tree, based on knowledge of how the training data was generated. Finally, we illustrate these capabilities in the context of a robot learning to drive over rough terrain, both in simulation and in reality.
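
The claim that decision trees lend themselves to explanation is easy to illustrate. The sketch below is not the paper's own pipeline: it trains a scikit-learn decision tree on an invented "terrain driving" dataset and walks the decision path for one input, turning each node's test into a human-readable "because ..." clause. All feature names, thresholds and data are hypothetical.

    # Minimal illustrative sketch (not the paper's implementation): train a small
    # decision tree and turn the decision path for one input into an explanation.
    # Feature names, thresholds and data below are invented for the example.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Toy "terrain driving" data: [roughness, slope_deg, speed] -> 1 = keep speed, 0 = slow down
    X = np.array([[0.1,  2.0, 1.5],
                  [0.8, 15.0, 2.0],
                  [0.3,  5.0, 1.0],
                  [0.9, 20.0, 2.5],
                  [0.2,  3.0, 0.8],
                  [0.7, 12.0, 1.8]])
    y = np.array([1, 0, 1, 0, 1, 0])
    feature_names = ["roughness", "slope_deg", "speed"]
    actions = {0: "slow down", 1: "keep speed"}

    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

    def explain(sample):
        """Walk the tree's decision path for one sample and print each test it satisfied."""
        tree = clf.tree_
        for node in clf.decision_path(sample.reshape(1, -1)).indices:
            if tree.children_left[node] == tree.children_right[node]:
                continue  # leaf node: no test to report
            name = feature_names[tree.feature[node]]
            threshold = tree.threshold[node]
            op = "<=" if sample[tree.feature[node]] <= threshold else ">"
            print(f"because {name} {op} {threshold:.2f}")
        print("decision:", actions[clf.predict(sample.reshape(1, -1))[0]])

    explain(np.array([0.85, 18.0, 2.2]))  # rough, steep terrain at speed -> expect "slow down"

The core of the idea is the walk over the decision path: each satisfied node test becomes one clause of the justification for the chosen action, which is the kind of explanation the abstract says the tree's structure makes readily available.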