'Explainable Artificial Intelligence': Cracking open the black box of AI

#artificialintelligence

At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant. "It is very clever, it can do some amazing things but it needs a lot of hand holding still. AI is almost like a toddler. They can do some pretty cool things, sometimes they can cause a fair bit of trouble," said AWS' chief architect in his day two keynote at the company's summit in Sydney. Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and ask them why.


The case for self-explainable AI

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Would you trust an artificial intelligence algorithm that works eerily well, making accurate decisions 99.9 percent of the time, but is a mysterious black box? Every system fails every now and then, and when it does, we want explanations, especially when human lives are at stake. And a system that can't be explained can't be trusted. That is one of the problems the AI community faces as its creations become smarter and more capable of tackling complicated and critical tasks.


Inside DARPA's effort to create explainable artificial intelligence

#artificialintelligence

Since its founding, the Defense Advanced Research Projects Agency (DARPA) has been a hub of innovation. While created as the research arm of the Department of Defense, DARPA has played an important role in some of the technologies that have become (or will become) fundamental to modern human societies. In the 1960s and 1970s, DARPA (then known as ARPA) created ARPANET, the computer network that became the precursor to the internet. In 2003, DARPA launched CALO, a project that ushered in the era of Siri and other voice-enabled assistants. In 2004, DARPA launched the Grand Challenge, a competition that set the stage for current developments and advances in self-driving cars. In 2013, DARPA launched the Brain Initiative, an ambitious project that brings together universities, tech companies and neuroscientists to discover how the brain works and develop technologies that enable the human brain to interact with the digital world. Among DARPA's many exciting projects is Explainable Artificial Intelligence (XAI), an initiative launched in 2016 aimed at solving one of the principal challenges of deep learning and neural networks, the subset of AI that is becoming increasingly prominent in many different sectors.


Holding Artificial Intelligence Accountable

#artificialintelligence

The irony is not lost on Kate Saenko. Now that humans have programmed computers to learn, they want to know exactly what the computers have learned, and how they make decisions after their learning process is complete. To do that, Saenko, a Boston University College of Arts & Sciences associate professor of computer science, turned to humans, asking them to look at dozens of pictures depicting steps that the computer may have taken on its road to a decision, and to identify its most likely path. The humans gave Saenko answers that made sense, but there was a problem: they made sense to humans, and humans, Saenko knew, have biases. In fact, humans don't even understand how they themselves make decisions.


Different XAI for Different HRI

AAAI Conferences

Artificial Intelligence (AI) has become more widespread in critical decision making at all levels of robotics, along with demands that these agents also explain to us humans why they do what they do. This has driven renewed interest in Explainable Artificial Intelligence (XAI). Much work exists on the Human-Robot Interaction (HRI) challenges of creating and presenting explanations to different human users in different applications, but matching these up with the AI and Machine Learning (ML) techniques that can provide the underlying explanatory information remains difficult. In this short paper, we present a categorisation of explanations that communicates the XAI requirements of various users and applications, and the XAI capabilities of various underlying AI and ML techniques.