At a demonstration of Amazon Web Services' new artificial intelligence image recognition tool last week, the deep learning analysis calculated with near certainty that a photo of speaker Glenn Gore depicted a potted plant. "It is very clever; it can do some amazing things, but it needs a lot of hand-holding still. AI is almost like a toddler. They can do some pretty cool things; sometimes they can cause a fair bit of trouble," said AWS' chief architect in his day-two keynote at the company's summit in Sydney. Where the toddler analogy falls short, however, is that a parent can make a reasonable guess as to, say, what led to their child drawing all over the walls, and can ask them why.
This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Would you trust an artificial intelligence algorithm that works eerily well, making accurate decisions 99.9 percent of the time, but is a mysterious black box? Every system fails now and then, and when it does, we want explanations, especially when human lives are at stake. A system that can't be explained can't be trusted. That is one of the problems the AI community faces as its creations become smarter and more capable of tackling complicated and critical tasks.
Since its founding, the Defense Advanced Research Projects Agency (DARPA) has been a hub of innovation. While created as the research arm of the Department of Defense, DARPA has played an important role in some of the technologies that have become (or will become) fundamental to modern human societies. In the 1960s and 1970s, DARPA (then known as ARPA) created ARPANET, the computer network that became the precursor to the internet. In 2003, DARPA launched CALO, a project that ushered in the era of Siri and other voice-enabled assistants. In 2004, DARPA launched the Grand Challenge, a competition that set the stage for current developments and advances in self-driving cars. In 2013, DARPA launched the Brain Initiative, an ambitious project that brings together universities, tech companies and neuroscientists to discover how the brain works and develop technologies that enable the human brain to interact with the digital world. Among DARPA's many exciting projects is Explainable Artificial Intelligence (XAI), an initiative launched in 2016 aimed at solving one of the principal challenges of deep learning and neural networks, the subset of AI that is becoming increasingly prominent in many different sectors.
The irony is not lost on Kate Saenko. Now that humans have programmed computers to learn, they want to know exactly what the computers have learned, and how they make decisions after their learning process is complete. To do that, Saenko, a Boston University College of Arts & Sciences associate professor of computer science, used humans--asking them to look at dozens of pictures depicting steps that the computer may have taken on its road to a decision, and identify its most likely path. The humans gave Saenko answers that made sense, but there was a problem: they made sense to humans, and humans, Saenko knew, have biases. In fact, humans don't even understand how they themselves make decisions.
The AI technology underlying Nvidia's experimental self-driving car, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed for tasks like image captioning, voice recognition, and language translation. There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries. But this won't happen--or shouldn't happen--unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur--and it's inevitable they will. That's one reason Nvidia's car is still experimental.
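To give a concrete sense of what "making a model more understandable" can mean in practice, here is a minimal sketch of one simple, model-agnostic explainability technique: perturbation-based feature importance, where you zero out each input in turn and measure how much the output shifts. The `black_box_model` below is a hypothetical stand-in (a fixed weighted sum through a sigmoid), not any system described in this article.

```python
import math

def black_box_model(features):
    # Hypothetical stand-in for an opaque learned model:
    # a fixed weighted sum passed through a sigmoid.
    weights = [0.8, 0.1, -0.6]
    score = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-score))

def perturbation_importance(model, features):
    """Score each feature by zeroing it out and measuring
    how much the model's output changes from the baseline."""
    baseline = model(features)
    importances = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = 0.0  # knock out one feature at a time
        importances.append(abs(baseline - model(perturbed)))
    return importances

# The largest score marks the feature the model leans on most.
scores = perturbation_importance(black_box_model, [1.0, 1.0, 1.0])
```

The same idea, applied to image classifiers by occluding patches of pixels rather than zeroing scalar features, is one way researchers probe which parts of an input drove a deep network's decision without opening up the network itself.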