Explainable Artificial Intelligence (XAI)

#artificialintelligence

This article was written by Dr. Matt Turek. Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense (DoD) is facing challenges that demand more intelligent, autonomous, and symbiotic systems.


DARPA Wants to Understand how AI Systems Reach Decisions

#artificialintelligence

The U.S. Defense Advanced Research Projects Agency (DARPA) has launched a program to create the technology that will make new generations of artificial intelligence (AI) systems "explainable." DARPA's Explainable AI (XAI) program aims to create new machine learning methods that produce more explainable models and to combine them with effective explanation techniques. Why the need to understand AI? Because explainable AI -- especially explainable machine learning -- will be essential if future American warfighters are to understand, appropriately trust, and effectively manage an emerging generation of AI "partners" such as battlefield robots and machines. XAI is vital because continued advances in AI promise to produce autonomous systems that will perceive, learn, decide, and act on their own. The effectiveness of these AI systems, however, is limited by machines' current inability to explain their decisions and actions to human users.
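To make "explanation techniques" concrete, here is a minimal sketch of one generic post-hoc method, permutation feature importance, written in Python with scikit-learn. It illustrates the kind of technique that can be paired with a learned model; it is not the XAI program's own approach, and the dataset and model choices are assumptions made for the demo.

```python
# Minimal sketch of a generic post-hoc explanation technique:
# permutation feature importance. Illustrative only -- not the
# specific method developed under DARPA's XAI program.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model whose individual decisions are hard to inspect.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy
# drops; large drops mark features the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.4f}")
```

A ranking like this answers only a narrow question (which inputs matter), which is part of why the program pairs explainable models with richer explanation interfaces for human users.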


XAI--Explainable artificial intelligence

#artificialintelligence

High-level patterns are the basis for describing big plans in big steps. Automating the discovery of abstractions has long been a challenge, and understanding how abstractions are discovered and shared in learning and explanation is at the frontier of XAI research today.
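As a hedged illustration of abstraction in explanation (a sketch, not a method from the program itself): one simple route is to distill an opaque model into a shallow surrogate decision tree, whose few branching rules act as high-level patterns summarizing the model's behavior. The dataset and models below are assumptions chosen for brevity.

```python
# Sketch of abstraction via surrogate-model distillation: a shallow
# decision tree mimics a black-box model, so its few rules serve as a
# high-level summary. Illustrative only, not the XAI program's method.
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# The "black box": accurate, but its internals are hard to read.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the
# true labels, so its rules abstract what the black box actually does.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# A handful of human-readable rules stand in for the full model.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The depth limit is the abstraction knob: a deeper tree is more faithful but less readable, mirroring the fidelity-versus-interpretability tradeoff that XAI research grapples with.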


Inside DARPA's effort to create explainable artificial intelligence

#artificialintelligence

Since its founding, the Defense Advanced Research Projects Agency (DARPA) has been a hub of innovation. While created as the research arm of the Department of Defense, DARPA has played an important role in some of the technologies that have become (or will become) fundamental to modern human societies. In the 1960s and 1970s, DARPA (then known as ARPA) created ARPANET, the computer network that became the precursor to the internet. In 2003, DARPA launched CALO, a project that ushered in the era of Siri and other voice-enabled assistants. In 2004, DARPA launched the Grand Challenge, a competition that set the stage for current advances in self-driving cars. In 2013, DARPA joined the Brain Initiative, an ambitious project that brings together universities, tech companies, and neuroscientists to discover how the brain works and to develop technologies that enable the human brain to interact with the digital world. Among DARPA's many exciting projects is Explainable Artificial Intelligence (XAI), an initiative launched in 2016 aimed at solving one of the principal challenges of deep learning and neural networks, the subset of AI that is becoming increasingly prominent in many different sectors.


Explainable Artificial Intelligence

#artificialintelligence

Dramatic success in machine learning has led to a torrent of Artificial Intelligence (AI) applications. Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own. However, the effectiveness of these systems is limited by machines' current inability to explain their decisions and actions to human users. The Department of Defense is facing challenges that demand more intelligent, autonomous, and symbiotic systems. Explainable AI--especially explainable machine learning--will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.