Turing's Red Flag

Communications of the ACM

The 19th-century U.K. Locomotive Act, also known as the Red Flag Act, required motorized vehicles to be preceded by a person waving a red flag to warn of the oncoming danger. Movies can be a good place to see what the future looks like. According to Robert Wallace, a retired director of the CIA's Office of Technical Service: "... When a new James Bond movie was released, we always got calls asking, 'Do you have one of those?' If I answered 'no', the next question was, 'How long will it take you to make it?' Folks didn't care about the laws of physics or that Q was an actor in a fictional series--his character and inventiveness pushed our imagination ..."3 As an example, the CIA successfully copied the shoe-mounted, spring-loaded, poison-tipped knife in From Russia With Love. It is interesting to speculate on what else Bond movies may have inspired. For this reason, I have been considering what movies predict about the future of artificial intelligence (AI).


A Multiagent Approach to Autonomous Intersection Management

AAAI Conferences

Artificial intelligence research is ushering in a new era of sophisticated, mass-market transportation technology. While computers can already fly a passenger jet better than a trained human pilot, people are still faced with the dangerous yet tedious task of driving automobiles. Intelligent Transportation Systems (ITS) is the field that focuses on integrating information technology with vehicles and transportation infrastructure to make transportation safer, cheaper, and more efficient. Recent advances in ITS point to a future in which vehicles themselves handle the vast majority of the driving task. Once autonomous vehicles become popular, autonomous interactions amongst multiple vehicles will be possible.
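As a hedged illustration of the kind of multiagent coordination this line of work points to, the sketch below implements a toy reservation-style intersection manager: approaching vehicles request the space-time "tiles" of the intersection they will occupy, and the manager grants non-conflicting requests first-come, first-served. The names, the tile granularity, and the API are illustrative assumptions, not details taken from the paper.

```python
# Toy reservation-based intersection manager (illustrative sketch only).
from dataclasses import dataclass, field


@dataclass
class IntersectionManager:
    # Maps (tile, time_step) -> vehicle_id for every granted reservation.
    reservations: dict = field(default_factory=dict)

    def request(self, vehicle_id: str, tiles_by_step: dict) -> bool:
        """Try to reserve a trajectory given as {time_step: [tiles]}."""
        needed = [(tile, step)
                  for step, tiles in tiles_by_step.items()
                  for tile in tiles]
        if any(key in self.reservations for key in needed):
            return False  # Conflict: the vehicle must slow down and retry later.
        for key in needed:
            self.reservations[key] = vehicle_id
        return True


manager = IntersectionManager()
# Vehicle A crosses tile 0 at step 3 and tile 1 at step 4; B wants tile 1 at step 4.
print(manager.request("A", {3: [0], 4: [1]}))  # True: granted
print(manager.request("B", {4: [1]}))          # False: conflicts with A's reservation
```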


Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

arXiv.org Artificial Intelligence

In recent years, Artificial Intelligence (AI) has gained notable momentum and may deliver on the highest expectations across many application sectors. For this to occur, the entire community faces the barrier of explainability, an inherent problem of the sub-symbolic techniques (e.g., ensembles or Deep Neural Networks) behind the latest AI systems that was not present during the last hype of AI. Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect of what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including a second taxonomy devoted to Deep Learning methods. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material to stimulate future research advances, and to encourage experts and professionals from other disciplines to embrace the benefits of AI in their own activity sectors without prior bias stemming from its lack of interpretability.
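To make "explainability" concrete, here is a minimal sketch of one widely used post-hoc technique of the kind such surveys cover: permutation feature importance, which scores each input feature by how much a trained model's held-out performance drops when that feature is shuffled. The dataset and model are illustrative choices, not examples drawn from the paper.

```python
# Post-hoc explanation via permutation feature importance (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble model: exactly the kind of "sub-symbolic" learner whose
# predictions are hard to inspect directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features whose shuffling hurts the model the most.
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.3f}")
```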


IBM delivers a piece of its brain-inspired supercomputer to Livermore national lab

#artificialintelligence

IBM is about to deliver the foundation of a brain-inspired supercomputer to Lawrence Livermore National Laboratory, one of the federal government's top research institutions. The delivery is one small "blade" within a server rack, carrying 16 chips dubbed TrueNorth that are modeled after the way the human brain functions. Silicon Valley is awash in optimism about artificial intelligence, largely based on the progress that deep learning neural networks are making in solving big problems. Companies from Google to Nvidia are hoping they'll provide the AI smarts for self-driving cars and other tough problems. It is within this environment that IBM has been pursuing solutions in brain-inspired supercomputers.

