A British fighter jet was returning to its base in Kuwait after a mission on the third day of the 2003 Iraq War when a U.S. anti-missile system spotted it, identified it as an enemy missile, and fired. The two men in the plane were killed. A week and a half later, the same system, the vaunted Patriot, made the same mistake. This time it was an American plane that was downed, and an American pilot who was killed. The missile battery that targeted the two jets was almost entirely automated.
On a normal day, people encounter artificial intelligence (AI) numerous times. AI has become part of our routines in many ways, entering our lives through smartphones, appliances in our homes, and technology in our cars. As society stands on the verge of accepting robots, the question that lingers in everyone's mind is "Can robots be trusted?" We hold a mythical image of robots turning aggressive once they are given all the features of humans, a conclusion we are pushed toward by movies, soap operas, and dramas.
Alves-Oliveira, Patrícia (Instituto Universitário de Lisboa) | Freedman, Richard G. (University of Massachusetts Amherst) | Grollman, Dan (Sphero, Inc.) | Herlant, Laura (Carnegie Mellon University) | Humphrey, Laura (Air Force Research Laboratory) | Liu, Fei (University of Central Florida) | Mead, Ross (Semio) | Stein, Frank (IBM) | Williams, Tom (Tufts University) | Wilson, Shomir (University of Cincinnati)
The AAAI 2016 Fall Symposium Series was held Thursday through Saturday, November 17–19, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the six symposia were Accelerating Science: A Grand Challenge for AI; Artificial Intelligence for Human-Robot Interaction; Cognitive Assistance in Government and Public Sector Applications; Cross-Disciplinary Challenges for Autonomous Systems; Privacy and Language Technologies; and Shared Autonomy in Research and Practice. The highlights of each symposium (except Accelerating Science) are presented in this report.
The future of defense technology will be driven by artificial intelligence (AI), but robotic weapons, vehicles, and soldiers won't be much use if human service members don't trust or understand their automated counterparts. To build human confidence in machines, Pentagon researchers want to develop systems that explain exactly what they're doing. Last week, the United States Defense Advanced Research Projects Agency (DARPA) announced its Explainable AI (XAI) program, an initiative meant to ensure people won't be confused by emerging battlefield tech. The agency will begin development in May 2017 and work on the project for four years. The human-robot trust gap is a growing concern for the Department of Defense.