Defense and security organizations depend upon science and technology to meet operational needs, predict and counter threats, and address the increasingly complex demands of modern warfare. Artificial intelligence and robotics could provide solutions to a wide range of military gaps and deficiencies. At the same time, the unique and rapidly evolving nature of AI and robotics challenges existing policies, regulations, and values, and introduces complex ethical issues that might impede their development, evaluation, and use by the Canadian Armed Forces (CAF). Early consideration of potential ethical issues raised by military use of emerging AI and robotics technologies is critical to their effective implementation. This article presents an ethics assessment framework for emerging AI and robotics technologies. It is designed to help technology developers, policymakers, decision makers, and other stakeholders identify and broadly consider potential ethical issues that might arise with the military use and integration of emerging AI and robotics technologies of interest. We also provide a contextual environment for our framework, as well as an example of how our framework can be applied to a specific technology. Finally, we briefly identify and address several pervasive issues that arose during our research.
Submissions for HCOMP-19 Are Due in June! The Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2019) will be held October 28-30 at Skamania Lodge in Washington State near the Columbia River Gorge, just 45 minutes from Portland, Oregon. This year marks the tenth anniversary of the very first HCOMP workshop in Paris, and to celebrate, there will be special events, talks, and panels throughout the conference. HCOMP is the premier venue for disseminating the latest research findings on crowdsourcing and human computation. While artificial intelligence (AI) and human-computer interaction (HCI) represent traditional mainstays of the conference, HCOMP believes strongly in inviting, fostering, and promoting broad, interdisciplinary research.
The purpose of this article is to draw attention to an aspect of intelligence that has not yet received significant attention from the AI community, but that plays a crucial role in a technology’s effectiveness in the world, namely teaming intelligence. We propose that AI will reach its full potential only if, as part of its intelligence, it also has enough teaming intelligence to work well with people. Although seemingly counterintuitive, the more intelligent the technological system, the greater the need for collaborative skills. This paper will argue why teaming intelligence is important to AI, provide a general structure for AI researchers to use in developing intelligent systems that team well, assess the current state of the art, and, in doing so, suggest a path forward for future AI systems. This is not a call to develop a new capability, but rather an approach to what AI capabilities should be built, and how, so as to imbue intelligent systems with teaming competence.
The intelligence explosion hypothesis (for example, a technological singularity) is roughly the hypothesis that accelerating knowledge or technological growth radically changes humanity. While 20th-century figures are commonly credited as the first to articulate the hypothesis, I assert that Nicolas de Condorcet, the 18th-century mathematician, was the earliest to (1) mathematically model an intelligence explosion, (2) present an accelerating historical worldview, and (3) make intelligence-explosion predictions that were restated centuries later. Condorcet provides insights on how ontology and social choice can help resolve value alignment.
The AI Bookie column documents highlights from AI Bets, an online forum for the creation of adjudicatable predictions, in the form of bets, about the future of AI. While it is easy to make broad, generalized, or off-the-cuff predictions about the future, it is more difficult to develop predictions that are carefully thought out, concrete, and measurable. This forum was created to help researchers craft predictions whose accuracy can be clearly and unambiguously judged when the bets come due. The bets will be documented both online and regularly in this column. We encourage bets that are rigorously and scientifically argued. We discourage bets that are too general to be evaluated or too specific to an individual or institution. The goal is not to continue to feed the media frenzy and outsized pundit predictions about AI, but rather to curate and promote bets whose outcomes will provide useful feedback to the scientific community. For detailed guidelines and to place bets, visit sciencebets.org.
The idea of implementing reinforcement learning in a computer was one of the earliest ideas about the possibility of AI, but reinforcement learning remained on the margin of AI until relatively recently. Today we see reinforcement learning playing essential roles in some of the most impressive AI applications. This article presents observations from the author’s personal experience with reinforcement learning over the most recent 40 years of its history in AI, focusing on striking connections that emerged between largely separate disciplines and on some of the findings that surprised him along the way. These connections and surprises place reinforcement learning in a historical context, and they help explain the success it is finding in modern AI. The article concludes by discussing some of the challenges that need to be faced as reinforcement learning moves out into the real world.
This page includes forthcoming AAAI sponsored conferences, conferences presented by AAAI Affiliates, and conferences held in cooperation with AAAI. AI Magazine also maintains a calendar listing that includes nonaffiliated conferences at www.aaai.org/Magazine/calendar.php. ICAIL-2019 will be held 17-21 June in the USA. SoCS-19 will be held July 16-17, 2019. The IAAI-20 Conference will be held February 9-11, 2020, at the Hilton New York Midtown Hotel in New York, New York, USA.
Artificial intelligence, as a capability enhancer, offers significant improvements to our tactical warfighting advantage. AI provides methods for fusing and analyzing data to enhance our knowledge of the tactical environment; it provides methods for generating and assessing decision options from multidimensional, complex situations; and it provides predictive analytics to identify and examine the effects of tactical courses of action. Machine learning can improve these processes in an evolutionary manner. Advanced computing techniques can handle highly heterogeneous and vast datasets and can synchronize knowledge across distributed warfare assets. This article presents concepts for applying AI to various aspects of tactical battle management and discusses their potential improvements to future warfare.
XAI, explainable AI, is the aim of a DARPA program of that name. The European Commission has legislated a demand (the General Data Protection Regulation, 2016/679) specifying that deployed machine learning systems must explain their decisions. The commission has done this even though no one knows how to provide what it is requiring. What would follow if we and machines are in roughly the same position with respect to the transparency of our ethical decision-making? I want to reintroduce the notion of an orthosis into ethical explanation: medically, an orthosis is an externally applied device designed and fitted to the body to aid rehabilitation, usually contrasted with a prosthesis, which replaces a missing part, like a foot or leg. Here, it will mean an explanatory software agent associated with a human or machine.
Traditional cyber security techniques have led to an asymmetric disadvantage for defenders. The defender must detect all possible threats at all times from all attackers and defend all systems against all possible exploitation. In contrast, an attacker needs only to find a single path to the defender’s critical information. In this article, we discuss how this asymmetry can be rebalanced using cyber deception to change the attacker’s perception of the network environment, and lead attackers to false beliefs about which systems contain critical information or are critical to a defender’s computing infrastructure. We introduce game theory concepts and models to represent and reason over the use of cyber deception by the defender and the effect it has on attacker perception. Finally, we discuss techniques for combining artificial intelligence algorithms with game theory models to estimate hidden states of the attacker using feedback through payoffs to learn how best to defend the system using cyber deception. It is our opinion that adaptive cyber deception is a necessary component of future information systems and networks. The techniques we present can simultaneously decrease the risks and impacts suffered by defenders and dramatically increase the costs and risks of detection for attackers. Such techniques are likely to play a pivotal role in defending national and international security concerns.
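The game-theoretic framing above can be made concrete with a toy model. The sketch below is purely illustrative and is not the article's model: a defender chooses which of two hosts to cover with a decoy, an attacker chooses which host to target, and the defender makes a minimax choice over the resulting zero-sum payoff matrix. All host names and payoff values are hypothetical.

```python
# Minimal sketch of a two-player cyber deception game (illustrative only;
# host names and payoffs are invented for this example).
import itertools

HOSTS = ["server_A", "server_B"]
CRITICAL = "server_A"  # host that actually holds critical information

def attacker_payoff(decoy, target):
    """Attacker gains 10 for compromising the undefended critical host,
    loses 5 if lured into the decoy, and gains 0 otherwise."""
    if target == decoy:
        return -5   # attacker caught in the honeypot
    if target == CRITICAL:
        return 10   # critical information compromised
    return 0        # wasted attack on a non-critical host

def defender_payoff(decoy, target):
    # The game is modeled as zero-sum.
    return -attacker_payoff(decoy, target)

# Enumerate the payoff matrix over all pure-strategy pairs.
matrix = {(d, t): attacker_payoff(d, t)
          for d, t in itertools.product(HOSTS, HOSTS)}

# Defender picks the decoy placement that minimizes the attacker's
# best-response payoff (a minimax choice over pure strategies).
best_decoy = min(HOSTS, key=lambda d: max(matrix[(d, t)] for t in HOSTS))
print(best_decoy)
```

In this toy instance the minimax placement covers the critical host, since leaving it undefended hands the attacker the maximum payoff; richer models in the literature extend this with mixed strategies and Bayesian beliefs about attacker type.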