AAAI Conferences

The vision of populating the world with autonomous systems that reduce human labor and improve safety is gradually becoming a reality. Autonomous systems have changed the way space exploration is conducted and are beginning to transform everyday life with a range of household products. In many areas, however, there are considerable barriers to the deployment of fully autonomous systems. We refer to systems that require some degree of human intervention in order to complete a task as semi-autonomous systems. We examine the broad rationale for semi-autonomy and define basic properties of such systems. Accounting for the human in the loop presents a considerable challenge for current planning techniques. We examine various design choices in the development of semi-autonomous systems and their implications on planning and execution. Finally, we discuss fruitful research directions for advancing the science of semi-autonomy.

Artificial Intelligence and the Future of Warfare


Both military and commercial robots will in the future incorporate 'artificial intelligence' (AI) that could make them capable of undertaking tasks and missions on their own. In the military context, this gives rise to a debate over whether such robots should be allowed to execute such missions, especially if there is a possibility that human life could be at stake. To better frame the issues at stake, this paper presents a framework explaining the current state of the art in AI, the strengths and weaknesses of the technology, and what the future likely holds. The framework demonstrates that while computers and AI can be superior to humans in some skill- and rule-based tasks, humans remain superior in situations that require judgment and knowledge in the presence of significant uncertainty. In the complex discussion of whether and how the development of autonomous weapons should be controlled, the rapidly expanding commercial market for both air and ground autonomous systems must be given full consideration.

Killer autonomous weapons are coming... but they're not here yet


Pioneers from the worlds of artificial intelligence and robotics – including Elon Musk and DeepMind's Mustafa Suleyman – have asked the United Nations to ban autonomous weapon systems. A letter from the experts says the weapons currently under development risk opening a "Pandora's box" that, if left open, could create a dangerous "third revolution in warfare". The open letter coincides with the International Joint Conference on Artificial Intelligence, which is currently being held in Melbourne, Australia. Ahead of the same conference in 2015, the Tesla founder was joined by Stephen Hawking, Steve Wozniak and Noam Chomsky in condemning a new "global arms race". Suggestions that warfare will be transformed by artificially intelligent weapons capable of making their own decisions about whom to kill are not hyperbole.

Why autonomous vehicle systems need human-centric approach


Currently the trending concept behind autonomous vehicles is removing the human and focusing on the machine. But I have a different view. After 12 years at NASA researching autonomous systems for Mars, and seven years at Nissan leading work on autonomous vehicles in Silicon Valley, I believe that an autonomous system without people as a central component will be pretty much useless. As the Hong Kong government targets a 30 percent adoption of connected and autonomous vehicles (CAV), and begins testing autonomous technologies, it's crucial to take a human-centric perspective to reap the real rewards of this technology. Imagine you just bought your first autonomous vehicle.

Industry Urges United Nations to Ban Lethal Autonomous Weapons in New Open Letter

IEEE Spectrum Robotics

Today (or, yesterday, but today Australia time, where it's probably already tomorrow), 116 founders of robotics and artificial intelligence companies from 26 countries released an open letter urging the United Nations to ban lethal autonomous weapon systems (LAWS). This is a follow-up to the 2015 anti-"killer robots" UN letter that we covered extensively when it was released, but with a new focus on industry that attempts to help convince the UN to get something done. The press release accompanying the letter mentions that it was signed by Elon Musk, Mustafa Suleyman (founder and Head of Applied AI at Google's DeepMind), Esben Østergaard (founder & CTO of Universal Robots), and a bunch of other people who you may or may not have heard of. You can read the entire thing here, including all 116 signatories. For some context, we spoke with Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney and one of the organizers of the letter.