Both military and commercial robots will in the future incorporate 'artificial intelligence' (AI) that could make them capable of undertaking tasks and missions on their own. In the military context, this gives rise to a debate over whether such robots should be allowed to execute such missions, especially if there is a possibility that human life could be at stake. To better frame these issues, this paper presents a framework explaining the current state of the art for AI, the strengths and weaknesses of the technology, and what the future likely holds. The framework demonstrates that while computers and AI can be superior to humans in some skill- and rule-based tasks, humans are superior to computers in situations that require judgment and knowledge in the presence of significant uncertainty. In the complex discussion of whether and how the development of autonomous weapons should be controlled, the rapidly expanding commercial market for both air and ground autonomous systems must be given full consideration.
Pioneers from the worlds of artificial intelligence and robotics – including Elon Musk and DeepMind's Mustafa Suleyman – have asked the United Nations to ban autonomous weapon systems. A letter from the experts warns that the weapons currently under development risk opening a "Pandora's box" that, if left open, could create a dangerous "third revolution in warfare". The open letter coincides with the International Joint Conference on Artificial Intelligence, which is currently being held in Melbourne, Australia. Ahead of the same conference in 2015, the Tesla founder was joined by Stephen Hawking, Steve Wozniak and Noam Chomsky in condemning a new "global arms race". Suggestions that warfare will be transformed by artificially intelligent weapons capable of making their own decisions about who to kill are not hyperbolic.
Today (or, yesterday, but today Australia time, where it's probably already tomorrow), 116 founders of robotics and artificial intelligence companies from 26 countries released an open letter urging the United Nations to ban lethal autonomous weapon systems (LAWS). This is a follow-up to the 2015 anti-"killer robots" UN letter that we covered extensively when it was released, but with a new focus on industry that attempts to help convince the UN to get something done. The press release accompanying the letter mentions that it was signed by Elon Musk, Mustafa Suleyman (founder and Head of Applied AI at Google's DeepMind), Esben Østergaard (founder and CTO of Universal Robots), and a bunch of other people who you may or may not have heard of. You can read the entire thing here, including all 116 signatories. For some context on this, we spoke with Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney and one of the organizers of the letter.
You can use algorithms and apps to systematically analyze, design, and visualize the behavior of complex systems in the time and frequency domains. Automatically tune compensator parameters using interactive techniques such as Bode loop shaping and the root locus method. You can tune gain-scheduled controllers and specify multiple tuning objectives, such as reference tracking, disturbance rejection, and stability margins. Code generation and requirements traceability help you validate your system and certify compliance.
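The tuning objectives above can be made concrete with a minimal sketch in plain Python (no control toolbox). It simulates the closed-loop unit-step response of an illustrative first-order plant G(s) = 1/(s + 1) under proportional feedback, showing how the controller gain trades off against steady-state reference-tracking error; the plant, gains, and function name are assumptions for illustration, not from the text.

```python
def step_response(k_p, t_end=5.0, dt=0.001):
    """Closed-loop unit-step response via forward Euler integration.

    Plant:      dx/dt = -x + u, output y = x  (i.e., G(s) = 1/(s + 1))
    Controller: u = k_p * (r - y) with reference r = 1
    Returns the output y at time t_end.
    """
    x = 0.0
    for _ in range(int(t_end / dt)):
        u = k_p * (1.0 - x)   # proportional action on the tracking error
        x += dt * (-x + u)    # Euler step of the plant dynamics
    return x

# For this plant the steady-state output is k_p / (1 + k_p), so the
# tracking error 1 / (1 + k_p) shrinks as the gain increases.
for k_p in (1.0, 10.0, 100.0):
    print(f"k_p={k_p:6.1f}  y(5)={step_response(k_p):.4f}")
```

A frequency-domain design (Bode loop shaping, root locus) would additionally weigh stability margins, which this pure time-domain sketch does not capture.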
My parents gave me a poster titled "Motivation" before I left for college. Below a beautiful picture of a rocky shore at sunset, it reads, "If a pretty poster and a cute saying are all it takes to motivate you, you probably have a very easy job. The kind robots will be doing soon." Since the beginning of time, Americans have been inventing better ways of doing more for less, from Eli Whitney to Henry Ford. Revolutionary innovations often bring growing pains, but at the end of the day we adapt and move the world forward.