One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or LAWS, in the language of international treaties) is to say: shouldn't you have thought about that sooner? Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the "unmanned combat air vehicle" Taranis, developed by BAE Systems and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border.
In 2014, Stanford University launched the One Hundred Year Study, a long-term look into the future of artificial intelligence set to publish a report every five years. Just two years in, the team released its first report on Sept. 1, Artificial Intelligence and Life in 2030. The document outlines the history of AI and where it is currently being applied, such as transportation (self-driving cars) and healthcare (surgical robots). It's an important document not only for the research community but also for policymakers grappling with technology that existing laws may be unequipped to handle. The report argues that evil AI isn't what people need to anticipate; rather, it's the unintended consequences of otherwise helpful applications, such as the erosion of privacy or the displacement of labor.
Over the last month, two astounding videos surfaced showing the extent to which artificial intelligence and robotics have developed. If you've been keeping tabs on either of these projects, you'll agree that both developments represent incredible breakthroughs. But perhaps the most interesting aspect of Atlas and DeepMind lies in the fact that such systems can now interact directly with people. Add to this the exponential rise of self-driving cars and the internet of things, and we begin to realise that the next decade will present legal challenges we never before thought possible. We've spoken at length here at Technolegem about the challenges facing the legal industry, tracking issues such as the regulation of Uber, anti-piracy schemes and the saga that was Dallas Buyers Club.
With the robotics industry growing rapidly, MEPs have warned that rules are needed to 'guarantee a standard level of safety and security'. In a resolution voted on today, MEPs call on the EU Commission to enforce regulatory standards for robotics, stressing that the key issue lies with self-driving cars. They suggest that a European agency for robotics and artificial intelligence be set up to supply public authorities with technical, ethical and regulatory expertise. They also ask for a specific legal status for robots as 'electronic persons' in the long run, in order to establish who is liable if they cause damage. MEPs have warned that robots need to be fitted with 'kill switches' to prevent a Terminator-style uprising against humans. If a robot unlawfully kills someone in the heat of battle, who is liable for the death?
Autonomous weapons are increasingly being sought by militaries around the world, but experts fear the worst. Autonomous robots with the ability to make life-or-death decisions and snuff out the enemy could very soon be a common feature of warfare, as a new-age arms race between world powers heats up. Harnessing artificial intelligence, and weaponising it for the battlefield and for advantage in cyber warfare, has the US, Chinese, Russian and other governments furiously working to gain the edge over their global counterparts. But researchers warn of the incredible dangers involved and the "terrifying future" we risk courting. "The arms race is already starting," said Professor Toby Walsh from UNSW's School of Computer Science and Engineering.