There is widespread public support for a ban on so-called "killer robots", which campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found that over 60 per cent of the thousands of people surveyed opposed lethal autonomous weapons that can kill with no human input, while only around a fifth backed them. The figures showed growing public support for a treaty to regulate these controversial new technologies - a treaty already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.
You don't have to agree with Elon Musk's apocalyptic fears of artificial intelligence to be concerned that, in the rush to apply the technology in the real world, some algorithms could inadvertently cause harm. This type of self-learning software powers Uber's self-driving cars, helps Facebook identify people in social-media posts, and lets Amazon's Alexa understand your questions. Now DeepMind, the London-based AI company owned by Alphabet Inc., has developed a simple test to check whether these new algorithms are safe.
Shortly after confirming its plans to help Volvo create self-driving cars, NVIDIA has revealed that it is also working with another leading car manufacturer. Announcing a partnership with Volkswagen, the tech company says its artificial intelligence and deep learning technology will be used to help VW expand its AI business beyond autonomous vehicles. While the collaboration may sound surprising, the move builds on Volkswagen's existing AI-focused research division, the VW Data Lab. The two companies have suggested that sharing technology could help them optimize traffic flow in cities and even devise solutions that make collaboration between humans and robots easier. In a statement, Volkswagen's CIO Dr. Martin Hofmann says that AI is "the key to the digital future of the Volkswagen Group", describing the collaboration with NVIDIA as "a major step" in expanding the company's proficiency in the field.
One response to the call by experts in robotics and artificial intelligence for a ban on "killer robots" ("lethal autonomous weapons systems", or Laws in the language of international treaties) is to say: shouldn't you have thought about that sooner? Figures such as Tesla's CEO, Elon Musk, are among the 116 specialists calling for the ban. "We do not have long to act," they say. "Once this Pandora's box is opened, it will be hard to close." But such systems are arguably already here, such as the "unmanned combat air vehicle" Taranis developed by BAE and others, or the autonomous SGR-A1 sentry gun made by Samsung and deployed along the South Korean border.
As the automation of physical and knowledge work advances, many jobs will be redefined rather than eliminated, at least in the short term. The ability of artificial intelligence and advanced robotics to perform tasks once reserved for humans is no longer confined to spectacular demonstrations by the likes of IBM's Watson, Rethink Robotics' Baxter, DeepMind, or Google's driverless car. Just head to an airport: automated check-in kiosks now dominate many airlines' ticketing areas. Pilots actively steer aircraft for just three to seven minutes of many flights, with autopilot guiding the rest of the journey. Passport-control processes at some airports can place more emphasis on scanning document bar codes than on observing incoming passengers.