A new generation of autonomous weapons or "killer robots" could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned. Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned. Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons. Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is being deployed, Nolan said killer robots have the potential to do "calamitous things that they were not originally programmed for". There is no suggestion that Google is involved in the development of autonomous weapons systems.
Increasingly sophisticated killer AI robots and machines could accidentally start a war and lead to mass atrocities, an ex-Google worker has told The Guardian. Laura Nolan resigned from Google last year in protest at being assigned to Project Maven, which was aimed at enhancing U.S. military drone technology. She has called for all unmanned autonomous weapons to be banned. AI killer robots have the potential to do "calamitous things that they were not originally programmed for," Nolan explained to The Guardian. She is part of a growing group of experts who are concerned about the development of artificial intelligence programmed into war machines.
A former Google engineer has expressed fears about a new generation of robots that could carry out 'atrocities and unlawful killings'. Laura Nolan, who previously worked on the tech giant's military drone initiative, Project Maven, is calling for a ban on all autonomous war drones, as these machines do not have the same common sense or discernment as humans. Project Maven focused on enhancing drones with artificial intelligence (AI) to distinguish enemy targets from people and other objects – but was discontinued after employees protested the technology in development, calling it 'evil'. Nolan, who left Google in 2018 in protest against the US military drone technology, is now calling for all drones not operated by humans to fall under the same ban as chemical weapons, according to The Guardian.
Advancements in artificial intelligence may result in "atrocities" because the technology will behave in unexpected ways, a former Google software engineer has warned. Computer scientist Laura Nolan left Google in June last year after raising concerns about its work with the U.S. Department of Defense on Project Maven, a drone program that was using AI algorithms to speed up analysis of vast amounts of captured surveillance footage. Speaking to The Guardian, the software engineer said the use of autonomous or AI-enhanced weapons systems that lack a human touch may have severe, even fatal, consequences. She said: "What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed. There could be large-scale accidents because these things will start to behave in unexpected ways."
The chief executive of Google has called for international cooperation on regulating artificial intelligence technology to ensure it is 'harnessed for good'. Sundar Pichai said that while regulation by individual governments and existing rules such as GDPR can provide a 'strong foundation' for the regulation of AI, a more coordinated international effort is 'critical' to making global standards work. The CEO said that history is full of examples of how 'technology's virtues aren't guaranteed' and that with technological innovations come side effects. These range from internal combustion engines, which allowed people to travel beyond their own areas but also caused more accidents, to the internet, which helped people connect but also made it easier for misinformation to spread. These lessons teach us 'we need to be clear-eyed about what could go wrong' in the development of AI-based technologies, he said.