Could China Develop Killer Robots in the Near Future? Experts Fear So

#artificialintelligence

Russia started sabotaging the discussion from the very first session. Throughout the morning of Aug. 21, its diplomats at the United Nations in Geneva took the floor, nitpicking language in a document meant to pave the way for an eventual ban on lethal autonomous weapons, also known as killer robots, an emerging category of weapons that would be able to fight on their own and decide who to target and kill. "They were basically trying to waste time," says Laura Nolan of the International Committee for Robot Arms Control, who watched with frustration in the hall. But while Russia vigorously worked to derail progress, it had a quieter partner: China. "I very much get the impression that they're working together in some way," says Nolan. "[The Chinese] are letting the Russians steamroll the process, and they're happy to hang back."


China's brightest teens are studying AI weapons so Beijing can 'lead the war game'

Daily Mail - Science & tech

Some of China's smartest high school graduates have been recruited to study the manufacturing of AI weaponry to keep Beijing ahead in the war game. The Chinese teenagers are studying at the Beijing Institute of Technology, a top university in the country specialising in engineering and national defence. The class, unveiled last month, comprises 31 students who were selected based on their academic achievements and their level of patriotism, according to the school. AI weapons, called 'killer robots' by some, generally refer to autonomous weapons that select, engage and eliminate human targets without human involvement. They have been described as the third revolution in warfare - after gunpowder and nuclear arms - and remain controversial because of the ethical questions they raise.


Tech leaders call for autonomous weapons ban

Al Jazeera

Thousands of the world's pre-eminent technology experts called for a global ban on the development of lethal autonomous weapons, warning they could become instruments of "violence and oppression". More than 2,400 individuals and 150 companies from 90 different countries vowed to play no part in the construction, trade, or use of autonomous weapons in a pledge signed on Wednesday at the 2018 International Joint Conference on Artificial Intelligence in Stockholm, Sweden. Elon Musk, CEO of SpaceX and Tesla, and representatives of Google's DeepMind subsidiary were among supporters of the pledge. "The decision to take a human life should never be delegated to a machine," a statement said. "Lethal autonomous weapons - selecting and engaging targets without human intervention - would be dangerously destabilising for every country and individual."


Why AI researchers shouldn't turn their backs on the military

#artificialintelligence

More than 2,400 AI researchers recently signed a pledge promising not to build so-called autonomous weapons - systems that would decide on their own whom to kill. This follows Google's decision not to renew a contract to supply the Pentagon with AI for analysis of drone footage, after the company came under pressure from many employees opposed to its work on a project known as Maven. Paul Scharre, the author of a new book, Army of None: Autonomous Weapons and the Future of War, believes that AI researchers need to do more than opt out if they want to bring about change. A former Army Ranger who served in Iraq and Afghanistan and is now a senior fellow at the Center for a New American Security, Scharre argues that AI experts should engage with policymakers and military professionals to explain why researchers are concerned and to help them understand the limitations of AI systems. Scharre spoke with MIT Technology Review senior editor Will Knight about the best way to halt a potentially dangerous AI arms race.


Elon Musk says AI poses bigger threat than North Korea and could trigger World War Three

The Independent - Tech

Elon Musk has warned that competition for superiority in the world of artificial intelligence could trigger World War III. His tweet followed a statement from Russian President Vladimir Putin that "artificial intelligence is the future, not only for Russia, but for all humankind … It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world." Musk said he was less concerned about the threat of a nuclear missile strike from North Korea, adding that any such action would be "suicide". Musk has long been a vocal opponent of lethal autonomous weapons.