Toby Walsh, a professor of AI at the University of New South Wales in Sydney, told CNBC the dangers have only "become nearer and more serious" since the letter was published. "Autonomous weapons must be regulated," he said.

The Future of Life Institute, a non-profit research institute in Boston, Massachusetts, said last month that there are many positive military applications for AI, but "delegating life and death decisions to autonomous weapon systems is not one of them."

The institute pointed out that autonomous drones could be used for reconnaissance missions to avoid putting troops in danger, while AI could also power defensive anti-missile systems that detect, target, and destroy incoming threats without a human command. "Neither application involves a machine selecting and attacking humans without an operator's green light," it said.
Mar-6-2021, 13:35:24 GMT