According to scientists and legal experts responding to the bank's warning this November, the development of intelligent algorithms urgently needs to be put on the political agenda. Top of the agenda, as far as Lightfoot is concerned, is the economic impact if AI eliminates large numbers of jobs and the incomes that go with them: how will people make a living, and what will they do? Professor Toby Walsh, an expert in AI at Australia's University of New South Wales and a prominent campaigner against the use of AI in military weapons, says this concern is justified and needs to be urgently considered. However, Professor Walsh and fellow AI expert Murray Shanahan, Professor of Cognitive Robotics at London's Imperial College, were wary of calls for regulation of the sector, which they said would inhibit research. According to Professor Walsh, scientists working in AI have already begun to exercise a degree of self-control over the exploitation of their discoveries; what needs attention now are the ramifications of the technology.
SYDNEY - Scientists from around the world have called on the United Nations (UN) to take action to stop the proliferation of "killer robots". At the International Joint Conference on Artificial Intelligence in Melbourne on Monday, technology leaders requested that the development of weaponry using artificial intelligence be halted, warning that "once this Pandora's box is opened, it will be hard to close." In an open letter to the UN, the scientists and business leaders, including world-renowned AI expert Toby Walsh, Elon Musk of Tesla, and James Chow of China's UBTECH, called for lethal autonomous weapons, or "killer robots", to be outlawed in much the same way as chemical and biological weapons on the battlefield. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the letter warned.
The artificial intelligence (AI) developed by Chinese company Tencent beat world number-two Go player Ke Jie last week with a two-stone handicap, the official People's Daily newspaper reported. Handicaps are used in Go to even out differences in skill between players. Google's AlphaGo AI beat Ke last year, just months after defeating fellow grandmaster Lee Se-dol of South Korea; however, AlphaGo never competed against top-level players with a handicap. AlphaGo has since been retired, with Google focusing its energies instead on its self-teaching AlphaGo Zero machine, which mastered the complex game in 40 days last year. Tencent drew on research papers on AlphaGo Zero released publicly by Google to create its own champion, and its victory is a sign of just how seriously China is taking the race for AI supremacy.
Campaigners are renewing calls for a pre-emptive ban on so-called "killer robots" as representatives of more than 80 countries meet to discuss autonomous weapons systems. The use of lethal autonomous weapons systems (LAWS) is "a step too far", said Mary Wareham, the global coordinator of the Campaign to Stop Killer Robots. "They cross a moral line, because we would see machines taking human lives on the battlefield or in law enforcement," she said. "We want weapon systems and the use of force to remain under human control." Wareham spoke to Al Jazeera before Monday's meeting in Geneva, Switzerland, on a possible ban on LAWS.
Robots probably won't kill people, but people could kill people with robots. That is the concern of an open letter signed by scientists and other interested parties, including Elon Musk, Steve Wozniak and Stephen Hawking. "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter warns. The Future of Life Institute, the volunteer-backed research organization that posted the letter, aims to "maximize the future benefits of AI while avoiding pitfalls," according to its website.