May seeks 'safe and ethical' AI tech

#artificialintelligence

The prime minister says she wants the UK to lead the world in deciding how artificial intelligence can be deployed in a safe and ethical manner. In a speech at the World Economic Forum in Davos, Theresa May said a new advisory body, previously announced in the Autumn Budget, will co-ordinate efforts with other countries. In addition, she confirmed that the UK would join the Davos forum's own council on artificial intelligence. But others may have stronger claims. Earlier this week, Google picked France as the base for a new research centre dedicated to exploring how AI can be applied to health and the environment.


Elon Musk is right: we should all be worried about killer robots

#artificialintelligence

Tesla and SpaceX CEO Elon Musk, along with 115 other artificial intelligence and robotics specialists, has signed an open letter urging the United Nations to recognize the dangers of lethal autonomous weapons and to ban their use internationally. There are already numerous weapons, like automatic anti-aircraft guns and drones, that can operate with minimal human oversight; advancing technology will eventually allow them to carry out military functions entirely autonomously. To illustrate why this is a problem, consider the argument with which the UK government opposed a ban on lethal autonomous weapons in 2015: it said that "international humanitarian law already provides sufficient regulation for this area," and that all weapons employed by UK armed forces would be "under human oversight and control." I signed the open letter because the use of AI in autonomous weapons offends my sense of ethics, because it would be likely to lead to a very dangerous escalation, because it would hinder the further development of AI's beneficial applications, and because it is a matter that needs to be handled by the international community, as has been done in the past for other morally wrong weapons (biological, chemical, nuclear).


Sorry, Banning 'Killer Robots' Just Isn't Practical

WIRED

That's not because it's impossible to ban weapons technologies. Some 192 nations have signed the Chemical Weapons Convention banning chemical weapons, for example. But the UK hasn't suggested it would be open to an international agreement banning autonomous weapons. In 2015, the UK government responded to calls for a ban on autonomous weapons by saying there was no need for one, and that existing international law was sufficient.


Elon Musk leads 116 experts calling for outright ban of killer robots

#artificialintelligence

In their letter, the founders warn the review conference of the convention on conventional weapons that this arms race threatens to usher in the "third revolution in warfare" after gunpowder and nuclear arms. This is not the first time the IJCAI, one of the world's leading AI conferences, has been used as a platform to discuss lethal autonomous weapons systems. The UK government said that it was not developing lethal autonomous weapons and that all weapons employed by UK armed forces would be "under human oversight and control". The unmanned combat aerial vehicle, about the size of a BAE Hawk, the plane used by the Red Arrows, had its first test flight in 2013 and is expected to be operational some time after 2030 as part of the Royal Air Force's Future Offensive Air System, destined to replace the human-piloted Tornado GR4 warplanes.