Protracted negotiations within the United Nations are causing 'killer robots' to move a step closer to reality, an expert has warned. The UK and US are currently attempting to water down a preemptive ban on autonomous weapons at the UN General Assembly in New York. But this is delaying an agreement on banning the technology, which could allow nations to possess killer robots before any laws come into force. This was the stark warning made by Christof Heyns, the UN special rapporteur on extrajudicial, summary or arbitrary executions, in an in-depth article for The Guardian.
Will we ever discover an alien civilisation? There have been plenty of films about alien takeovers and otherworldly life forms, from Alien to District 9, but according to British physicist Brian Cox it is unlikely to happen in reality. Professor Cox, best known for presenting Stargazing Live and Wonders of the Universe, has admitted he believes our search is unlikely to yield results because intelligent life destroys itself not long after it evolves. It is one of astronomy's great mysteries: given the estimated 200bn-400bn stars and at least 100bn planets in our galaxy, why are there no signs of alien intelligence? According to The Sunday Times, Professor Cox suggests that the rate of advances in science and engineering in any alien civilisation may outstrip the development of the political institutions needed to manage them, leading to self-destruction.
An open letter calling for a ban on lethal weapons controlled by artificially intelligent machines was signed last week by thousands of scientists and technologists, reflecting growing concern that swift progress in artificial intelligence could be harnessed to make killing machines more efficient, and less accountable, both on the battlefield and off. But experts are more divided on the issue of robot killing machines than you might expect. The letter, presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, was signed by many leading AI researchers as well as prominent scientists and entrepreneurs including Elon Musk, Stephen Hawking, and Steve Wozniak. It warned: "Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is, practically if not legally, feasible within years not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms." Rapid advances have indeed been made in artificial intelligence in recent years, especially within the field of machine learning, which involves teaching computers to recognize often complex or subtle patterns in large quantities of data.
A pledge against the use of autonomous weapons was signed in July by over 2,400 individuals working in artificial intelligence (AI) and robotics, representing 150 companies from 90 countries. The pledge, signed at the 2018 International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm and organised by the Future of Life Institute, called on governments, academia, and industry to "create a future with strong international norms, regulations, and laws against lethal autonomous weapons". The institute defines lethal autonomous weapons systems, also known as "killer robots", as weapons that can identify, target, and kill a person without a human "in the loop". Arkin told D61 Live on Wednesday that instead of banning autonomous systems in war zones, they should be guided by strong legal and legislative directives. Citing a recent survey of 27,000 people by the European Commission, Arkin said 60 percent of respondents felt that robots should not be used for the care of children, the elderly, and the disabled, even though this is the space most roboticists are playing in.
A UK robotics professor is leading calls for a worldwide ban on autonomous weapons. We can't rely on robots to conform to international law, says Noel Sharkey, who chairs an NGO leading a campaign to "Stop Killer Robots" and warns that autonomous robots could destabilise world security and trigger unintentional wars. As wars become increasingly automated, we must ask ourselves how far we want to delegate responsibility to machines. Where do we want to draw the line? Weapons systems have been evolving for millennia, and there have always been attempts to resist them. But does that mean we should just sit back, accept our fate, and hand over the ultimate responsibility for killing to machines? Over the last few months there has been increasing debate about the use of fully autonomous robot weapons: armed robots that, once launched, can select their own targets and kill them without further human intervention.