An AI judge has accurately predicted most verdicts of the European Court of Human Rights, and might soon be making important decisions about cases. Scientists built an artificially intelligent system that could weigh legal evidence as well as ethical questions in order to determine how a case should be decided, and it predicted the Court's verdicts with 79 per cent accuracy, according to its creators. The algorithm looked at data sets made up of 584 cases relating to torture and degrading treatment, fair trials and privacy. The computer worked through that information and reached its own decision, which lined up with those made by Europe's most senior judges in almost every case.
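The article does not describe the researchers' actual model, but verdict prediction of this kind is typically framed as text classification: represent each case as a bag of word features and score it against each possible outcome. The sketch below is a deliberately minimal, invented illustration of that idea (the tokenizer, class labels, and scoring rule are all assumptions, not the published method):

```python
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a crude stand-in for richer n-gram features.
    return text.lower().split()

def train_centroids(cases):
    # cases: list of (case_text, verdict) pairs,
    # e.g. verdict in {"violation", "no-violation"}.
    # Build one aggregate bag-of-words vector (a Counter) per verdict class.
    centroids = {}
    for text, verdict in cases:
        centroids.setdefault(verdict, Counter()).update(tokenize(text))
    return centroids

def predict(centroids, text):
    # Score each class by token overlap with its centroid and
    # return the best-scoring verdict.
    tokens = Counter(tokenize(text))
    def score(verdict):
        c = centroids[verdict]
        return sum(min(tokens[w], c[w]) for w in tokens)
    return max(centroids, key=score)

cases = [
    ("applicant subjected to torture and degrading treatment", "violation"),
    ("fair trial observed and privacy rights respected", "no-violation"),
]
model = train_centroids(cases)
print(predict(model, "torture and degrading treatment reported"))  # → violation
```

A real system would use far richer features and a trained classifier; the point here is only the pipeline shape — cases in, per-class scores out, highest score wins.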
Human Rights Watch issued a report Friday urging a ban on the development of fully autonomous weapons. The 49-page report, entitled "Making the Case: The Dangers of Killer Robots and the Need for a Preemptive Ban", detailed the various dangers of creating weapons that could think for themselves, including the concern that self-operating defense technology would remove the human element from warfare on which most clauses of military law depend. The organization argued machines would not be able to responsibly make the same complex tactical decisions involved in warfare and that existing laws did not adequately cover their use in combat. Human Rights Watch also contended that removing the human element of warfare raised serious moral issues, saying a lack of empathy would exacerbate unlawful and unnecessary violence. The organization warned that such technology could be used by authoritarian leaders as a means of controlling and subjugating populations without fear of revolt.
In 1942, Russian-born American science fiction writer Isaac Asimov drew up his 'Three Laws of Robotics': 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law; 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. For 75 years, these clauses have inspired research and thinking on robot rights. They were even taken up in the first 'Ethics Charter for Robots', drawn up by South Korea in 2007. However, today Asimov's laws seem rather simplistic and obsolete, as they are centred on humans rather than robots. Nowadays ethics and the rights of machines are starting to go further.
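The Three Laws form a strict priority hierarchy: each law binds only insofar as it does not conflict with the laws above it. As a toy illustration of that ordering (every field name below is invented for this sketch, not anything from Asimov or the Korean charter):

```python
def permissible(action):
    """Check an action against a toy priority-ordered encoding of the Three Laws.

    `action` is a dict of boolean flags; missing flags default to False.
    """
    # First Law: never harm a human, by act or by inaction.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws.
    if action.get("destroys_self") and not (
        action.get("needed_to_obey_order") or action.get("needed_to_protect_human")
    ):
        return False
    return True

print(permissible({"harms_human": True}))                                   # → False
print(permissible({"disobeys_order": True, "order_would_harm_human": True}))  # → True
```

The second example shows the precedence at work: disobeying an order is permitted precisely because the order would breach the higher-priority First Law. Real-world critiques of the laws largely target what this sketch takes for granted — that flags like "harms_human" could ever be computed reliably.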
Likewise, I am convinced that the revolution in artificial intelligence will enable computers and robots to do many of the tasks that white-collar workers now do. It's not surprising, therefore, that many people are worried about the fate of those whose jobs are vulnerable -- or have already been lost -- to the latest disruptive technology. What will happen to the millions of men and women who now drive trucks and taxis when the trucks and taxis can drive themselves? What will happen to the accountants and health workers when computers can do their jobs? Some analysts have estimated that, with many fewer employees needed to produce the current volume of goods and services, a large share of current employment could be made redundant.
"We have a profound ethical responsibility to design systems that have a positive impact on society, obey the law, and adhere to our highest ethical standards." – Oren Etzioni. On the impact of Artificial Intelligence (AI) on society, I have interviewed Oren Etzioni, Chief Executive Officer of the Allen Institute for Artificial Intelligence. What is the mission of the Allen Institute for AI (AI2)? Oren Etzioni: Our mission is to contribute to humanity through high-impact AI research and engineering. AI2 is the creation of Paul Allen, Microsoft co-founder, and you are its lead.