In an address to the UN, which included a briefing by the Campaign to Stop Killer Robots, Toby Walsh, professor of Artificial Intelligence at the University of New South Wales, highlighted the steps needed to secure a ban on fully autonomous weapons. Bonnie Docherty, who represents Human Rights Watch and Harvard Law School's International Human Rights Clinic, co-authored a report this week detailing the dangers of fully autonomous weapons. "We are pleased that the countries at this major disarmament forum have agreed to formalize discussions on lethal autonomous weapons systems, which should be an important step on the road to a ban."
Human Rights Watch issued a report Friday urging a ban on the development of fully autonomous weapons. Human Rights Watch also contended that removing the human element of warfare raises serious moral issues, saying a lack of empathy would exacerbate unlawful and unnecessary violence. To prevent the development of fully autonomous weapons, Human Rights Watch recommended comprehensive, legally binding restrictions at the international and national levels, as well as formal discussions on the matter at the Fifth Review Conference of the Convention on Conventional Weapons in Geneva. In response to growing concern over the pursuit of fully autonomous technology, the White House issued a report in October praising the development of artificial intelligence and emphasizing its potential role in future day-to-day life.
Researchers from the University of Sheffield, the University of Pennsylvania and University College London programmed a machine-learning system to analyse text from cases heard at the European Court of Human Rights (ECtHR) and predict the outcome of the judicial decision. "We don't see AI replacing judges or lawyers, but we think they'd find it useful for rapidly identifying patterns in cases that lead to certain outcomes," explained Dr Nikolaos Aletras, who led the study at UCL Computer Science. The team of computer and legal scientists extracted case information published by the ECtHR in its openly accessible database. The researchers identified English language data sets for 584 cases relating to Articles 3, 6 and 8 of the Convention and applied an AI algorithm to find patterns in the text.
"The court has a huge queue of cases that have not been processed and it's quite easy to say if some of them have a high probability of violation, and others have a low probability of violation," said Vasileios Lampos, also a UCL scientist and co-author of the study. To build the system, the scientists fed a database of court decisions into a natural language processing model.
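Lampos's triage idea can be sketched as sorting a queue of pending cases by a model's predicted violation probability, so that likely violations are processed first. The case IDs and probabilities below are invented for illustration; a real system would obtain the scores from a trained text classifier:

```python
# Sketch: triage a backlog of cases by predicted violation probability.
# Case IDs and scores are invented; a trained classifier would supply them.

def triage(queue):
    """Return cases sorted from most to least likely violation."""
    return sorted(queue, key=lambda case: case["p_violation"], reverse=True)

pending = [
    {"case_id": "A-1", "p_violation": 0.12},
    {"case_id": "B-2", "p_violation": 0.91},
    {"case_id": "C-3", "p_violation": 0.55},
]

for case in triage(pending):
    print(case["case_id"], case["p_violation"])
```

The court would then review high-scoring cases first, rather than strictly in order of arrival.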
A group of researchers from University College London (UCL), the University of Sheffield, and the University of Pennsylvania created an artificial intelligence system to judge 584 human rights cases and recently released its findings. The cases analyzed by the AI method had previously been heard at the European Court of Human Rights (ECtHR) and were equally divided into violation and non-violation cases to prevent bias. Basing its judgment on the case text, the AI judge managed to predict the decisions on the cases with 79% accuracy. Dr. Nikolaos Aletras of UCL Computer Science, who led the study, thinks that the AI method can be used as a tool for determining which cases might be violations of the European Convention on Human Rights.
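The balanced design matters for interpreting the headline number: with equal numbers of violation and non-violation cases, a trivial classifier that always predicts "violation" can score no better than 50%, so 79% reflects genuine signal in the text. A minimal sketch of that comparison (the toy labels below are invented):

```python
# Sketch: why a balanced test set makes 79% accuracy meaningful.
# With equal violation (1) and non-violation (0) cases, the constant
# "always predict violation" baseline is capped at 50% accuracy.

def accuracy(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy balanced label set: half violations, half non-violations.
labels = [1, 1, 1, 1, 0, 0, 0, 0]

baseline = [1] * len(labels)  # always predict "violation"
print(accuracy(baseline, labels))
```

On an unbalanced set, by contrast, a constant prediction could score highly without learning anything from the case text.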
A team of computer and legal scientists from the UK worked alongside Daniel Preoțiuc-Pietro – a postdoctoral researcher in natural language processing and machine learning from the University of Pennsylvania – to extract case information published by the ECtHR. They identified English language data sets for 584 cases relating to Articles 3, 6 and 8 of the Convention. Article 3 forbids torture and inhuman and degrading treatment (250 cases); Article 6 protects the right to a fair trial (80 cases) and Article 8 provides a right to respect for one's "private and family life, his home and his correspondence" (254 cases). They then applied an AI algorithm to find patterns in the text.
In this paper, our particular focus is on the automatic analysis of cases of the European Court of Human Rights (ECtHR or Court). Our task is to predict whether a particular Article of the Convention has been violated, given textual evidence extracted from a case, which comprises specific parts pertaining to the facts, the relevant applicable law and the arguments presented by the parties involved. Accordingly, in the discussion we highlight ways in which automatically predicting the outcomes of ECtHR cases could potentially provide insights on whether judges follow a so-called legal model (Grey, 1983) of decision making or their behavior conforms to the legal realists' theorization (Leiter, 2007), according to which judges primarily decide cases by responding to the stimulus of the facts of the case. It can also be used to develop prior indicators for diagnosing potential violations of specific Articles in lodged applications and eventually prioritise the decision process on cases where violation seems very likely.
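The task described above is binary classification: map a case's textual evidence to "violation" or "no violation" of a given Article. The study itself used a supervised classifier over n-gram and topic features; the sketch below substitutes a much simpler nearest-centroid comparison over bag-of-words vectors, with invented toy snippets standing in for case text, purely to illustrate the shape of the task:

```python
from collections import Counter

def bag_of_words(text):
    """Word-count vector for a lower-cased text."""
    return Counter(text.lower().split())

def similarity(a, b):
    """Unnormalised overlap between two word-count vectors."""
    return sum(a[w] * b[w] for w in a)

def predict(case_text, violation_texts, non_violation_texts):
    """Label a case by which class's pooled text it overlaps with more."""
    v_centroid = sum((bag_of_words(t) for t in violation_texts), Counter())
    n_centroid = sum((bag_of_words(t) for t in non_violation_texts), Counter())
    case = bag_of_words(case_text)
    if similarity(case, v_centroid) >= similarity(case, n_centroid):
        return "violation"
    return "no violation"

# Invented toy snippets standing in for case text.
violations = ["applicant subjected to degrading treatment",
              "torture in detention"]
non_violations = ["trial conducted fairly",
                  "no interference with family life"]

print(predict("degrading treatment in detention", violations, non_violations))
```

A real system would train per-Article classifiers and output calibrated probabilities, which is what makes the prioritisation of likely-violation applications possible.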
An artificial intelligence algorithm has predicted the outcome of human rights trials with 79 percent accuracy, according to a study published today in PeerJ Computer Science. Developed by researchers from University College London (UCL), the University of Sheffield, and the University of Pennsylvania, the system is the first of its kind trained solely on case text from a major international court, the European Court of Human Rights (ECtHR). "Our motivation was twofold," co-author Vasileios Lampos of UCL Computer Science told Digital Trends. The algorithm analyzed texts from nearly 600 cases related to human rights issues including fair trials, torture, and privacy in an effort to identify patterns.
The judicial decisions of the European Court of Human Rights (ECtHR) have been predicted to 79% accuracy using an artificial intelligence (AI) method developed by researchers at UCL, the University of Sheffield and the University of Pennsylvania. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights," explained Dr Nikolaos Aletras, who led the study at UCL Computer Science. "Ideally, we'd test and refine our algorithm using the applications made to the court rather than the published judgements, but without access to that data we rely on the court-published summaries of these submissions," explained co-author Dr Vasileios Lampos of UCL Computer Science. They identified English language data sets for 584 cases relating to Articles 3, 6 and 8 of the Convention and applied an AI algorithm to find patterns in the text.
Artificial Intelligence can predict the outcomes of European Court of Human Rights trials to a high accuracy, according to research published today. It can judge the final result of legal trials based on the information in human rights cases with 79 per cent accuracy. "It could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights," said Dr Nikolaos Aletras, lead author of the research and a researcher at the Department of Computer Science at University College London. The software uses natural language processing and machine learning to analyse case information from both sides, Aletras told The Register.