Civil Rights & Constitutional Law


AI Predicts Human Rights Trial Outcomes With 79% Accuracy - RealClearLife

#artificialintelligence

Artificial intelligence can now predict the outcomes of human rights cases with 79% accuracy, part of a growing trend of computer-driven technology being applied in novel ways. The accuracy is impressive, but many are concerned it could lead to eliminating human judgment from the rule of law. By scanning court documents, the algorithm was able to anticipate the judicial decisions made in European Court of Human Rights (ECHR) trials with relative precision. The results come from a study by an international group, published in the academic journal PeerJ Computer Science. Led by Nikolaos Aletras, a Research Associate in University College London's Computer Science Department, the team of researchers argued that the machine-generated analysis offers insight into important aspects of the judicial system, such as divisions in legal interpretation among ECHR judges.
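A minimal sketch of the classification task the study describes, assuming a scikit-learn-style pipeline with invented placeholder cases; the paper's actual setup (word n-grams and topic features fed to a support vector machine, evaluated by cross-validation) is only approximated here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the case documents; the real study used the full
# published text of 584 ECHR judgments.
case_texts = [
    "applicant alleged treatment contrary to article 3 ...",
    "the court finds no violation of article 8 ...",
    "detention conditions amounted to degrading treatment ...",
    "interference with family life was justified and proportionate ...",
]
labels = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

# Word n-grams as features, a linear SVM as the classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 4)),
    LinearSVC(),
)
model.fit(case_texts, labels)
print(model.predict(["alleged degrading treatment in detention"]))
```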


This AI predicts the outcome of human rights trials

#artificialintelligence

"The court has a huge queue of cases that have not been processed and it's quite easy to say if some of them have a high probability of violation, and others have a low probability of violation," said Vasileios Lampos, also a UCL scientist and co-author of the study. To do this, the scientists fed a database of court decisions into a natural language processing neural network. He has written on culture, politics, travel, tech, business, human rights, for local, national, and international news services and magazines. He has a keen interest in the role technology is playing in the transformation of society, culture and politics, especially in developing nations.


AI judge predicts human rights rulings with 79% accuracy rate

#artificialintelligence

A group of researchers from University College London (UCL), the University of Sheffield, and the University of Pennsylvania built an artificial intelligence system to judge 584 human rights cases and recently released its findings. The cases analyzed by the AI method had previously been heard at the European Court of Human Rights (ECHR) and were split evenly between violation and non-violation outcomes to prevent bias. Basing its judgment on the case text alone, the AI judge predicted the decisions with 79% accuracy. Team leader Dr. Nikolaos Aletras, also of UCL Computer Science, thinks the method could be used as a tool to identify which cases are likely to involve violations of the European Convention on Human Rights.
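The even split described above can be reproduced by downsampling the larger class. A small sketch with hypothetical case lists:

```python
import random

# Hypothetical pools of decided cases as (text, label) pairs.
violations = [("case a ...", 1), ("case b ...", 1), ("case c ...", 1)]
non_violations = [("case d ...", 0), ("case e ...", 0)]

# Downsample so both outcomes are equally represented, as the researchers
# did with their 584 cases to avoid a majority-class bias.
n = min(len(violations), len(non_violations))
balanced = random.sample(violations, n) + random.sample(non_violations, n)
random.shuffle(balanced)
print(balanced)
```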


AI judge created by British scientists can predict human rights rulings

#artificialintelligence

The AI system, developed by researchers from University College London, the University of Sheffield, and the University of Pennsylvania, parsed 584 cases that had previously been heard at the European Court of Human Rights (ECHR) and successfully predicted 79 percent of the decisions. A machine learning algorithm was trained to search for patterns in English-language datasets relating to three articles of the European Convention on Human Rights: Article 3, concerning torture and inhuman and degrading treatment; Article 6, which protects the right to a fair trial; and Article 8, on the right to a private and family life. The system could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention on Human Rights. "Previous studies have predicted outcomes based on the nature of the crime, or the policy position of each judge, so this is the first time judgments have been predicted using analysis of text prepared by the court," said UCL's Vasileios Lampos.
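A rough illustration of the per-article n-gram extraction described here, using invented case texts; the study reportedly built a separate dataset and model for each convention article:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Invented case texts grouped by the convention article they concern.
cases_by_article = {
    "Article 3": ["alleged torture during police custody ..."],
    "Article 6": ["length of the criminal proceedings was excessive ..."],
    "Article 8": ["secret surveillance of private correspondence ..."],
}

# Count word n-grams of the kind the algorithm searched for patterns in.
for article, texts in cases_by_article.items():
    vectorizer = CountVectorizer(ngram_range=(1, 3))
    features = vectorizer.fit_transform(texts)
    print(article, features.shape[1], "n-gram features")
```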


AI predicts outcome of human rights cases - BBC News

#artificialintelligence

An artificial intelligence system has correctly predicted the outcomes of hundreds of cases heard at the European Court of Human Rights, researchers have claimed. But critics said no AI would be able to understand the nuances of a legal case. The cases, which concerned three articles of the European Convention on Human Rights, were picked both because they represented disputes about fundamental rights and because there was a large amount of published data on them. Increasingly, law firms are turning to AI to help them wade through vast amounts of legal data.


Google's Brain Team: 'AIs can be racist and sexist but we can change that'

ZDNet

But as three Google researchers note in a new paper, there is currently no vetted methodology for avoiding discrimination against sensitive attributes in machine learning. One proposed approach, called "demographic parity", would require a prediction to be uncorrelated with the sensitive attribute, but co-author Moritz Hardt argues that in the case of predicting medical conditions such as heart failure it is "neither realistic nor desirable to prevent all correlation between the predicted outcome and group membership". According to Hardt, the team's methodology not only can measure and prevent discrimination based on sensitive attributes but can also help scrutinize predictors. "When implemented, our framework also improves incentives by shifting the cost of poor predictions from the individual to the decision maker, who can respond by investing in improved prediction accuracy," writes Hardt.
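The two criteria contrasted above are easy to state concretely. A small sketch with invented predictions, outcomes, and a binary sensitive attribute: demographic parity compares positive-prediction rates across groups regardless of the true outcome, while the equalized-odds criterion from Hardt and colleagues' paper compares true- and false-positive rates conditioned on it.

```python
import numpy as np

# Invented predictions, true outcomes, and a binary sensitive attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Demographic parity: positive-prediction rates should match across groups.
for g in (0, 1):
    rate = y_pred[group == g].mean()
    print(f"group {g} positive rate: {rate:.2f}")

# Equalized odds: true-positive and false-positive rates should match
# across groups, conditioning on the true outcome.
for g in (0, 1):
    mask = group == g
    tpr = y_pred[mask & (y_true == 1)].mean()
    fpr = y_pred[mask & (y_true == 0)].mean()
    print(f"group {g} TPR={tpr:.2f} FPR={fpr:.2f}")
```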