Government & the Courts

AI Judges and Juries

Communications of the ACM

When the head of the U.S. Supreme Court says artificial intelligence (AI) is having a significant impact on how the legal system in this country works, you pay attention. That's exactly what happened when Chief Justice John Roberts was asked the following question: "Can you foresee a day when smart machines, driven with artificial intelligences, will assist with courtroom fact-finding or, more controversially even, judicial decision-making?" His answer startled the audience. "It's a day that's here and it's putting a significant strain on how the judiciary goes about doing things," he said, as reported by The New York Times. In the last decade, the field of AI has experienced a renaissance.

Man named Brett Kavanagh complains about having name like SCOTUS judge

Daily Mail

Sharing a name with a famous person can prompt endless jokes and comments -- and in these politically charged times, having the same name as a political figure can be especially tiresome. That's something a young man from Kentucky named Brett Kavanagh has learned only too well in recent weeks. On Friday, Brett, 27, complained about the recent woes of having his name, prompting others with famous names to commiserate: women named Siri and Alexa, and men named Michael Jackson, Bruce Lee, and Steve Bannon (a Scot, not Breitbart's Steve Bannon), all tweeted about how hard it is to have a well-known name. This Brett, who works in customer service and lives in Louisville, spells his last name differently from new Supreme Court Justice Brett Kavanaugh, but their nearly identical names have still caused him some trouble. 'This is a terrible time to be named Brett Kavanagh,' he tweeted.

European Court of Human Rights Open Data Project

Machine Learning

This paper presents thirteen datasets for binary, multiclass, and multilabel classification based on the European Court of Human Rights judgments since the court's creation. The interest of such datasets is explained through the prism of the researcher, the data scientist, the citizen, and the legal practitioner. Contrary to many datasets, the creation process, from the collection of raw data to the feature transformation, is provided as a collection of fully automated, open-source scripts. This ensures reproducibility and a high level of confidence in the processed data, which are among the most important issues in data governance today.
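The difference between the three task types comes down to the shape of the target labels. The toy sketch below illustrates this; the article names and label values are invented for illustration and are not taken from the actual ECHR-OD datasets or their field names.

```python
# Three label formats for the same judgment (all values invented):

# Binary: a single yes/no target, e.g. "was any violation found?"
binary_label = 1                      # 1 = violation found, 0 = none

# Multiclass: exactly one outcome class per judgment.
multiclass_label = "violation"        # one of {"violation", "no-violation"}

# Multilabel: each Convention article is an independent binary target,
# so one judgment can be positive for several articles at once.
articles = ["art3", "art6", "art8"]   # hypothetical subset of articles
multilabel = {"art3": 0, "art6": 1, "art8": 1}

# The multilabel target as a vector aligned with `articles`:
y = [multilabel[a] for a in articles]
print(y)  # [0, 1, 1]
```

A binary classifier needs one output; a multiclass model picks one class from several; a multilabel model effectively trains one binary decision per article.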

Ford gives scientific explanation for her memory of alleged Kavanaugh incident

FOX News

Dr. Christine Blasey Ford responds to a question from Sen. Dianne Feinstein during testimony before the Senate Judiciary Committee on her sexual assault allegations against Supreme Court nominee Brett Kavanaugh. Christine Blasey Ford gave a detailed scientific explanation for her memory of the alleged incident involving Supreme Court nominee Judge Brett Kavanaugh at her highly anticipated Senate testimony Thursday. Senate Judiciary Committee Ranking Member Dianne Feinstein, D-Calif., pressed Ford over her level of certainty that it was, in fact, Kavanaugh who allegedly pinned her down 36 years ago, while in high school, and attempted to remove her clothing. "How are you so sure that it was he?" Feinstein asked. Ford, a California-based psychology professor, laid out a detailed scientific explanation.

Brett Kavanaugh Has Some Alarmingly Outdated Views on Privacy


Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. Starting in 2012, the Supreme Court's approach to digital privacy has undergone a seismic shift. In a series of recent cases on location tracking and cellular phone searches, the court has recognized that, when it comes to big data, old rules about our expectations of privacy may not apply. Because information can now be gathered, stored, and analyzed cheaply, the Supreme Court has recently found that Fourth Amendment protections must be carefully recalibrated to prevent unchecked police power. Supreme Court nominee Brett Kavanaugh, however, has exhibited a contrasting and outdated understanding of privacy.

Amazon's Face-Scanning Surveillance Software Contrasts With Its Privacy Stance

WSJD - Technology

Face recognition is a stark example of a technology that is being deployed faster than society and the law can adopt new norms and rules. It lets governments and private enterprise track citizens anywhere there is a camera, even if they're not carrying any devices. In general, people who are in public don't have any legal expectation of privacy and can be photographed or recorded. Because of this, the technology has the potential to be more intrusive than phone tracking, the legality of which the U.S. Supreme Court will soon decide. There are only two states, Texas and Illinois, that limit private companies' ability to track people via their faces.

The Ethical Implications Of Artificial Intelligence


Artificial intelligence is transforming the legal profession -- and that includes legal ethics. AI and similar cutting-edge technologies raise many complex ethical issues and challenges that lawyers ignore at their peril. At the same time, AI also holds out the promise of helping lawyers to meet their ethical obligations, serve their clients more effectively, and promote access to justice and the rule of law. What does AI mean for legal ethics, what should lawyers do to prepare for these changes, and how could AI help improve the legal profession? Together with our partners at Thomson Reuters, we at Above the Law have been examining these important subjects.

When Software Rules: Rule of Law in the Age of Artificial Intelligence


Artificial Intelligence (AI) is changing how our society operates. AI now helps make judicial decisions and medical diagnoses, and it drives cars. The use of AI in our society also has important environmental implications: AI can improve resource use and energy efficiency, predict extreme weather events, and aid in scientific research. But while AI has the potential to improve human interaction with the environment, it can also exacerbate existing environmental issues.

What do AI and blockchain mean for the rule of law?


Digital services have frequently been in collision -- if not out-and-out conflict -- with the rule of law. But what happens when technologies such as deep learning software and self-executing code are in the driving seat of legal decisions? How can we be sure next-gen 'legal tech' systems are not unfairly biased against certain groups or individuals? And what skills will lawyers need to develop to be able to properly assess the quality of the justice flowing from data-driven decisions? While entrepreneurs have been eyeing traditional legal processes for some years now, with a cost-cutting gleam in their eye and the word 'streamline' on their lips, this early phase of legal innovation pales in significance beside the transformative potential of AI technologies that are already pushing their algorithmic fingers into legal processes -- and perhaps shifting the line of the law itself in the process.

Quality Measures of Binary Classifications in the Field of Fourth-Generation Crime-Forecasting Instruments

Machine Learning

This master's thesis discusses an important issue in how algorithmic decision making (ADM) is used in crime forecasting. In the United States, forecasting tools are widely used by the judiciary to make decisions about the risk posed by offenders; by using such tools, the judiciary relies on ADM to reach sound judgments. Given how critical these judgments are, and how heavily they depend on ADM tools, it is necessary to evaluate the risk tools that aid in decision making. For this purpose, a widely used quality measure for machine learning techniques, the $AUC$ (area under the curve), is compared and contrasted with the $PPV_k$ (positive predictive value among the $k$ highest-risk cases). The evaluation is conducted for a common machine learning approach, the binary classifier, since the underlying juristic question has a binary outcome. The thesis shows that the $PPV_k$ models the decisions of judges much better than the $AUC$, and therefore investigates whether there exists a classifier for which the $PPV_k$ deviates from the $AUC$ by a large margin; it is shown that the deviation can rise to 0.75. To test this deviation on a classifier already in use, data from the fourth-generation risk assessment tool COMPAS was used. The results were alarming: the two measures deviate from each other by 0.48. The thesis thus conducts and carefully reviews a risk assessment evaluation of these forecasting tools, and additionally discusses whether systems used for such decisions should be socially accepted.
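The gap between the two measures is easy to reproduce on toy data. The sketch below uses invented scores and pure-Python implementations of both measures; it is not the thesis's actual evaluation (which used COMPAS data and its own exact definitions), only an illustration of how a classifier can rank well overall yet fail among the very cases a judge would act on.

```python
# AUC vs PPV@k on synthetic risk scores (all numbers invented).

def auc(scores, labels):
    # AUC as the probability that a randomly chosen positive case
    # is scored higher than a randomly chosen negative case
    # (ties count as half a win).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ppv_at_k(scores, labels, k):
    # Positive predictive value among the k highest-risk individuals:
    # the fraction of the top-k who are actual positives.
    top = sorted(zip(scores, labels), key=lambda t: -t[0])[:k]
    return sum(y for _, y in top) / k

# One false positive sits at the very top of the ranking, so the
# overall ranking looks decent while the top of the list is wrong.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
labels = [0,   1,   1,   1,   1,   0,   0,   0]

print(auc(scores, labels))        # 0.75
print(ppv_at_k(scores, labels, 1))  # 0.0
```

Here the AUC of 0.75 suggests a usable classifier, yet among the top-ranked "high risk" case the positive predictive value is 0: the two measures diverge by 0.75, which is why a PPV-style measure can track judicial decisions very differently than AUC does.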