Government & the Courts


New Artificial Intelligence Advisory Body in England and Wales – Bringing the Modern World to the Judiciary

#artificialintelligence

Lord Burnett of Maldon, the current Lord Chief Justice, has set up a new Advisory Body with the aim of ensuring that the Judiciary of England and Wales is fully informed about developments in artificial intelligence (AI). Professor Richard Susskind, President of the Society for Computers & Law, has been named chair of the body, and in a recent interview stated that AI has taken off in the last six or seven years, to the point where it has become "affordable and practical". Professor Susskind believes that the new group will start a dialogue among the judiciary about "one of the most influential technologies that there is", and recognises the importance of judges being open to the opportunities that AI technology could offer to the court system (with "practical tasks" cited as an example). The 10-person team will be made up of both senior judges (including Lord Neuberger, past President of the UK Supreme Court, and Lady Justice Sharp, Vice-President of the Queen's Bench Division) and leading experts on AI and law (such as Professor Katie Atkinson, past President of the International Association for AI and Law). There is little doubt that automation already plays an essential role in the legal profession, for example in large disclosure exercises.


Parsing the Shadow Docket

Slate



A.I. Judges: The Future of Justice Hangs in the Balance

#artificialintelligence

In 1970, Lyudmila Terentyevna Aleksandrova lost her right hand. It happened at work, where she was employed by the Russian state. With her hand gone, she fought for a disability allowance that never materialized, batted about between district and regional courts. Eventually, after decades of frustration, she brought the case to the European Court of Human Rights, which ruled in 2007 that there had been a violation of Aleksandrova's right to a fair trial. Pay the money, it told Russia.


Police across the US are training crime-predicting AIs on falsified data

#artificialintelligence

In May of 2010, prompted by a series of high-profile scandals, the mayor of New Orleans asked the US Department of Justice to investigate the city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review, from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law. It used excessive force, disproportionately against black residents; targeted racial minorities, non-native English speakers, and LGBTQ individuals; and failed to address violence against women. The problems, said assistant attorney general Thomas Perez at the time, were "serious, wide-ranging, systemic and deeply rooted within the culture of the department." Despite the disturbing findings, only a year later the city entered into a secret partnership with data-mining firm Palantir to deploy a predictive policing system.


Why we should fear the imminent prevalence of facial recognition technology - Intelegain

#artificialintelligence

As technology develops, its pervasiveness turns Orwell's fiction into today's reality. Many objections to facial recognition technology concern its enabling of abuse and other corrosive activities: facilitating violence and harassment, falling disproportionately on people of color and vulnerable populations, inviting misuse by authorities, and denying essential rights such as protection against "arbitrary government tracking of one's movements, habits, relationships, interests, and thoughts". But there are further reasons to fear facial recognition technology. Faces are hard to hide or change. They cannot be encrypted like an email or a text; they can be captured from remote cameras at a distance, and they are increasingly easy and inexpensive to obtain and store in the cloud, a feature that in itself stimulates "surveillance creep". Unlike traditional surveillance technologies, which require fresh, expensive hardware or new data sources, the data sources for facial recognition are already deployed in the field, namely body cams and CCTV. There is also a standing legacy of name-and-face databases, such as driver's licenses, social media profiles and mugshots. Matching any such database against individuals caught on camera takes only a few lines of code applied to a body cam or CCTV feed in real time. It is also worth remembering that faces, unlike fingerprints or iris patterns, are central to our identity. It is easy to assume facial privacy matters little, since we show our faces to the world every day, yet we do value it: throughout history, humans have built values and institutions around privacy during periods when it was hard to identify most of the people we did not know, and biological limits and population size and distribution cap the number of faces any of us can recognize.
As Chief Justice John Roberts succinctly put it, "A person does not surrender all Fourth Amendment protection by venturing into the public sphere."


How artificial intelligence can help us make judges less biased

#artificialintelligence

As artificial intelligence moves into the courtroom, much has been written about sentencing algorithms with hidden biases. Daniel L. Chen, a researcher at both the Toulouse School of Economics and the University of Toulouse Faculty of Law, has a different idea: using AI to help correct the biased decisions of human judges. Chen, who holds both a law degree and a doctorate in economics, has spent years collecting data on judges and US courts. "One thing that's been particularly nagging my mind is how to understand all of the behavioral biases that we've found," he says: the human biases, for example, that can tip the scales when a decision is made.


Hunting for Discriminatory Proxies in Linear Regression Models

Neural Information Processing Systems

A machine learning model may exhibit discrimination when used to make decisions involving people. One potential cause for such outcomes is that the model uses a statistical proxy for a protected demographic attribute. In this paper we formulate a definition of proxy use for the setting of linear regression and present algorithms for detecting proxies. Our definition follows recent work on proxies in classification models, and characterizes a model's constituent behavior that: 1) correlates closely with a protected random variable, and 2) is causally influential in the overall behavior of the model. We show that proxies in linear regression models can be efficiently identified by solving a second-order cone program, and further extend this result to account for situations where the use of a certain input variable is justified as a "business necessity". Finally, we present empirical results on two law enforcement datasets that exhibit varying degrees of racial disparity in prediction outcomes, demonstrating that proxies shed useful light on the causes of discriminatory behavior in models.
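The abstract's two-part test (close correlation with a protected variable, plus causal influence on the model's output) can be illustrated with a much simpler sketch than the paper's second-order cone program. The snippet below is a hypothetical illustration, not the authors' algorithm: for a linear model it flags inputs whose weighted contribution both correlates strongly with an assumed protected attribute `z` and accounts for a nontrivial share of the output's variation.

```python
import numpy as np

# Illustrative sketch only (not the paper's SOCP method): for a linear
# model y_hat = X @ w, flag features whose per-example contribution
# x_j * w_j (1) correlates closely with protected attribute z and
# (2) carries real influence on the overall prediction.
def flag_proxy_candidates(X, w, z, corr_thresh=0.5, infl_thresh=0.1):
    contributions = X * w                 # per-feature contribution terms
    y_hat = contributions.sum(axis=1)     # model output
    flags = []
    for j in range(X.shape[1]):
        c = contributions[:, j]
        corr = abs(np.corrcoef(c, z)[0, 1])          # association with z
        infl = np.std(c) / (np.std(y_hat) + 1e-12)   # share of output variation
        if corr >= corr_thresh and infl >= infl_thresh:
            flags.append(j)
    return flags

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=500).astype(float)   # protected attribute
proxy = z + 0.1 * rng.normal(size=500)           # near-copy of z
benign = rng.normal(size=500)                    # unrelated feature
X = np.column_stack([proxy, benign])
w = np.array([2.0, 1.0])
print(flag_proxy_candidates(X, w, z))            # the near-copy of z is flagged
```

The thresholds here are arbitrary stand-ins for the paper's formal definition; the point is only that proxy detection combines an association test with an influence test, rather than either alone.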


Why Sprint is paying a record $330 million settlement in New York

USATODAY

ALBANY – Sprint has agreed to pay a $330 million settlement after the company skirted New York tax law for nearly a decade, New York's attorney general announced Friday. The record-breaking settlement came in the wake of a false claims lawsuit filed by Attorney General Barbara Underwood alleging the cellular provider failed to collect and remit over $100 million in state and local taxes on flat-rate calling plans. The $330 million settlement is the largest recovery by a single state in a false claims lawsuit, according to the attorney general's office. "Sprint knew exactly how New York sales tax law applied to its plans – yet for years the company flagrantly broke the law, cheating the state and its localities out of tax dollars that should have been invested in our communities," Underwood said in a statement.


Document classification using a Bi-LSTM to unclog Brazil's supreme court

arXiv.org Machine Learning

The Brazilian court system is currently the most clogged-up judiciary in the world. Thousands of lawsuit cases reach the supreme court every day. These cases need to be analyzed in order to be associated with relevant tags and allocated to the right team. Most of the cases reach the court as raster-scanned documents of widely variable quality. One of the first steps of the analysis is to classify these documents. In this paper we present a Bidirectional Long Short-Term Memory network (Bi-LSTM) to classify these legal documents.
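The core idea of a Bi-LSTM document classifier can be sketched in a few lines of PyTorch. The architecture below (vocabulary size, dimensions, and class count are all made-up placeholders, not the paper's configuration): embed token ids, run a bidirectional LSTM over the sequence, and classify the document from the concatenated final forward and backward hidden states.

```python
import torch
import torch.nn as nn

# Minimal Bi-LSTM text classifier sketch; all hyperparameters are
# illustrative placeholders, not the paper's actual settings.
class BiLSTMClassifier(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)               # (batch, seq, embed_dim)
        _, (h_n, _) = self.lstm(x)              # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=1)  # forward + backward final states
        return self.fc(h)                       # (batch, num_classes) logits

model = BiLSTMClassifier()
batch = torch.randint(0, 1000, (2, 50))  # two "documents" of 50 token ids
logits = model(batch)
print(logits.shape)
```

In practice the raster-scanned filings would first pass through OCR to produce the token ids; reading the sequence in both directions is what lets the final states summarize the whole document rather than just its ending.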