Can Artificial Intelligence Give Us Equal Justice?

It's "misleading and counterproductive" to block the use of machine-learning algorithms in the justice system on the grounds that some of them may be subject to racial bias, according to a forthcoming study in the American Criminal Law Review. The use of artificial intelligence by judges, prosecutors, police, and other justice authorities remains "the best means to overcome the pervasive bias and discrimination that exists in all parts of the deeply flawed criminal justice system," the study said.

Algorithmic systems are used in a variety of ways in the U.S. justice system, ranging from identifying and predicting crime "hot spots" to real-time surveillance. More than 60 kinds of risk assessment tools are currently in use by court systems around the country, usually to weigh whether individuals should be held in detention before trial or released on their own recognizance.

The risk assessment tools, which assign weights to data points such as previous arrests and the age of the offender, have come under fire from activists, judges, prosecutors, and some criminologists, who say the tools themselves are susceptible to bias.
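As a rough illustration of how such a tool assigns weights to data points, the sketch below computes a pretrial risk score as a weighted sum and maps it to a detain-or-release recommendation. The feature names, weights, and threshold here are entirely hypothetical; they do not reflect the formula of any real risk assessment instrument.

```python
# Hypothetical sketch of a weighted pretrial risk score.
# Feature names, weights, and the threshold are invented for illustration
# and do not correspond to any actual tool in use by courts.

WEIGHTS = {
    "prior_arrests": 2.0,      # each previous arrest adds to the score
    "age_under_25": 3.0,       # younger defendants often score higher
    "failed_to_appear": 4.0,   # prior failures to appear in court
}

def risk_score(person: dict) -> float:
    """Weighted sum of the person's data points (missing fields count as 0)."""
    return sum(WEIGHTS[k] * float(person.get(k, 0)) for k in WEIGHTS)

def recommend(person: dict, threshold: float = 6.0) -> str:
    """Map the score to a detain/release recommendation."""
    return "detain" if risk_score(person) >= threshold else "release"

person = {"prior_arrests": 2, "age_under_25": 1, "failed_to_appear": 0}
print(risk_score(person))   # 7.0
print(recommend(person))    # detain
```

The critics' concern maps directly onto a model like this: if the weights are fit to historical arrest data, and those arrests reflect biased policing, the bias is carried straight into the score.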