A 'black box' AI system has been influencing criminal justice decisions for over two decades – it's time to open it up

AIHub 

Justice systems around the world are using artificial intelligence (AI) to assess people with criminal convictions. These AI technologies rely on machine learning algorithms, and their key purpose is to predict the risk of reoffending. They influence decisions made by courts and prisons and by parole and probation officers.

This kind of technology has been an intrinsic part of the UK justice system since 2001. That was the year a risk assessment tool known as Oasys (Offender Assessment System) was introduced and began taking over certain tasks from probation officers. Yet in over two decades, scientists outside the government have not been permitted access to the data behind Oasys to independently analyse its workings and assess its accuracy – for example, whether the decisions it influences lead to fewer offences or reconvictions.

Lack of transparency affects AI systems generally. Their complex decision-making processes can evolve into a black box – too obscure to unravel without advanced technical knowledge.

Proponents believe that AI algorithms are more objective, scientific tools because they are standardised, and that this helps to reduce human bias in assessments and decision making. This, supporters claim, makes them useful for public protection. But critics say that a lack of access to the data, as well as other crucial information required for independent evaluation, raises serious questions of accountability and transparency.
