Being taken to the police station for processing is part and parcel of the legal process: suspects who commit a crime must be prepared to pay the consequences. For many suspects in states across the US, this involves undergoing a computerised risk assessment that estimates their likelihood of re-offending. The premise is that judges can use the data at trial to aid sentencing, using the scores to work out who poses a greater risk to the public. But an investigation has raised serious questions about the methods used to generate these risk scores, claiming that race may play a part in how scores are assigned.
Domestic violence (DV) is a global social and public health issue that is highly gendered. Being able to accurately predict DV recidivism, i.e., re-offending by a previously convicted offender, can speed up and improve risk assessment procedures for police and front-line agencies, better protect victims of DV, and potentially prevent future occurrences of DV. Previous work on DV recidivism has employed different classification techniques, including decision tree (DT) induction and logistic regression, where the main focus was on achieving high prediction accuracy. As a result, the diagrams of trained DTs were often too large and complex to interpret, making decision-making challenging. Given that there is often a trade-off between model accuracy and interpretability, in this work our aim is to employ DT induction to obtain both interpretable trees and high prediction accuracy. Specifically, we implement and evaluate different approaches to dealing with class imbalance as well as feature selection. Compared to previous work on DV recidivism prediction that employed logistic regression, our approach achieves comparable area under the ROC curve results while using only 3 of the 11 available features and generating understandable decision trees that contain only 4 leaf nodes.
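The approach described above can be sketched as follows: select a handful of informative features, weight the classes to handle imbalance, and constrain the tree to a few leaves so it stays interpretable. This is a minimal illustration using entirely synthetic data; the feature counts match the abstract, but all parameters, data, and the specific selection method (a univariate F-test) are illustrative assumptions, not the paper's actual setup.

```python
# Sketch: small, class-weighted decision tree on 3 of 11 features.
# Synthetic data only -- assumed parameters, not the paper's pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

# Synthetic imbalanced data: 11 features, roughly 10% positive class
X, y = make_classification(
    n_samples=2000, n_features=11, n_informative=3,
    weights=[0.9, 0.1], random_state=0,
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Keep only the 3 most informative features (univariate F-test)
selector = SelectKBest(f_classif, k=3).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Small, interpretable tree: at most 4 leaves, class-weighted for imbalance
tree = DecisionTreeClassifier(
    max_leaf_nodes=4, class_weight="balanced", random_state=0,
).fit(X_tr_sel, y_tr)

auc = roc_auc_score(y_te, tree.predict_proba(X_te_sel)[:, 1])
print(f"leaves: {tree.get_n_leaves()}, AUC: {auc:.2f}")
```

A tree this small can be printed or drawn in full, which is the interpretability benefit the abstract emphasises over large, high-accuracy-only trees.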
BRITISH cops are using a system to stop crimes BEFORE they happen. Police in Durham are employing artificial intelligence designed to help officers decide whether or not to keep a suspect in custody. Dubbed the Harm Assessment Risk Tool (HART), it predicts the risk of the suspect re-offending by categorising them as low, medium or high risk. The force says the system is due to go live in the next few months, and could be picked up elsewhere in the country before the end of the year. HART was developed by Dr Geoffrey Barnes of the University of Cambridge in a partnership between Durham Constabulary and the University of Cambridge's Centre for Evidence-Based Policing.
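The banding step described above, where a predicted re-offending risk is mapped to a low, medium or high category, can be sketched in a few lines. The cut-off values and function name here are hypothetical illustrations; the article does not disclose HART's actual model or thresholds.

```python
# Hypothetical sketch of mapping a predicted re-offending probability
# to a low / medium / high band. Cut-offs are assumed for illustration.
def risk_band(prob: float, medium_cut: float = 0.3, high_cut: float = 0.7) -> str:
    """Map a predicted re-offending probability to a risk category."""
    if not 0.0 <= prob <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    if prob >= high_cut:
        return "high"
    if prob >= medium_cut:
        return "medium"
    return "low"

print(risk_band(0.12), risk_band(0.45), risk_band(0.81))
# low medium high
```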
Companies are using AI to prevent and detect everything from routine employee theft to insider trading. Many banks and large corporations employ artificial intelligence to detect and prevent fraud and money laundering. Social media companies use machine learning to block illicit content such as child pornography. Businesses are constantly experimenting with new ways to use artificial intelligence for better risk management and faster, more responsive fraud detection -- and even to predict and prevent crimes. While today's basic technology is not necessarily revolutionary, the algorithms it uses and the results they can produce are.
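One common technique behind the fraud-detection systems alluded to above is unsupervised anomaly detection: flag transactions that look very unlike the routine majority. The sketch below uses scikit-learn's IsolationForest on synthetic transaction amounts; the data, thresholds, and parameters are assumptions for illustration, not any bank's actual system.

```python
# Sketch: unsupervised anomaly detection over transaction amounts.
# Synthetic data and assumed parameters, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Mostly routine transactions around $50, plus a few large outliers
normal = rng.normal(50, 10, size=(500, 1))
fraud = rng.normal(5000, 500, size=(5, 1))
X = np.vstack([normal, fraud])

# contamination = expected fraction of anomalies in the data
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 = flagged as anomalous, 1 = normal
print("flagged:", int((flags == -1).sum()))
```

Real systems combine many such signals (amount, merchant, timing, device), but the core idea of scoring each event against learned normal behaviour is the same.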