Predictive policing poses discrimination risk, thinktank warns

Predictive policing, the use of machine-learning algorithms to fight crime, risks unfairly discriminating against people on the basis of protected characteristics including race, sexuality and age, a security thinktank has warned.

Such algorithms, used to mine insights from data collected by police, are currently deployed for purposes including facial recognition, mobile phone data extraction, social media analysis, predictive crime mapping and individual risk assessment.

Researchers at the Royal United Services Institute (RUSI), commissioned by the government's Centre for Data Ethics and Innovation, focused on predictive crime mapping and individual risk assessment. They found that algorithms trained on police data may replicate, and in some cases amplify, the biases inherent in the data set, such as the over- or under-policing of certain communities. "The effects of a biased sample could be amplified by algorithmic predictions via a feedback loop, whereby future policing is predicted, not future crime," the authors said.

The paper also reveals that police officers interviewed for the research are concerned about the lack of safeguards and oversight governing the use of predictive policing.
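To make the feedback-loop concern concrete, here is a minimal toy simulation, not taken from the RUSI paper, in which two areas have identical underlying crime but patrols are sent wherever recorded crime is highest, and crime is only recorded where officers are present. The area names, starting counts and probabilities are illustrative assumptions; the point is that the initial imbalance in the record never corrects itself.

```python
import random

# Toy sketch (hypothetical, not from the RUSI paper): two areas with the same
# real crime rate, a "predictive" rule that patrols wherever recorded crime is
# highest, and a record that only grows where officers are deployed.
random.seed(0)

TRUE_CRIME_PROB = 0.5                     # identical real chance of an incident in both areas
recorded = {"area_A": 6, "area_B": 5}     # historic record: area_A slightly over-policed at the start

for day in range(1000):
    # Predictive step: send the day's patrol to the area with the most recorded crime so far.
    patrolled = max(recorded, key=recorded.get)
    # Crime is only observed and recorded where the patrol happens to be.
    if random.random() < TRUE_CRIME_PROB:
        recorded[patrolled] += 1

# area_A's count grows into the hundreds while area_B stays at 5, even though
# the true crime rates are equal: the model predicts future policing, not future crime.
print(recorded)
```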