Artificial intelligence used by UK police to predict crimes amplifies human bias

#artificialintelligence

Artificial intelligence technology used by police forces in the UK to predict future crimes replicates - and in some cases amplifies - human prejudices, according to a new report. While "predictive policing" tools have been used in the UK since at least 2004, advances in machine learning and AI have enabled the development of more sophisticated systems. These are now used for a wide range of functions including facial recognition and video analysis, mobile phone data extraction, social media intelligence analysis, predictive crime mapping and individual risk assessment.

However, the report by the Royal United Services Institute (RUSI) warns that human biases are being built into these machine learning algorithms, resulting in people being unfairly discriminated against due to their race, sexuality and age.

One police officer interviewed for the report commented: "Young black men are more likely to be stopped and searched than young white men, and that's purely down to human bias.

"That human bias is then introduced into the datasets, and bias is then generated in the outcomes of the application of those datasets."

In addition to these inherent biases, the report points out that individuals from disadvantaged sociodemographic backgrounds are likely to engage with public services more frequently. As a result, police often have access to more data relating to these individuals, which "may in turn lead to them being calculated as posing a greater risk".

Matters could worsen over time, another officer said, when software is used to predict future crime hotspots. "We pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there's more policing going into that area, not necessarily because of discrimination on the part of officers," the officer said.

The report also warns that police forces could become over-reliant on AI to predict future crimes and discount other relevant information. "Officers often disagree with the algorithm."
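The feedback loop the officer describes can be made concrete with a toy simulation. This is a hypothetical sketch, not any force's actual system: all the numbers and the allocation rule below are assumptions, chosen only to show how a small disparity in recorded crime can snowball when patrols follow the records.

```python
# Toy illustration of the "self-fulfilling prophecy" feedback loop.
# All numbers and the allocation rule are hypothetical assumptions;
# this is not the method used by any police force or vendor.

PATROLLED_RECORDS = 8    # incidents recorded per week in the patrolled area
UNPATROLLED_RECORDS = 1  # incidents recorded per week elsewhere (reports only)

# Two areas with identical true offending rates. Area B starts with
# slightly more *recorded* crime, e.g. through historically biased stops.
recorded = {"A": 10, "B": 12}

for week in range(52):
    # Hotspot policing: patrols go to the area with the most recorded crime.
    hotspot = max(recorded, key=recorded.get)
    for area in recorded:
        recorded[area] += PATROLLED_RECORDS if area == hotspot else UNPATROLLED_RECORDS

print(recorded)  # {'A': 62, 'B': 428}
```

After a simulated year the initial two-incident gap has grown to 366 incidents, driven entirely by where patrols were sent rather than by any difference in underlying crime. The winner-take-all allocation rule is deliberately crude to make the effect visible; a proportional rule would not widen the recorded shares, but it would preserve the original bias indefinitely.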


Police fear bias in use of artificial intelligence to fight crime

#artificialintelligence

British police officers are among those concerned that the use of artificial intelligence in fighting crime is raising the risk of profiling bias, according to a report commissioned by government officials. The paper warned that algorithms might judge people from disadvantaged backgrounds as "a greater risk" since they were more likely to have contact with public services, thus generating more data that in turn could be used to train the AI. "Police officers themselves are concerned about the lack of safeguards and oversight regarding the use of algorithms in fighting crime," researchers from the defence think-tank the Royal United Services Institute said. The report acknowledged that emerging technology including facial recognition had "many potential benefits". But it warned that assessment of long-term risks was "often missing".



Home Office to fund use of AI to help catch dark web paedophiles

The Guardian

Artificial intelligence could be used to help catch paedophiles operating on the dark web, the Home Office has announced. The government has pledged to spend more money on the child abuse image database, which since 2014 has allowed police and other law enforcement agencies to quickly check seized computers and other devices against a record of 14m indecent images of children, to help identify victims. The investment will be used to trial aspects of AI, including voice analysis and age estimation, to see whether they would help track down child abusers.

Earlier this month, the chancellor, Sajid Javid, announced that £30m would be set aside to tackle online child sexual exploitation, with the Home Office releasing more information on Tuesday about how the money would be spent. There has been debate over the use of machine learning algorithms, part of the broad field of AI, with the government's Centre for Data Ethics and Innovation developing a code of practice for the trialling of predictive analytics in policing.


Police Seek 'Balance' In Use Of AI To Predict Crime

Silicon UK Tech News

#artificialintelligence

Police have said they are seeking "balance" in the use of artificial intelligence to predict crimes, after freedom of information requests found that 14 UK police forces were deploying, testing or investigating predictive AI techniques. The report by Liberty, "Policing by Machine", warned that the tools risk entrenching existing biases and delivering inaccurate predictions. The civil liberties group urged police to end the use of predictive AI, saying mapping techniques rely on "problematic" historical arrest data, while individual risk assessment programmes "encourage discriminatory profiling".

The forces using or trialling predictive mapping programmes are Avon and Somerset Constabulary, Cheshire Constabulary, Dyfed-Powys Police, Greater Manchester Police, Kent Police, Lancashire Police, Merseyside Police, the Metropolitan Police Service, Norfolk Constabulary, Northamptonshire Police, Warwickshire Police, West Mercia Police, West Midlands Police and West Yorkshire Police, while three forces – Avon and Somerset, Durham and West Midlands – are using or trialling individual risk-assessment programmes.

Norfolk Constabulary, for instance, is trialling a system for identifying whether burglaries should be investigated, while Durham Constabulary's Harm Assessment Risk Tool (Hart) provides advice to custody officers on individuals' risk of re-offending, and West Midlands Police uses hotspot mapping and a data-driven analysis project.