AI experts say research into algorithms that claim to predict criminality must end

#artificialintelligence 

A coalition of AI researchers, data scientists, and sociologists has called on the academic world to stop publishing studies that claim to predict an individual's criminality using algorithms trained on data like facial scans and criminal statistics. Such work is not only scientifically illiterate, says the Coalition for Critical Technology, but perpetuates a cycle of prejudice against Black people and people of color. Numerous studies show the justice system treats these groups more harshly than white people, so any software trained on this data simply amplifies and entrenches societal bias and racism. "Let's be clear: there is no way to develop a system that can predict or identify 'criminality' that is not racially biased -- because the category of 'criminality' itself is racially biased," the group writes. "Research of this nature -- and its accompanying claims to accuracy -- rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral."
