Global Big Data Conference
On Tuesday, a group of AI researchers, ethicists, data scientists, and social scientists published a blog post arguing that academic researchers should stop pursuing work that attempts to predict whether an individual will commit a crime based on variables such as crime statistics and facial scans. The post was authored by the Coalition for Critical Technology, which argues that such algorithms perpetuate a cycle of prejudice against minorities. Many studies of face recognition and predictive policing algorithms find that they judge minorities more harshly, which the authors attribute to inequities in the criminal justice system: the justice system produces biased data, and algorithms trained on that data propagate those biases. The coalition further contends that the very notion of "criminality" is often racially defined, so research on these technologies assumes a neutrality in the algorithms that does not in fact exist.
Jul-3-2020, 17:26:02 GMT
- Genre:
- Research Report (0.32)
- Industry:
- Law > Criminal Law (0.78)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.57)
- Technology:
- Information Technology > Artificial Intelligence
- Issues > Social & Ethical Issues (0.51)
- Machine Learning (0.77)
- Vision > Face Recognition (0.54)