New Orleans Police Claim Not To Use Facial Recognition Tech. Emails Reveal That's Not Totally True.


The City of New Orleans has been adamant that it "does not use facial recognition software." Though no city ordinance outright bans the technology, city officials often repeat that claim, and have even included a line in the privacy policy of the city's Real-Time Crime Center surveillance hub stating "Facial recognition is not utilized by the System." But that is an incomplete picture of how facial recognition technology is being used in New Orleans. Court evidence now reveals that Louisiana state police officers can use, and have used, a searchable facial recognition database to assist New Orleans police in their investigations. In at least one NOPD investigation, facial recognition was used to identify and indict a suspect.

Police Unlock AI's Potential to Monitor, Surveil and Solve Crimes


Law enforcement agencies like the New Orleans Police Department are adopting artificial-intelligence-based systems to analyze surveillance footage. WSJ's Jason Bellini gets a demonstration of the tracking technology and hears why some think it's a game changer, while for others it raises concerns about privacy and potential bias. Photo: Drew Evans/The Wall Street Journal

Police across the US are training crime-predicting AIs on falsified data


In May 2010, prompted by a series of high-profile scandals, the mayor of New Orleans asked the US Department of Justice to investigate the city police department (NOPD). Ten months later, the DOJ offered its blistering analysis: during the period of its review, from 2005 onwards, the NOPD had repeatedly violated constitutional and federal law. It used excessive force, disproportionately against black residents; targeted racial minorities, non-native English speakers, and LGBTQ individuals; and failed to address violence against women. The problems, said assistant attorney general Thomas Perez at the time, were "serious, wide-ranging, systemic and deeply rooted within the culture of the department." Despite these disturbing findings, only a year later the city entered a secret partnership with data-mining firm Palantir to deploy a predictive policing system.

The Artist Working to Make Artificial Intelligence Less White


It's no secret by now that artificial intelligence has a white guy problem. One could say the same of almost any industry, but the tech world is singular in how rapidly it shapes the future. As has been widely publicized, the unconscious biases of white developers proliferate on the internet, mapping our social structures and behaviors onto code and repeating the imbalances and injustices that exist in the real world. There was the case of black people being misclassified as gorillas by image-recognition software; the computer system that rejected an Asian man's passport photo because it read his eyes as being closed; and the controversy surrounding the predictive policing algorithms deployed in cities like Chicago and New Orleans, which enable police officers to pinpoint individuals the systems deem predisposed to crime, giving rise to accusations of profiling. Earlier this year, the release of Google's Arts and Culture app, which allows users to match their faces with a historical painting, produced less than nuanced results for Asian and African-American users.