
Civil Rights & Constitutional Law

U.S. Police Already Using 'Spot' Robot From Boston Dynamics in the Real World


Massachusetts State Police (MSP) has been quietly testing ways to use the four-legged Boston Dynamics robot known as Spot, according to new documents obtained by the American Civil Liberties Union of Massachusetts. And while Spot isn't equipped with a weapon just yet, the documents provide a terrifying peek at our RoboCop future. The Spot robot, which was officially made available for lease to businesses last month, has been in use by MSP since at least April 2019 and has engaged in at least two police "incidents," though it's not clear what those incidents may have been. It's also not clear whether the robots were being operated by a human controller or how much autonomous action the robots are allowed.

GPT-3's bigotry is exactly why devs shouldn't use the internet to train AI


"Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should." It turns out that a $1 billion investment from Microsoft and unfettered access to a supercomputer wasn't enough to keep OpenAI's GPT-3 from being just as bigoted as Tay, the algorithm-based chatbot that became an overnight racist after being exposed to humans on social media. It's only logical to assume any AI trained on the internet – meaning trained on databases compiled by scraping publicly available text online – would end up with insurmountable inherent biases, but it's still a sight to behold in the full context (i.e., it took approximately $4.6 million to train the latest iteration of GPT-3). What's interesting here is that OpenAI's GPT-3 text generator is finally starting to trickle out to the public in the form of apps you can try out yourself. These are always fun, and we covered one about a month ago called Philosopher AI.
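The "model inherits whatever the training text contains" dynamic described above can be sketched with a toy next-word predictor. The tiny corpus and its skewed gender association are invented purely for illustration; they have nothing to do with GPT-3's actual training data, which is vastly larger but follows the same principle.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for scraped web text; the skewed association
# ("said" followed mostly by "he") is deliberate, to show that a model
# can only reflect the statistics of the text it was trained on.
corpus = (
    "the programmer said he fixed the bug . "
    "the programmer said he wrote the code . "
    "the programmer said she reviewed the patch ."
).split()

# Count bigram transitions: next_word_counts[w] maps each word that
# follows w in the corpus to its frequency.
next_word_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_word_counts[word][nxt] += 1

def most_likely_next(word):
    """Return the most frequent word following `word` in the training data."""
    return next_word_counts[word].most_common(1)[0][0]

print(most_likely_next("said"))  # "he" outnumbers "she" 2:1, so the model says "he"
```

No amount of compute fixes this: the prediction changes only if the corpus does, which is the article's point about scraping the internet wholesale.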

Would AI and Machine Learning be that effective if stereotypes weren't there?


We are all moving toward an era of Artificial Intelligence. Face recognition, once something to be amazed at, is now easily implemented using existing libraries and frameworks. Machine learning is embedded in our lives, and its grasp tightens with time. It was once a buzzword; now it is a reality that is making our lives easier and better. So let's talk about some of the problems with Machine Learning.

AI, Protests, and Justice


Editor's Note: The use of face recognition technology in policing has been a long-standing subject of concern, even more so now after the murder of George Floyd and the demonstrations that have followed. In this article, Mike Loukides, VP of Content Strategy at O'Reilly Media, reviews how companies and cities have addressed these concerns, as well as ways in which individuals can mitigate face recognition technology or even use it to increase accountability. We'd love to hear from you about what you think about this piece. Driven largely by the Black Lives Matter movement, the public's response to the murder of George Floyd, and the subsequent demonstrations, we've seen increased concern about the use of facial identification in policing. First, in a highly publicized wave of announcements, IBM, Microsoft, and Amazon announced that they will not sell face recognition technology to police forces.

Algorithmic Justice League - Unmasking AI harms and biases


In today's world, AI systems are used to decide who gets hired, the quality of medical treatment we receive, and whether we become a suspect in a police investigation. While these tools show great promise, they can also harm vulnerable and marginalized people, and threaten civil rights. Unchecked, unregulated and, at times, unwanted, AI systems can amplify racism, sexism, ableism, and other forms of discrimination. The Algorithmic Justice League's mission is to raise awareness about the impacts of AI, equip advocates with empirical research, build the voice and choice of the most impacted communities, and galvanize researchers, policy makers, and industry practitioners to mitigate AI harms and biases. We're building a movement to shift the AI ecosystem towards equitable and accountable AI.

CryptoHarlem's Founder Warns Against 'Digital Stop and Frisk'


This year, many people braved the risk of coronavirus infection to protest police brutality in Black neighborhoods, but physical violence isn't the only way law enforcement can harm marginalized and minority communities: Hacker Matt Mitchell wants us to pay attention to digital policing, too. He argues that marginalized communities have become a test bed for powerful and troubling new surveillance tools that could become more widespread. In 2013, Mitchell founded a series of free security workshops in his New York City neighborhood called CryptoHarlem as a way to work through the pain of watching the divisive trial over the death of Black Florida teen Trayvon Martin. "I talk to people about the surveillance in our neighborhood and how it got there and how it works and what we can do to circumvent it and what we can do to be safer," Mitchell said in a video interview with WIRED's Sidney Fussell at the second of three WIRED25 events Wednesday. Society's growing dependence on digital platforms and infrastructure, combined with the events of 2020, has made his work more relevant than ever.

Where is the accountability for AI ethics gatekeepers?


Elite institutions, the self-appointed arbiters of ethics, are guilty of racism and unethical behavior yet face zero accountability. In July 2020, MIT took a frequently cited and widely used dataset offline after two researchers found that the '80 Million Tiny Images' dataset used racist, misogynistic terms to describe images of Black and Asian people. According to The Register, Vinay Prabhu, a data scientist of Indian origin working at a startup in California, and Abeba Birhane, an Ethiopian PhD candidate at University College Dublin, discovered that thousands of images in the MIT database were "labeled with racist slurs for Black and Asian people, and derogatory terms used to describe women." This problematic dataset was created back in 2008; left unchecked, it would have continued to spawn biased algorithms and introduce prejudice into AI models that used it as a training dataset. The incident also highlights a pervasive tendency in this space to put the onus of solving ethical problems created by questionable technologies back on the marginalized groups negatively impacted by them. IBM's recent decision to exit the facial recognition industry, followed by similar measures by other tech giants, was in no small part due to the foundational work of Timnit Gebru, Joy Buolamwini, and other Black women scholars.

Police Commission to review LAPD's facial recognition use after Times report

Los Angeles Times

The Los Angeles Police Commission on Tuesday said it would review the city Police Department's use of facial recognition software and how it compares with programs in other major cities. The move came after Times reporting this week publicly revealed the scope of the LAPD's use of facial recognition for the first time -- including that hundreds of LAPD officers have used it nearly 30,000 times since 2009. Critics say police denials of its use are part of a long pattern of deception and that transparency is essential, given potential privacy and civil rights infringements. Commission President Eileen Decker said a subcommittee of the commission would "do a deeper dive" into the technology's use and "work with the department in terms of analyzing the oversight mechanisms" for the system. "It's a good time to take a global look at this issue," Decker said.

Scientists to fight anti-Semitism online with help of artificial intelligence


An international team of scientists said Monday it had joined forces to combat the spread of anti-Semitism online with the help of artificial intelligence.

Controversial facial-recognition software used 30,000 times by LAPD in last decade, records show

Los Angeles Times

The Los Angeles Police Department has used facial-recognition software nearly 30,000 times since 2009, with hundreds of officers running images of suspects from surveillance cameras and other sources against a massive database of mugshots taken by law enforcement. The new figures, released to The Times, reveal for the first time how commonly facial recognition is used in the department, which for years has provided vague and contradictory information about how and whether it uses the technology. The LAPD has consistently denied having records related to facial recognition, and at times denied using the technology at all. The truth is that, while it does not have its own facial-recognition platform, LAPD personnel have access to facial-recognition software through a regional database maintained by the Los Angeles County Sheriff's Department. And between Nov. 6, 2009, and Sept. 11 of this year, LAPD officers used the system's software 29,817 times.