As The Register's Katyanna Quach wrote: "Thanks to MIT's cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language. The database also contained close-up pictures of female genitalia labeled with the C-word."
As protests swept the nation following the police killing of George Floyd, there was a surge of reports that Nextdoor, the hyperlocal social media app, was censoring posts about Black Lives Matter and racial injustice. In an interview with NPR, Nextdoor CEO Sarah Friar said the company should have moved more quickly to protect posts related to Black Lives Matter by providing clearer guidance, and she outlined steps the popular neighborhood app is planning to take to address reports of racial profiling and censorship on the platform. It "was really our fault" that moderators on forums across the country were deleting those posts, she said. People of color have long accused Nextdoor, which serves as a community bulletin board in more than 265,000 neighborhoods across the U.S., of doing nothing about users' racist comments and complaints.
Law enforcement in America is facing a day of reckoning over its systemic, institutionalized racism and ongoing brutality against the people it was designed to protect. Virtually every aspect of the system is now under scrutiny, from budgeting and staffing levels to the data-driven prevention tools it deploys. A handful of local governments have already placed moratoriums on facial recognition systems in recent months, and on Wednesday, Santa Cruz, California, became the first city in the nation to outright ban the use of predictive policing algorithms. While it's easy to see the privacy risks that facial recognition poses, predictive policing programs have the potential to quietly erode our constitutional rights and exacerbate existing racial and economic biases in the law enforcement community. Simply put, predictive policing technology uses algorithms to pore over massive amounts of data to predict when and where future crimes will occur.
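How does a prediction engine "exacerbate existing biases"? A toy simulation (hypothetical numbers and districts, not any vendor's actual algorithm) illustrates the feedback loop critics describe: patrols are dispatched to wherever past arrests were recorded, and new arrests can only be made where patrols are, so an initial skew in the historical data compounds over time.

```python
# Toy feedback-loop sketch. Districts, patrol counts, and rates are
# illustrative assumptions, not real data.

# Two hypothetical districts with the SAME true crime rate.
TRUE_CRIME_RATE = 0.1

# Historical arrest data starts skewed toward district A, e.g. because
# A was over-patrolled in the past.
arrests = {"A": 60.0, "B": 40.0}
initial_share_a = arrests["A"] / sum(arrests.values())

for year in range(10):
    # "Prediction": the district with more recorded arrests is the hotspot.
    hot = max(arrests, key=arrests.get)
    cold = "B" if hot == "A" else "A"
    # Patrols concentrate on the predicted hotspot (80 units vs. 20);
    # recorded arrests scale with patrol presence, not with crime alone.
    arrests[hot] += TRUE_CRIME_RATE * 80
    arrests[cold] += TRUE_CRIME_RATE * 20

final_share_a = arrests["A"] / sum(arrests.values())
print(f"District A's share of recorded arrests: "
      f"{initial_share_a:.2f} -> {final_share_a:.2f}")
```

Even though both districts have identical underlying crime, district A's share of recorded arrests grows every year, and each year's inflated data further justifies next year's patrol allocation.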
Black Lives Matter is reverberating around the world, triggering a fresh reckoning with the racist global history of colonialism and slavery. While Confederate statues began to tumble across the American South, in Bristol, England, a diverse group felled a statue of a slave trader that has long provoked offense. Statues of colonial conquerors of Africa and South Asia have followed, along with a robust discussion of the ways in which such actions make history rather than erase it. These movements abroad are not merely echoes of BLM; BLM itself is global. The shared impetus is a common opposition to racism, of which anti-Black racism has been the most lethal and traumatic.
Race After Technology opens with a brief personal history set in the Crenshaw neighborhood of Los Angeles, where sociologist Ruha Benjamin spent a portion of her childhood. Recalling the time she set up shop on her grandmother's porch with a chalkboard and invited other kids to do math problems, she writes, "For the few who would come, I would hand out little slips of paper…until someone would insist that we go play tag or hide-and-seek instead. Needless to say, I didn't have that many friends!" As she gazed out the back window during car rides, she saw "boys lined up for police pat-downs," and inside the house she heard "the nonstop rumble of police helicopters overhead, so close that the roof would shake." The omnipresent surveillance continued when she visited her grandmother years later as a mother, her homecomings blighted by "the frustration of trying to keep the kids asleep with the sound and light from the helicopter piercing the window's thin pane." Benjamin's personal beginning sets the tone for her book's approach, one that focuses on how modern invasive technologies--from facial recognition software to electronic ankle monitors to the metadata of photos taken at protests--further racial inequality.
Two Google executives said Friday that bias in artificial intelligence is hurting already marginalized communities in America, and that more needs to be done to ensure that this does not happen. X. Eyeé, outreach lead for responsible innovation at Google, and Angela Williams, policy manager at Google, spoke at the (Not IRL) Pride Summit, an event organized by Lesbians Who Tech & Allies, the world's largest technology-focused LGBTQ organization for women, non-binary and trans people around the world. In separate talks, they addressed the ways in which machine learning technology can be used to harm the Black community and other communities in America -- and more widely around the world. Bias in algorithms, they stressed, is not just a data problem: the choice to use AI can be biased, the way an algorithm learns can be biased, and the way users are impacted by, interact with, and perceive a system can reinforce bias. They pointed to Timnit Gebru's work for more on the subject.
Three days ago, in a letter to members of the United States Congress, IBM announced that it was abandoning the development of general-purpose facial recognition technologies because of their potential for mass surveillance, human rights violations and racial discrimination. In the letter, IBM CEO Arvind Krishna called for a reconsideration of the sale of this kind of technology to law enforcement. Since IBM was, after all, announcing the abandonment of a technology in which it is not a leader and which has little impact on its bottom line, the gesture served above all to put pressure on the companies that do have contracts with those law enforcement agencies, notably Amazon and Microsoft. The next day, Timnit Gebru, one of the leaders of Google's artificial intelligence team, said in an interview with the New York Times that the use of facial recognition technologies by law enforcement or security forces should be banned for the moment, and that she did not know how the issue would evolve in the future. One day later, on Wednesday the 10th, Amazon announced a one-year moratorium on police use of its facial recognition technology, the controversial Rekognition, so as to continue improving it and, above all, to give the government time to reach a reasonable consensus and establish stricter regulations for its ethical use. The company will continue to facilitate the use of this technology by institutions that deploy it for other purposes, such as preventing human trafficking or reuniting missing children with their families, but will temporarily stop offering it to police and law enforcement agencies, one of its main customer segments.
Massachusetts Sen. Ed Markey and Rep. Ayanna Pressley are pushing to ban the federal government's use of facial recognition technology, as Boston last week nixed the city's use of the technology and tech giants pause their sale of facial surveillance tools to police. The momentum to stop government use of facial recognition technology comes in the wake of the police killing of George Floyd in Minneapolis -- a Black man killed by a white police officer. Floyd's death has sparked nationwide protests for racial justice and triggered calls for police reform, including in the ways police track people. Facial recognition technology contributes to the "systemic racism that has defined our society," Markey said on Sunday. "We cannot ignore that facial recognition technology is yet another tool in the hands of law enforcement to profile and oppress people of color in our country," Markey said during an online press briefing.
Members of Congress introduced a new bill on Thursday that would ban government use of biometric technology, including facial recognition tools. Reps. Pramila Jayapal and Ayanna Pressley announced the Facial Recognition and Biometric Technology Moratorium Act, which they said resulted from a growing body of research that "points to systematic inaccuracy and bias issues in biometric technologies which pose disproportionate risks to non-white individuals." The bill came just one day after the first documented instance of police mistakenly arresting a man due to facial recognition software. There has been long-standing, widespread concern about the use of facial recognition software among lawmakers, researchers, rights groups and even the people behind the technology. Multiple studies over the past three years have repeatedly shown that the tool is still not accurate, especially for people with darker skin.
A Black man who was wrongfully arrested when facial recognition technology mistakenly identified him as a suspected shoplifter wants Detroit police to apologize -- and to end their use of the controversial technology. The complaint by Robert Williams is a rare challenge from someone who not only experienced an erroneous face recognition hit, but was able to discover that it was responsible for his subsequent legal troubles. The Wednesday complaint filed on Williams' behalf alleges that his Michigan driver license photo -- kept in a statewide image repository -- was incorrectly flagged as a likely match to a shoplifting suspect. Investigators had scanned grainy surveillance camera footage of an alleged 2018 theft inside a Shinola watch store in midtown Detroit, police records show. That led to what Williams describes as a humiliating January arrest in front of his wife and young daughters on their front lawn in the Detroit suburb of Farmington Hills.