Civil Rights & Constitutional Law


ACLU and 70 other organizations ask DHS to stop using Clearview AI

Engadget

More than 70 advocacy groups have called on the Department of Homeland Security to stop using Clearview AI's facial recognition software. In a letter addressed to DHS Secretary Alejandro Mayorkas and Susan Rice, director of the White House's Domestic Policy Council, the American Civil Liberties Union, Electronic Frontier Foundation, OpenMedia and other organizations argue that "the use of Clearview AI by federal immigration authorities has not been subject to sufficient oversight or transparency." The letter points to a recent BuzzFeed News report which found that employees at 1,803 government bodies, including police departments and public schools, have been using the software, often without their superiors' knowledge. The company has given free trials to individual employees at those organizations in the hope that they will advocate for their agencies to sign up. Beyond the lack of oversight, the letter cites issues such as racial bias in facial recognition software and the fact that Clearview built its database by scraping websites like Facebook, Twitter and YouTube.


Europe Is Already Policing Privacy. AI Could Be Next

#artificialintelligence

Europe is already the world's tech privacy cop. Now it might become the AI cop too. Companies using artificial intelligence in the EU could soon be required to get audited first, under new rules set to be proposed by the European Union as soon as next week. The regulations were partly sketched out in an EU white paper last year and aim to ensure the responsible application of AI in high-stakes situations like autonomous driving, remote surgery or predictive policing. Officials want to ensure that such systems are trained on privacy-protecting and diverse data sets.


The new lawsuit that shows facial recognition is officially a civil rights issue

MIT Technology Review

Robert Williams's wrongful arrest, first reported by the New York Times in August 2020, was based on a bad match from the Detroit Police Department's facial recognition system. Two more false arrests have since been made public. Both of those men are also Black, and both have taken legal action to try to rectify the situation. Now Williams is following their path and going further: not only suing the Detroit Police for his wrongful arrest, but also trying to get the technology banned. On Tuesday, the ACLU and the University of Michigan Law School's Civil Rights Litigation Initiative filed a lawsuit on Williams's behalf, alleging that his arrest violated his Fourth Amendment rights and Michigan's civil rights law.


Europeans Can’t Talk About Racist AI Systems. They Lack the Words.

#artificialintelligence

Several European artificial intelligence projects rely on race without explicitly saying so. In February, El Confidencial revealed that Renfe, the Spanish railway operator, had published a public tender for a system of cameras that could automatically analyze the behavior of passengers on train platforms. One characteristic the system was expected to assess was "ethnic origin." Ethnic origin can mean many things, but in the context of an automated system that assigns people to categories based on their appearance as captured on camera, the term is misleading.


Crimes against women spur more surveillance in South Asia

The Japan Times

As cases of violence against women and girls have surged in South Asia in recent years, authorities have introduced harsher penalties and expanded surveillance networks, including facial recognition systems, to prevent such crimes. Police in the north Indian city of Lucknow said earlier this year that they would install cameras with emotion recognition technology to spot women being harassed, while in Pakistan, police launched a mobile safety app after a gang rape. But the use of these technologies, with no evidence that they reduce crime and no data protection laws in place, has alarmed privacy experts and women's rights activists, who say the increased surveillance can hurt women even more. "The police does not even know if this technology works," said Roop Rekha Verma, a women's rights activist in Lucknow in Uttar Pradesh state, which had the highest number of reported crimes against women in India in 2019. "Our experience with the police does not give us the confidence that they will use the technology in an effective and empathetic manner. If it is not deployed properly, it can lead to even more harassment, including from the police," she said.


What if Big Data Helped Judges Decide Exactly What Words Mean?

Slate

The precision and promise of a data-driven society have stumbled in recent years, serving up some disturbing, even damning, results: facial recognition software that can't recognize Black faces, human resources software that rejects women's job applications, talking computers that spit racist vitriol. "Those who don't learn history are doomed to repeat it," George Santayana said. But most artificial intelligence applications and data-driven tools learn history aplenty; they just don't avoid its pitfalls. Though touted as a step toward the future, these systems generally learn the past in order to replicate it in the present, repeating historical failures with ruthless, mindless efficiency. As Joy Buolamwini says, when it comes to algorithmic decision-making, "data is destiny."


Korean esports players, staff speak out on 'unspeakable' racism, harassment in America

Washington Post - Technology News

"That's part of our job, is to show people that the players on the team, even if some of them don't speak the best English and they're Korean national players, they're living here in the U.S. now. They're like you and me, they're like everybody else," Rufail said. "We're going to continue to … do a lot more content around the team to show their personality and I think people who might have a bit of a, we'll say discriminatory type personality, might understand a little bit better that our Korean players can connect with them in a way that maybe they didn't know previously."


Synthetic data for machine learning combats privacy, bias issues

#artificialintelligence

Modern enterprises are inundated with data, but not all of it is usable as-is for machine learning. An organization may hold millions of data points and still face data problems that stunt its machine learning efforts. Turning to synthetic data can boost privacy, democratize access to data, minimize bias in data sets and reduce costs. In practice, real and synthetic data tend to be used in combination. "I can't think of any project in the AI space where you wouldn't be able to get a better outcome by leveraging synthetic data," said Kjell Carlsson, principal analyst at Forrester Research.
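
To make the idea concrete, here is a minimal sketch of one common synthetic-data approach: fit a density model to real tabular records, then sample new records from the fitted model. This is an illustration only, not the workflow described in the article; the choice of a Gaussian mixture, the feature values and the sample sizes are all assumptions made for the example.

# Illustrative only: generate privacy-friendlier synthetic records by
# fitting a density model to real data and sampling from the fit.
# The "real" data here is itself simulated; in practice you would
# load actual records. GaussianMixture is one simple model choice.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(seed=42)

# Stand-in for a real data set: 1,000 rows of two correlated
# numeric features (hypothetical: order size, items per order).
real = rng.multivariate_normal(
    mean=[50.0, 3.0],
    cov=[[25.0, 4.0], [4.0, 1.0]],
    size=1_000,
)

# Fit a mixture model to capture the joint distribution.
model = GaussianMixture(n_components=3, random_state=42).fit(real)

# Draw synthetic rows that mimic the overall distribution without
# reproducing any individual real record verbatim.
synthetic, _ = model.sample(n_samples=1_000)

print("real means:     ", real.mean(axis=0))
print("synthetic means:", synthetic.mean(axis=0))

A sketch like this preserves aggregate structure (means, correlations) while decoupling synthetic rows from specific individuals, which is the privacy intuition the article gestures at; production systems typically add formal guarantees such as differential privacy on top.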


How US Capitol attack surveillance methods could be used against protesters

The Guardian

Over the past few months, federal law enforcement has used a wide variety of surveillance technologies to track down rioters who participated in the 6 January attack on the US Capitol building, demonstrating the rise of surveillance across the nation. Recent news coverage of the riot has largely focused on facial recognition, and on how private citizens and local law enforcement officials have conducted their own facial recognition investigations using social media in an attempt to assist the FBI. But charging documents reveal that the FBI has relied on a variety of other technologies as well, including license plate readers, police body cameras and cellphone tracking. Civil rights watchdogs like the ACLU are concerned that the same technologies used to surveil the rioters could be used against protesters exercising their First Amendment rights. The Capitol riot was an exceptional event, marking the first time in centuries that insurrectionists breached the center of the US federal government.


Manufacturing becomes more inclusive as AI enables hiring of workers with disabilities - Microsoft in Business Blogs

#artificialintelligence

Azure's machine learning capabilities enhanced Clover's company-wide digital transformation initiative and gave the company data that uncovered major inefficiencies in its inventory, transportation and distribution operations. At the company's distribution centers, for example, Azure's machine learning and data analysis capabilities help minimize order assembly time by determining which products to stock in each warehouse, in what quantities, and where best to place them within the warehouse. Thanks to the technology, processing teams nearly tripled their efficiency.