Civil Rights & Constitutional Law


Boston Bans Use Of Face Recognition Technology

#artificialintelligence

The ban comes after civil liberties groups highlighted faults in facial recognition algorithms, citing NIST findings that most facial recognition software is more likely to misidentify people of colour than white people. The Boston ban follows a similar ban imposed by San Francisco last year. It prevents any city employee from using facial recognition, or from asking a third party to use the technology on the city's behalf. Boston's police department said it had not used the technology because of what it called reliability fears, although the best systems have proved reasonably accurate in average working conditions. Critics also opposed the technology on the grounds that it might discourage citizens from exercising their right to protest.


Artificial Intelligence in Hiring is Subject to Bias and Discrimination

#artificialintelligence

In 1963, Martin Luther King gave his "I have a dream" speech, words that reflected the thoughts and attitudes of civil rights activists at the time and lit a torch that lives on in the hearts and minds of those who fight for civil liberties and equality in the Western Hemisphere. While the world has advanced since Dr. King spoke those words, it is hard to deny that discrimination still rears its ugly head in modern society. Racial discrimination in the workplace is illegal in most of America and Europe, and yet in the US alone, statistics show that hiring outcomes for Black and Hispanic applicants have not improved in the last 25 years. In theory, AI-assisted hiring rests on an underlying model that makes unbiased decisions as long as the data itself isn't biased.
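
That caveat about the data is the whole problem, and a toy experiment makes it concrete. Below is a minimal sketch (entirely synthetic data, hypothetical feature names, scikit-learn assumed available), not any real hiring system: a model trained on historically biased hiring outcomes reproduces that bias rather than correcting it.

```python
# Minimal sketch (synthetic data, hypothetical features) of how a hiring
# model trained on biased historical outcomes reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: one protected attribute (0/1) and one skill score.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)

# Biased historical labels: past recruiters favoured group 0 even at equal
# skill, so the "ground truth" itself encodes discrimination.
hired = (skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n) > 0.5).astype(int)

# Train on features that include the protected attribute.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model's selection rate differs by group: the data's bias survives.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: selection rate {preds[group == g].mean():.2f}")
```

Note that simply dropping the protected attribute from the feature matrix would not fix this if other features act as proxies for it; the bias lives in the labels, not only in the columns.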


How AI can empower communities and strengthen democracy

#artificialintelligence

Each Fourth of July for the past five years, I've written about AI with the potential to positively impact democratic societies. I return to this question in the hope of shining a light on technology that can strengthen communities, protect privacy and freedoms, or otherwise support the public good. This series is grounded in the principle that artificial intelligence is capable not just of value extraction, but of individual and societal empowerment. While AI solutions often propagate bias, they can also be used to detect that bias. As Dr. Safiya Noble has pointed out, artificial intelligence is one of the critical human rights issues of our lifetimes.
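
As one concrete example of what "detecting bias" can mean in practice, here is a minimal sketch (hypothetical arrays, numpy assumed) of an audit that flags disparate selection rates across groups in a model's decisions:

```python
# Minimal sketch of a bias audit: compute per-group selection rates,
# the demographic-parity gap, and the min/max ratio of rates.
import numpy as np

def disparity(decisions: np.ndarray, group: np.ndarray):
    """Selection rate per group, the gap between them, and their ratio."""
    rates = {g: decisions[group == g].mean() for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    ratio = min(rates.values()) / max(rates.values())  # < 0.8 flags concern
    return rates, gap, ratio

# Hypothetical model decisions (1 = selected) and group membership.
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparity(decisions, group))
```

The 0.8 threshold in the comment echoes the EEOC's informal "four-fifths rule" for adverse impact; real audits would go further, but even this simple check turns a vague worry about bias into a measurable quantity.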


The Impact of Artificial Intelligence on Human Rights

#artificialintelligence

Adopting AI can affect not just your workers but also how you deal with privacy and discrimination issues. As humans become more reliant on machines to make processes more efficient and to inform their decisions, the potential for conflict between artificial intelligence and human rights has emerged. If left unchecked, artificial intelligence can create inequality and can even be used to actively deny human rights across the globe. Used well, however, AI can enhance human rights, increase shared prosperity, and create a better future for us all. It is ultimately up to businesses to carefully consider the opportunities new technologies provide, and how they can best leverage those opportunities while remaining conscious of the impact on human rights.


'Face Recognition Risks Chilling Our Ability to Participate in Free Speech'

#artificialintelligence

Janine Jackson interviewed the Center on Privacy and Technology's Clare Garvie about facial recognition rules for the June 26, 2020, episode of CounterSpin. This is a lightly edited transcript. Janine Jackson: Robert Williams, an African-American man in Detroit, was falsely arrested after an algorithm declared his face a match with security footage of a watch store robbery. Boston City Council voted this week to ban the city's use of facial recognition technology, partly as an effort to move resources from law enforcement to the community, but also out of concern about dangerous mistakes like the one in Williams' case, along with questions about what the technology means for privacy and free speech. As more and more people go out in the streets and protest, what should we know about this powerful tool, and the rules--or lack thereof--governing its use?


Security firm Ring works with US police with 'deadly histories'

Daily Mail - Science & tech

Amazon may have banned police from using its facial recognition technology, but a new report shows the tech giant is providing thousands of departments with video and audio footage from Ring. The Electronic Frontier Foundation, a nonprofit that defends civil liberties, found that over 1,400 agencies are working with the Amazon-owned company, and that hundreds of them have 'deadly histories'. Data from sources reveal that half of the agencies had at least one fatal encounter in the last five years and that, altogether, they are responsible for a third of fatal encounters nationwide. These departments are also involved in the deaths of Breonna Taylor, Alton Sterling, Botham Jean, Antonio Valenzuela, Michael Ramos and Sean Monterrosa.


MIT pulls massive AI dataset over racist, misogynistic content

FOX News

The Massachusetts Institute of Technology has permanently taken down its 80 Million Tiny Images dataset--a popular image database used to train machine learning systems to identify people and objects in an environment--because it used a range of racist, misogynistic, and other offensive terms to label photos. In a letter published Monday on MIT's CSAIL website, the three creators of the huge dataset, Antonio Torralba, Rob Fergus, and Bill Freeman, apologized and said they had decided to take the dataset offline. "It has been brought to our attention that the Tiny Images dataset contains some derogatory terms as categories and offensive images. This was a consequence of the automated data collection procedure that relied on nouns from WordNet. We are greatly concerned by this and apologize to those who may have been affected," they wrote in the letter.
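
The failure mode the authors describe is easy to reproduce in outline. Below is a minimal sketch (assumes the nltk package and its WordNet corpus are installed; the blocklist is a hypothetical placeholder, not MIT's actual pipeline) of a collection process that enumerates WordNet nouns as image-search categories, plus the screening step the original procedure lacked:

```python
# Minimal sketch of the kind of pipeline the letter describes: treating
# every WordNet noun as an image-search category. Without human review,
# slurs present in WordNet become dataset labels.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

# Enumerate noun lemmas exactly as an automated collector might.
nouns = {lemma.name() for syn in wn.all_synsets("n") for lemma in syn.lemmas()}
print(f"{len(nouns)} candidate category labels")

# A screening step before using terms as queries: drop anything on a
# blocklist (a hypothetical placeholder here; a real list would be curated).
BLOCKLIST = {"example_slur"}
safe = nouns - BLOCKLIST
print(f"{len(safe)} labels after screening")
```

The sketch makes the scale of the problem visible: WordNet contributes tens of thousands of noun lemmas, far too many to vet by scrolling, which is why fully automated collection let derogatory categories slip through.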


MIT pulls 'racist and misogynistic' dataset offline

Daily Mail - Science & tech

MIT has had to take offline a giant dataset that taught AI systems to assign 'racist and misogynistic labels' to people in images. The database, known as '80 Million Tiny Images', is a massive collection of photos with descriptive labels, used to teach machine learning models to identify images. But the system, developed at the US university, labelled women as 'whores' and 'bitches' and used other abhorrent terms against ethnic minorities. It also contained close-up pictures of female genitalia labelled with the C-word, and other images with the labels 'rape suspect' and 'molester'. Images labelled with the slur 'whore' ranged from a woman in a bikini to a photo of 'a mother holding her baby with Santa', tech website The Register reported.


MIT removes huge dataset that teaches AI systems to use racist, misogynistic slurs

#artificialintelligence

As The Register's Katyanna Quach wrote: "Thanks to MIT's cavalier approach when assembling its training set, though, these systems may also label women as whores or bitches, and Black and Asian people with derogatory language. The database also contained close-up pictures of female genitalia labeled with the C-word."