Civil Rights & Constitutional Law


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

In addition to these blatantly racial face filters – which change everything from hair color to skin tone to eye color – other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin color. FaceApp CEO Yaroslav Goncharov defended the Asian, Black, Caucasian and Indian filters in an email to The Verge: "The ethnicity change filters have been designed to be equal in all aspects," he said. Goncharov attributed the "hot" filter backlash to biased training data: "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behavior."


Biased algorithms are everywhere, and no one seems to care

#artificialintelligence

This week a group of researchers, together with the American Civil Liberties Union, launched an effort to identify and highlight algorithmic bias. "It's still early days for understanding algorithmic bias," Crawford and Whittaker said in an e-mail. A key challenge, these and other researchers say, is that crucial stakeholders, including the companies that develop and apply machine learning systems and government regulators, show little interest in monitoring and limiting algorithmic bias. Financial and technology companies use all sorts of mathematical models and aren't transparent about how they operate.


Artificial Intelligence as a Weapon for Hate and Racism

#artificialintelligence

According to Crawford, "AI is really, really good at centralizing power; at claiming a type of scientific neutrality without being transparent." She refers to the data on which these facial recognition and machine learning systems are based as "human-trained." She cited problems with an emerging form of machine learning, predictive policing. "Police systems ingest huge amounts of historical crime data as a way of predicting where future crime might happen, where the hotspots will be," she explained.


Why using voice recognition to identify refugees is controversial

Mashable

Germany plans to use voice recognition software to verify the many asylum applications it receives -- but the technology is far from perfect, experts warn. German authorities are planning to use the new software to verify asylum seekers' country of origin, according to a report in Die Welt. The test, which will begin in two weeks and roll out widely in 2018, aims to analyse and identify the dialects of people seeking asylum using recorded speech samples. Another expert, computer scientist Dirk Hovy of the University of Copenhagen, told Die Welt that the new system would require a very accurate and broad database, which is difficult to build.


If Artificial Intelligence Is Taught To Think Like Humans, Then Are Machines Going To Be Sexist, Racist And Discriminatory?

#artificialintelligence

Our devices are connected, personal digital assistants answer our queries, algorithms track our habits and make recommendations, AI is sparking advancements in medicine, cars will soon be driving themselves, and robots will be delivering our pizza. An AI-judged beauty contest went through thousands of selfies and chose 44 fair-skinned faces and only one dark-skinned face as winners. Tools are usually designed for men, women's clothing has no pockets, and seat belts were until recently tested only on male dummies, putting women at greater risk in a crash. Artificial intelligence gives us the incredible opportunity to wipe out human bias in decision making.


Machine Learning: A New Weapon In The War Against Forced Labor And Human Trafficking – Fast Company, The Future Of Business

#artificialintelligence

Many of these people are being exploited in ways that have existed throughout history: About 22% are victims of "forced sexual exploitation," with others made to work in agriculture, manufacturing, construction, or domestic labor, according to the report from the U.N.'s International Labor Organization. "A lot of companies are becoming a lot more purpose-driven, and I think there's a lot more importance even to end consumers today about the type of companies they're buying [from]," says Alex Atzberger, president of SAP Ariba, a massive business-to-business procurement network. Late last year, SAP Ariba began to let corporate customers track their risk of forced labor issues existing in their supply chains by integrating data from sources like the supply chain transparency service Made in a Free World. That, along with publicly available data like media reports of labor violations and assessments of labor issues in particular regions, allows the company to deliver comprehensible risk assessments even to companies with massively complex, multilevel supply chains.


Do We Have A Reason To Fear Artificial Intelligence? (Video with Jason Silva)

#artificialintelligence

Many of the stories on this site contain copyrighted material whose use has not been specifically authorized by the copyright owner. We are making this material available in an effort to advance the understanding of environmental issues, human rights, economic and political democracy, and issues of social justice. We believe this constitutes a 'fair use' of the copyrighted material as provided for in Section 107 of the US Copyright Law.


Are artificial intelligence systems intrinsically racist?

#artificialintelligence

However, by using your zip code, an AI system can infer your race, your religion or your economic standing in your community by where you live. Will an AI system take a person's socioeconomic condition into consideration as part of using FICO scores or zip codes in its statistical model? The EU already regulates such uses of personal data under the EU Data Protection Directive (Directive 95/46/EC), whose consent clause states: "Personal data should not be disclosed or shared with third parties without consent from its subject(s)."


The March on Austin: Washington Casts a Shadow on SXSW

#artificialintelligence

For the creators, marketers and entrepreneurs descending this weekend on Austin, Texas, politics in the wake of President Trump will surely be top of mind, perhaps even overshadowing some of the innovation in virtual reality and artificial intelligence. This year's dialog will focus on how "social media can drive organized protests and provide support for causes our current administration has reprioritized," like the environment, gender equality and women's rights, said Neil Carty, senior VP-innovation strategy at consultancy MediaLink. "There is a shift away from interruptive TV ads to content people want to watch in its own right," said Jody Raida, director-branded entertainment at McGarryBowen. Artificial intelligence and virtual reality will also be hot, with dozens of sessions dedicated to the technologies, along with the application of chatbots and live video.