Civil Rights & Constitutional Law


AI robots are sexist and racist, experts warn

#artificialintelligence

Professor Sharkey said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently, approximately 9 per cent of the engineering workforce in the UK is female, and women make up only 20 per cent of those taking A-level physics. "We have a problem," Professor Sharkey told Today. He said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.
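The Boston University experiment can be reproduced in miniature. Below is a minimal sketch using gensim and a pretrained word2vec model built from Google News text; the model name and the probe words are assumptions chosen for illustration, not necessarily the researchers' exact setup.

```python
# pip install gensim
import gensim.downloader as api

# Pretrained word2vec embeddings trained on Google News text
# (an assumed stand-in for the researchers' model; ~1.6 GB download).
model = api.load("word2vec-google-news-300")

# Probe the learned gender associations with an analogy query:
# "man is to computer_programmer as woman is to ?"
result = model.most_similar(
    positive=["woman", "computer_programmer"],
    negative=["man"],
    topn=3,
)
print(result)  # stereotyped completions such as 'homemaker' tend to rank highly
```

Queries like this make the bias concrete: the completions reflect stereotyped associations absorbed from the news text, not behaviour anyone programmed in.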


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

In addition to these blatantly racial face filters – which change everything from hair color to skin tone to eye color – other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin color. FaceApp CEO Yaroslav Goncharov defended the Asian, Black, Caucasian and Indian filters in an email to The Verge: "The ethnicity change filters have been designed to be equal in all aspects." He described the earlier "hot" filter controversy as "an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behavior."


International Conference on Artificial Intelligence and Information

#artificialintelligence

Submissions: We invite submissions for a 30-minute presentation (followed by a 10-minute discussion). An extended abstract of approximately 250-500 words should be prepared for blind review and include a cover page with full name, institution, contact information and a short bio. Files should be submitted in Word (.doc or .docx) format. Please give the subject line of your message the following structure: "First Name Last Name - Track - Title of Abstract". We intend to produce a collected volume based upon contributions to the conference.


FaceApp apologises for 'racist' filter that lightens users' skin tone

The Guardian

The creator of an app which changes your selfies using artificial intelligence has apologised because its "hot" filter automatically lightened people's skin. "So I downloaded this app and decided to pick the 'hot' filter, not knowing that it would make me white," one user wrote. Yaroslav Goncharov, the creator and CEO of FaceApp, apologised for the feature, which he attributed to the data behind the neural network: "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour."
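Goncharov's "training set bias" explanation can be made concrete with a toy model. The sketch below is an assumed, self-contained example (not FaceApp's actual network): a least-squares denoiser is trained on skin-tone values that are 90 per cent light, and because the optimal model pulls every input toward the training mean, the underrepresented dark tones come back lighter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "skin tone" values in [0, 1]; the training set is
# imbalanced: 90% light tones, 10% dark tones.
light = rng.normal(0.8, 0.05, 900)
dark = rng.normal(0.3, 0.05, 100)
x = np.clip(np.concatenate([light, dark]), 0, 1)

# Train a denoiser: observe x + noise, recover x with an affine model.
obs = x + rng.normal(0, 0.15, x.shape)
a, b = np.polyfit(obs, x, 1)  # least-squares fit: prediction = a*obs + b

# The learned model pulls every input toward the light-skewed mean:
for tone in (0.3, 0.8):
    print(f"input {tone:.1f} -> output {a * tone + b:.2f}")
# The dark tone (0.3) comes back noticeably lighter (~0.51), while the
# well-represented light tone (0.8) barely moves: training set bias.
```

No rule in the model says "lighten skin"; the skew emerges from fitting an imbalanced dataset, which is exactly the defence Goncharov offered.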


What privacy pros can take away from Uber's Greyball

#artificialintelligence

Uber combined data collected from its app with "other techniques" to locate, identify, and circumvent legal authorities. Through several means, the company surveilled government officials to avoid regulatory scrutiny and other law enforcement activity. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the account with a small piece of code that read "Greyball" followed by a string of numbers. Regulatory officials and law-enforcement officers are people with privacy rights, too.
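The tagging step described in the reporting is simple enough to sketch. Everything below is hypothetical (the function name, tag format, and data structure are assumptions, not Uber's actual code); it only illustrates what "Greyball followed by a string of numbers" could look like as a per-account flag.

```python
import secrets

def greyball_tag() -> str:
    """Return a hypothetical tag of the reported form: 'Greyball' + digits."""
    return f"Greyball{secrets.randbelow(10**8):08d}"

# Accounts suspected of belonging to law enforcement get tagged; the app
# could then check for the tag and serve those accounts an altered view.
flagged_accounts = {"rider_4521": greyball_tag()}
print(flagged_accounts)  # e.g. {'rider_4521': 'Greyball28719830'}
```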


It's Too Late – We've Already Taught AI to Be Racist and Sexist

#artificialintelligence

Miltenburg hasn't tested whether software trained on these image descriptions actually generates new, and biased, descriptions. Annotating images to teach machines, Miltenburg wrote, should be treated more like a psychological experiment and less like a rote data-collection task. By tightening the guidelines for crowdworkers, researchers would be able to better control what information deep learning software vacuums up in the first place. "One could certainly create annotation guidelines that explicitly instruct workers about gender or racial stereotypes," wrote Hockenmaier.