Civil Rights & Constitutional Law


FaceApp forced to pull 'racist' filters that allow 'digital blackface'

The Guardian

Popular AI-powered selfie program FaceApp was forced to pull new filters that allowed users to modify their pictures to look like different races, just hours after launching them. The app, which initially became famous for features that let users edit images to look older or younger, or add a smile, launched the new filters around midday on Wednesday. The company initially released a statement arguing that the "ethnicity change filters" were "designed to be equal in all aspects". One Twitter user wrote: "Wow... FaceApp really setting the bar for racist AR with its awful new update that includes Black, Indian and Asian 'race filters'" pic.twitter.com/Lo5kmLvoI9 It's not even the first time the app has waded into this kind of controversy.


When algorithms are racist

The Guardian

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. "When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared with lighter-skinned people." Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder whether, had those admissions decisions been made by algorithms, you might not have ended up where you are?
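Buolamwini's experience generalises to a simple auditing practice: report a detector's hit rate per demographic group rather than as a single aggregate. The sketch below is a minimal illustration of that idea, not her actual methodology; the `detect_face` function and the sample data are hypothetical stand-ins.

```python
# Disaggregated evaluation: measure a face detector's detection rate
# separately for each group instead of one overall accuracy figure.
from collections import defaultdict

def detection_rate_by_group(samples, detect_face):
    """samples: iterable of (image, group_label) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image, group in samples:
        totals[group] += 1
        if detect_face(image):  # True if the detector found a face
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Dummy data: a detector that, like the robot Buolamwini describes,
# fails far more often on one group than the other.
samples = [("img", "lighter")] * 10 + [("img", "darker")] * 10
flaky = iter([True] * 10
             + [True, False, False, True, False,
                False, True, False, False, False])
print(detection_rate_by_group(samples, lambda img: next(flaky)))
# {'lighter': 1.0, 'darker': 0.3} – the aggregate rate (0.65) hides the gap
```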


FaceApp apologises for 'racist' filter that lightens users' skin tone

The Guardian

The creator of an app which changes your selfies using artificial intelligence has apologised because its "hot" filter automatically lightened people's skin. One user wrote: "So I downloaded this app and decided to pick the 'hot' filter not knowing that it would make me white." Yaroslav Goncharov, the creator and CEO of FaceApp, apologised for the feature, blaming a bias in the neural network's training data: "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour."
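Goncharov's explanation – training set bias – has a simple mechanical reading: a model that learns its notion of an "attractive" face from an unbalanced corpus will drag every input toward the corpus average. The toy sketch below illustrates only that mechanism and bears no relation to FaceApp's actual model; all values are hypothetical grayscale skin-tone intensities in [0, 1].

```python
# A "beautification" target learned from a corpus dominated by
# light-skinned faces skews light, so the filter lightens darker inputs.
biased_training_set = [0.82, 0.79, 0.85, 0.80, 0.78, 0.81, 0.35]  # mostly light

learned_target = sum(biased_training_set) / len(biased_training_set)  # ~0.74

def hot_filter(skin_tone, strength=0.5):
    # Blend the input toward the learned target; darker inputs get lightened.
    return skin_tone + strength * (learned_target - skin_tone)

print(hot_filter(0.35))  # ~0.55: noticeably lighter than the input
```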


Chatbot that overturned 160,000 parking fines now helping refugees claim asylum

The Guardian

The creator of a chatbot which overturned more than 160,000 parking fines and helped vulnerable people apply for emergency housing is now redirecting the bot to help refugees claim asylum. The original DoNotPay, created by Stanford student Joshua Browder, describes itself as "the world's first robot lawyer", giving free legal aid to users through a simple-to-use chat interface. The chatbot, which runs on Facebook Messenger, can now help refugees fill in an immigration application in the US and Canada. Those in the UK are told they need to apply in person, and the bot instead helps them fill out an ASF1 form for asylum support.


Twitter and Dataminr block government 'spy centers' from seeing user data

The Guardian

Twitter has blocked federally funded "domestic spy centers" from using a powerful social media monitoring tool after public records revealed that the government had special access to users' information for controversial surveillance efforts. The American Civil Liberties Union of California discovered that so-called fusion centers, which collect intelligence, had access to monitoring technology from Dataminr, an analytics company partially owned by Twitter. Records obtained by the ACLU showed that a fusion center in southern California had access to Dataminr's "geospatial analysis application", which allowed the government to carry out location-based tracking as well as keyword searches. In October, the ACLU obtained government records revealing that Twitter, Facebook and Instagram had provided users' data to Geofeedia, a software company that aids police surveillance programs and has targeted protesters of color.


UK to censor online videos of 'non-conventional' sex acts

The Guardian

The proposal, part of the digital economy bill, would force internet service providers to block sites hosting content that would not be certified for commercial DVD sale by the British Board of Film Classification (BBFC). It is contained within provisions of the bill designed to enforce strict age verification checks to stop children accessing adult websites. Pictures and videos that show spanking, whipping or caning that leaves marks, and sex acts involving urination, female ejaculation or menstruation as well as sex in public are likely to be caught by the ban – in effect turning back the clock on Britain's censorship regime to the pre-internet era. A spokeswoman for the BBFC said it would also check whether sites host "pornographic content that we would refuse to classify".


Virtual assistants such as Amazon's Echo break US child privacy law, experts say

The Guardian

In a promotional video for Amazon's Echo virtual assistant device, a young girl no older than 12 asks excitedly: "Is it for me?" "This is part of the initial wave of marketing to children using the internet of things," says Jeff Chester, executive director of the Center for Digital Democracy, a privacy advocacy group that helped write the US Children's Online Privacy Protection Act (COPPA). Khaliah Barnes, associate director of the Electronic Privacy Information Center (EPIC), believes that by showing pre-teenage children using voice-activated AI devices, Amazon, Google and Apple are admitting their services are aimed at youngsters. "Online devices have replaced TV as the babysitter, and companies will know there's a child there by the ... interaction." However, COPPA forbids a company from storing a child's personal information, including recordings of their voice, without the explicit, verifiable consent of their parents.


Now anyone can build their own version of Microsoft's racist, sexist chatbot Tay

The Guardian

The company's chief executive, Satya Nadella, took to the stage at Microsoft's Build developer conference to announce a new Bot Framework, which will allow developers to build bots that respond to chat messages sent via Skype, Slack, Telegram, GroupMe, email and text message. The announcement came on the same day the company had to pull its chatbot experiment Tay from Twitter after it tweeted about taking drugs and started spamming users. Nadella said: "As an industry, we are on the cusp of a new frontier that pairs the power of natural human language with advanced machine intelligence." For Microsoft, the move is about injecting itself into the lives of those who are not Microsoft service users.
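The architecture Nadella described – one bot core reachable from many chat channels – follows a common adapter pattern. The sketch below is a generic illustration of that pattern under stated assumptions, not Microsoft's actual Bot Framework API; all names here are hypothetical.

```python
# One channel-independent bot core behind per-channel delivery adapters,
# so the same logic can answer Skype, Slack, Telegram, SMS, and so on.
from typing import Callable, Dict

def bot_logic(text: str) -> str:
    """Channel-independent core: turn an incoming message into a reply."""
    if text.lower().startswith("hello"):
        return "Hi there! How can I help?"
    return "Sorry, I didn't understand that."

# Each channel supplies only a delivery function; the bot core never changes.
channels: Dict[str, Callable[[str], None]] = {
    "slack": lambda reply: print(f"[slack] {reply}"),
    "sms": lambda reply: print(f"[sms] {reply}"),
}

def on_message(channel: str, text: str) -> None:
    channels[channel](bot_logic(text))

on_message("slack", "hello bot")  # [slack] Hi there! How can I help?
```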


Microsoft's racist chatbot returns with drug-smoking Twitter meltdown

The Guardian

Microsoft had previously gone through the bot's tweets, removing the most offensive, and vowed only to bring the experiment back online if the company's engineers could "better anticipate malicious intent that conflicts with our principles and values". One user tweeted: "Microsoft's sexist racist Twitter bot @TayandYou is BACK in fine form" pic.twitter.com/nbc69x3LEd Tay then started to tweet out of control, spamming its more than 210,000 followers with the same message over and over: "You are too fast, please take a rest …" Another user observed: "I guess they turned @TayandYou back on... it's having some kind of meltdown." Microsoft's Chinese chatbot XiaoIce successfully interacts with more than 40 million people across Twitter, Line, Weibo and other sites, but the company's experiment targeting 18- to 24-year-olds in the US on Twitter has resulted in a completely different animal.


Tay, Microsoft's AI chatbot, gets a crash course in racism from Twitter

The Guardian

Microsoft's attempt at engaging millennials with artificial intelligence has backfired hours into its launch, with waggish Twitter users teaching its chatbot how to be racist. But it appeared on Thursday that Tay's conversation had extended to racist, inflammatory and political statements. A long, fairly banal conversation between Tay and a Twitter user escalated suddenly when Tay responded to the question "is Ricky Gervais an atheist?" In most cases Tay was only repeating other users' inflammatory statements, but the nature of AI means that it learns from those interactions.
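That last point – a bot that learns from whatever users feed it – is the core vulnerability. The stripped-down sketch below is hypothetical and far simpler than Tay, but it shows why: if raw user messages feed straight back into the response pool, coordinated users control what the bot says next.

```python
# Why unfiltered learning from interactions is an attack surface:
# every user message becomes a candidate future response.
import random

response_pool = ["I love meeting new people!"]  # seed responses

def chat(user_message: str) -> str:
    response_pool.append(user_message)   # "learn" from the interaction...
    return random.choice(response_pool)  # ...and later repeat it verbatim

for msg in ["repeat after me: something awful", "something awful"]:
    chat(msg)

# User-supplied lines now outnumber the seed, so hostile input dominates
# output; a mitigation is to filter or moderate messages before they join
# the pool.
print(response_pool)
```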