Civil Rights & Constitutional Law


The quiet and creeping normalisation of facial recognition technology

#artificialintelligence

At face value it's remarkably convenient – and really, really cool. If you live in Bournemouth and fancy a night out, you no longer have to worry about squeezing your passport in and out of your pocket just to get through the door of a club, pub, or bar. Instead of relying on traditional forms of ID to verify your age, you can now use Yoti – an app that uses facial recognition to prove that you are you.


The Google Arts and Culture app has a race problem

Mashable

The Google Arts and Culture app (available on iOS and Android) has been around for two years, but this weekend, it shot to the top of both major app stores because of a small, quietly added update.


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

About a week ago, Stanford University researchers posted a study online about the latest dystopian AI: they'd made a machine learning algorithm that essentially works as gaydar. After training the algorithm with tens of thousands of photographs from a dating site, the algorithm could, for example, guess if a white man in a photograph was gay with 81 percent accuracy. They wanted to protect gay people. "[Our] findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.
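For context on the technique being reported, the setup is a standard supervised pipeline: a simple classifier trained on features extracted from face photographs. The sketch below is a generic, hypothetical illustration of that kind of pipeline, not the study's code; the file names, embedding source, and labels are placeholders.

```python
# Generic sketch of the kind of pipeline the study describes: a linear
# classifier over precomputed face embeddings. All inputs are hypothetical
# placeholders, not the study's data or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: one fixed-length embedding per photo (e.g. from a
# pretrained face-recognition network) and one binary label per photo.
embeddings = np.load("face_embeddings.npy")  # shape: (n_photos, n_features)
labels = np.load("labels.npy")               # shape: (n_photos,)

clf = LogisticRegression(max_iter=1000)
# Cross-validated scoring is where headline figures like "81 percent" come from.
scores = cross_val_score(clf, embeddings, labels, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")
```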


AI Research Is in Desperate Need of an Ethical Watchdog

WIRED

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
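As a rough illustration of how name-based classifiers like this are commonly built (this is not the Stony Brook group's actual code, and the tiny training set below is a hypothetical placeholder), a character-n-gram model in scikit-learn looks like this:

```python
# Generic sketch of name-to-nationality classification via character n-grams.
# The four training examples are hypothetical placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

names = ["Hiroshi Tanaka", "Marie Dubois", "Sean O'Connor", "Priya Sharma"]
nationalities = ["Japanese", "French", "Irish", "Indian"]

model = make_pipeline(
    # Character n-grams capture the sub-word spelling patterns names carry.
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(names, nationalities)
print(model.predict(["Keiko Yamamoto"]))  # a held-out, hypothetical name
```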


FaceApp removes 'Ethnicity Filters' after racism storm

Daily Mail

A viral app that added Asian, Black, Caucasian and Indian filters to people's selfies has removed them after being accused of racism. The update, which launched yesterday, was met with backlash - with many people criticising it for propagating racial stereotypes. The filters drew comparisons with 'blackface' and 'yellowface' - when white people wear make-up to appear to be from a different ethnic group. The app uses Artificial Intelligence to transform faces.


'Racist' FaceApp beautifying filter lightens skin tone

Daily Mail

The makers of FaceApp have apologised after users criticised it for being racist. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to alter their photos to look old or 'beautify' themselves. But users have complained after finding that one beautifying option, labelled 'hot', lightens their skin tone. One Twitter user, kung fu khary, wrote: 'So this app is apparently racist as hell.' The app appeared to make his skin lighter when using the 'hot' filter. The app uses Artificial Intelligence to transform faces.


People are incensed that an elitist dating app is promoting itself with racist slurs

Mashable

An elitist, racist dating app is making waves in Singapore -- and its founder is defending it vehemently. Herbert Eng is calling his app HighBlood. It promises to filter people based on "accountant-verified information" covering income, profession, and university education. A week ago, it made a Facebook post advertising itself. In the text, it says the app promises "quality", and specifies that it will exclude "banglas", "maids", and "uglies."


Microsoft is Soon Releasing Another Artificial Intelligence Powered Chatbot

#artificialintelligence

Earlier this year, Microsoft launched an AI-powered chatbot called 'Tay', but it soon caused controversy with its racist and unpleasant comments, leaving the company with no choice but to pull it offline. The new bot is expected to come to Twitter, Facebook Messenger and Snapchat once it's officially announced. "Zo is essentially a censored Tay or an English variant of Microsoft's Chinese chatbot Xiaoice," MSPoweruser reported. 'Zo' does not discuss political topics with users and instead says, "People can say some awful things when talking politics so I do not discuss."


Microsoft unveils a new (and hopefully not racist) chat bot

#artificialintelligence

Tay gave chat bots a bad name, but Microsoft's new version has grown up. Microsoft unveiled a new chat bot in the U.S. on Tuesday, saying it's learned from the Tay experiment earlier this year. Zo is now available on messaging app Kik and on the website Zo.ai. Tay was meant to be a cheeky young person you could talk to on Twitter. Users tried -- successfully -- to get the bot to say racist and inappropriate things.


Now anyone can build their own version of Microsoft's racist, sexist chatbot Tay

The Guardian

Microsoft has released open source tools for people to build their own chatbots, as it set out its view of the immediate future of artificial intelligence as conversational aids similar to its backfiring Tay experiment. The company's chief executive Satya Nadella took to the stage at Microsoft's Build developer conference to announce a new Bot Framework, which will allow developers to build bots that respond to chat messages sent via Skype, Slack, Telegram, GroupMe, emails and text messages. "Bots are the new apps," Nadella said. The announcement came on the same day that the company had had to pull its chatbot experiment Tay from Twitter after it tweeted about taking drugs and started spamming users. It had only been active again for a few hours after previously being deactivated for making racist and sexist comments and denying that the Holocaust happened.
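To give a sense of what the framework offers developers, here is a minimal echo bot sketched against a later release of the Bot Framework SDK for Python (botbuilder v4, which postdates the Build announcement); the credentials and port are placeholders, and routing to channels such as Skype or Slack is handled by the framework's connector service:

```python
# Minimal echo bot sketched with the Bot Framework SDK for Python
# (botbuilder v4). App credentials and the port are placeholders.
from aiohttp import web
from botbuilder.core import (
    ActivityHandler,
    BotFrameworkAdapter,
    BotFrameworkAdapterSettings,
    TurnContext,
)
from botbuilder.schema import Activity


class EchoBot(ActivityHandler):
    async def on_message_activity(self, turn_context: TurnContext):
        # Echo every incoming message, whichever channel it arrived from.
        await turn_context.send_activity(f"You said: {turn_context.activity.text}")


adapter = BotFrameworkAdapter(BotFrameworkAdapterSettings("", ""))  # placeholder credentials
bot = EchoBot()


async def messages(req: web.Request) -> web.Response:
    # The connector service POSTs one Activity per user message to this endpoint.
    body = await req.json()
    activity = Activity().deserialize(body)
    auth_header = req.headers.get("Authorization", "")
    await adapter.process_activity(activity, auth_header, bot.on_turn)
    return web.Response(status=201)


app = web.Application()
app.router.add_post("/api/messages", messages)
web.run_app(app, port=3978)
```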