Civil Rights & Constitutional Law


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."
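
The app described above is, at its core, a text classifier over character patterns in names. Below is a minimal sketch of that kind of model, not the researchers' actual system; the toy names, labels, and scikit-learn pipeline are all assumptions for illustration.

```python
# Minimal sketch of a name -> ethnicity/nationality classifier of the kind
# described above. NOT the researchers' implementation; it assumes a
# hypothetical labeled list of (name, label) pairs.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy data; a real system would train on a very large labeled list of names.
names = ["Yaroslav Goncharov", "Michal Kosinski", "Wei Wang", "Arjun Patel",
         "Maria Garcia", "John Smith", "Aisha Rahman", "Kenji Tanaka"]
labels = ["Slavic", "Slavic", "East Asian", "South Asian",
          "Hispanic", "Anglo", "South Asian", "East Asian"]

X_train, X_test, y_train, y_test = train_test_split(
    names, labels, test_size=0.25, random_state=0)

# Character n-grams capture sub-word cues (suffixes such as "-ov" or "-ez").
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

print(model.predict(["Olga Ivanova"]))  # predicted label for an unseen name
print(model.score(X_test, y_test))      # held-out accuracy (meaningless on toy data)
```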


FaceApp removes 'Ethnicity Filters' after racism storm

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year people accused the popular photo editing app Meitu of being racist, giving users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


FaceApp forced to pull 'racist' filters that allow 'digital blackface'

The Guardian

Popular AI-powered selfie program FaceApp was forced to pull new filters that allowed users to modify their pictures to look like different races, just hours after they launched. The app, which initially became famous for its features that let users edit images to look older or younger, or add a smile, launched the new filters around midday on Wednesday. The company initially released a statement arguing that the "ethnicity change filters" were "designed to be equal in all aspects". One Twitter user wrote: "Wow... FaceApp really setting the bar for racist AR with its awful new update that includes Black, Indian and Asian 'race filters'" pic.twitter.com/Lo5kmLvoI9. It's not even the first time the app has waded into this storm.


'Racist' FaceApp beautifying filter lightens skin tone

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year people accused the popular photo editing app Meitu of being racist, giving users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


FaceApp apologises for 'racist' filter that lightens users' skintone

The Guardian

The creator of an app which changes your selfies using artificial intelligence has apologised because its "hot" filter automatically lightened people's skin. One user tweeted: "So I downloaded this app and decided to pick the 'hot' filter not knowing that it would make me white." Yaroslav Goncharov, the creator and CEO of FaceApp, apologised for the feature, which he said was a side-effect of the "neural network": "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour."
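
Goncharov's explanation, that the lightening was "caused by the training set bias", points to a general failure mode: a model trained on an unrepresentative sample reproduces that sample's skew in its output. The toy sketch below illustrates the mechanism with a single made-up "skin tone" feature and a simple linear fit; it is not FaceApp's neural network, and every number in it is an assumption.

```python
# Toy illustration of training-set bias (not FaceApp's model): if most training
# targets for the "hot" transformation share one attribute (here, a light skin
# tone encoded as a single feature), the learned map pushes every input toward
# that attribute.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "skin tone" feature: 0.0 = dark, 1.0 = light.
# Assume the curated set of "hot" target faces is dominated by light tones.
train_inputs = rng.uniform(0.0, 1.0, size=1000)              # faces of all tones
train_targets = rng.normal(loc=0.85, scale=0.05, size=1000)  # biased targets

# Fit the simplest possible "enhancement" model: a linear map input -> target.
slope, intercept = np.polyfit(train_inputs, train_targets, deg=1)

def enhance(tone: float) -> float:
    """Apply the learned 'hot' transformation to a skin-tone value."""
    return slope * tone + intercept

print(enhance(0.2))  # a darker input is mapped toward ~0.85, i.e. lightened
print(enhance(0.9))  # a lighter input stays near the biased target
```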


People call FaceApp racist after it lightens their skin tone

Mashable

The makers of FaceApp are backtracking after users accused the popular face-morphing app of racism. Troublingly, one of its filter options, labeled as "hot," appears to lighten users' skin tone. One user tweeted: "Since I did my face reveal, I also did the face app thing, tell me what you think, also 'hot' me is just me with lighter skin, so racist :P" pic.twitter.com/l2rrVobWyW. Mashable reached out to FaceApp founder and CEO Yaroslav Goncharov about the criticism, and he was quick to apologize. This is not the first time that an app has been accused of racism for altering users' appearance.


People are incensed that an elitist dating app is promoting itself with racist slurs

Mashable

An elitist, racist dating app is making waves in Singapore -- and its founder is defending it vehemently. A week ago, it made a Facebook post advertising itself using the term "banglas", a racist term for the Bangladeshi migrant workers in Singapore. In an earlier Medium post he made in December, founder Eng said his app would allow filtering by "prestigious schools."


Microsoft is Soon Releasing Another Artificial Intelligence Powered Chatbot

#artificialintelligence

Earlier this year, Microsoft launched an AI-powered chatbot called 'Tay', but it soon caused controversy with its racist and unpleasant comments, leaving the company with no choice but to pull it offline. New reports from Gadgets Now show that the Redmond-based software firm is reportedly releasing another artificial intelligence powered chatbot, dubbed Zo, on the social messaging app 'Kik'. The bot is believed to be coming to Twitter, Facebook Messenger and Snapchat once it's officially announced. "Zo is essentially a censored Tay or an English variant of Microsoft's Chinese chatbot Xiaoice," MSPoweruser reported. At launch, the chatbot gives a "super abbreviated personality test" in which it asks if the user would rather study in school or learn from experience.


Microsoft unveils a new (and hopefully not racist) chat bot

#artificialintelligence

Tay gave chat bots a bad name, but Microsoft's new version has grown up. Microsoft unveiled a new chat bot in the U.S. on Tuesday, saying it's learned from the Tay experiment earlier this year. Zo is now available on messaging app Kik and on the website Zo.ai. Tay was meant to be a cheeky young person you could talk to on Twitter. Users tried -- successfully -- to get the bot to say racist and inappropriate things.