Civil Rights & Constitutional Law


AI Research Is in Desperate Need of an Ethical Watchdog

WIRED

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."


FaceApp removes 'Ethnicity Filters' after racism storm

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year, people accused the popular photo editing app Meitu of being racist and of giving users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


FaceApp forced to pull 'racist' filters that allow 'digital blackface'

The Guardian

Popular AI-powered selfie program FaceApp was forced to pull new filters that allowed users to modify their pictures to look like different races, just hours after launching them. The app, which initially became famous for features that let users edit images to look older or younger, or add a smile, launched the new filters around midday on Wednesday. The company initially released a statement arguing that the "ethnicity change filters" were "designed to be equal in all aspects". One user tweeted: "Wow... FaceApp really setting the bar for racist AR with its awful new update that includes Black, Indian and Asian 'race filters'" (pic.twitter.com/Lo5kmLvoI9). It's not even the first time the app has waded into this storm.


'Racist' FaceApp beautifying filter lightens skin tone

Daily Mail

When asked to make his picture 'hot', the app lightened his skin and changed the shape of his nose. The app's creators claim it will 'transform your face using Artificial Intelligence', allowing selfie-takers to transform their photos. Earlier this year, people accused the popular photo editing app Meitu of being racist and of giving users 'yellow face'. Twitter user Vaughan posted a picture of Kanye West with a filter applied, along with the caption: 'So Meitu's pretty racist'.


FaceApp apologises for 'racist' filter that lightens users' skintone

The Guardian

The creator of an app which changes your selfies using artificial intelligence has apologised because its "hot" filter automatically lightened people's skin. One user wrote: "So I downloaded this app and decided to pick the 'hot' filter not knowing that it would make me white." Yaroslav Goncharov, the creator and CEO of FaceApp, apologised for the feature, which he said was a side-effect of the "neural network": "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour."


People call FaceApp racist after it lightens their skin tone

Mashable

The makers of FaceApp are backtracking after users accused the popular face-morphing app of racism. Troublingly, one of its filter options, labeled as "hot," appears to lighten users' skin tone. One user tweeted: "Since I did my face reveal, I also did the face app thing, tell me what you think, also 'hot' me is just me with lighter skin, so racist :P" (pic.twitter.com/l2rrVobWyW). Mashable reached out to FaceApp founder and CEO Yaroslav Goncharov about the criticism, and he was quick to apologize. This is not the first time that an app has been accused of racism for altering users' appearance.


People are incensed that an elitist dating app is promoting itself with racist slurs

Mashable

An elitist, racist dating app is making waves in Singapore -- and its founder is defending it vehemently. A week ago, it made a Facebook post advertising itself using the term "banglas", a racist slur for Bangladeshi migrant workers in Singapore. In an earlier Medium post he made in December, Eng said his app would allow filtering by "prestigious schools."


Now anyone can build their own version of Microsoft's racist, sexist chatbot Tay

The Guardian

The company's chief executive Satya Nadella took to the stage at Microsoft's Build developer conference to announce a new Bot Framework, which will allow developers to build bots that respond to chat messages sent via Skype, Slack, Telegram, GroupMe, emails and text messages. The announcement came on the same day that the company had had to pull its chatbot experiment Tay from Twitter after it tweeted about taking drugs and started spamming users. Nadella said: "As an industry, we are on the cusp of a new frontier that pairs the power of natural human language with advanced machine intelligence." For Microsoft, the move is about injecting itself into the lives of those who are not Microsoft service users.