Civil Rights & Constitutional Law


Can A.I. Be Taught to Explain Itself?

@machinelearnbot

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: "Advances in A.I. Are Used to Spot Signs of Sexuality." But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski's work "dangerous" and "junk science." Within the week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: "The Invention of A.I. 'Gaydar' Could Be the Start of Something Much Worse."


AI robots are sexist and racist, experts warn

#artificialintelligence

Professor Sharkey said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently approximately 9 per cent of the engineering workforce in the UK is female, with women making up only 20 per cent of those taking A Level physics. "We have a problem," Professor Sharkey told Today. Professor Sharkey said researchers at Boston University had demonstrated the inherent bias in AI algorithms by training a machine to analyse text collected from Google News.
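As a rough illustration of that kind of demonstration (not the Boston University researchers' actual code), the sketch below probes gender associations in the publicly released word2vec vectors trained on Google News text, using the gensim library; the file name, word list and bias measure are assumptions chosen for the example.

# Illustrative sketch only: probing gender bias in word embeddings trained on
# Google News text. Assumes the publicly released GoogleNews-vectors-negative300.bin
# file and the gensim library are available.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# Classic analogy probe: "man is to computer_programmer as woman is to ___?"
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=3))

# Compare how strongly occupation words associate with "she" versus "he".
for job in ["nurse", "engineer", "receptionist", "physicist"]:
    bias = vectors.similarity(job, "she") - vectors.similarity(job, "he")
    print(f"{job}: she-vs-he bias = {bias:+.3f}")

A positive score for "nurse" and a negative one for "engineer" would reproduce, in miniature, the kind of inherited stereotype the researchers reported.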


FaceApp 'Racist' Filter Shows Users As Black, Asian, Caucasian And Indian

International Business Times

In addition to these blatantly racial face filters – which change everything from hair color to skin tone to eye color – other FaceApp users noted earlier this year that the "hot" filter consistently lightens people's skin color. FaceApp CEO Yaroslav Goncharov defended the Asian, Black, Caucasian and Indian filters in an email to The Verge: "The ethnicity change filters have been designed to be equal in all aspects," he said. As for the "hot" filter backlash, Goncharov attributed it to the training data: "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behavior."


International Conference on Artificial Intelligence and Information

#artificialintelligence

Submissions: We invite submissions for a 30-minute presentation (followed by a 10-minute discussion). An extended abstract of approximately 250-500 words should be prepared for blind review and include a cover page with full name, institution, contact information and a short bio. Files should be submitted in Word (.doc or .docx) format. Please structure the subject line of the message as follows: "First Name Last Name - Track - Title of Abstract". We intend to produce a collected volume based upon contributions to the conference.


FaceApp apologises for 'racist' filter that lightens users' skintone

The Guardian

The creator of an app which changes your selfies using artificial intelligence has apologised because its "hot" filter automatically lightened people's skin. One user complained: "So I downloaded this app and decided to pick the 'hot' filter not knowing that it would make me white." Yaroslav Goncharov, the creator and CEO of FaceApp, apologised for the feature, which he said was a side-effect of the "neural network". "It is an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour."


What privacy pros can take away from Uber's Greyball

#artificialintelligence

Uber combined data collected from its app with "other techniques" to locate, identify, and circumvent legal authorities. Through several means, the company surveilled government officials to avoid regulatory scrutiny and other law enforcement activity. Once a user was identified as law enforcement, Uber Greyballed him or her, tagging the user with a small piece of code that read "Greyball" followed by a string of numbers. Regulatory officials and law-enforcement officers are people with privacy rights, too.


Fighting Words Not Ideas: Google's New AI-Powered Toxic Speech Filter Is The Right Approach

#artificialintelligence

Alphabet's Jigsaw (formerly Google Ideas) officially unveiled its new tool for fighting toxic speech online this morning, appropriately called Perspective. Powered by a deep-learning model trained on more than 17 million manually reviewed reader comments provided by the New York Times, it assigns a given passage of text a score from 0 to 100% indicating how similar it is to statements that human reviewers have previously rated as "toxic." What makes this new approach from Google so different from past approaches is that it largely focuses on language rather than ideas: for the most part, you can express your thoughts freely and without fear of censorship as long as you express them clinically and clearly, whereas if you resort to emotional diatribes and name-calling, you will be flagged regardless of what you talk about. What does this tell us about the future of toxic speech online and the notion of machines guiding humans to a more "perfect" humanity? One of the great challenges in filtering out "toxic" speech online is first defining what precisely counts as "toxic," and then determining how to remove such speech without infringing on people's ability to freely express their ideas.
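For readers curious how such a scoring service is consumed in practice, here is a minimal client-side sketch of a request to the Perspective comment-analysis endpoint; the API key placeholder, the helper name toxicity_score and the sample sentences are assumptions for illustration, not Jigsaw's own code.

# Minimal sketch of querying the Perspective API for a toxicity score.
# Assumes the `requests` library and a valid API key (placeholder below).
import requests

PERSPECTIVE_API_KEY = "YOUR_API_KEY"  # hypothetical placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={PERSPECTIVE_API_KEY}")

def toxicity_score(text: str) -> float:
    """Return the model's estimate (0.0-1.0) of how similar `text` is to
    comments that human reviewers previously rated as toxic."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    scores = response.json()["attributeScores"]
    return scores["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("You make a fair point, thanks for explaining."))
    print(toxicity_score("You are an idiot and everyone hates you."))

The second sentence would be expected to score far higher than the first, illustrating the article's point that the model reacts to name-calling and invective rather than to the ideas being expressed.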


Artificial intelligence remains key as Intel buys Nervana

AITopics Original Links

With Intel's acquisition of the deep learning startup Nervana Systems on Monday, the chipmaker is moving further into artificial intelligence, a field that's become a key focus for tech companies in recent years. Intel's purchase of the company comes in the wake of Apple's acquisition of Seattle-based Turi Inc. last week, while Google, Yahoo, Microsoft, Twitter, and Samsung have also made similar deals. Those purchases – large tech firms have bought 31 AI startups since 2011, according to the research firm CB Insights – also underscore a shift. While AI was once thought of as a sci-fi concept, the technology behind it has come to propel a slew of innovations by hardware and software companies, both ones that attract attention, like self-driving cars, and ones that often go unnoticed, like product recommendations on Amazon. "Intel's acquisition of [Nervana] is an acknowledgment that this area of deep learning, machine learning, artificial intelligence, is really an important part of all companies going forward," says David Schubmehl, an analyst who focuses on the field at the research firm IDC.


Microsoft Translator erodes language barrier for in-person conversations - Next at Microsoft

#artificialintelligence

For James Simmonds-Read, overcoming language barriers is essential. He works at The Children's Society in London with migrants and refugees, mostly young men who are victims of human trafficking. "They are all asylum seekers and a large number of them have issues around language," he said. "Very frequently, we need to use translators." That has its own challenges, because it means the young men must disclose sensitive information to third-party interpreters.