Civil Rights & Constitutional Law


What is algorithmic bias?

@machinelearnbot

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In early 2016, Microsoft launched Tay, an AI chatbot that was supposed to mimic the behavior of a curious teenage girl and engage in smart discussions with Twitter users. The project was meant to display the promise and potential of AI-powered conversational interfaces. However, in less than 24 hours, the innocent Tay became a racist, misogynistic, Holocaust-denying AI, debunking--once again--the myth of algorithmic neutrality. For years, we've assumed that artificial intelligence doesn't suffer from the prejudices and biases of its human creators because it's driven by pure, hard, mathematical logic.


How AI-Driven Insurance Could Help Prevent Gun Violence

WIRED

Americans do not agree on guns. Debate is otiose, because we reject each other's facts and have grown weary of each other's arguments. A little more than half the nation wants guns more tightly regulated, because tighter regulation would mean fewer guns, which would mean less gun violence. A little less than half answers, simply: The Supreme Court has found in the Second Amendment an individual right to bear arms. Legally prohibiting or confiscating guns would mean amending the Constitution, which the Framers made hard. It will never, ever happen.


Kanagawa police to launch AI-based predictive policing system before Olympics

The Japan Times

YOKOHAMA – The Kanagawa Prefectural Police plan to become the first in the nation to introduce predictive policing, a method of anticipating crimes and accidents using artificial intelligence, sources said Sunday.


AI robots are sexist and racist, experts warn

#artificialintelligence

He said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently, approximately 9 per cent of the engineering workforce in the UK is female, with women making up only 20 per cent of those taking A-level physics. "We have a problem," Professor Sharkey told Today. "We need many more women coming into this field to solve it." His warning came as it was revealed that a prototype programme developed to short-list candidates for a UK medical school had negatively selected against women, black, and other ethnic-minority candidates.


How not to create a racist, sexist robot

#artificialintelligence

Robots are picking up sexist and racist biases because the information used to program them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. Lead study author Aylin Caliskan says the findings surprised her. "There's this common understanding that machines are supposed to be objective. But robots based on artificial intelligence (AI) and machine learning learn from historic human data, and this data usually contains biases," Caliskan tells The Current's Anna Maria Tremonti. Machine learning draws its statistics from whatever information humans feed it, and Caliskan argues that an unprejudiced robot will only become possible once humans themselves become completely unbiased.
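The study's central claim, that models absorb the associations latent in human-generated text, can be illustrated in miniature. The sketch below uses tiny hand-made 2-d "embeddings" (hypothetical values chosen for illustration, not the study's actual data or method) and a simple association score: how much closer a target word sits, by cosine similarity, to one attribute set than another.

```python
# Toy illustration of a bias-association measurement over word embeddings.
# The vectors below are invented for this sketch; real embeddings are
# high-dimensional and learned from large text corpora.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical 2-d embeddings, chosen to mimic a learned gender association.
emb = {
    "engineer": (0.9, 0.1),
    "nurse":    (0.1, 0.9),
    "he":       (1.0, 0.0),
    "she":      (0.0, 1.0),
}

def association(word, attr_a, attr_b):
    """Positive: word is closer to attr_a words; negative: closer to attr_b."""
    sa = sum(cosine(emb[word], emb[a]) for a in attr_a) / len(attr_a)
    sb = sum(cosine(emb[word], emb[b]) for b in attr_b) / len(attr_b)
    return sa - sb

print(association("engineer", ["he"], ["she"]))  # positive: skews toward "he"
print(association("nurse", ["he"], ["she"]))     # negative: skews toward "she"
```

If the embeddings were trained on unbiased text, both scores would sit near zero; non-zero scores are the study's evidence that the training data, not the algorithm, carries the prejudice.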


Fighting Words Not Ideas: Google's New AI-Powered Toxic Speech Filter Is The Right Approach

Forbes Technology

Alphabet's Jigsaw (formerly Google Ideas) officially unveiled this morning its new tool for fighting toxic speech online, appropriately called Perspective. Powered by a deep-learning model trained on more than 17 million manually reviewed reader comments provided by the New York Times, the model assigns a score to a given passage of text, rating on a scale from 0 to 100% how similar it is to statements that human reviewers have previously rated as "toxic." What makes this new approach from Google so different from past approaches is that it largely focuses on language rather than ideas: for the most part you can express your thoughts freely and without fear of censorship as long as you express them clinically and clearly, while if you resort to emotional diatribes and name-calling, regardless of what you talk about, you will be flagged. What does this tell us about the future of toxic speech online and the notion of machines guiding humans to a more "perfect" humanity? One of the great challenges in filtering out "toxic" speech online is first defining what precisely counts as "toxic" and then determining how to remove such speech without infringing on people's ability to freely express their ideas.
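The idea of scoring text by its similarity to previously rated examples can be sketched very roughly. The toy below is a deliberate simplification: Perspective's real model is a deep neural network trained on millions of comments, whereas this sketch scores a comment by plain word overlap against a handful of invented examples, purely to make the "similar to statements reviewers rated toxic" framing concrete.

```python
# Illustrative sketch only (NOT Perspective's actual model): rate a comment
# 0-100 by how much more its words overlap with comments humans labeled
# "toxic" than with comments labeled clean, using Jaccard word overlap.

def word_set(text):
    return set(text.lower().split())

def toxicity_score(comment, toxic_examples, clean_examples):
    """Return a 0-100 score: relative resemblance to the toxic examples."""
    words = word_set(comment)

    def avg_overlap(examples):
        overlaps = [
            len(words & word_set(e)) / max(len(words | word_set(e)), 1)
            for e in examples
        ]
        return sum(overlaps) / len(examples)

    tox = avg_overlap(toxic_examples)
    clean = avg_overlap(clean_examples)
    if tox + clean == 0:
        return 0.0  # no resemblance to either set
    return round(100 * tox / (tox + clean), 1)

# Invented example data standing in for human-reviewed comments.
toxic = ["you are an idiot and a moron", "shut up you stupid idiot"]
clean = ["i respectfully disagree with your argument",
         "interesting point, thanks"]

print(toxicity_score("you idiot", toxic, clean))                        # → 100.0
print(toxicity_score("thanks for the interesting argument", toxic, clean))  # → 0.0
```

Note how the scorer reacts to wording, not ideas: a clinically phrased version of the same opinion would score low, which is exactly the "fighting words, not ideas" trade-off the article describes.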


Artificial intelligence remains key as Intel buys Nervana

AITopics Original Links

With Intel's acquisition of the deep learning startup Nervana Systems on Monday, the chipmaker is moving further into artificial intelligence, a field that's become a key focus for tech companies in recent years. Intel's purchase of the company comes in the wake of Apple's acquisition of Seattle-based Turi Inc. last week, while Google, Yahoo, Microsoft, Twitter, and Samsung have also made similar deals. Those purchases – large tech firms have bought 31 AI startups since 2011, according to the research firm CB Insights – also underscore a shift. While AI was once thought of as a sci-fi concept, the technology behind it has come to propel a slew of innovations by hardware and software companies, both ones that attract attention – like self-driving cars – and ones that often go unnoticed, like product recommendations on Amazon. "Intel's acquisition of [Nervana] is an acknowledgment that this area of deep learning, machine learning, artificial intelligence, is really an important part of all companies going forward," says David Schubmehl, an analyst who focuses on the field at the research firm IDC.


Not 'Zo' Racist: Microsoft Releases New Cleaner Talking ChatBot

#artificialintelligence

The race is on between the big tech giants to develop the best artificially intelligent assistant, operating at near-human parity, and Zo is next in line. It seems 2016 is the year of the Artificial Intelligence (AI) assistant or, indeed, chatbot. Their success depends on the machine's "IQ and EQ [Emotional Quotient -- ability to understand the emotions of others]," Harry Shum, executive VP of Microsoft's AI research group, told a conference in San Francisco. IQ can be developed by using deep learning techniques and speech recognition software, and is essential if the bot is going to complete specific tasks.