Civil Rights & Constitutional Law


Facial Recognition: Should We Fear It or Embrace It?

#artificialintelligence

Facial-recognition technology is not new, but it has progressed immensely in the past few years, mainly because of advances in artificial intelligence. Naturally, this has drawn the interest of Silicon Valley, advertising agencies, hardware manufacturers, and the government. But not everyone is thrilled. The American Civil Liberties Union (ACLU) and 35 other advocacy groups, for example, sent a letter to Amazon CEO Jeff Bezos demanding that his company stop providing advanced facial-recognition technology to law enforcement, warning that it could be misused against immigrants and protesters. Early iterations of the technology, which dates back to the 1960s, were clunky.


City of Orlando did not renew surveillance partnership with Amazon

Mashable

The City of Orlando is no longer using Amazon to surveil its residents (for now). The Orlando Police Department and the city issued a joint statement today announcing that they were no longer using Rekognition, Amazon's deep-learning technology that can identify every face in a crowd. "Staff continues to discuss and evaluate whether to recommend continuation of the pilot at a further date," reads the statement obtained by Mashable, which was issued in response to a letter of dissent the ACLU of Florida sent to city-level officials. "At this time that process is still ongoing and the contract with Amazon remains expired." The City of Orlando did not end its partnership with Amazon as a result of public outcry, however.
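
For readers wondering what a Rekognition call looks like in practice, here is a minimal sketch of face detection using Amazon's boto3 SDK. The image file, region, and output formatting are illustrative assumptions; identity matching against a stored collection, as piloted in Orlando, would use a separate face-search endpoint.

```python
# Minimal sketch: detect faces in an image with Amazon Rekognition via boto3.
# "crowd.jpg" and the region are illustrative assumptions; AWS credentials
# must already be configured in the environment.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("crowd.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["DEFAULT"],  # bounding boxes, landmarks, pose, confidence
    )

for face in response["FaceDetails"]:
    box = face["BoundingBox"]
    print(f"Face at ({box['Left']:.2f}, {box['Top']:.2f}), "
          f"confidence {face['Confidence']:.1f}%")
```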


Researchers develop AI to fool facial recognition tech

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a set of two neural networks: the first identifies faces, and the second works on disrupting the facial recognition task of the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview with Eureka Alert.
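
The article does not include code, but the dynamic it describes is a standard adversarial game between two networks. Below is a minimal PyTorch sketch of that setup; the tiny architectures, the random tensors standing in for face images, and the perturbation budget are all illustrative assumptions, not the researchers' actual implementation.

```python
# Toy adversarial game: one network detects faces, a second learns a small
# perturbation that degrades the first network's detections.
import torch
import torch.nn as nn

detector = nn.Sequential(   # toy face detector: image -> logit for P(face)
    nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 1))
perturber = nn.Sequential(  # toy disruptor: image -> bounded perturbation
    nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
    nn.Linear(64, 3 * 32 * 32), nn.Tanh())

opt_d = torch.optim.Adam(detector.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(perturber.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
eps = 0.05  # cap on perturbation magnitude (an assumed budget)

for step in range(100):
    faces = torch.rand(16, 3, 32, 32)  # random stand-ins for face images
    labels = torch.ones(16, 1)

    # 1) Detector step: learn to flag faces, including perturbed ones.
    delta = eps * perturber(faces).view_as(faces).detach()
    loss_d = bce(detector(faces), labels) + bce(detector(faces + delta), labels)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Perturber step: make the detector miss perturbed faces.
    delta = eps * perturber(faces).view_as(faces)
    loss_p = -bce(detector(faces + delta), labels)
    opt_p.zero_grad()
    loss_p.backward()
    opt_p.step()
```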


What is algorithmic bias?

@machinelearnbot

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In early 2016, Microsoft launched Tay, an AI chatbot that was supposed to mimic the behavior of a curious teenage girl and engage in smart discussions with Twitter users. The project was meant to display the promise and potential of AI-powered conversational interfaces. However, in less than 24 hours, the innocent Tay became a racist, misogynistic, Holocaust-denying AI, debunking, once again, the myth of algorithmic neutrality. For years, we've thought that artificial intelligence doesn't suffer from the prejudices and biases of its human creators because it's driven by pure, hard, mathematical logic.


How AI-Driven Insurance Could Help Prevent Gun Violence

WIRED

Americans do not agree on guns. Debate is otiose, because we reject each other's facts and have grown weary of each other's arguments. A little more than half the nation wants guns more tightly regulated, because tighter regulation would mean fewer guns, which would mean less gun violence. A little less than half answers, simply: The Supreme Court has found in the Second Amendment an individual right to bear arms. Legally prohibiting or confiscating guns would mean amending the Constitution, which the Framers made hard. It will never, ever happen.


AI robots are sexist and racist, experts warn

#artificialintelligence

Professor Sharkey said the deep learning algorithms which drive AI software are "not transparent", making it difficult to redress the problem. Currently, approximately 9 per cent of the engineering workforce in the UK is female, with women making up only 20 per cent of those taking A-level physics. "We have a problem," Professor Sharkey told Today. "We need many more women coming into this field to solve it." His warning came as it was revealed that a prototype programme developed to short-list candidates for a UK medical school had negatively selected against women and against black and other ethnic-minority candidates.
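
The medical-school case suggests one simple audit that anyone deploying a screening model can run: compare selection rates across demographic groups. A minimal sketch follows; the applicant counts and the four-fifths threshold (borrowed from US disparate-impact guidance, not from the case itself) are illustrative assumptions.

```python
# Audit a screening model's shortlist rates by demographic group.
# The candidate counts below are invented for illustration.
shortlisted = {           # group -> (number shortlisted, total applicants)
    "women": (45, 200),
    "men": (130, 300),
}

rates = {g: k / n for g, (k, n) in shortlisted.items()}
baseline = max(rates.values())  # compare every group to the highest rate

for group, rate in rates.items():
    ratio = rate / baseline
    flag = "  <- possible disparate impact" if ratio < 0.8 else ""
    print(f"{group}: shortlisted {rate:.0%} ({ratio:.2f}x of highest){flag}")
```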


How not to create a racist, sexist robot

#artificialintelligence

Robots are picking up sexist and racist biases because the information used to program them comes predominantly from one homogeneous group of people, suggests a new study from Princeton University and the U.K.'s University of Bath. Lead study author Aylin Caliskan says the findings surprised her. "There's this common understanding that machines are supposed to be objective. But robots based on artificial intelligence (AI) and machine learning learn from historic human data and this data usually contain biases," Caliskan tells The Current's Anna Maria Tremonti. Machine learning draws on the statistics and information that have been fed into it, and Caliskan argues that only when humans themselves become completely unbiased will an unprejudiced robot be possible.
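
The Princeton/Bath study quantified such biases with the Word Embedding Association Test (WEAT), which compares how strongly two sets of target words associate with two sets of attribute words. Here is a minimal sketch of the effect-size computation; the random vectors stand in for real embeddings trained on large corpora, and the word lists are illustrative assumptions.

```python
# WEAT-style effect size on toy word vectors.
import numpy as np

rng = np.random.default_rng(0)
words = ["engineer", "scientist", "nurse", "teacher",
         "he", "man", "she", "woman"]
emb = {w: rng.normal(size=50) for w in words}  # stand-ins for trained vectors

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B):
    # Differential association of word w with attribute sets A and B.
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

X, Y = ["engineer", "scientist"], ["nurse", "teacher"]  # target words
A, B = ["he", "man"], ["she", "woman"]                  # attribute words

s = {w: assoc(w, A, B) for w in X + Y}
effect = ((np.mean([s[x] for x in X]) - np.mean([s[y] for y in Y]))
          / np.std(list(s.values())))
print(f"WEAT effect size: {effect:.2f}")  # near 0 for random (unbiased) vectors
```

On real embeddings trained from web text, the study found large positive effect sizes for pairings like male terms with career words, mirroring documented human implicit biases.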


Fighting Words Not Ideas: Google's New AI-Powered Toxic Speech Filter Is The Right Approach

#artificialintelligence

Alphabet's Jigsaw (formerly Google Ideas) this morning officially unveiled its new tool for fighting toxic speech online, appropriately called Perspective. Powered by a deep-learning model trained on more than 17 million manually reviewed reader comments provided by the New York Times, the tool assigns a given passage of text a score from 0 to 100% indicating how similar it is to statements that human reviewers have previously rated as "toxic." What makes this new approach from Google so different from past approaches is that it largely focuses on language rather than ideas: for the most part you can express your thoughts freely and without fear of censorship as long as you express them clinically and clearly, whereas if you resort to emotional diatribes and name-calling, you will be flagged regardless of what you talk about. What does this tell us about the future of toxic speech online and the notion of machines guiding humans to a more "perfect" humanity? One of the great challenges in filtering out "toxic" speech online is first defining what precisely counts as "toxic," and then determining how to remove such speech without infringing on people's ability to freely express their ideas.
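
Perspective is exposed as a public web API. Below is a minimal sketch of scoring a single comment using only the Python standard library; the API key placeholder and sample comment are assumptions, and the endpoint follows Google's Comment Analyzer documentation.

```python
# Score a comment's toxicity with the Perspective (Comment Analyzer) API.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # assumption: a key obtained from Google Cloud
url = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "You are a complete idiot."},  # sample text
    "requestedAttributes": {"TOXICITY": {}},
}
req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# summaryScore.value is 0-1; printed as the 0-100% scale the article mentions.
score = result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.0%}")
```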