Civil Rights & Constitutional Law


City of Orlando did not renew surveillance partnership with Amazon

Mashable

The City of Orlando is no longer using Amazon to surveil its residents (for now). The Orlando Police Department and the city issued a joint statement today announcing that they were no longer using Rekognition, Amazon's deep-learning technology that can identify every face in a crowd. "Staff continues to discuss and evaluate whether to recommend continuation of the pilot at a further date," reads the statement obtained by Mashable, which was issued in response to the ACLU of Florida sending a letter of dissent to city-level officials. "At this time that process is still ongoing and the contract with Amazon remains expired." The City of Orlando did not end its partnership with Amazon as a result of public outcry, however.


Researchers develop AI to fool facial recognition tech

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a set of two neural networks: the first identifies faces, and the second works on disrupting the facial recognition task of the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview with EurekAlert.
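The idea of one model "attacking" what a detector is looking for can be illustrated with a toy gradient-based sketch. This is not the Toronto team's system: the "detector" here is just a hand-weighted logistic regression over an invented 8-dimensional feature vector, and the disruption is a single FGSM-style step against the detector's gradient. All names, weights, and the step size `eps` are made up for illustration.

```python
import numpy as np

# Stand-in "face detector": logistic regression over an 8-dim feature
# vector. The weights are invented for this sketch; a real detector
# would be a deep network.
w = np.array([1.0, -0.5, 0.8, -1.2, 0.3, 0.9, -0.7, 0.4])

def detect(x):
    """Detector's confidence (sigmoid score) that x contains a face."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def disrupt(x, eps=0.5):
    """One FGSM-style step: perturb x against the gradient of the
    detector's score, so the detector becomes less confident."""
    p = detect(x)
    grad = p * (1.0 - p) * w            # d(score)/dx for this model
    return x - eps * np.sign(grad)

x = w.copy()                            # an input the detector scores highly
print(round(detect(x), 3))              # 0.992
print(round(detect(disrupt(x)), 3))     # 0.879 -- confidence drops
```

In the actual adversarial-training setup described above, the disruptor would itself be a neural network trained in alternation with the detector, each one's loss driving the other's updates, rather than this single hand-computed gradient step.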


What is algorithmic bias?

@machinelearnbot

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. In early 2016, Microsoft launched Tay, an AI chatbot that was supposed to mimic the behavior of a curious teenage girl and engage in smart discussions with Twitter users. The project would display the promises and potential of AI-powered conversational interfaces. However, in less than 24 hours, the innocent Tay became a racist, misogynistic, Holocaust-denying AI, debunking, once again, the myth of algorithmic neutrality. For years, we've thought that artificial intelligence doesn't suffer from the prejudices and biases of its human creators because it's driven by pure, hard, mathematical logic.


How AI-Driven Insurance Could Help Prevent Gun Violence

WIRED

Americans do not agree on guns. Debate is otiose, because we reject each other's facts and have grown weary of each other's arguments. A little more than half the nation wants guns more tightly regulated, because tighter regulation would mean fewer guns, which would mean less gun violence. A little less than half answers, simply: The Supreme Court has found in the Second Amendment an individual right to bear arms. Legally prohibiting or confiscating guns would mean amending the Constitution, which the Framers made hard. It will never, ever happen.


The iPhone X is slammed as RACIST by Chinese users

Daily Mail

Apple has been accused of being 'racist' after a Chinese boy realised he could unlock his mum's iPhone X using the facial recognition software.


Can A.I. Be Taught to Explain Itself?

@machinelearnbot

In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: "Advances in A.I. Are Used to Spot Signs of Sexuality." But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski's work "dangerous" and "junk science." In the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: "The Invention of A.I. 'Gaydar' Could Be the Start of Something Much Worse."


Censoring Sensors

Communications of the ACM

Following the wave of U.K. terror attacks in the spring of 2017, Prime Minister Theresa May called on technology companies like Facebook and YouTube to create better tools for screening out controversial content, especially digital video, that directly promotes terrorism. Meanwhile, in the U.S., major advertisers including AT&T, Verizon, and Walmart have pulled ad campaigns from YouTube after discovering their content had been appearing in proximity to videos espousing terrorism, anti-Semitism, and other forms of hate speech. In response to these controversies, Google expanded its advertising rules to take a more aggressive stance against hate speech, and released a suite of tools allowing advertisers to block their ads from appearing on certain sites. The company also deployed new teams of human monitors to review videos for objectionable content. In a similar vein, Facebook announced that it would add 3,000 new employees to screen videos for inappropriate content.


How Machine Learning Trains AI to be Sexist (by Accident)

#artificialintelligence

While scenarios like The Matrix and I, Robot might seem like folly, AI picks up behaviors from us humans. As a result, some AIs adapt based on new information. Others, regrettably, can become racist, murderous, and/or sexist. Can we prevent AIs from learning negative behaviors and traits? If you're a fan of science fiction, you've no doubt seen the movie The Fifth Element.