Results


Amazon investors join ACLU urging halt to facial recognition tool used by police

USATODAY

Shankar Narayan, legislative director of the ACLU of Washington, left, speaks at a news conference outside Amazon headquarters on Monday, June 18, 2018, in Seattle. Representatives of community-based organizations urged Amazon to stop selling its face surveillance system, Rekognition, to the government, and later delivered petitions to the company.

SEATTLE (AP) -- Some Amazon investors said Monday they are siding with privacy and civil rights advocates who are urging the tech giant not to sell a powerful face recognition tool to police. The American Civil Liberties Union is leading the effort against Amazon's Rekognition product, delivering a petition with 152,000 signatures to the company's Seattle headquarters Monday and telling the company to "cancel this order."


Some Amazon Investors Side With ACLU on Facial Recognition

U.S. News

The American Civil Liberties Union and other privacy advocates are asking Amazon to stop marketing Rekognition to government agencies, warning that the technology can be used to discriminate against minorities.


Amazon faces pressure to stop selling facial recognition to police

Engadget

Amazon may not have much choice but to address mounting criticism over its sales of facial recognition tech to governments. The American Civil Liberties Union has delivered both a petition and a letter from 17 investors demanding that Amazon drop its Rekognition system and exit the surveillance business. While the two sides have somewhat different motivations, they share one thing in common: a concern for privacy. Both groups are worried that Amazon is handing governments surveillance power they could easily use to violate civil rights, particularly for minorities and immigrants. They could use it to track and intimidate protesters, for instance.


Amazon shareholders demand it stops selling 'Rekognition' to police

Daily Mail

Amazon is drawing the ire of its shareholders after an investigation found that it has been marketing powerful facial recognition tools to police. Nearly 20 groups of Amazon shareholders delivered a signed letter to CEO Jeff Bezos on Friday, pressuring the company to stop selling the software to law enforcement. The tool, called 'Rekognition', was first released in 2016, and Amazon has since been selling it on the cheap to several police departments around the country, with the Washington County Sheriff's Office in Oregon and the city of Orlando, Florida, among its customers. Shareholders, including the Social Equity Group and Northwest Coalition for Responsible Investment, join the American Civil Liberties Union (ACLU) and other privacy advocates in pointing out privacy violations and the dangers of mass surveillance. 'We are concerned the technology would be used to unfairly and disproportionately target and surveil people of color, immigrants, and civil society organizations,' the shareholders write.


Google's new principles on AI need to be better at protecting human rights

#artificialintelligence

There are growing concerns about the potential risks of AI – and mounting criticism of technology giants. In the wake of what has been called an AI backlash or "techlash", states and businesses are waking up to the fact that the design and development of AI have to be ethical, benefit society and protect human rights. In the last few months, Google has faced protests from its own staff against the company's AI work with the US military. The US Department of Defense contracted Google to develop AI for analysing drone footage in what is known as "Project Maven". A Google spokesperson was reported to have said: "the backlash has been terrible for the company" and "it is incumbent on us to show leadership".


Police could face legal action over 'authoritarian' facial recognition cameras

Daily Mail

Facial recognition technology used by UK police is making thousands of mistakes - and now there could be legal repercussions. The civil liberties group Big Brother Watch has teamed up with Baroness Jenny Jones to ask the government and the Met to stop using the technology. They claim the use of facial recognition has proven to be 'dangerously authoritarian', inaccurate and a breach of rights protecting privacy and freedom of expression. If their request is rejected, the group says it will take the case to court in what would be the first legal challenge of its kind. South Wales Police, London's Met and Leicestershire Police are all trialling automated facial recognition systems in public places to identify wanted criminals.


Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance

#artificialintelligence

Google is committing to not using artificial intelligence for weapons or surveillance after employees protested the company's involvement in Project Maven, a Pentagon pilot program that uses artificial intelligence to analyse drone footage. However, Google says it will continue to work with the United States military on cybersecurity, search and rescue, and other non-offensive projects. Google CEO Sundar Pichai announced the change in a set of AI principles released today. The principles are intended to govern Google's use of artificial intelligence and are a response to employee pressure on the company to create guidelines for its use of AI. Employees at the company have spent months protesting Google's involvement in Project Maven, sending a letter to Pichai demanding that Google terminate its contract with the Department of Defense.


Google won't develop AI weapons, announces new ethical strategy (Internet of Business)

#artificialintelligence

Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for "unreasonable surveillance". In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. "Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints," he explained. Google will not allow its technologies to be used in weapons or in "other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people", he said. Also on the no-go list are "technologies that gather or use information for surveillance, violating internationally accepted norms", and those "whose purpose contravenes widely accepted principles of international law and human rights".


The Next Frontier of Police Surveillance Is Drones

Slate

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society. A company that makes stun guns and body cameras is teaming up with a company that makes drones to sell drones to police departments, and that might not even be the most worrisome part. The line of drones from Axon and DJI is called the Axon Air, and the devices will be linked to Axon's cloud-based database for law enforcement, Evidence.com. It could open a vast new frontier for police surveillance. By working with a company that is already familiar with contracting with police departments, the Chinese-owned DJI--the world's biggest consumer drone manufacturer--could open up a new, growing customer base: cops.


Residual Unfairness in Fair Machine Learning from Prejudiced Data

arXiv.org Machine Learning

Recent work in fairness in machine learning has proposed adjusting for fairness by equalizing accuracy metrics across groups and has also studied how datasets affected by historical prejudices may lead to unfair decision policies. We connect these lines of work and study the residual unfairness that arises when a fairness-adjusted predictor is not actually fair on the target population due to systematic censoring of training data by existing biased policies. This scenario is particularly common in the same applications where fairness is a concern. We characterize theoretically the impact of such censoring on standard fairness metrics for binary classifiers and provide criteria for when residual unfairness may or may not appear. We prove that, under certain conditions, fairness-adjusted classifiers will in fact induce residual unfairness that perpetuates the same injustices, against the same groups, that biased the data to begin with, thus showing that even state-of-the-art fair machine learning can have a "bias in, bias out" property. When certain benchmark data is available, we show how sample reweighting can estimate and adjust fairness metrics while accounting for censoring. We use this to study the case of Stop, Question, and Frisk (SQF) and demonstrate that attempting to adjust for fairness perpetuates the same injustices that the policy is infamous for.
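The reweighting idea in the abstract can be illustrated with a minimal sketch (this is a hypothetical illustration, not the paper's actual estimator): assume that, from benchmark data, we can estimate each labeled example's probability of having been included in the censored training sample (e.g., the probability of having been stopped under the old policy); weighting each example by the inverse of that probability then gives an approximate estimate of fairness metrics, such as group-wise true positive rates, on the target population. The function and variable names below are invented for the example.

```python
import numpy as np

def weighted_group_tpr(y_true, y_pred, group, inclusion_prob):
    """Estimate group-wise true positive rates on the target population.

    Labeled examples come from a censored sample (only individuals selected
    by the old policy have recorded outcomes), so each example is reweighted
    by 1 / P(included) -- an inverse-propensity-style correction estimated
    from benchmark data. All inputs are 1-D arrays of equal length;
    `inclusion_prob` must be strictly positive.
    """
    w = 1.0 / np.asarray(inclusion_prob, dtype=float)
    tprs = {}
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)          # positives in group g
        if mask.sum() == 0:
            tprs[g] = float("nan")
            continue
        # Weighted fraction of positives that the classifier flags.
        tprs[g] = float(np.average(y_pred[mask] == 1, weights=w[mask]))
    return tprs

# Hypothetical usage: compare a naive TPR gap with a censoring-adjusted one
# on synthetic data where one group was much less likely to be labeled.
rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                         # binary protected attribute
y_true = rng.binomial(1, 0.3 + 0.1 * group)
y_pred = rng.binomial(1, 0.5, n)
incl = np.clip(0.8 - 0.4 * group + rng.normal(0, 0.05, n), 0.05, 1.0)

print("naive TPR by group:", weighted_group_tpr(y_true, y_pred, group, np.ones(n)))
print("adjusted TPR by group:", weighted_group_tpr(y_true, y_pred, group, incl))
```

The sketch only shows the mechanics of the correction; in practice the inclusion probabilities themselves must be estimated from benchmark data, and the abstract's point is that without such an adjustment, fairness metrics computed on censored data can certify a classifier as "fair" while it remains unfair on the population it will actually be applied to.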