San Francisco is considering a ban on facial recognition

#artificialintelligence

Facial recognition technology is everywhere you look -- from unlocking phones to shaming jaywalkers. But should the government have the power to use it on you without consent? That's the question the city of San Francisco is tackling right now. A member of the city's Board of Supervisors proposed a ban on facial recognition technology for city agencies on Tuesday, Wired reports -- a move that would force those agencies to justify their use of surveillance tools. Supervisor Aaron Peskin is also calling for an approval process for any new surveillance technology purchased by city agencies, such as license plate readers, CCTV cameras, and gunshot-detection systems.


San Francisco Wants to Ban Government Face Recognition

The Atlantic

A San Francisco lawmaker is proposing what would be a nationwide first: a complete moratorium on local government use of facial-recognition technology. Introduced by San Francisco Supervisor Aaron Peskin, the Stop Secret Surveillance Ordinance would ban all city departments from using facial-recognition technology and require board approval before departments purchase new surveillance devices. The bill regulates only local government use, not use by private companies: the face-unlock feature on the latest iPhone model, for example, would still be legal. Neighboring cities Berkeley and Oakland have passed similar rules, requiring public input and a privacy policy before officials implement new tech, but nowhere in the United States is facial recognition outright banned. Texas and Illinois require consent before collecting facial data, but don't ban the practice.


San Francisco could ban facial recognition technology, becoming first US city to do so

FOX News

San Francisco could become the first U.S. city to ban the use of facial recognition technology, which lawmakers and privacy advocates have criticized as biased. A new bill unveiled on Tuesday, known as the Stop Secret Surveillance Ordinance, states that the risks of the controversial technology "substantially outweigh...its purported benefits, and the technology will exacerbate racial injustice and threaten our ability to live free of continuous government monitoring." "Our intent is to catch people's attention and have a broader conversation as to where the moral precipice is for technology, after which you've gone too far," Lee Hepner, a legislative aide to Supervisor Aaron Peskin, who proposed the bill, told Ars Technica. "This is a harm to our way of life, a harm to our democracy, and a harm to marginalized communities. There is a salient interest in facial recognition, too: it creeps people out."


Facial Recognition Surveillance Now at a Privacy Tipping Point

#artificialintelligence

Much more rapidly than anyone originally thought possible, facial recognition technology has become part of the cultural mainstream. Facebook, for example, now uses AI-powered facial recognition software as part of its core social networking platform to identify people in photos, while law enforcement agencies around the world have experimented with facial recognition surveillance cameras to reduce crime and improve public safety. But now it looks like society is finally starting to wake up to the immense privacy implications of real-time facial recognition surveillance. San Francisco, for example, is now considering an outright ban on facial recognition surveillance. If the pending Stop Secret Surveillance Ordinance passes, San Francisco would become the first city to ban -- and not just regulate -- facial recognition technology.


Facial Recognition's 'Dirty Little Secret': Millions of Online Photos Scraped Without Consent

#artificialintelligence

Legal experts warn that people's online photos are being used without permission to power facial-recognition technology that could eventually be used for surveillance. "This is the dirty little secret of [artificial intelligence] training sets," said New York University School of Law's Jason Schultz. "Researchers often just grab whatever images are available in the wild." IBM recently released a set of nearly 1 million photos culled from the image-hosting site Flickr, annotated to describe subjects' appearance, which IBM says is intended to help reduce bias in facial recognition; although the company said Flickr users can opt out of the database, getting photos removed is almost impossible.