police


Office worker launches UK's first police facial recognition legal action

The Guardian

An office worker who believes his image was captured by facial recognition cameras when he popped out for a sandwich in his lunch break has launched a groundbreaking legal battle against the use of the technology. Supported by the campaign group Liberty, Ed Bridges, from Cardiff, raised money through crowdfunding to pursue the action, claiming the suspected use of the technology on him by South Wales police was an unlawful violation of privacy. Bridges, 36, claims he was distressed by the apparent use of the technology and is also arguing during a three-day hearing at Cardiff civil justice and family centre that it breaches data protection and equality laws. Facial recognition technology maps faces in a crowd and then compares them to a watchlist of images, which can include suspects, missing people and persons of interest to the police. The cameras scan faces in large crowds in public places such as streets, shopping centres, football matches and music events such as the Notting Hill carnival.
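The matching step described here lends itself to a short illustration: each detected face is reduced to a numeric embedding and compared against embeddings of the watchlist images, with a match reported only when similarity clears a threshold. The sketch below is a minimal, hypothetical example of that idea; the embedding size, similarity measure, threshold and names are assumptions for illustration and do not describe the system used by South Wales police.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.6):
    """Return (name, score) of the best watchlist match, or None if no
    candidate clears the threshold. All values here are illustrative
    assumptions, not parameters of any real deployment."""
    best_name, best_score = None, -1.0
    for name, reference in watchlist.items():
        score = cosine_similarity(probe, reference)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_score >= threshold else None

# Illustrative usage: random vectors stand in for the embeddings a
# face-encoding model would normally produce from camera frames.
rng = np.random.default_rng(seed=1)
watchlist = {
    "suspect_A": rng.normal(size=128),
    "missing_person_B": rng.normal(size=128),
}
probe_face = rng.normal(size=128)
print(match_against_watchlist(probe_face, watchlist))
```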


UK's controversial use of face recognition to be challenged in court

New Scientist

The first legal battle in the UK over police use of face recognition technology will begin today. Ed Bridges has crowdfunded action against South Wales Police over claims that the use of the technology on him was an unlawful violation of privacy. He will also argue that it breaches data protection and equality laws during a three-day hearing at Cardiff Civil Justice and Family Centre. Face recognition technology maps faces in a crowd, then compares the results with a "watch list" of images, which can include suspects, missing people and persons of interest. Police forces that have trialled the technology hope it can help tackle crime, but campaigners argue it breaches privacy and civil liberties.


AI Weekly: Facial recognition policy makers debate temporary moratorium vs. permanent ban

#artificialintelligence

On Tuesday, in an 8-1 tally, the San Francisco Board of Supervisors voted to ban the use of facial recognition software by city departments, including police. Supporters of the ban cited racial disparities found in audits of facial recognition software from companies like Amazon and Microsoft, as well as dystopian surveillance happening now in China. At the core of arguments around the regulation of facial recognition software is the question of whether a temporary moratorium should be put in place until police and governments adopt policies and standards, or whether the technology should be banned permanently. Some believe facial recognition software can be used to exonerate the innocent and that more time is needed to gather information. Others, like San Francisco Supervisor Aaron Peskin, believe that even if AI systems achieve racial parity, facial recognition is a "uniquely dangerous and oppressive technology."


Police Are Feeding Celebrity Photos into Facial Recognition Software to Solve Crimes

#artificialintelligence

Police departments across the nation are generating leads and making arrests by feeding celebrity photos, CGI renderings, and manipulated images into facial recognition software. Often unbeknownst to the public, law enforcement is identifying suspects based on "all manner of 'probe photos,' photos of unknown individuals submitted for search against a police or driver license database," a study published on Thursday by the Georgetown Law Center on Privacy and Technology reported. The new research comes on the heels of a landmark privacy vote on Tuesday in San Francisco, which is now the first US city to ban the use of facial recognition technology by police and government agencies. A recent groundswell of opposition has led to the passage of legislation that aims to protect marginalized communities from spy technology. These systems "threaten to fundamentally change the nature of our public spaces," said Clare Garvie, author of the study and senior associate at the Georgetown Law Center on Privacy and Technology.


Britain Has More Surveillance Cameras Per Person Than Any Country Except China. That's a Massive Risk to Our Free Society

TIME - Tech

How would you feel being watched, tracked and identified by facial recognition cameras everywhere you go? Facial recognition cameras are now creeping onto the streets of Britain and the U.S., yet most people aren't even aware. As we walk around, our faces could be scanned and subjected to a digital police line-up we don't even know about. There are over 6 million surveillance cameras in the U.K. – more per citizen than any other country in the world, except China. In the U.K., police take and store biometric photos of people whose faces are flagged as matching those of criminals – even if the match is incorrect. As director of the U.K. civil liberties group Big Brother Watch, I have been investigating the U.K. police's "trials" of live facial recognition surveillance for several years.


San Francisco May Be First City to Ban Facial Recognition

#artificialintelligence

San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies, reflecting a growing backlash against a technology that's creeping into airports, motor vehicle departments, stores, stadiums and home security cameras. Government agencies around the U.S. have used the technology for more than a decade to scan databases for suspects and prevent identity fraud. But recent advances in artificial intelligence have created more sophisticated computer vision tools, making it easier for police to pinpoint a missing child or protester in a moving crowd or for retailers to analyze a shopper's facial expressions as they peruse store shelves. Efforts to restrict its use are getting pushback from law enforcement groups and the tech industry, though it's far from a united front. Microsoft, while opposed to an outright ban, has urged lawmakers to set limits on the technology, warning that leaving it unchecked could enable an oppressive dystopia reminiscent of George Orwell's novel "1984."


US government looking to develop AI that can track people across surveillance network

Daily Mail - Science & tech

An advanced research arm of the U.S. government's intelligence community is looking to develop AI capable of tracking people across a vast surveillance network. As reported by Nextgov, the Intelligence Advanced Research Projects Activity (IARPA) has put out a call for more information on developing an algorithm that can be trained to identify targets by visually analyzing swaths of security camera footage. The goal, says the request, is to be able to identify and track subjects across areas as large as six miles in an effort to reconstruct crime scenes, protect military operations, and monitor critical infrastructure facilities. To develop the technology, IARPA will collect nearly 1,000 hours of video surveillance from at least 20 camera networks and then, using that sample, test the effectiveness of various algorithms. The agency's interest in AI-based surveillance technology mirrors a broader movement among governments and intelligence communities around the globe, many of whom have ramped up efforts to develop and scale such systems.
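The capability described in the request amounts to re-identifying the same person in footage from many cameras and chaining those sightings into a single track. Below is a minimal, hypothetical sketch of that idea using appearance embeddings and a greedy, time-ordered linker; the Detection structure, similarity threshold and time-gap limit are illustrative assumptions, not details of IARPA's programme.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    camera_id: str
    timestamp: float        # seconds from the start of the footage
    embedding: np.ndarray   # appearance vector from a (hypothetical) re-ID model

def _similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def link_across_cameras(target, detections, threshold=0.7, max_gap=600.0):
    """Greedily chain detections that resemble `target` into one track.

    A detection is appended when it occurs after the last linked sighting,
    within `max_gap` seconds, and its embedding is similar enough. The
    threshold and gap are illustrative assumptions only.
    """
    track = [target]
    for det in sorted(detections, key=lambda d: d.timestamp):
        last = track[-1]
        in_window = 0 < det.timestamp - last.timestamp <= max_gap
        if in_window and _similarity(det.embedding, last.embedding) >= threshold:
            track.append(det)
    return track  # ordered sightings approximating the subject's path

# Usage sketch: random vectors stand in for re-ID embeddings.
rng = np.random.default_rng(0)
seen = rng.normal(size=64)
sightings = [Detection("cam_02", 120.0, seen + 0.01 * rng.normal(size=64)),
             Detection("cam_07", 300.0, rng.normal(size=64))]
path = link_across_cameras(Detection("cam_01", 0.0, seen), sightings)
print([(d.camera_id, d.timestamp) for d in path])
```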


San Francisco Is First U.S. City To Ban Facial Recognition Technology

NPR Technology

San Francisco could become the first large city to bar police from using facial recognition software. The city's police department tried a facial recognition system for a time, but sources in the department say it gave up on the system because it wasn't much good. But what is significant about this legislation is the way the city has now singled out facial recognition going forward. AARON PESKIN: Facial recognition technology is uniquely dangerous and oppressive. KASTE: That's the legislation's author, Supervisor Aaron Peskin, explaining yesterday why his legislation allows for other kinds of surveillance tech but not facial recognition.


San Francisco bans police and city use of face recognition technology

USATODAY - Tech Top Stories

San Francisco supervisors approved a ban on police using facial recognition technology, making it the first city in the U.S. with such a restriction. SAN FRANCISCO – San Francisco supervisors voted Tuesday to ban the use of facial recognition software by police and other city departments, becoming the first U.S. city to outlaw a rapidly developing technology that has alarmed privacy and civil liberties advocates. The ban is part of broader legislation that requires city departments to establish use policies and obtain board approval for surveillance technology they want to purchase or are using at present. Several other local governments require departments to disclose and seek approval for surveillance technology. "This is really about saying: 'We can have security without being a security state. We can have good policing without being a police state.' And part of that is building trust with the community based on good community information, not on Big Brother technology," said Supervisor Aaron Peskin, who championed the legislation.


San Francisco Approves Ban On Government's Use Of Facial Recognition Technology

NPR Technology

In this Oct. 31 photo, a man has his face painted to represent efforts to defeat facial recognition. It was during a protest at Amazon headquarters over the company's facial recognition system. San Francisco has become the first U.S. city to ban the use of facial recognition technology by police and city agencies.