

Co-op is using facial recognition tech to scan and track shoppers

#artificialintelligence

Branches of Co-op in the south of England have been using real-time facial recognition cameras to scan shoppers entering stores. In total, 18 shops from the Southern Co-op franchise have been using the technology in an effort to reduce shoplifting and abuse against staff. As a result of the trials, other regional Co-op franchises are now believed to be trialling facial recognition systems. Use of facial recognition by police forces has been controversial, with the Court of Appeal ruling parts of its use to be unlawful earlier this year. Its use has also been creeping into the private sector, but the true scale of that adoption remains unknown.


How AI Could Help the Fight for Accountability and Justice

#artificialintelligence

This time last year, I was in Hong Kong, meeting with human rights activists and documenting the large pro-democracy demonstrations. The police were cracking down on protestors, using excessive force in the streets, and perpetrating abuse behind closed doors. An independent inquiry into police abuses was, and is, crucial - and I proposed just that. On a typical Saturday evening on Nathan Road, thousands of young people would be marching and singing, and many, if not most, would be using their phones to film police using tear gas, batons, and other weapons throughout the night. At the same time, camerapersons, both amateur and professional, captured footage that circulated around the world via news broadcasts and social media.


Big data 'turbocharged' repression in China's Xinjiang, rights group says

The Japan Times

Beijing – Muslims in China's Xinjiang were "arbitrarily" selected for arrest by a computer program that flagged suspicious behavior, activists said Wednesday, in a report detailing big data's role in repression in the restive region. The U.S.-based Human Rights Watch said leaked police data that listed over 2,000 detainees from Aksu prefecture was further evidence of "how China's brutal repression of Xinjiang's Turkic Muslims is being turbocharged by technology." Beijing has come under intense international criticism over its policies in the resource-rich territory, where rights groups say as many as 1 million Uighurs and other mostly Muslim minorities have been held in internment camps. China defends the camps as vocational training centers aimed at stamping out terrorism and improving employment opportunities. Surveillance spending in Xinjiang has ballooned in recent years, with facial recognition, iris scanners, DNA collection and artificial intelligence deployed across the province in the name of preventing terrorism.


Very Little Stands Between the U.S. and a Technological Panopticon

Slate

This article is part of the Policing and Technology Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the relationship between law enforcement, police reform, and technology. On Friday, Nov. 20, at 1 p.m. Eastern, Future Tense will co-host "Technology, Policing, and Earning the Public Trust," an online event about the role of technology in law enforcement reform. This summer, when officials in a few cities started using facial recognition software to identify protesters, many cried foul. Those objections turned ironic when protesters used facial recognition to identify police officers who had covered their badges or nameplates during protests. Powerful technology beloved by police had become a tool for accountability: David defeats Goliath.


Citizens are turning face recognition on unidentified police

MIT Technology Review

Moves have been made to restrict the use of facial recognition across the globe. In part one of this series on Face ID, Jennifer Strong and the team at MIT Technology Review explore the unexpected ways the technology is being used, including how it is being turned on police. This episode was reported and produced by Jennifer Strong, Tate Ryan-Mosley, Emma Cillekens, and Karen Hao. Strong: A few things have happened since we last spoke about facial recognition. We've seen more places move to restrict its use while, at the same time, schools and other public buildings have started using face I-D as part of their covid-prevention plans. We're even using it on animals, and not just on faces with similarities to our own, like chimps and gorillas: Chinese tech firms use it on pigs, and Canadian scientists are working to identify whales, even grizzly bears.


Should America Still Police the World?

The New Yorker

In 1939, shortly before the German invasion of Poland, a British emissary, Lord Lothian, visited the White House with an unusual request. The United Kingdom was unable to protect the world from the Nazis, Lothian told President Franklin Delano Roosevelt. "Anglo-Saxon civilization" would thus need a new guardian. The scepter was falling from British hands, Lothian explained, and the United States must "snatch it up." Though informally made, it was an extraordinary entreaty.


Maine voters double down on facial recognition ban in win for privacy

Mashable

Residents of Portland, Maine, can now officially sue the bastards. In a robust show of doubling down on privacy protections, voters in the Maine city passed a measure Tuesday replacing and strengthening an existing ban on city officials' use of facial recognition technology. While city employees were already prohibited from using the controversial tech, the new ban also gives residents the right to sue the city for violations and specifies monetary fines the city would have to pay out. Oh yeah, and for some icing on the cake: under the new law, city officials who violate the ban can be fired. What's more, if a person discovers that "any person or entity acting on behalf of the City of Portland, including any officer, employee, agent, contractor, subcontractor, or vendor" used facial recognition on them, that person is entitled to no less than $100 per violation or $1,000, whichever is greater.


Police used facial recognition to identify a Lafayette Square protester

Engadget

In the aftermath of the Lafayette Square protests in June, police in Washington DC used facial recognition technology to identify a protester who had allegedly punched an officer in the face. They identified the man after feeding an image of him found on Twitter through a previously undisclosed database called the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS). This is the first time we're learning of this database, even though it has been used in other cases related to human trafficking and bank robberies. According to The Washington Post, 14 local and federal agencies have used the system more than 12,000 times since 2019. It's part of a pilot program the Metropolitan Washington Council of Governments has been operating since 2017.


Activists Turn Facial Recognition Tools Against the Police

#artificialintelligence

Mr. Howell was offended by Mr. Wheeler's characterization of his project but relieved he could keep working on it. "There's a lot of excessive force here in Portland," he said in a phone interview. "Knowing who the officers are seems like a baseline." Mr. Howell, 42, is a lifelong protester and self-taught coder; in graduate school, he started working with neural net technology, an artificial intelligence that learns to make decisions from data it is fed, such as images. He said that the police had tear-gassed him during a midday protest in June, and that he had begun researching how to build a facial recognition product that could defeat officers' attempts to shield their identity.
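As a rough illustration of the kind of model the article describes, a network that "learns to make decisions from data it is fed, such as images", here is a minimal two-layer neural network trained with NumPy. Everything in it, including the synthetic image data, layer sizes, and labels, is an assumption for illustration; it is not Mr. Howell's actual project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 flattened 16x16 "images", two classes.
X = rng.normal(size=(200, 256))
y = rng.integers(0, 2, size=200)

# One hidden layer of 32 ReLU units and a sigmoid output.
W1 = rng.normal(scale=0.1, size=(256, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1));   b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(500):                              # plain gradient descent
    h = np.maximum(0.0, X @ W1 + b1)              # hidden activations, (N, 32)
    p = sigmoid(h @ W2 + b2).ravel()              # P(class 1) per example, (N,)
    d_logits = ((p - y) / len(y))[:, None]        # cross-entropy gradient, (N, 1)
    d_h = (d_logits @ W2.T) * (h > 0)             # backprop through the ReLU, (N, 32)
    W2 -= lr * (h.T @ d_logits); b2 -= lr * d_logits.sum()
    W1 -= lr * (X.T @ d_h);      b1 -= lr * d_h.sum(axis=0)

print(f"training accuracy on toy data: {((p > 0.5) == y).mean():.2f}")
```

Scaled up and fed labeled photographs of faces instead of random numbers, this same learn-from-examples loop is the basic ingredient of the recognition systems discussed throughout this page.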


IBM pushes for US to limit facial recognition system exports

ZDNet

IBM has called for the US Department of Commerce to limit the export of facial recognition systems, particularly to countries that could potentially use them for mass surveillance, racial profiling, or other human rights violations. In a letter [PDF] to the Commerce Department, IBM highlighted the need for tighter export controls on facial recognition technologies that employ what it referred to as "1-to-many" matching. The suggested measures include controlling the export of both the high-resolution cameras used to collect data and the software algorithms used to analyse and match that data against a database of images, as well as restricting access to online image databases that can be used to train 1-to-many facial recognition systems. "These systems are distinct from '1 to 1' facial matching systems, such as those that might unlock your phone or allow you to board an airplane -- in those cases, facial recognition is verifying that a consenting person is who they say they are," IBM government and regulatory affairs vice president Christopher Padilla explained in a blog post. "But in a '1-to-many' application, a system can, for example, pick a face out of a crowd by matching one image against a database of many others."
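The distinction Padilla draws maps onto two different query patterns over the same face-matching machinery. The sketch below is purely illustrative and is not IBM's system or any vendor's API: `embed` is a hypothetical stand-in for a real face-embedding model, and the threshold is arbitrary. 1-to-1 verification compares a probe image against a single claimed identity, while 1-to-many identification searches the probe against an entire gallery of enrolled faces.

```python
# Illustrative sketch only -- hypothetical `embed` model, arbitrary threshold.
import numpy as np

def embed(image):
    """Stand-in for a face-embedding model: maps an image array to a unit vector."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.normal(size=128)
    return v / np.linalg.norm(v)

def verify(probe, claimed_reference, threshold=0.6):
    """1-to-1: check a probe image against one claimed identity (e.g. phone unlock)."""
    return float(embed(probe) @ embed(claimed_reference)) >= threshold

def identify(probe, gallery, threshold=0.6):
    """1-to-many: search the probe against a whole database of enrolled faces."""
    p = embed(probe)
    scores = {name: float(p @ embed(img)) for name, img in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])
```

The export-control concern in the letter tracks the `identify` pattern: the larger and less consented the gallery being searched, the closer such a system comes to the mass-surveillance use cases IBM wants restricted.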