facial-recognition system


The Dome Is Watching You

The Atlantic - Technology

On a recent Wednesday night in Los Angeles, I was ready to buy a hot dog with my face. I was at the Intuit Dome, a $2 billion entertainment complex that opened earlier this month. Soon, it will be the home of the L.A. Clippers, but I was there to watch Olivia Rodrigo, queen of teen angst, perform a sold-out show. The arena was filled with people wearing purple cowboy hats and the same silver sequin miniskirt, all of us ready to scream-sing for two hours straight. But first, we needed food.


Stadiums Have Gotten Downright Dystopian

The Atlantic - Technology

Like so many cities before it, Phoenix went all out to host the Super Bowl earlier this month. Expecting about 1 million fans to come to town for the biggest American sporting event of the year, the city rolled out a fleet of self-driving electric vehicles to ferry visitors from the airport. Robots sifted through the trash to pull out anything that could be composted. There were less visible developments, too. In preparation for the game, the local authorities upgraded a network of cameras around the city's downtown--and have kept them running after the spectators have left.


Faces Are the Next Target for Fraudsters

#artificialintelligence

In the past year, thousands of people in the U.S. have tried to trick facial-identification verification to fraudulently claim unemployment benefits from state workforce agencies, according to identity verification firm ID.me Inc. The company, which uses facial-recognition software to help verify individuals on behalf of 26 U.S. states, says that between June 2020 and January 2021 it found more than 80,000 attempts to fool the selfie step in government ID matchups among the agencies it worked with. That included people wearing special masks, using deepfakes--lifelike images generated by AI--or holding up images or videos of other people, says ID.me Chief Executive Blake Hall. Facial recognition for one-to-one identification has become one of the most widely used applications of artificial intelligence, allowing people to make payments via their phones, walk through passport checking systems or verify themselves as workers.


Faces Are the Next Target for Fraudsters

WSJ.com: WSJD - Technology

Facial-recognition systems, long touted as a quick and dependable way to identify everyone from employees to hotel guests, are in the crosshairs of fraudsters. For years, researchers have warned about the technology's vulnerabilities, but recent schemes have confirmed their fears--and underscored the difficult but necessary task of improving the systems. In the past year, thousands of people in the U.S. have tried to trick facial identification verification to fraudulently claim unemployment benefits from state workforce agencies, according to identity verification firm ID.me Inc. The company, which uses facial-recognition software to help verify individuals on behalf of 26 U.S. states, says that between June 2020 and January 2021 it found more than 80,000 attempts to fool the selfie step in government ID matchups among the agencies it worked with.


The All-Seeing Eyes of New York's 15,000 Surveillance Cameras

WIRED

A new video from human rights organization Amnesty International maps the locations of more than 15,000 cameras used by the New York Police Department, both for routine surveillance and in facial-recognition searches. A 3D model shows the 200-meter range of a camera, part of a sweeping dragnet capturing the unwitting movements of nearly half of the city's residents, putting them at risk for misidentification. The group says it is the first to map the locations of that many cameras in the city. Amnesty International and a team of volunteer researchers mapped cameras that can feed NYPD's much criticized facial-recognition systems in three of the city's five boroughs--Manhattan, Brooklyn, and the Bronx--finding 15,280 in total. Brooklyn is the most surveilled, with over 8,000 cameras.


How to Make Artificial Intelligence Less Biased

#artificialintelligence

How could software designed to take the bias out of decision making, to be as objective as possible, produce these kinds of outcomes? After all, the purpose of artificial intelligence is to take millions of pieces of data and from them make predictions that are as error-free as possible. But as AI has become more pervasive--as companies and government agencies use AI to decide who gets loans, who needs more health care and how to deploy police officers, and more--investigators have discovered that focusing just on making the final predictions as error free as possible can mean that its errors aren't always distributed equally. Instead, its predictions can often reflect and exaggerate the effects of past discrimination and prejudice. In other words, the more AI focused on getting only the big picture right, the more it was prone to being less accurate when it came to certain segments of the population--in particular women and minorities.
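The trade-off the excerpt describes can be made concrete with a small sketch. The numbers and group labels below are hypothetical, not drawn from any study cited here: they simply show how an overall error rate that looks low can mask a much higher error rate for a smaller subgroup.

```python
# Hypothetical illustration: a model's overall error rate can hide
# large disparities between groups when one group is much smaller.
# Each record is (group_label, prediction_was_wrong).
outcomes = (
    [("group_a", False)] * 880 + [("group_a", True)] * 20 +  # ~2.2% errors
    [("group_b", False)] * 80  + [("group_b", True)] * 20    # 20% errors
)

def error_rate(records):
    """Fraction of records where the prediction was wrong."""
    return sum(wrong for _, wrong in records) / len(records)

overall = error_rate(outcomes)  # 40 errors out of 1,000 records
per_group = {
    group: error_rate([r for r in outcomes if r[0] == group])
    for group in ("group_a", "group_b")
}
```

Here the overall error rate is 4 percent, yet the smaller group experiences errors at nearly ten times the rate of the larger one. A system tuned only to minimize the overall figure has little incentive to close that gap, which is the dynamic the article describes.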


Controversial facial-recognition software used 30,000 times by LAPD in last decade, records show

Los Angeles Times

The Los Angeles Police Department has used facial-recognition software nearly 30,000 times since 2009, with hundreds of officers running images of suspects from surveillance cameras and other sources against a massive database of mugshots taken by law enforcement. The new figures, released to The Times, reveal for the first time how commonly facial recognition is used in the department, which for years has provided vague and contradictory information about how and whether it uses the technology. The LAPD has consistently denied having records related to facial recognition, and at times denied using the technology at all. The truth is that, while it does not have its own facial-recognition platform, LAPD personnel have access to facial-recognition software through a regional database maintained by the Los Angeles County Sheriff's Department. And between Nov. 6, 2009, and Sept. 11 of this year, LAPD officers used the system's software 29,817 times.


Many Facial-Recognition Systems Are Biased, Says U.S. Study

#artificialintelligence

Civil liberties experts, however, warn that the technology -- which can be used to track people at a distance without their knowledge -- has the potential to lead to ubiquitous surveillance, chilling freedom of movement and speech. This year, San Francisco, Oakland and Berkeley in California and the Massachusetts communities Somerville and Brookline banned government use of the technology. "One false match can lead to missed flights, lengthy interrogations, watch list placements, tense police encounters, false arrests or worse," Jay Stanley, a policy analyst at the American Civil Liberties Union, said in a statement. "Government agencies including the F.B.I., Customs and Border Protection and local law enforcement must immediately halt the deployment of this dystopian technology." The federal report is one of the largest studies of its kind.


Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use

#artificialintelligence

The test studied both how algorithms work on "one-to-one" matching, used for unlocking a phone or verifying a passport, and "one-to-many" matching, used by police to scan for a suspect's face across a vast set of driver's license photos. Investigators tested both false negatives, in which the system fails to recognize that two images show the same face, and false positives, in which the system identifies two different faces as being the same--a dangerous failure for police, who could end up arresting an innocent person.
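The distinction between verification and identification, and the two error types, can be sketched in a few lines. This is a minimal illustration of the general technique, assuming a common design in which faces are reduced to embedding vectors and compared by similarity against a threshold; the function names, vectors, and threshold value are all hypothetical, not any vendor's API.

```python
# Hypothetical sketch of threshold-based face matching.
# Real systems use learned embeddings from a neural network; here we
# just use short hand-written vectors to show the control flow.
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.8  # illustrative; real systems tune this carefully

def one_to_one_match(probe, reference):
    """Verification: is the probe the same person as one reference?"""
    return cosine_similarity(probe, reference) >= THRESHOLD

def one_to_many_search(probe, gallery):
    """Identification: which gallery entries clear the threshold?"""
    return [name for name, emb in gallery.items()
            if cosine_similarity(probe, emb) >= THRESHOLD]

# Illustrative embeddings: two photos of the same person, one stranger.
alice = [0.9, 0.1, 0.2]
alice_new_photo = [0.88, 0.12, 0.21]
stranger = [0.1, 0.9, 0.3]
```

In these terms, a false negative is `one_to_one_match` returning `False` for two photos of the same person, and a false positive is a stranger's embedding landing above the threshold. One-to-many search compounds the false-positive risk because every gallery entry is another chance for a spurious match, which is why the study treated that failure mode as especially dangerous for police use.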