What Happens When Our Faces Are Tracked Everywhere We Go?

#artificialintelligence

When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit -- and blew the future of privacy in America wide open. In May 2019, an agent at the Department of Homeland Security received a trove of unsettling images. Found by Yahoo in a Syrian user's account, the photos seemed to document the sexual abuse of a young girl. One showed a man with his head reclined on a pillow, gazing directly at the camera. The man appeared to be white, with brown hair and a goatee, but it was hard to really make him out; the photo was grainy, the angle a bit oblique. The agent sent the man's face to child-crime investigators around the country in the hope that someone might recognize him. When an investigator in New York saw the request, she ran the face through an unusual new facial-recognition app she had just started using, called Clearview AI. The team behind it had scraped the public web -- social media, employment sites, YouTube, Venmo -- to create a database with three billion images of people, along with links to the webpages from which the photos had come. This dwarfed the databases of other such products for law enforcement, which drew only on official photography like mug shots, driver's licenses and passport pictures; with Clearview, it was effortless to go from a face to a Facebook account. The app turned up an odd hit: an Instagram photo of a heavily muscled Asian man and a female fitness model, posing on a red carpet at a bodybuilding expo in Las Vegas. The suspect was neither Asian nor a woman. But upon closer inspection, you could see a white man in the background, at the edge of the photo's frame, standing behind the counter of a booth for a workout-supplements company. On Instagram, his face would appear about half as big as your fingernail. The federal agent was astounded. The agent contacted the supplements company and obtained the booth worker's name: Andres Rafael Viola, who turned out to be an Argentine citizen living in Las Vegas.
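The pipeline the article describes -- scrape photos, compute a numerical "embedding" for each face, store the embeddings alongside their source URLs, then match a probe face against the index by similarity -- can be illustrated with a minimal sketch. This is a hypothetical illustration, not Clearview's actual code: the random 128-dimensional vectors stand in for the output of a real face-embedding model, and FaceIndex is an invented name.

```python
import numpy as np

class FaceIndex:
    """Toy face-search index: unit-length embeddings plus source URLs."""

    def __init__(self):
        self.vectors = []   # one embedding per scraped face
        self.sources = []   # URL the face was scraped from

    def add(self, embedding, source_url):
        # Normalize so the dot product below is cosine similarity.
        v = np.asarray(embedding, dtype=float)
        self.vectors.append(v / np.linalg.norm(v))
        self.sources.append(source_url)

    def search(self, query, top_k=5):
        # Brute-force cosine similarity against every indexed face.
        # A real system at three-billion-image scale would use an
        # approximate nearest-neighbor index instead of this scan.
        q = np.asarray(query, dtype=float)
        q = q / np.linalg.norm(q)
        sims = np.stack(self.vectors) @ q
        best = np.argsort(sims)[::-1][:top_k]
        return [(self.sources[i], float(sims[i])) for i in best]

if __name__ == "__main__":
    # Random vectors stand in for real face embeddings.
    rng = np.random.default_rng(0)
    index = FaceIndex()
    for i in range(1000):
        index.add(rng.normal(size=128), f"https://example.com/photo/{i}")
    # A slightly noisy copy of face 42 should match photo 42 first.
    probe = index.vectors[42] + 0.05 * rng.normal(size=128)
    for url, score in index.search(probe):
        print(f"{score:.3f}  {url}")
```

In a production system the embedding would come from a trained face-recognition network, and it is the link stored with each match that makes it "effortless to go from a face to a Facebook account."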


Amnesty International calls for ban on facial recognition

#artificialintelligence

As advocates for facial recognition tout the tech's potential to track down the US Capitol rioters, a new Amnesty International campaign has provided a timely reminder of the software's dangers. The NGO has shared a stream of examples of how the software amplifies racist policing and threatens the right to protest -- and called for a global ban on the tech. The Ban the Scan campaign was launched on Tuesday in New York City, where facial recognition has been used 22,000 times since 2017. Amnesty notes that the software is often prone to errors. But even when it "works," it can exacerbate discriminatory policing, violate our privacy, and threaten our rights to peaceful assembly and freedom of expression.


Can the Biases in Facial Recognition Be Fixed; Also, Should They?

Communications of the ACM

In January 2020, Robert Williams of Farmington Hills, MI, was arrested at his home by the Detroit Police Department. He was photographed, fingerprinted, had his DNA taken, and was then locked up for 30 hours. He had committed no crime; a facial recognition system operated by the Michigan State Police had wrongly identified him as the thief in a 2018 store robbery. However, Williams looked nothing like the perpetrator captured in the surveillance video, and the case was dropped. Rewind to May 2019, when Detroit resident Michael Oliver was arrested after being identified by the very same police facial recognition unit as the person who stole a smartphone from a vehicle.


Civil rights groups ask Biden administration to oppose facial recognition

Washington Post - Technology News

But some cities have stuck by the systems. In Detroit, where the police chief said the system was useful even though it almost never returned a perfect match without human guidance, city leaders last year approved further use of the software, saying it helped protect the public while empowering the police.


The Next Target for a Facial Recognition Ban? New York

WIRED

Civil rights activists have successfully pushed for bans on police use of facial recognition in cities like Oakland, San Francisco, and Somerville, Massachusetts. Now, a coalition led by Amnesty International is setting its sights on the nation's biggest city -- New York -- as part of a drive for a global moratorium on government use of the technology. Amnesty's #BantheScan campaign is backed by Legal Aid, the New York Civil Liberties Union, and AI For the People, among other groups. After New York, the group plans to target New Delhi and Ulaanbaatar in Mongolia. "New York is the biggest city in the country," says Michael Kleinman, director of Amnesty International's Silicon Valley Initiative.


LAPD panel approves new oversight of facial recognition, rejects calls to end program

Los Angeles Times

The Los Angeles Police Commission approved a policy Tuesday that set new parameters on the LAPD's use of facial recognition technology, but stopped far short of the outright ban sought by many city activists. The move followed promises by the commission in September to review the Los Angeles Police Department's use of photo-comparison software, after The Times reported that officers had used the technology -- contrary to department claims -- more than 30,000 times since 2009. The new policy restricts LAPD detectives and other trained officers to using a single software platform operated by the Los Angeles County Sheriff's Department, which draws only on mug shots and is far less expansive than some third-party search platforms. It also mandates new measures for tracking the department's use of the county system and its outcomes in fighting crime. Commissioners and top police executives praised the policy as a step in the right direction, saying it struck the right balance between protecting people's civil liberties and giving cops the tools they need to solve and reduce crime -- which is on the rise.


Facial Recognition Technology Isn't Good Just Because It's Used to Arrest Neo-Nazis

Slate

In a recent New Yorker article about the Capitol siege, Ronan Farrow described how investigators used a bevy of online data and facial recognition technology to confirm the identity of Larry Rendall Brock Jr., an Air Force Academy graduate and combat veteran from Texas. Brock was photographed inside the Capitol carrying zip ties, presumably to be used to restrain someone. (Brock was arrested Sunday and charged with two counts.) Even as they stormed the Capitol, many rioters stopped to pose for photos and give excited interviews on livestream. Each photo uploaded, message posted, and stream shared created a torrent of data for police, researchers, activists, and journalists to archive and analyze.


Man sues police over a facial recognition-related wrongful arrest

Engadget

A New Jersey man is suing the town of Woodbridge and its police department after he was falsely arrested following an incorrect facial recognition match. Nijeer Parks spent 10 days in jail last year, including a week in "functional solitary confinement," following a shoplifting incident that January. After officers were called to a Hampton Inn in Woodbridge, the alleged shoplifter presented them with a Tennessee driver's license, which they determined was fake. When they attempted to arrest him after spotting what appeared to be a bag of marijuana in his pocket, the man fled in his rental car. One officer said he had to leap out of the way or he would have been hit.


Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges

arXiv.org Artificial Intelligence

As we make tremendous advances in machine learning and artificial intelligence technosciences, there is a renewed understanding in the AI community that we must ensure that human beings are at the center of our deliberations so that we don't end up in technology-induced dystopias. As Green strongly argues in his book Smart Enough City, the incorporation of technology into city environs does not automatically translate into prosperity, wellbeing, urban livability, or social justice. There is a great need to deliberate on the future of cities worth living in and on how to design them. Philosophical and ethical questions are involved, along with various challenges relating to the security, safety, and interpretability of the AI algorithms that will form the technological bedrock of future cities. Several research institutes on human-centered AI have been established at top international universities. Globally, there are calls for technology to be made more humane and human-compatible; Stuart Russell, for example, has written a book called Human Compatible. The Center for Humane Technology advocates for regulators and technology companies to avoid business models and product features that contribute to social problems such as extremism, polarization, misinformation, and Internet addiction. In this paper, we analyze and explore key challenges -- security, robustness, interpretability, and ethics -- to the successful deployment of AI and ML in human-centric applications, with a particular emphasis on the convergence of these challenges. We provide a detailed review of the existing literature on these key challenges and analyze how one of them may lead to others or help in solving them. The paper also discusses the current limitations, pitfalls, and future directions of research in these domains, and how future work can fill current gaps and lead to better solutions.


Facial recognition used to arrest protestor at Trump bible photo op

Mashable

Of course a little-known facial recognition tool was used on Black Lives Matter protestors. This past June, as protestors were tear-gassed in Washington D.C.'s Lafayette Square so that Donald Trump could have a bible-thumping photo op, officials claim a man assaulted a police officer. The man, Michael Joseph Peterson Jr., wasn't arrested at the scene. Instead, police pulled images off Twitter and ran them through a previously secretive facial recognition system to find a match. So reports the Washington Post, which notes that many experts believe this is the first time a defendant has been told that the National Capital Region Facial Recognition Investigative Leads System (NCRFRILS) was used to track them down.