Facial Recognition Technology


Why IBM Decided to Halt All Facial Recognition Development

#artificialintelligence

In a letter to Congress sent on June 8th, IBM's CEO Arvind Krishna made a bold statement regarding the company's policy toward facial recognition. "IBM no longer offers general purpose IBM facial recognition or analysis software," says Krishna. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency." The company has halted all facial recognition development and disapproves of any technology that could lead to racial profiling. The ethics of facial recognition technology have been questioned for years, yet there has been little to no movement toward enacting laws that bar its use.


Detroit police chief cops to 96-percent facial recognition error rate

#artificialintelligence

Detroit's police chief admitted on Monday that facial recognition technology used by the department misidentifies suspects about 96 percent of the time. It's an eye-opening admission, given that the Detroit Police Department is facing criticism for arresting a man based on a bogus match from facial recognition software. Last week, the ACLU filed a complaint with the Detroit Police Department on behalf of Robert Williams, a Black man who was wrongfully arrested for allegedly stealing five watches worth $3,800 from a luxury retail store. Investigators first identified Williams by running a facial recognition search with software from a company called DataWorks Plus. Under police questioning, Williams pointed out that the grainy surveillance footage obtained by police didn't actually look like him.


ACM statement on facial recognition technology

AIHub

The Association for Computing Machinery (ACM) U.S. Technology Policy Committee (USTPC) released a statement on 30 June calling for "an immediate suspension of the current and future private and governmental use of FR [facial recognition] technologies in all circumstances known or reasonably foreseeable to be prejudicial to established human and legal rights." The Committee concludes that, when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems. The consequences of such bias, USTPC notes, frequently can and do extend well beyond inconvenience to profound injury, particularly to the lives, livelihoods, and fundamental rights of individuals in specific demographic groups, including some of the most vulnerable populations in our society. Such bias and its effects are scientifically and socially unacceptable. The USTPC finds that, at present, facial recognition technology is not sufficiently mature and reliable to be used fairly and safely.


U.S. facial recognition technology likely illegal in Europe

#artificialintelligence

A European privacy body said it "has doubts" that using facial recognition technology developed by U.S. company Clearview AI is legal in the EU. Clearview AI allows users to link facial images of an individual to a database of more than 3 billion pictures scraped from social media and other sources. According to media reports, over 600 law enforcement agencies worldwide are using the controversial app. But in a statement Wednesday, the European Data Protection Board said that "the use of a service such as Clearview AI by law enforcement authorities in the European Union would, as it stands, likely not be consistent with the EU data protection regime." The body issued the statement after MEPs raised questions regarding the use of the company's software.


São Paulo subway facial recognition system slammed over user data security and privacy

ZDNet

The company responsible for operating São Paulo's subway system has failed to provide sufficient evidence that it is protecting user privacy in the implementation of a new surveillance system that will use facial recognition technology. This is the conclusion of a group of consumer rights bodies following legal action initiated against Companhia do Metropolitano de São Paulo (METRO) over a project aimed at modernizing the subway's surveillance system. The current legacy system, an estate of 2,200 non-integrated cameras, will be replaced by 5,200 centrally controlled digital high-definition cameras. The platform, which will scan the faces of 4 million daily passengers, is expected to enhance operations and help authorities find wanted criminals through an integration with the police database. The consumer rights bodies that initiated the civil lawsuit noted, in a statement by the Brazilian Institute of Consumer Protection (IDEC), that METRO failed to produce a report on the impact of using facial recognition technology, or studies demonstrating the security of the databases underpinning the new surveillance system.


Facial Recognition Is Here To Stay, But Can We Control Its Use?

#artificialintelligence

Three days ago, in a letter to members of the United States Congress, IBM announced that it was abandoning the development of general-purpose facial recognition technologies because of their potential for mass surveillance, human rights violations, and racial discrimination. In his letter, IBM CEO Arvind Krishna called for a reconsideration of the sale of this kind of technology to law enforcement. The gesture cost IBM little, since the company was abandoning a technology in which it is not a leader and which has little impact on its bottom line, but it put pressure on the companies that do hold contracts with those law enforcement agencies, notably Amazon and Microsoft. The next day, Timnit Gebru, one of the leaders of Google's artificial intelligence team, said in an interview with the New York Times that the use of facial recognition technologies by law enforcement or security forces should be banned for the moment, and that she did not know how the issue would evolve in the future. One day later, on Wednesday, June 10, Amazon announced a one-year moratorium on police use of its controversial facial recognition technology, Rekognition, so as to continue improving it and, above all, to give the government time to reach a reasonable consensus and establish stricter regulations for its ethical use. The company will continue to facilitate the use of this technology by institutions that use it for other purposes, such as preventing human trafficking or reuniting missing children with their families, but will temporarily stop offering it to the police and law enforcement agencies, one of its main customers.


Ed Markey, Ayanna Pressley push for federal ban on facial recognition technology

Boston Herald

Massachusetts Sen. Ed Markey and Rep. Ayanna Pressley are pushing to ban the federal government's use of facial recognition technology, as Boston last week nixed the city's use of the technology and tech giants pause their sale of facial surveillance tools to police. The momentum to stop government use of facial recognition technology comes in the wake of the police killing of George Floyd, a Black man killed by a white police officer in Minneapolis. Floyd's death has sparked nationwide protests for racial justice and triggered calls for police reform, including changes to the ways police track people. Facial recognition technology contributes to the "systemic racism that has defined our society," Markey said on Sunday. "We cannot ignore that facial recognition technology is yet another tool in the hands of law enforcement to profile and oppress people of color in our country," Markey said during an online press briefing.


Congress proposes ban on government use of facial recognition software

#artificialintelligence

Members of Congress introduced a new bill on Thursday that would ban government use of biometric technology, including facial recognition tools. Reps. Pramila Jayapal and Ayanna Pressley announced the Facial Recognition and Biometric Technology Moratorium Act, which they said resulted from a growing body of research that "points to systematic inaccuracy and bias issues in biometric technologies which pose disproportionate risks to non-white individuals." The bill came just one day after the first documented instance of police mistakenly arresting a man due to facial recognition software. There has been long-standing, widespread concern about the use of facial recognition software from lawmakers, researchers, rights groups, and even the people behind the technology. Multiple studies over the past three years have repeatedly shown that the technology is still not accurate, especially for people with darker skin.


When the Police Treat Software Like Magic

#artificialintelligence

Kash: The police are supposed to use facial recognition identification only as an investigative lead. But instead, people treat facial recognition as a kind of magic. And that's why you get a case where someone was arrested based on flawed software combined with inadequate police work. Witness testimony is also very troubling; its unreliability has been a selling point for many facial recognition technologies.


Is There A Case For Regulating Facial Recognition Technology?

#artificialintelligence

As one of the most scrutinised technologies of the current era, facial recognition has been the subject of debate for quite some time. However, the recent killing of George Floyd by a Minneapolis police officer has brought new urgency to calls for strict regulation of, and guidelines on, law enforcement's use of this technology. Nevertheless, this divisive technology has penetrated almost every aspect of modern life: smartphones, airports, police stations, advertising, and payments. Amid the COVID pandemic, it has also displaced older contact-based biometric methods. But growing concern in the wake of the recent incident has pushed tech giants to reconsider their decisions to build this technology and offer it to police authorities.