In July 2019, Guillermo Federico Ibarrola was heading home on the subway when he was stopped by Buenos Aires police. The authorities told Ibarrola that he was being detained for an armed robbery committed three years earlier in a city about 400 miles away. He said he had never even been to the city where he was accused of committing the crime. On his sixth day in police custody, he was suddenly released. The police officers offered Ibarrola coffee and dinner, and a bus ticket back home. As it turned out, a "Guillermo Ibarrola" may well have committed a crime, but it wasn't this Guillermo Ibarrola.
Once the domain of science fiction (e.g., Star Trek), facial recognition technology has not only become reality this century; awareness of its benefits and pitfalls has also risen with its heightened presence in the news over the last few months. We hope to shed some light on the reasons for this ascent and the myriad thoughts and actions it has raised. To be sure, all the complex issues, implications, and ethics surrounding facial recognition technology are far too important and expansive to cover in this piece. We also recognize there is much more worth exploring, and a variety of valid and informed views on the subject. Our aim is for this piece to be informative, unbiased, and thought-provoking as the topic of facial recognition technology continues to gain attention and relevance.
Clearview says its software lets authorities plug in photos of people suspected of involvement in crimes and search for other images of their faces from the internet. The company has compiled a massive database of photos by scraping websites, including social-media platforms. Some of those platforms have said Clearview's scraping violates their terms of service; Facebook Inc., Twitter Inc. and Microsoft Corp.'s LinkedIn are among those that have sent the startup cease-and-desist orders. Civil libertarians have raised concerns about the use of facial recognition by law enforcement broadly, and about Clearview specifically.
Infer Genetic Disease From Your Face - DeepGestalt can accurately identify some rare genetic disorders using a photograph of a patient's face. This could lead payers and employers to analyze facial images and discriminate against individuals who have pre-existing conditions or are at risk of developing medical complications.
IBM has called for the US Department of Commerce to limit the export of facial recognition systems, particularly to countries that could potentially use them for mass surveillance, racial profiling, or other human rights violations. In a letter [PDF] to the Commerce Department, IBM highlighted the need for tighter export controls on facial recognition technologies that employ what it refers to as "1-to-many" matching. These suggested controls include controlling the export of both the high-resolution cameras used to collect data and the software algorithms used to analyze and match that data against a database of images, and restricting access to online image databases that can be used to train 1-to-many facial recognition systems. "These systems are distinct from '1 to 1' facial matching systems, such as those that might unlock your phone or allow you to board an airplane -- in those cases, facial recognition is verifying that a consenting person is who they say they are," IBM government and regulatory affairs vice president Christopher Padilla explained in a blog post. "But in a '1-to-many' application, a system can, for example, pick a face out of a crowd by matching one image against a database of many others."
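The 1-to-1 versus 1-to-many distinction IBM draws can be made concrete with a toy sketch. Modern systems typically convert a face image into an embedding vector and compare embeddings by similarity; everything below (the names, the 128-dimension embeddings, the 0.6 threshold, and the random vectors standing in for a real face-embedding model) is illustrative, not taken from IBM's letter:

```python
import numpy as np

# Hypothetical enrolled templates. In a real system each vector would come
# from a face-embedding model applied to a photo; here they are random.
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_1_to_1(probe, enrolled, threshold=0.6):
    """1-to-1 verification: is this probe the same person as ONE enrolled
    template? (e.g., unlocking a phone, boarding a plane)"""
    return cosine(probe, enrolled) >= threshold

def identify_1_to_many(probe, db, threshold=0.6):
    """1-to-many identification: search the probe against EVERY template in
    the database (e.g., picking a face out of a crowd)."""
    name, score = max(((n, cosine(probe, e)) for n, e in db.items()),
                      key=lambda t: t[1])
    return (name, score) if score >= threshold else (None, score)

# A probe that is "alice" seen under slightly different conditions,
# simulated as her template plus small noise.
probe = database["alice"] + rng.normal(scale=0.1, size=128)
```

The structural difference is what drives the policy distinction: verification answers a yes/no question about a consenting individual, while identification scans an entire population of templates, which is why IBM's proposed export controls target the 1-to-many case.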
Pressure on US lawmakers to create federal regulations on facial recognition has been mounting. IBM, Amazon, and Microsoft stopped selling the technology to US police, and called on Congress to regulate its use. Amidst international protests against racism and police misconduct, news broke that Detroit police had wrongfully arrested a Black man based on a faulty facial recognition match. In response, House Democrats proposed a bill last week that would ban police from using facial recognition. Against that backdrop, industry groups have quietly lobbied to soften regulations and avoid an outright ban.
Portland, Oregon, officials have passed what could be the strictest municipal ban on facial recognition in the country. That means places like hotels, stores and restaurants can't use facial recognition where customers will be present. According to CNET, the bill passed unanimously, and it will be enforced starting in January 2021. Businesses caught violating the law could be sued and could pay up to $1,000 a day in fines. In the document (PDF) detailing the ordinance, the city council noted that "Black, Indigenous and People of Color communities have been subject to over surveillance and disparate and detrimental impact of the misuse of surveillance."
Lawmakers in Portland, Oregon, on Wednesday passed the nation's most far-reaching facial recognition ban, prohibiting not only public agencies but also private enterprises from using the technology in public spaces. Portland's four city council members voted unanimously in support of two separate ordinances -- one barring city agencies from using facial recognition and one barring private entities from using it in public spaces. While the regulation of facial recognition is in its nascent stages -- with laws existing in just a few places like San Francisco, Oakland and San Diego -- banning private enterprises from using the technology puts Portland in uncharted legal territory. "This is a truly historic day for the city of Portland," Mayor Ted Wheeler said after the two ordinances passed. "Portlanders deserve peace of mind. They deserve transparency from private institutions, just as they do public institutions... It's my hope that other cities, large and small, in this nation and across the globe will follow suit."
Every few months, the U.S. National Institute of Standards and Technology (NIST) releases the results of benchmark tests it conducts on facial recognition algorithms submitted by companies, universities, and independent labs. A portion of these tests focus on demographic performance -- that is, how often the algorithms misidentify a Black man as a white man, a Black woman as a Black man, and so on. Stakeholders are quick to say that the algorithms are constantly improving with regard to bias, but a VentureBeat analysis reveals a different story. In fact, our findings cast doubt on the notion that facial recognition algorithms are becoming better at recognizing people of color. That isn't surprising, as numerous studies have shown facial recognition algorithms are susceptible to bias.
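The kind of demographic comparison described above can be illustrated with a simplified version of one metric such evaluations report: a false match rate (how often the algorithm declares two different people a match) computed separately per demographic group. The trial data below is made up purely for illustration; real NIST evaluations run millions of image pairs, and the group labels here are placeholders:

```python
from collections import defaultdict

# Each trial: (demographic group, whether the pair is truly the same person,
# whether the algorithm declared a match). Fabricated example data.
trials = [
    ("group_a", False, True),  ("group_a", False, False), ("group_a", False, False),
    ("group_b", False, True),  ("group_b", False, True),  ("group_b", False, False),
    ("group_a", True,  True),  ("group_b", True,  True),
]

def false_match_rate_by_group(trials):
    """Per-group false match rate: among different-person ("impostor") pairs,
    the fraction the algorithm incorrectly declared a match."""
    impostor_pairs = defaultdict(int)
    false_matches = defaultdict(int)
    for group, same_person, said_match in trials:
        if not same_person:            # only impostor pairs count toward FMR
            impostor_pairs[group] += 1
            if said_match:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_pairs[g] for g in impostor_pairs}

rates = false_match_rate_by_group(trials)
```

A gap between groups in a metric like this, rather than the overall accuracy number, is what analyses of demographic bias in facial recognition look at: an algorithm can improve on average while the gap between its best- and worst-served groups stays flat or widens.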