
Infer Genetic Disease From Your Face - DeepGestalt can accurately identify some rare genetic disorders from a photograph of a patient's face. This could enable payers and employers to analyze facial images and discriminate against individuals who have pre-existing conditions or are at risk of developing medical complications.

IBM pushes for US to limit facial recognition system exports


IBM has called for the US Department of Commerce to limit the export of facial recognition systems, particularly to countries that could potentially use them for mass surveillance, racial profiling, or other human rights violations. In a letter [PDF] to the Commerce Department, IBM highlighted the need for tighter export controls on facial recognition technologies that employ what it referred to as "1-to-many" matching. These suggested controls include restricting the export of both the high-resolution cameras used to collect data and the software algorithms used to analyse and match that data against a database of images, as well as restricting access to online image databases that can be used to train 1-to-many facial recognition systems. "These systems are distinct from '1 to 1' facial matching systems, such as those that might unlock your phone or allow you to board an airplane -- in those cases, facial recognition is verifying that a consenting person is who they say they are," IBM government and regulatory affairs vice president Christopher Padilla explained in a blog post. "But in a '1-to-many' application, a system can, for example, pick a face out of a crowd by matching one image against a database of many others."

The organizations positioned to lobby against a US ban on facial recognition


Pressure on US lawmakers to create federal regulations on facial recognition has been mounting. IBM, Amazon, and Microsoft stopped selling the technology to US police, and called on Congress to regulate its use. Amidst international protests against racism and police misconduct, news broke that Detroit police had wrongfully arrested a Black man based on a faulty facial recognition match. In response, House Democrats proposed a bill last week that would ban police from using facial recognition. Against that backdrop, industry groups have quietly lobbied to soften regulations and avoid an outright ban.

Portland officials pass strict ban on facial recognition systems


Portland, Oregon officials have passed what could be the strictest municipal ban on facial recognition in the country. That means places like hotels, stores and restaurants can't use facial recognition where customers will be present. According to CNET, the bill passed unanimously, and it will be enforced starting in January 2021. Businesses caught violating the law could be sued and could pay up to $1,000 a day in fines. In the document (PDF) detailing the ordinance, the city council noted that "Black, Indigenous and People of Color communities have been subject to over surveillance and disparate and detrimental impact of the misuse of surveillance."

Portland, Ore. passes first-of-its-kind facial recognition ban


Lawmakers in Portland, Oregon on Wednesday passed the nation's most far-reaching facial recognition ban, prohibiting not only public agencies but also private enterprises from using the technology in public spaces. Portland's four city council members voted unanimously in support of two separate ordinances -- one barring city agencies from using facial recognition and one barring private entities from using it in public spaces. While the regulation of facial recognition is in its nascent stages -- with laws existing in just a few places like San Francisco, Oakland and San Diego -- banning private enterprises from using the technology puts Portland in uncharted legal territory. "This is a truly historic day for the city of Portland," Mayor Ted Wheeler said after the two ordinances passed. "Portlanders deserve peace of mind. They deserve transparency from private institutions, just as they do public institutions... It's my hope that other cities, large and small, in this nation and across the globe will follow suit."

NIST benchmarks show facial recognition technology still struggles to identify Black faces


Every few months, the U.S. National Institute of Standards and Technology (NIST) releases the results of benchmark tests it conducts on facial recognition algorithms submitted by companies, universities, and independent labs. A portion of these tests focus on demographic performance -- that is, how often the algorithms misidentify a Black man as a white man, a Black woman as a Black man, and so on. Stakeholders are quick to say that the algorithms are constantly improving with regard to bias, but a VentureBeat analysis reveals a different story. In fact, our findings cast doubt on the notion that facial recognition algorithms are becoming better at recognizing people of color. That isn't surprising, as numerous studies have shown facial recognition algorithms are susceptible to bias.

Eight case studies on regulating biometric technology show us a path forward

MIT Technology Review

Amba Kak was in law school in India when the country rolled out the Aadhaar project in 2009. The national biometric ID system, conceived as a comprehensive identity program, sought to collect the fingerprints, iris scans, and photographs of all residents. It wasn't long, Kak remembers, before stories about its devastating consequences began to spread. "We were suddenly hearing reports of how manual laborers who work with their hands--how their fingerprints were failing the system, and they were then being denied access to basic necessities," she says. "We actually had starvation deaths in India that were being linked to the barriers that these biometric ID systems were creating. So it was a really crucial issue."

People are fighting algorithms for a more just and equitable future. You can, too.


Mashable's series Algorithms explores the mysterious lines of code that increasingly control our lives -- and our futures. From dating apps, to news feeds, to streaming and purchase recommendations, we have become accustomed to a subtle prodding by unseen instruction sets, themselves generated by unnamed humans or opaque machines. But there is another, not-so-gentle side to the way algorithms affect us. A side where the prodding is more forceful, and the consequences more lasting than a song not to your liking, a product you probably shouldn't have bought, or even a date that fell flat. Automated license plate reader errors have resulted in children being held at gunpoint. Algorithms have the power to drive pain and oppression, at scale, and unless there is an intentional, systematic effort to push back, our ever-increasing reliance on algorithmic decision-making will only lead us further down a dark path.

ICE just signed a contract with facial recognition company Clearview AI


Immigration and Customs Enforcement (ICE) signed a contract with facial recognition company Clearview AI this week for "mission support," government contracting records show (as first spotted by the tech accountability nonprofit Tech Inquiry). The purchase order for $224,000 describes "clearview licenses" and lists "ICE mission support dallas" as the contracting office. ICE is known to use facial recognition technology; last month, The Washington Post reported that the agency, along with the FBI, had accessed state drivers' license databases -- a veritable facial recognition gold mine, as the Post termed it -- without the knowledge or consent of drivers. The agency has been criticized for its practices at the US southern border, which have included separating immigrant children from their families and detaining refugees indefinitely. "Clearview AI's agreement is with Homeland Security Investigations (HSI), which uses our technology for their Child Exploitation Unit and ongoing criminal investigations," Clearview AI CEO Hoan Ton-That said in an emailed statement to The Verge.