San Francisco supervisors approved a ban on police use of facial recognition technology, making it the first U.S. city with such a restriction.

SAN FRANCISCO – A routine traffic stop goes dangerously awry when a police officer's body camera uses its built-in facial recognition software to misidentify a motorist as a convicted felon. At best, lawsuits follow. That imaginary scenario is what some California lawmakers are trying to avoid by supporting Assembly Bill 1215, the Body Camera Accountability Act, which would ban the use of facial recognition software in police body cameras – a national first if it passes a Senate vote this summer and is signed by Gov. Gavin Newsom. California law enforcement officials do not currently use the technology to scan people in officers' line of sight.
On Tuesday, news broke that Microsoft refused to sell its facial recognition software to law enforcement in California and an unnamed country. The move led to some praise for the company for being consistent with its policy to oppose questionable human rights applications, but a broader examination of Microsoft's actions in the past year indicates that the company has been saying one thing and doing another. Last week, the Financial Times reported that Microsoft Research Asia worked with a university associated with the Chinese military on facial recognition tech that is being used to monitor the nation's population of Uighur Muslims. Up to 500,000 members of the group, primarily in western China, were monitored over the course of a month, according to a New York Times report. Microsoft defended the work as helpful to advance the technology, but U.S. Senator Marco Rubio called the company complicit in human rights abuses.
Over the past year, Silicon Valley has been grappling with the way it handles our data, our elections, and our speech. Now it's got a new concern: our faces. In just the past few weeks, critics assailed Amazon for selling facial recognition technology to local police departments, and Facebook for how it gained consent from Europeans to identify people in their photos. Microsoft has endured its own share of criticism lately around the ethical uses of its technology, as employees protested a contract under which US Immigration and Customs Enforcement uses Microsoft's cloud-computing service. Microsoft says that contract did not involve facial recognition.
Microsoft is planning to implement self-designed ethical principles for its facial recognition technology by the end of March, as it urges governments to push ahead with matching regulation in the field. In December, the company called for new legislation to govern artificial intelligence software for recognising faces, advocating human review and oversight of the technology in critical cases as a way to mitigate the risks of biased outcomes and intrusions into privacy and democratic freedoms. "We do need to lead by example and we're working to do that," Microsoft president and chief legal officer Brad Smith said in an interview, adding that some other companies are also putting similar principles into place. Smith said the company plans to "operationalise" its principles by the end of March: drafting policies, building governance systems, and engineering tools and testing to make sure the technology's use is in line with those principles. That also involves setting controls for the company's global sales and consulting teams to prevent selling the technology in cases where it risks being used for an unwanted purpose.
Despite concerns over facial recognition's impact on civil liberties, public agencies have continued to apply the tool liberally across the U.S., with one of the biggest deployments coming to an airport near you. The U.S. Department of Homeland Security (DHS) said it plans to expand its use of facial recognition to 97 percent of all passengers departing the U.S. by 2023, according to The Verge. By comparison, the technology was deployed in just 15 airports as of the end of 2018. Under what is being referred to as 'biometric exit,' the agency plans to use facial recognition to more thoroughly track passengers entering and leaving the country. The system works by taking a picture of passengers before they depart and cross-referencing the image against a database containing photos from passports and visas.
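At a high level, that cross-referencing step typically reduces each face photo to a numeric "embedding" and searches a gallery of document photos for the closest match above a confidence threshold. The following is a minimal illustrative sketch of that idea – not DHS's actual system; the document IDs, vectors, and the 0.8 threshold are all hypothetical assumptions:

```python
# Illustrative sketch of gallery matching: a live-capture embedding is
# compared against embeddings derived from passport/visa photos.
# All IDs, vectors, and the threshold below are hypothetical.
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match_passenger(live_embedding, gallery, threshold=0.8):
    """Return the best-matching document ID, or None if nothing clears the threshold."""
    best_id, best_score = None, threshold
    for doc_id, doc_embedding in gallery.items():
        score = cosine_similarity(live_embedding, doc_embedding)
        if score > best_score:
            best_id, best_score = doc_id, score
    return best_id

# Hypothetical gallery of passport-photo embeddings keyed by document number.
gallery = {
    "P123": [0.9, 0.1, 0.3],
    "P456": [0.2, 0.8, 0.5],
}
print(match_passenger([0.88, 0.12, 0.31], gallery))  # prints "P123"
```

The threshold is the policy-relevant knob: set too low, it produces the kind of misidentification the body-camera bill's backers warn about; set too high, legitimate travelers fail to match.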