Microsoft's facial-recognition technology is getting smarter at recognizing people with darker skin tones. On Tuesday, the company touted the progress, though it comes amid growing worries that these technologies will enable surveillance against people of color. Microsoft's announcement didn't broach the concerns; the company merely addressed how its facial-recognition tech could misidentify both men and women with darker skin tones. Microsoft has recently reduced the system's error rates by up to 20 times. In February, research from MIT and Stanford University highlighted how facial-recognition technologies can be built with bias.
On the heels of criticism of its work with U.S. Immigration and Customs Enforcement (ICE), Microsoft is advocating for government to take a role in regulating facial recognition technology. Microsoft officials have said that the company's work with ICE doesn't include any facial-recognition work, despite a company blog post, touting ICE as a customer, that mentioned the potential for ICE to use facial recognition. ICE has come under fire for its role in separating immigrant children from their families. Microsoft officials haven't responded to calls by some employees and others outside the company to cease all work with ICE, which, frankly, isn't too surprising. Even though Microsoft has been stepping up its efforts to position itself as a champion of ethical uses of AI, government contracts are a key part of the company's business.
Microsoft has called for facial recognition technology to be regulated by government, with laws governing its acceptable uses. In a blog post on the company's website on Friday, Microsoft president Brad Smith called for a bipartisan congressional "expert commission" to look into regulating the technology in the US. "It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse," he wrote. "Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime." Microsoft is the first big tech company to raise serious alarms about an increasingly sought-after technology for recognizing a person's face from a photo or through a camera.
A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the first network's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
A Microsoft blog post called on elected government officials to draft laws regulating the use of facial recognition technology, stressing the need to develop norms around acceptable uses. Microsoft gave examples such as the government tracking citizens over the course of months, or stores tracking shoppers' every visit to see which shelves they browsed without notifying them of the surveillance. The technology also exhibits bias with respect to certain physical features, so it could create a society in which certain groups are targeted simply because they are easier to track. Lastly, Microsoft doesn't believe tech companies should be the ones making the rules, as some have suggested.