Microsoft tweaks facial-recognition tech to combat bias

FOX News

Microsoft's facial-recognition technology is getting smarter at recognizing people with darker skin tones. On Tuesday, the company touted the progress, though it comes amid growing worries that these technologies will enable surveillance of people of color. Microsoft's announcement didn't broach those concerns; the company addressed only how its facial-recognition tech could misidentify men and women with darker skin tones. Microsoft has recently reduced the system's error rates by up to a factor of 20. In February, research from MIT and Stanford University highlighted how facial-recognition technologies can be built with bias.


Microsoft advocates for government regulation of facial-recognition technology

ZDNet

On the heels of criticism of its work with U.S. Immigration and Customs Enforcement (ICE), Microsoft is advocating for government to take a role in regulating facial-recognition technology. Microsoft officials have said that the company's work with ICE doesn't include any facial-recognition work, despite a company blog post, describing ICE as a Microsoft customer, that mentioned the potential for ICE to use facial recognition. ICE has come under fire for its role in separating immigrant children from their families. Microsoft officials haven't responded to calls by some employees and others outside the company to cease all work with ICE, which, frankly, isn't too surprising: even though Microsoft has been stepping up its efforts to position itself as a champion of ethical uses of AI, government contracts are a key part of the company's business.


Microsoft calls for facial recognition technology rules given 'potential for abuse'

The Guardian

Microsoft has called for facial recognition technology to be regulated by government, with laws governing its acceptable uses. In a blog post on the company's website on Friday, Microsoft president Brad Smith called for a bipartisan congressional "expert commission" to look into regulating the technology in the US. "It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse," he wrote. "Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime." Microsoft is the first big tech company to raise serious alarms about an increasingly sought-after technology for recognising a person's face from a photo or through a camera.


The future of facial recognition – MV Pro Media

#artificialintelligence

Facial recognition technology is becoming increasingly prevalent in our everyday lives, with many of us using it every time we unlock our smartphones with our faces – one study found that we use our phones around 52 times per day. Whilst it has transformed how we access our phones, facial recognition technology is also being used in a number of industries outside of tech to improve the service that companies provide customers. If your company isn't adopting facial recognition, it's time to start researching it before you get left behind. Devices recognise their users by scanning facial features and shapes – specific contours and unique individual features help devices such as smartphones recognise users and unlock certain settings. For example, many banking apps now allow users to log in to internet banking with their face – in some ways this is far safer than previous methods of online banking, which relied on an individual code or a series of security questions that only the user would know.


AI claims to be able to thwart facial recognition software, making you "invisible"

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep-learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the facial recognition performed by the first. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
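The Toronto team's actual disruptor is itself a trained neural network, trained adversarially against the detector. As a minimal, hypothetical sketch of the underlying idea – perturbing an input against the gradient of a detector's confidence – here is a single FGSM-style attack step against a toy linear "detector" (the model, names, and parameters below are illustrative stand-ins, not the researchers' code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "face detector": logistic regression over a flattened 8x8 image.
# (Illustrative only -- the Toronto work uses deep networks, not this model.)
w = rng.normal(size=64)
b = 0.0

def detector_score(x):
    """Confidence the toy detector assigns to 'face present'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def disrupt(x, eps=2.0):
    """One FGSM-style step: move each pixel against the score's gradient.

    For sigmoid(w @ x + b), the gradient w.r.t. x is s * (1 - s) * w,
    so its sign equals sign(w); subtracting eps * sign(w) lowers the score.
    """
    return x - eps * np.sign(w)

# An input the toy detector is highly confident about (aligned with w).
face = np.sign(w)
before = detector_score(face)
after = detector_score(disrupt(face))
print(f"detector confidence before: {before:.3f}, after: {after:.3f}")
```

In the adversarial-training setup the paper describes, this single hand-derived step would be replaced by a second network that learns to produce such perturbations, while the detector is simultaneously retrained to resist them – the "arms race" quoted above.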