Microsoft says its facial recognition technology is less biased

Mashable

Microsoft claims its facial recognition technology just got a little less awful. Earlier this year, a study by MIT researchers found that tools from IBM, Microsoft, and Chinese company Megvii could correctly identify light-skinned men with 99-percent accuracy, but misidentified darker-skinned women as often as one-third of the time. Microsoft's software performed poorly in that study. Now imagine a computer incorrectly flagging an image at an airport or in a police database, and you can see how dangerous those errors could be.
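For context, the disparity the study measured comes down to simple audit arithmetic: compute accuracy separately for each demographic group and compare. Here is a minimal sketch of that calculation, using made-up records rather than the study's actual data:

```python
from collections import defaultdict

# Hypothetical (label, prediction, group) audit records; the MIT study
# grouped its benchmark by skin type and gender.
records = [
    ("male", "male", "lighter-skinned men"),
    ("male", "male", "lighter-skinned men"),
    ("female", "male", "darker-skinned women"),
    ("female", "female", "darker-skinned women"),
]

correct = defaultdict(int)
total = defaultdict(int)
for label, prediction, group in records:
    total[group] += 1
    correct[group] += (label == prediction)

# A large gap between groups' error rates is the kind of bias the study flagged.
for group, n in total.items():
    accuracy = correct[group] / n
    print(f"{group}: {accuracy:.0%} accurate, {1 - accuracy:.0%} error rate")
```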


North Korea Is Selling Facial Recognition Technology, Report Finds

NPR

North Korea has been secretly selling facial recognition software, fingerprint scanning, and other products overseas, a new report finds.


Image Recognition: A peek into the future

#artificialintelligence

Our brains are wired to differentiate between objects, both living and non-living, simply by looking at them. In fact, recognizing objects and situations visually is the fastest way we gather and relate information. For computers this is a much taller order: vast amounts of data must be fed into them before they can perform such an operation on their own. Yet with each passing day, it is becoming more essential for machines to identify objects and faces on their own, so that humans can take the next big step toward a more scientifically advanced society. So, what progress have we really made in that respect?


AI claims to be able to thwart facial recognition software, making you "invisible"

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, while the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
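The article doesn't include code, but the two-network setup it describes can be sketched roughly. Below is a minimal, hypothetical PyTorch illustration of the idea, with toy architectures and random data standing in for the Toronto team's actual system: a "detector" learns to spot faces while a "disruptor" learns small perturbations that fool it.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two networks; these are hypothetical architectures,
# not the Toronto team's actual models.
detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))   # face present?
disruptor = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1), nn.Tanh()) # perturbation

opt_det = torch.optim.Adam(detector.parameters(), lr=1e-4)
opt_dis = torch.optim.Adam(disruptor.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
eps = 0.05                        # keep the perturbation near-imperceptible

faces = torch.rand(8, 3, 32, 32)  # hypothetical batch of face images
present = torch.ones(8, 1)        # ground truth: every image contains a face

for step in range(100):           # each round, the networks learn from each other
    perturbed = (faces + eps * disruptor(faces)).clamp(0, 1)

    # Detector step: learn to keep spotting faces despite the perturbation.
    det_loss = bce(detector(perturbed.detach()), present)
    opt_det.zero_grad()
    det_loss.backward()
    opt_det.step()

    # Disruptor step: learn perturbations that make the detector miss faces.
    dis_loss = bce(detector(perturbed), 1 - present)
    opt_dis.zero_grad()
    dis_loss.backward()
    opt_dis.step()
```

The alternating updates are what set up the "arms race" the researchers describe: each network trains against the other's latest state rather than a fixed target.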


Could the #10YearChallenge Really Improve Facial Recognition Tech?

Slate

Over the past week, the #2009vs2019 meme challenge, also known as the #10yearchallenge and #HowHardDidAgeHitYou, has become the latest social media trend ripe for think-piece fodder. While the challenge inspired a host of discussions about social media narcissism and gendered norms, author and consultant Kate O'Neill put her own spin on the meme in a tweet raising the privacy implications of posting age-separated photos of oneself on Facebook. The post generated enough buzz on Twitter that O'Neill expanded it into a Wired article, in which she argued that Facebook or another data-hungry entity could exploit the meme to train facial recognition algorithms to better handle age-related characteristics and age-progression prediction. She noted that the clear labeling of the year in which each picture was taken, along with the sheer volume of pictures explicitly age-separated by a set amount of time, could be quite valuable to a company like Facebook. "In other words, thanks to this meme, there's now a very large data set of carefully curated photos of people from roughly 10 years ago and now," O'Neill wrote.
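To see why that labeling matters, consider how the meme's posts map onto supervised training data. A hypothetical sketch (invented field names, not any real Facebook pipeline) of turning age-separated photo pairs into examples for an age-progression model:

```python
from dataclasses import dataclass

@dataclass
class AgePair:
    """One training example: two photos of the same person, a known
    number of years apart. All field names here are hypothetical."""
    user_id: str
    photo_then: str   # the older photo (e.g. 2009)
    photo_now: str    # the recent photo (e.g. 2019)
    years_apart: int  # the gap the meme's labels make explicit

def pairs_from_posts(posts):
    """Build supervised pairs from meme posts, assumed to be dicts with
    'user', 'img_2009', and 'img_2019' keys (an illustrative schema)."""
    return [
        AgePair(p["user"], p["img_2009"], p["img_2019"], years_apart=10)
        for p in posts
        if p.get("img_2009") and p.get("img_2019")
    ]

posts = [{"user": "u1", "img_2009": "u1_2009.jpg", "img_2019": "u1_2019.jpg"}]
print(pairs_from_posts(posts))
```

The point O'Neill makes is that the meme hands over the hardest part of such a dataset for free: clean, self-reported "then" and "now" labels with a fixed time gap.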