Many facial recognition systems are being trained using millions of online photos uploaded by everyday people and, more often than not, the photos are being taken without users' consent, an NBC News investigation has found. In one worrying case, IBM scraped almost a million photos from unsuspecting users on Flickr to build its facial recognition database. The practice not only raises privacy concerns, but also fuels fears that the systems could one day be used to disproportionately target minorities. IBM's database, called 'Diversity in Faces,' was released in January as part of the company's efforts to 'advance the study of fairness and accuracy in facial recognition technology.' The database was released following a study from MIT Media Lab researcher Joy Buolamwini, which found that popular facial recognition services from Microsoft, IBM and Face++ vary in accuracy based on gender and race.
Amazon's controversial facial recognition software, Rekognition, is facing renewed criticism. A new study from the MIT Media Lab found that Rekognition may have gender and racial biases. In particular, the software performed worse when identifying the gender of women, and worst of all for darker-skinned women. When the software was presented with a number of female faces, it incorrectly labeled 19 percent of them as male.
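The kind of audit described above boils down to measuring error rates separately for each demographic subgroup rather than in aggregate. The following is a minimal sketch of that disaggregated evaluation; the subgroup labels and prediction records here are hypothetical and are not taken from the MIT study's data.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate per demographic subgroup.

    Each record is a (subgroup, true_label, predicted_label) tuple.
    Aggregating errors per subgroup, rather than overall, is what
    exposes the kind of disparity the study reported.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, predicted in records:
        totals[group] += 1
        if truth != predicted:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical classifier output: (subgroup, true gender, predicted gender)
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),    # misclassified
    ("darker-skinned female", "female", "female"),
]

print(error_rates_by_group(records))
```

An overall accuracy number over these four records would hide the fact that one subgroup is misclassified half the time, which is why audits of this kind report per-group rates.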
MIT researchers believe they've figured out a way to keep facial recognition software from being biased. To do this, they developed an algorithm that knows to scan for faces, but also evaluates the training data supplied to it. The algorithm scans for biases in the training data and eliminates any that it perceives, resulting in a more balanced dataset. 'We've learned in recent years that AI systems can be unfair, which is dangerous when they're increasingly being used to do everything from predict crime to determine what news we consume,' MIT's Computer Science & Artificial Intelligence Laboratory said in a statement.
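The core idea of producing a more balanced dataset can be illustrated with a much simpler technique than the one MIT built: oversampling under-represented groups until each appears equally often. This is only a sketch of the rebalancing concept; the actual MIT algorithm learns the structure of the data during training rather than relying on explicit group labels, and the `balance_by_group` helper below is hypothetical.

```python
import random

def balance_by_group(samples, key, seed=0):
    """Oversample under-represented groups so all groups are equal in size.

    `samples` is a list of dicts; `key` names the attribute to balance on.
    Minority groups are padded by random resampling with replacement.
    """
    rng = random.Random(seed)
    groups = {}
    for sample in samples:
        groups.setdefault(sample[key], []).append(sample)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Pad smaller groups up to the size of the largest one.
        balanced.extend(rng.choices(members, k=target - len(members)))
    return balanced
```

Trained on the rebalanced list, a classifier sees each group equally often, which is the property the MIT work aims for, even though it achieves it by a more sophisticated, learned route.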
Microsoft has updated its facial recognition technology in an attempt to make it less 'racist'. It follows a study published in March that criticised the technology for more accurately recognising the gender of people with lighter skin tones. The system was found to perform best on males with lighter skin and worst on females with darker skin. The problem largely comes down to the data being used to train the AI system not containing enough images of people with darker skin tones. Experts from the computing firm say their tweaks have significantly reduced these errors, by up to 20 times for people with darker faces.
Jimmy Gomez is a California Democrat, a Harvard graduate and one of the few Hispanic lawmakers serving in the US House of Representatives. But to Amazon's facial recognition system, he looks like a potential criminal. Gomez was one of 28 US Congress members falsely matched with mugshots of people who've been arrested, as part of a test the American Civil Liberties Union ran last year of the Amazon Rekognition program. Nearly 40 percent of the false matches by Amazon's tool, which is being used by police, involved people of color. This is part of a CNET special report exploring the benefits and pitfalls of facial recognition.