Face Recognition


How Face Recognition Can Destroy Anonymity

WIRED

Stepping out in public used to make a person largely anonymous. Unless you met someone you knew, nobody would know your identity. Cheap and widely available face recognition software means that's no longer true in some parts of the world. Police in China run face algorithms on public security cameras in real time, providing notifications whenever a person of interest walks by. China provides an extreme example of the possibilities stemming from recent improvements in face recognition technology.


OpenCV Face detection with Haar cascades - PyImageSearch

#artificialintelligence

In this tutorial, you will learn how to perform face detection with OpenCV and Haar cascades. I've been an avid reader of PyImageSearch for the last three years; thanks for all the blog posts! My company does a lot of face application work, including face detection, recognition, etc. We just started a new project using embedded hardware. I don't have the luxury of using OpenCV's deep learning face detector, which you covered before; it's just too slow on my devices.


ACLU and 70 other organizations ask DHS to stop using Clearview AI

Engadget

More than 70 advocacy groups have called on the Department of Homeland Security to stop using Clearview AI's facial recognition software. In a letter addressed to DHS Secretary Alejandro Mayorkas and Susan Rice, the director of the White House's Domestic Policy Council, the American Civil Liberties Union, Electronic Frontier Foundation, OpenMedia and other organizations argue "the use of Clearview AI by federal immigration authorities has not been subject to sufficient oversight or transparency." The letter points to a recent BuzzFeed News report that found employees from 1,803 government bodies, including police departments and public schools, have been using the software without many of their bosses knowing about it. The company has given out free trials to individual employees at those organizations hoping that they'll advocate for their agency to sign up for it. Besides the lack of oversight, the letter points to issues like racial bias in facial recognition software and the fact Clearview built its database by scraping websites like Facebook, Twitter and YouTube.


Inside the rise of police department real-time crime centers

MIT Technology Review

In 2021, it might be simpler to ask what can't be mapped. Just as Google and social media have enabled each of us to reach into the figurative diaries and desk drawers of anyone we might be curious about, law enforcement agencies today have access to powerful new engines of data processing and association. Ogden is hardly the tip of the spear: police agencies in major cities are already using facial recognition to identify suspects--sometimes falsely--and deploying predictive policing to define patrol routes. "That's not happening here," Ogden's current police chief, Eric Young, told me. "We don't have any kind of machine intelligence."


AI is increasingly being used to identify emotions – here's what's at stake

#artificialintelligence

Imagine you are in a job interview. As you answer the recruiter's questions, an artificial intelligence (AI) system scans your face, scoring you for nervousness, empathy and dependability. It may sound like science fiction, but these systems are increasingly used, often without people's knowledge or consent. Emotion recognition technology (ERT) is in fact a burgeoning multi-billion-dollar industry that aims to use AI to detect emotions from facial expressions. Yet the science behind emotion recognition systems is controversial: there are biases built into the systems.


Artificial Intelligence: Can We Trust Machines to Make Fair Decisions?

#artificialintelligence

But what happens when artificial intelligence is biased? What if it makes mistakes on important decisions -- from who gets a job interview or a mortgage to who gets arrested and how much time they ultimately serve for a crime? "These everyday decisions can greatly affect the trajectories of our lives and increasingly, they're being made not by people, but by machines," said UC Davis computer science professor Ian Davidson. A growing body of research, including Davidson's, indicates that bias in artificial intelligence can lead to biased outcomes, especially for minority populations and women. Facial recognition technologies, for example, have come under increasing scrutiny because they've been shown to better detect white faces than they do the faces of people with darker skin.


AICTE is inviting applications for free online Artificial Intelligence training

#artificialintelligence

GUVI has joined hands with AICTE to offer free online Artificial Intelligence training for professionals and students. In partnership with GUVI, AICTE is offering participants a free 90-minute online workshop. The event will run in different time slots for participants, from April 24, 2021 at 6 PM IST to April 25, 2021 at 6 PM IST. The aim of the event is to help students polish their Python skills and develop a face recognition app. The training is an initiative by AICTE and GUVI to help professionals in India excel in the Artificial Intelligence domain.


Makeup artist transforms herself into A-list celebs, claims she's even fooled her boyfriend

FOX News

Bésame Cosmetics founder and makeup historian Gabriela Hernandez delivers insights into the billion-dollar cosmetic industry. Learn how makeup was deeply impacted by society's perception of women. A make-up artist has become an internet sensation after transforming herself into popular celebrities -- even fooling her friends and phone. Liss Lacao, 29, has recreated the recognizable features of celebrities such as Gordon Ramsay, Dolly Parton, the Queen and British Prime Minister Boris Johnson. She's so good, she's even fooled her iPhone -- which has facial recognition -- and her friends into thinking she was one of the A-listers.


Facial recognition systems are deciding your gender for you. Activists say that needs to stop - Coda Story

#artificialintelligence

If you rode the metro in the Brazilian city of Sao Paulo in 2018, you might have come across a new kind of advertising. Glowing interactive doors featured content targeted at individuals, according to assumptions made by artificial intelligence based on their appearance. Fitted with facial recognition cameras, the screens made instantaneous decisions about passengers' gender, age and emotional state, then served them ads accordingly. Digital rights groups said the technology violated the rights of trans and non-binary people because it assigned gender to individuals based on the physical shape of their face, potentially making incorrect judgments as to their identity. It also maintained a strictly male-female model of gender, ignoring the existence of non-binary people.