Face Recognition Technology


US Air Force is giving military drones the ability to recognise faces

New Scientist

The US Air Force can now equip autonomous drones with face recognition technology, raising fears that they could be used to find and kill specified people. The drones will be employed by special operations forces for intelligence gathering and for missions in foreign countries, according to a contract between the Department of Defense (DoD) and Seattle-based firm RealNetworks. The company's software, based on machine learning, is designed to work on a drone that is piloting itself, with limited or …


Top 3 Face Datasets and How to Work with Them

#artificialintelligence

An image dataset contains specially selected digital images intended to help train, test, and evaluate an artificial intelligence (AI) or machine learning (ML) algorithm, usually a computer vision algorithm. A face dataset is a type of image dataset that contains curated images of human faces, typically for an ML project. There are several publicly available face datasets that you can leverage instead of collecting your own training data. Managing and optimizing datasets is one of the crucial stages in a machine learning operations (MLOps) pipeline. Face datasets usually include faces in varying positions and lighting conditions, showing a full range of human emotions, ethnicities, ages, and additional characteristics.
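One routine part of the dataset-management stage described above is splitting face images into training and validation sets so that every identity is represented in both. The sketch below is a minimal, hypothetical illustration (the `stratified_split` helper and the `(path, identity)` sample format are assumptions for this example, not part of any particular public dataset or tool):

```python
import random
from collections import defaultdict

def stratified_split(samples, val_fraction=0.2, seed=42):
    """Split (path, identity) pairs so each identity appears in both sets.

    samples: list of (image_path, identity_label) tuples.
    Returns (train, val) lists in the same format.
    """
    # Group image paths by identity so we can split within each group.
    by_identity = defaultdict(list)
    for path, identity in samples:
        by_identity[identity].append(path)

    rng = random.Random(seed)  # seeded for reproducible splits
    train, val = [], []
    for identity, paths in by_identity.items():
        rng.shuffle(paths)
        # At least one validation image per identity.
        n_val = max(1, int(len(paths) * val_fraction))
        val += [(p, identity) for p in paths[:n_val]]
        train += [(p, identity) for p in paths[n_val:]]
    return train, val
```

With five images each for two identities and the default `val_fraction=0.2`, each identity contributes one image to the validation set and four to the training set, so both sets cover all identities.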


Face recognition technology for pigs could improve welfare on farms

New Scientist

Pigs could be issued with biometric passports based on facial recognition technology, giving farmers a more practical and welfare-friendly way of identifying individuals than ear notches or tags, the current industry standards. Identifying pigs based on their unique facial features could enable them to receive individualised food and veterinary care, and be traced as they go through meat processing.


How Restaurant AI Is Evolving Rapidly - Pioneering Minds

#artificialintelligence

Artificial intelligence (AI) has infiltrated every aspect of our lives, and the restaurant industry is no exception. Miso Robotics’ AI kitchen assistant, “Flippy,” assists with grilling, frying, prepping, and plating. Restaurants that integrate AI into their POS systems benefit from analyzing the data the system generates. In 2019, McDonald’s began using predictive AI technologies to project orders in drive-thrus. By analyzing past data, stores could reduce wait times by 30 seconds on average. KFC uses face recognition technology at kiosks to recognize returning customers and personalize their experience. It employs AI-based face recognition to determine what a customer might be interested in purchasing based on gender, facial expressions, and other visual characteristics. In the restaurant industry, artificial intelligence has played its most important role in improving the customer experience. Restaurants have a lot of data that isn’t being exploited, and as a result, owners aren’t getting a decent return on their investment. AI and ML work together to provide personalized suggestions to users by analyzing individual search histories.


Face Recognition Technology and Civil Rights

#artificialintelligence

From looking into water for our reflections, to mirrors, and then cameras, we humans have come a long way. After digital cameras came into existence, it became possible to build databases of large numbers of faces and their facial features. Advances in software and hardware then made it possible to use these databases for facial recognition. The technology is now capable of analyzing and recognizing human faces with considerable accuracy.


An Experimental Evaluation on Deepfake Detection using Deep Face Recognition

Ramachandran, Sreeraj, Nadimpalli, Aakash Varma, Rattani, Ajita

arXiv.org Artificial Intelligence

Significant advances in deep learning have yielded hallmark accuracy rates for various computer vision applications. However, advances in deep generative models have also led to the generation of very realistic fake content, known as deepfakes, posing a threat to privacy, democracy, and national security. Most current deepfake detection methods treat the task as a binary classification problem, distinguishing authentic images or videos from fake ones using two-class convolutional neural networks (CNNs). These methods rely on detecting visual artifacts and temporal or color inconsistencies produced by deep generative models. However, they require a large amount of real and fake data for model training, and their performance drops significantly in cross-dataset evaluation with samples generated using advanced deepfake generation techniques. In this paper, we thoroughly evaluate the efficacy of deep face recognition in identifying deepfakes, using different loss functions and deepfake generation techniques. Experimental investigations on the challenging Celeb-DF and FaceForensics++ deepfake datasets suggest that deep face recognition outperforms two-class CNNs and the ocular modality in identifying deepfakes. Reported results show a maximum Area Under the Curve (AUC) of 0.98 and an Equal Error Rate (EER) of 7.1% in detecting deepfakes using face recognition on the Celeb-DF dataset. This EER is 16.6% lower than the EER obtained for the two-class CNN and the ocular modality on the Celeb-DF dataset. Further, on the FaceForensics++ dataset, an AUC of 0.99 and an EER of 2.04% were obtained. The use of biometric facial recognition has the advantage of bypassing the need for a large amount of fake data for model training, and it generalizes better to evolving deepfake creation techniques.
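The Equal Error Rate reported in the abstract is the operating point at which the false accept rate (impostor or fake samples accepted) equals the false reject rate (genuine samples rejected). A minimal sketch of computing EER from two sets of match scores, using plain NumPy on hypothetical score arrays (this is an illustration of the metric, not the authors' evaluation code):

```python
import numpy as np

def compute_eer(genuine_scores, impostor_scores):
    """Approximate the EER by scanning candidate thresholds.

    genuine_scores: scores for authentic samples (higher = more genuine).
    impostor_scores: scores for fake/impostor samples.
    Returns the error rate where FAR and FRR are closest.
    """
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # impostors wrongly accepted
        frr = np.mean(genuine_scores < t)    # genuine samples wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap = abs(far - frr)
            eer = (far + frr) / 2
    return eer
```

For perfectly separated score distributions the two error curves cross at zero, giving an EER of 0.0; overlapping distributions push the crossing point, and hence the EER, upward.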


How Artificial Intelligence Detects Faces?

#artificialintelligence

You might have heard about face recognition and its different applications. To put it simply, a face recognition system can identify people in videos or static images. Many fields use the technology for surveillance and tracking people. Some countries use face recognition systems more widely than others. But while you may hear about it more frequently now, the technology has existed for decades.


Privacy expert Clare Garvie explains why your face is already in a criminal lineup

#artificialintelligence

Biometric surveillance is coming for you, even if you have 'nothing to hide'. Clare Garvie is a Senior Associate at Georgetown University's Center on Privacy and Technology, where she has dedicated her work to studying law enforcement's use of face recognition technology on the American public. She is considered the foremost expert on face recognition technology; last year she testified in front of Congress. She writes extensively on its use in law enforcement investigations. She also brings to light the worrying ways the technology disrupts privacy, circumvents judicial norms and legal precedents, and chills free speech and civil liberties. All of this happens under a veil of secrecy, without public consent and largely outside the purview of American lawmakers. Garvie's research spotlights the ways these technologies are disproportionately used on Black and Brown communities, and the failures of face recognition algorithms when deployed on people of color and women. The technology's efficacy, already cause for concern, is further problematized by law enforcement's cavalier practices.


AI, Protests, and Justice

#artificialintelligence

Editor's Note: The use of face recognition technology in policing has been a long-standing subject of concern, even more so now after the murder of George Floyd and the demonstrations that have followed. In this article, Mike Loukides, VP of Content Strategy at O'Reilly Media, reviews how companies and cities have addressed these concerns, as well as ways in which individuals can mitigate face recognition technology or even use it to increase accountability. We'd love to hear what you think about this piece. Largely on the impetus of the Black Lives Matter movement, the public's response to the murder of George Floyd, and the subsequent demonstrations, we've seen increased concern about the use of facial identification in policing. First, in a highly publicized wave of announcements, IBM, Microsoft, and Amazon announced that they will not sell face recognition technology to police forces.


'Have You Thought About . . .'

Communications of the ACM

How do researchers talk to one another about the ethics of our research? How do you tell someone you are concerned their work may do more harm than good for the world? If someone tells you your work may cause harm, how do you receive that feedback with an open mind, and really listen? I find myself lately on both sides of this dilemma--needing both to speak to others and to listen myself more. It is not easy on either side.