Face Recognition


How to Develop a Face Recognition System Using FaceNet in Keras

#artificialintelligence

Face recognition is a computer vision task of identifying and verifying a person based on a photograph of their face. FaceNet is a face recognition system developed in 2015 by researchers at Google that achieved then state-of-the-art results on a range of face recognition benchmark datasets. The FaceNet system can be used broadly thanks to multiple third-party open source implementations of the model and the availability of pre-trained models. FaceNet can be used to extract high-quality features from faces, called face embeddings, which can then be used to train a face identification system. In this tutorial, you will discover how to develop a face recognition system using FaceNet and an SVM classifier to identify people from photographs.
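The excerpt describes a two-stage pipeline: FaceNet turns each face crop into an embedding vector, and a conventional classifier is then fit on those embeddings. Below is a minimal sketch of that idea, assuming a pre-trained Keras FaceNet model saved as facenet_keras.h5 and face crops already detected and resized to 160x160 RGB; the train_faces/train_labels and test_faces/test_labels arrays are hypothetical placeholders, not part of the tutorial text.

```python
# Minimal FaceNet-embeddings + SVM sketch (assumptions noted above).
import numpy as np
from keras.models import load_model
from sklearn.preprocessing import Normalizer, LabelEncoder
from sklearn.svm import SVC

def get_embedding(model, face_pixels):
    """Map one 160x160x3 face crop to a FaceNet embedding vector."""
    face_pixels = face_pixels.astype('float32')
    # standardize pixel values, as the pre-trained model expects
    mean, std = face_pixels.mean(), face_pixels.std()
    face_pixels = (face_pixels - mean) / std
    sample = np.expand_dims(face_pixels, axis=0)
    return model.predict(sample)[0]

# hypothetical data: (N, 160, 160, 3) face crops and matching person names
model = load_model('facenet_keras.h5')          # assumed pre-trained model file
train_X = np.asarray([get_embedding(model, f) for f in train_faces])
test_X = np.asarray([get_embedding(model, f) for f in test_faces])

# L2-normalize embeddings before fitting a linear SVM on them
norm = Normalizer(norm='l2')
train_X, test_X = norm.transform(train_X), norm.transform(test_X)

encoder = LabelEncoder().fit(train_labels)
clf = SVC(kernel='linear', probability=True)
clf.fit(train_X, encoder.transform(train_labels))
print('accuracy:', clf.score(test_X, encoder.transform(test_labels)))
```

Normalizing the embeddings to unit length before the linear SVM is a common choice because FaceNet embeddings are designed to be compared by distance rather than raw magnitude.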


AI Can Now Detect Deepfakes by Looking for Weird Facial Movements

#artificialintelligence

Here's a scenario that's becoming increasingly common: you see that a friend has shared a video of a celebrity doing or saying something on social media. You watch it, because you're only human, and something about it strikes you as deeply odd. Not only is Jon Snow from Game of Thrones apologizing for the writing on the show's last season, but the way his mouth is moving just looks off. This is a deepfake, an AI-generated dupe designed to deceive or entertain. Now, researchers have trained AI to look for such visual inconsistencies, much as humans do, in order to detect AI-generated fake videos.
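The article doesn't spell out the researchers' method, but the general recipe behind "looking for weird facial movements" can be sketched as a binary classifier over per-video motion statistics. The sketch below is only an illustration of that idea, not the study's actual approach; videos (tracked facial landmarks per frame) and labels (real vs. fake) are hypothetical inputs.

```python
# Illustrative real-vs-fake classifier over facial-movement features
# (not the researchers' actual method).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def movement_features(landmarks):
    """Mean and variance of frame-to-frame landmark displacement.

    landmarks: array of shape (T, K, 2), K tracked facial points over T frames.
    """
    deltas = np.linalg.norm(np.diff(landmarks, axis=0), axis=-1)  # (T-1, K)
    return np.concatenate([deltas.mean(axis=0), deltas.var(axis=0)])

# hypothetical inputs: `videos` is a list of (T, K, 2) landmark arrays,
# `labels` marks each video as real (0) or fake (1)
X = np.stack([movement_features(v) for v in videos])
clf = SVC(kernel='rbf')
print(cross_val_score(clf, X, labels, cv=5).mean())
```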


Facial Recognition with John Hershey, Machine Learning Researcher, Anexinet

#artificialintelligence

Is Facial Recognition a valuable public-safety tool or is it an infringement of our civil liberties? Also, name our new Podcast & win a prize! Links in the episode:
STUDY: Facial feature discovery for ethnicity recognition
San Francisco just banned facial-recognition technology
SF Ban on Face Recognition – Acquisition of Surveillance Technology
Facial recognition data collected by U.S. customs agency stolen by hackers
Facial Recognition Software Wrongly Identifies 28 Lawmakers As Crime Suspects
Does object recognition work for everyone? A new method to assess bias in CV systems
Don't smile for surveillance: Why airport face scans are a privacy trap
U.S. Customs and Border Protection says photos of travelers were taken in a data breach
Chickens Prefer Attractive People


r/Futurology - AI Can Now Detect Deepfakes by Looking for Weird Facial Movements - Machines can now look for visual inconsistencies to identify AI-generated dupes, a lot like humans do.

#artificialintelligence

Even raw computing power is a bit misleading. What Moore's law actually tracks is transistors per chip, which certainly should correlate with computing power but is more likely to be inversely proportional to cost. The growth has slowed a bit, down from his revised prediction of 2x growth to closer to the original 1x, but how crazy is it that it held for around 50 years, going from around 1,000 transistors to 30 billion.
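The closing figures in that comment are easy to sanity-check: going from roughly 1,000 to 30 billion transistors per chip over about 50 years works out to a doubling time of roughly two years, consistent with Moore's revised prediction.

```python
# Sanity check of the figures quoted above: ~1,000 to ~30 billion transistors
# per chip over ~50 years implies a doubling time of roughly two years.
import math

start, end, years = 1_000, 30_000_000_000, 50
doublings = math.log2(end / start)               # about 24.8 doublings
print(f"doublings: {doublings:.1f}")
print(f"implied doubling time: {years / doublings:.1f} years")
```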


Examining The San Francisco Facial-Recognition Ban

#artificialintelligence

On May 14, 2019, San Francisco became the first major city in the United States to ban the use of facial-recognition technology (paywall) by its government and law enforcement agencies. This ban comes as part of a broader anti-surveillance ordinance. As of May 14, the ordinance was set to go into effect in about a month. Local officials and civil advocates seem to fear the repercussions of allowing facial-recognition technology to proliferate throughout San Francisco, while supporters of the software claim that the ban could limit technological progress. In this article, I'll examine the ban that just took place in San Francisco, explore the concerns surrounding facial-recognition technology, and explain why an outright ban may not be the best course of action.



Scientists have created a 3D reconstruction of a face using only a person's memory

Daily Mail - Science & tech

It's great to spot a familiar face, and now researchers have 'cracked the code' that our brains use to tell one face apart from another. Our memories of the faces of people we know focus on key facial features that let us recognise them when we meet. Volunteers were asked to rank how closely randomly-created digital faces matched their memory of the face of a colleague. This process was repeated over and over, revealing key identifying facial features of the colleague being remembered. Computer software then analysed the data on these key features to recreate the faces in question.
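The excerpt describes a rating-and-aggregation procedure but not the analysis itself. The toy sketch below is only an assumption about how such data could be combined (a reverse-correlation-style weighted average over face parameters), not the study's actual software; the face vectors and ratings are randomly generated placeholders.

```python
# Toy reverse-correlation-style aggregation (illustrative assumption only).
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_features = 500, 50

# hypothetical data: random face parameter vectors shown to a volunteer,
# and the volunteer's match rating for each (1 = poor match, 7 = strong match)
random_faces = rng.normal(size=(n_trials, n_features))
ratings = rng.integers(1, 8, size=n_trials).astype(float)

# weight each random face by its centered rating and average; features that
# consistently drew high ratings dominate the reconstructed parameter vector
weights = ratings - ratings.mean()
reconstruction = (weights[:, None] * random_faces).sum(axis=0) / np.abs(weights).sum()
print(reconstruction[:5])
```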


UK to host world's first surveillance camera day

#artificialintelligence

The UK, which spends more than £2bn on video surveillance each year, is to mark National Surveillance Camera Day on 20 June as part of the National Surveillance Camera Strategy. The aim of the national event is to raise awareness of surveillance cameras and to encourage debate about their use in modern society by highlighting how they are used in practice, why they are used and who is using them. The initiative by the Surveillance Camera Commissioner (SCC) and the Centre for Research into Information, Surveillance and Privacy (Crisp) is also aimed at starting a nationwide conversation about how camera technology is evolving, especially around automatic face recognition and artificial intelligence (AI). The organisers hope that the resulting public debate will help inform policy-makers and service providers about societally acceptable surveillance practices and the legitimacy of surveillance camera systems delivered in line with society's needs. As part of the initiative, the SCC is encouraging surveillance camera control centres to throw their "doors open" so that the public can see how they operate.


Dogs evolved muscles that give them 'sad eyes' to trigger a nurturing response in their owners

Daily Mail - Science & tech

Dogs have evolved muscles around their eyes to look cute to humans, scientific research has shown for the first time. The muscles allow dogs to raise a quizzical eyebrow and to look sad, giving them facial expressions similar to our own. The authors say that the eyebrow-raising movement triggers a nurturing response in humans by making the dogs' eyes appear larger and more child-like.


Detecting Bias with Generative Counterfactual Face Attribute Augmentation

arXiv.org Machine Learning

We introduce a simple framework for identifying biases of a smiling attribute classifier. Our method poses counterfactual questions of the form: how would the prediction change if this face characteristic had been different? We leverage recent advances in generative adversarial networks to build a realistic generative model of face images that affords controlled manipulation of specific image characteristics. We introduce a set of metrics that measure the effect of manipulating a specific property of an image on the output of a trained classifier. Empirically, we identify several different factors of variation that affect the predictions of a smiling classifier trained on CelebA.
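To make the abstract's "counterfactual question" concrete, the sketch below shows one way such an effect metric could look: average the change in a trained classifier's output when a single controlled characteristic of a generated face is toggled. The generate and smile_score functions are hypothetical stand-ins for the paper's generative model and classifier, not its actual API.

```python
# Illustrative counterfactual-effect metric (not the paper's implementation).
import numpy as np

def counterfactual_effect(latents, generate, smile_score):
    """Mean and spread of the shift in classifier output caused by toggling
    one controlled attribute of the generated face.

    latents: iterable of latent codes for the generative model (hypothetical)
    generate(z, attribute_on): renders a face image with the attribute toggled
    smile_score(img): trained smiling classifier's probability output
    """
    deltas = []
    for z in latents:
        original = smile_score(generate(z, attribute_on=False))
        counterfactual = smile_score(generate(z, attribute_on=True))
        deltas.append(counterfactual - original)
    return float(np.mean(deltas)), float(np.std(deltas))
```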