Here's How Disney is Implementing Artificial Intelligence

#artificialintelligence

Disney (DIS) is known for its box office hits: Beauty and the Beast, Rogue One: A Star Wars Story, and Captain America: Civil War, to name a few. As one of the biggest media conglomerates in the world, Disney is looking to better understand its moviegoing audience so that its upcoming movies can continue to be moneymakers and crowd pleasers. Disney hopes to do this through artificial intelligence (AI) and facial recognition technology, using deep learning techniques to track the facial expressions of an audience watching a movie in order to gauge its emotional reaction. The technology, called "factorized variational autoencoders," or FVAEs, works so well, the researchers said, that after observing an audience member's face for just 10 minutes it can predict how that person will react to the rest of the movie. The FVAEs then learn to recognize many facial expressions from movie viewers on their own, such as smiles and laughter, and can make connections across viewers to see whether a particular movie is getting the desired reaction at the right place and time.
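The model family behind this is worth a quick illustration. The sketch below is a minimal, non-factorized variational autoencoder over per-frame facial-landmark vectors in PyTorch; the actual FVAEs additionally factorize the latent representation across viewers and time, and the dimensions and architecture here are illustrative assumptions rather than details of Disney Research's system.

```python
# Minimal VAE sketch over facial-landmark vectors (illustrative assumptions;
# the real FVAEs also factorize latents across audience members and time).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LandmarkVAE(nn.Module):
    def __init__(self, n_landmarks=68, latent_dim=16):
        super().__init__()
        in_dim = n_landmarks * 2                      # (x, y) per facial landmark
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample a latent code from N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a unit-Gaussian prior.
    rec = F.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# Toy usage: a batch of 32 flattened landmark vectors.
model = LandmarkVAE()
x = torch.rand(32, 68 * 2)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
```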


Using Deep Learning To Measure The Facial Emotion Of Television

#artificialintelligence

Deep learning is increasingly capable of assessing the emotion of human faces, looking across an image to estimate how happy or sad the people in it appear to be. What if this could be applied to television news, estimating the average emotion of all of the human faces seen on the news over the course of a week? While AI-based facial sentiment assessment is still very much an active area of research, an experiment using Google's cloud AI to analyze a week's worth of television news coverage from the Internet Archive's Television News Archive demonstrates that, even within the limitations of today's tools, there is a lot of visual sentiment in television news. To better understand the facial emotion of television, a week of coverage from CNN, MSNBC, and Fox News, plus the morning and evening broadcasts of San Francisco affiliates KGO (ABC), KPIX (CBS), KNTV (NBC), and KQED (PBS), from April 15 to April 22, 2019, totaling 812 hours of television news, was analyzed using Google's Vision AI image understanding API with all of its features enabled, including facial detection. Facial detection is very different from facial recognition: detection merely locates faces in an image and estimates attributes such as expressed emotion, while recognition attempts to identify specific individuals.
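As a rough illustration of the kind of per-frame analysis described, the sketch below calls the Google Cloud Vision API's face detection on a single video frame and tallies the joy, sorrow, anger, and surprise likelihoods it reports for each face. The frame path and the numeric scoring of the likelihood enum are assumptions made for illustration, not details of the experiment itself.

```python
# Tally facial-emotion likelihoods for every face in one frame using
# Google Cloud Vision face detection (requires google-cloud-vision and
# application credentials; the scoring scale below is an assumption).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Map the API's likelihood enum onto a rough 0-4 score (assumed scale).
SCORE = {
    vision.Likelihood.VERY_UNLIKELY: 0, vision.Likelihood.UNLIKELY: 1,
    vision.Likelihood.POSSIBLE: 2, vision.Likelihood.LIKELY: 3,
    vision.Likelihood.VERY_LIKELY: 4,
}

def frame_emotion(path):
    """Return summed joy/sorrow/anger/surprise scores and the face count."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    faces = client.face_detection(image=image).face_annotations
    totals = {"joy": 0, "sorrow": 0, "anger": 0, "surprise": 0}
    for face in faces:
        totals["joy"] += SCORE.get(face.joy_likelihood, 0)
        totals["sorrow"] += SCORE.get(face.sorrow_likelihood, 0)
        totals["anger"] += SCORE.get(face.anger_likelihood, 0)
        totals["surprise"] += SCORE.get(face.surprise_likelihood, 0)
    return totals, len(faces)

# Hypothetical frame extracted from a broadcast recording.
print(frame_emotion("frames/cnn_2019-04-15_0800.jpg"))
```

Averaging such per-frame tallies across a week of sampled frames is one plausible way to arrive at the channel-level emotion estimates the article describes.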


Face recognition with OpenCV, Python, and deep learning - PyImageSearch

#artificialintelligence

In today's blog post you are going to learn how to perform face recognition in both images and video streams using OpenCV, Python, and deep learning. As we'll see, the deep learning-based facial embeddings we'll be using here today are both (1) highly accurate and (2) capable of being executed in real time. To learn more about face recognition with OpenCV, Python, and deep learning, just keep reading! Inside this tutorial, you will learn how to perform facial recognition using OpenCV, Python, and deep learning. We'll start with a brief discussion of how deep learning-based facial recognition works, including the concept of "deep metric learning". From there, I will help you install the libraries you need to actually perform face recognition. Finally, we'll implement face recognition for both still images and video streams.
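As a taste of the embedding-and-compare pipeline the tutorial walks through, here is a minimal sketch using the dlib-based face_recognition library: compute a 128-d embedding for a known face, then compare it against the embeddings of faces detected in a new image. The image paths are placeholder assumptions, and the tutorial's exact steps may differ.

```python
# Minimal face-recognition sketch with the face_recognition library
# (pip install face_recognition); image paths are placeholders.
import face_recognition

# 1. Compute a 128-d embedding for one known face.
known_image = face_recognition.load_image_file("known/person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# 2. Detect and encode every face in a query image.
unknown_image = face_recognition.load_image_file("examples/query.jpg")
boxes = face_recognition.face_locations(unknown_image, model="hog")
encodings = face_recognition.face_encodings(unknown_image, boxes)

# 3. Compare each detected face to the known embedding. This is the deep
#    metric learning idea: matching faces lie close together in embedding space.
for (top, right, bottom, left), encoding in zip(boxes, encodings):
    match = face_recognition.compare_faces([known_encoding], encoding)[0]
    label = "known person" if match else "unknown"
    print(f"Face at ({left}, {top}): {label}")
```

For video streams, the same loop is applied to each frame grabbed from the camera, typically with the HOG detector (or a GPU-backed CNN detector) to keep things near real time.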


Generating faces for affect analysis

arXiv.org Artificial Intelligence

This paper presents a novel approach for synthesizing facial affect; either categorical, in terms of the six basic expressions (i.e., anger, disgust, fear, happiness, sadness and surprise), or dimensional, in terms of valence (i.e., how positive or negative an emotion is) and arousal (i.e., how powerful the activation of the emotion is). In the valence-arousal case, a system is created based on VA annotation of 600,000 frames from the 4DFAB database; in the categorical case, the system is based on the selection of apex frames of posed expression sequences from the 4DFAB. The proposed system accepts at its input: i) either the basic facial expression, or the pair of valence-arousal emotional state descriptors, which need to be synthesized and ii) a neutral 2D image of a person on which the corresponding affect will be synthesized. The proposed approach consists of the following steps: First, based on the provided desired emotional state, a set of 3D facial meshes is produced from the 4DFAB database and is used to build a blendshape model that generates the new facial affect. To synthesize this affect on the 2D neutral image, 3D Morphable Model fitting is performed and the reconstructed face is then deformed to generate the target facial expression. Finally, the new face is rendered into the original image. Qualitative experimental studies illustrate the generation of realistic images when the neutral image is sampled from a variety of well-known lab-controlled or in-the-wild databases, including Aff-Wild, RECOLA, AffectNet, AFEW, Multi-PIE, AFEW-VA, BU-3DFE, Bosphorus, RAF-DB. Also, quantitative experiments are conducted in which deep neural networks, trained using the generated images from each of the above databases in a data-augmentation framework, perform affect recognition; better performance is achieved with the presented approach than with the current state-of-the-art.
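To make the blendshape step concrete, the following NumPy sketch shows one standard way such a model generates a new expression: the neutral mesh plus a weighted combination of expression deltas. The mesh size, blendshapes, and weights below are toy assumptions, not values from the paper, whose blendshapes are built from 4DFAB meshes and driven by the target expression or valence-arousal state.

```python
# Toy blendshape combination: neutral mesh plus weighted expression deltas.
import numpy as np

def apply_blendshapes(neutral, expression_meshes, weights):
    """neutral: (V, 3) mesh; expression_meshes: (K, V, 3); weights: (K,)."""
    deltas = expression_meshes - neutral[None, :, :]   # per-blendshape offsets
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: a 1000-vertex mesh and two blendshapes mixed 70/30.
V = 1000
neutral = np.random.rand(V, 3)
shapes = np.random.rand(2, V, 3)
new_face = apply_blendshapes(neutral, shapes, np.array([0.7, 0.3]))
print(new_face.shape)  # (1000, 3)
```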


Image Recognition: A peek into the future

#artificialintelligence

Our brains are wired in such a way that we can differentiate between objects, both living and non-living, simply by looking at them. In fact, recognizing objects and situations visually is the fastest way to gather information, as well as to relate it. This is a much bigger challenge for computers, which must be fed vast amounts of data before they can perform such an operation on their own. Ironically, with each passing day it is becoming essential for machines to identify objects and faces through recognition technology, so that humans can take the next big step toward a more scientifically advanced social mechanism. So, what progress have we really made in that respect?