Affectiva, a startup developing "emotion recognition technology" that can read people's moods from their facial expressions captured in digital videos, raised $14 million in a Series D round of funding led by Fenox Venture Capital. According to co-founder Rana el Kaliouby, the Waltham, Mass.-based company wants its technology to become the de facto means of adding emotional intelligence and empathy to any interactive product, and the best way for organizations to obtain unvarnished insights about customers, patients or constituents. She explained that Affectiva uses computer vision and deep learning to analyze facial expressions and other non-verbal cues in visual content online, but not the language or conversations in a video. The company's technology ingests digital images--including video in chat applications, live-streamed or recorded videos, or even GIFs--typically through simple webcams. Its system first categorizes facial expressions, then maps them to a range of emotional states, such as happy, sad, nervous, interested or surprised.
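The pipeline described above--ingest a frame, score the facial expression, map the top score to an emotional state--can be sketched in a few lines. This is a purely illustrative toy, not Affectiva's actual method or API: the emotion list comes from the article, but the `classify_expression` stub stands in for what would really be a deep neural network running on a detected face region.

```python
# Hypothetical sketch of the categorize-then-map pipeline.
# A "frame" here is just a flat list of pixel intensities;
# the scoring heuristic below is a stand-in for a real
# deep-learning expression classifier.

EMOTIONS = ["happy", "sad", "nervous", "interested", "surprised"]

def classify_expression(frame):
    """Stand-in classifier: returns a distance-style score per emotion.

    A real system would run a CNN over the detected face region;
    here we bucket the frame's average brightness (illustrative only).
    """
    avg = sum(frame) / len(frame)
    return {emotion: abs(avg - i * 50) for i, emotion in enumerate(EMOTIONS)}

def dominant_emotion(frame):
    """Map the expression scores to a single emotional state."""
    scores = classify_expression(frame)
    return min(scores, key=scores.get)  # lowest distance = best match

# Usage: classify a toy 3-pixel "frame".
print(dominant_emotion([120, 130, 110]))
```

The structure, not the toy heuristic, is the point: a per-frame classifier produces one score per emotional state, and a simple argmax-style step turns those scores into the label reported to an application.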
Growing up in Egypt in the 1980s, Rana el Kaliouby was fascinated by hidden languages--the rapid-fire blinks of 1s and 0s computers use to transform electricity into commands and the infinitely more complicated nonverbal cues that teenagers use to transmit volumes of hormone-laden information to each other. Culture and social stigma discouraged girls like el Kaliouby in the Middle East from hacking either code, but she wasn't deterred. When her father brought home an Atari video game console and challenged the three el Kaliouby sisters to figure out how it worked, Rana gleefully did. When she wasn't allowed to date, el Kaliouby studied her peers the same way that she did the Atari. "I was always the first one to say, 'Oh, he has a crush on her' because of all of the gestures and the eye contact," she says.
Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, when it began piloting a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short. While developing the program, the agency consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train "behavior detection officers" to scan faces for signs of deception.
"Without our emotions, we can't make smart decisions," says Rana el Kaliouby. In the field of artificial intelligence, this is sheer heresy. Isn't the goal of AI to create a machine with human-level intelligence but without the human "baggage" of emotions, biases, and intuitions that only get in the way of smart decisions? As the co-founder and CEO of Affectiva, el Kaliouby is on a mission to expand what we mean by "artificial intelligence" and create intelligent machines that understand our emotions. Surveying the evolution of how we have interacted with computers, she asks, "What's the next, more natural interface?"