Growing up in Egypt in the 1980s, Rana el Kaliouby was fascinated by hidden languages--the rapid-fire blinks of 1s and 0s computers use to transform electricity into commands, and the infinitely more complicated nonverbal cues that teenagers use to transmit volumes of hormone-laden information to each other. Culture and social stigma discouraged girls like el Kaliouby in the Middle East from hacking either code, but she wasn't deterred. When her father brought home an Atari video game console and challenged the three el Kaliouby sisters to figure out how it worked, Rana gleefully did. When she wasn't allowed to date, el Kaliouby studied her peers the same way she had studied the Atari. "I was always the first one to say, 'Oh, he has a crush on her,' because of all of the gestures and the eye contact," she says.
For more on new technology that can read human emotions, check out the third episode of Should This Exist?, the podcast that debates how emerging technologies will impact humanity.

If we were sitting across a table from each other at a cafe and I asked about your day, you might answer with a polite response, like, "Fine." But if you were lying, I'd know from your expression, tone, twitches, and tics. We read subtext--unspoken clues--to get at the truth, to cut through what people say and understand what they mean. And now, with so many of our exchanges taking place in text online, our messages--stripped of the subtext that traditionally carried so much of their meaning--tell us less than ever before.
When you think of artificial intelligence and cars, the first thing that likely comes to mind is the ambitious self-driving vehicle projects of tech giants like Google, Uber, and, probably, Apple. Most of these companies are leveraging AI to create cars that can understand their environments, navigate roads under different conditions, and, hopefully, make driving safer--eventually. What has received less attention is the use of AI inside cars. Thanks to advances in deep learning, it has become possible to develop technologies that can determine what is happening inside vehicles and make the ride safer and more pleasant--all while creating new privacy and security risks. For better or worse, many applications of in-car AI are right around the corner.
Artificial intelligence requires us to draft a social contract with our technology, said Rana el Kaliouby, co-founder and CEO of emotion AI company Affectiva, who presented on emotion and AI at Fortune's Brainstorm Reinvent conference in Chicago on Monday. We've got to trust it, she explained. To build that trust between humans and technology, el Kaliouby said, empathy is key. In other words, machines have to understand the humans using them. When an Amazon Alexa doesn't understand its owner's request, the user quickly grows frustrated.
Could a program detect potential terrorists by reading their facial expressions and behavior? This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, when it began testing a new surveillance program called the Screening of Passengers by Observation Techniques program, or Spot for short. While developing the program, the TSA consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco. Decades earlier, Ekman had developed a method to identify minute facial expressions and map them onto corresponding emotions. This method was used to train "behavior detection officers" to scan faces for signs of deception.