During my last interview I had a great talk with Daniel McDuff. Daniel's research is at the intersection of psychology and computer science. He is interested in designing hardware and algorithms for sensing human behavior at scale, and in building technologies that make life better. The applications of behavior sensing he is most excited about are understanding mental health, improving online learning, and designing new connected devices (IoT). Listen in to learn why it is important to collect behavioral data at much larger scales and to help computers read our emotional state.

Key Learning Points:
1. Understanding the impact, intersection, and meaning of psychology and computer science
2. Facial expression recognition
3. How to define artificial intelligence, deep learning, and machine learning
4. Applications of behavior sensing in online learning, health, and connected devices
5. Visual wearable sensors and heart health
6. The impact of education and learning
7. How to build computers that measure psychology, our reactions, our emotions, etc.

Daniel is building and utilizing scalable computer vision and machine learning tools to enable the automated recognition and analysis of emotions and physiology. He is currently Director of Research at Affectiva, a post-doctoral research affiliate at the MIT Media Lab, and a visiting scientist at Brigham and Women's Hospital. At Affectiva, Daniel is building state-of-the-art facial expression recognition software and leading analysis of the world's largest database of human emotional responses. Daniel completed his PhD in the Affective Computing Group at the MIT Media Lab in 2014 and has a B.A. and Master's from Cambridge University. His work has received nominations and awards from Popular Science magazine as one of the top inventions of 2011, South by Southwest Interactive (SXSWi), The Webby Awards, ESOMAR, the Center for Integrated Medicine and Innovative Technology (CIMIT), and several IEEE conferences.
His work has been reported in many publications, including The Times, The New York Times, The Wall Street Journal, BBC News, New Scientist, and Forbes magazine. Daniel was named a 2015 WIRED Innovation Fellow.
Last May, my wife and baby and I moved from Austin to Dallas, into an actual house in an actual neighborhood on the east side of the city. We had a few weeks to set up before my wife was going to leave for about three months for job training. Aside from my child and wife, I knew no one in town. We spent a week filling our lives with the things we thought were supposed to go in houses: a sleek water heater that hummed like a cyclotron, new fuses, and a noise machine for our son. We learned about our surroundings, too.
SAN FRANCISCO/SEOUL – As of early fall, it was clearer than ever that production problems meant Apple Inc. wouldn't have enough iPhone Xs in time for the holidays. The challenge was how to make the sophisticated phone -- with advanced features such as facial recognition -- in large enough numbers. As Wall Street analysts and fan blogs watched for signs that the company would stumble, Apple came up with a solution: It quietly told suppliers they could reduce the accuracy of the face-recognition technology to make it easier to manufacture, according to people familiar with the situation. With the iPhone X set to debut on Nov. 3, we're about to find out whether the move has paid off. Some analysts say there may still be too few iPhone Xs to meet initial demand.
In this Oct. 31, 2018, file photo, a man, who declined to be identified, has his face painted to represent efforts to defeat facial recognition during a protest at Amazon headquarters over the company's facial recognition system, "Rekognition," in Seattle. San Francisco is on track to become the first U.S. city to ban the use of facial recognition by police and other city agencies. These days, with facial recognition technology, you've got a face that can launch a thousand applications, so to speak. Sure, you may love the ease of opening your phone just by facing it instead of tapping in a code. But how do you feel about having your mug scanned to identify you as you drive across a bridge, board an airplane, or confirm you're not a stalker on your way into a Taylor Swift concert?
Across Silicon Valley, technology companies are scrambling to make their software smarter with the help of artificial intelligence. Both Apple and Google have made significant improvements to their virtual assistants, Siri and Google Now, that help them better understand what a user might need before he or she asks. Meanwhile, Facebook has unveiled plans to create its own intelligent chatbot that can perform tasks on your behalf. As of this week, Apple has more firepower in the AI department. The Cupertino, Calif.-based company has purchased Emotient, a startup that uses artificial intelligence to interpret a person's emotions, The Wall Street Journal first reported Thursday.