Move aside, backseat driver! New tech at CES monitors...

Daily Mail - Science & tech

Cars are getting smarter, and while much of the focus is on seeing the road ahead, they are also set to begin analyzing drivers and passengers. This week at CES, the international consumer electronics show in Las Vegas, a host of startup companies are showing off inward-facing cameras that watch and analyze drivers, passengers and objects in cars. Carmakers say the systems will boost safety, but privacy campaigners warn they could be used to make money by analyzing every movement: tracking a passenger's gaze to see which ads they are looking at, or monitoring people's emotions through their facial expressions. Carmakers could also gather anonymized data and sell it.

Image caption: Occupants inside a car are seen on a monitor using technology by Silicon Valley company Eyeris, which uses cameras and AI to track drivers and passengers for safety benefits, shown during an interview in San Jose, California, U.S., December 28, 2018.


Feeling sad, angry? Your future car will know - Roadshow

#artificialintelligence

Did you know that passengers invariably show a fear reaction when the brakes are applied in a car? That is just one of the things facial-monitoring company Eyeris learned while developing its Emovu Driver Monitoring System (DMS). Using a combination of cameras, graphics processing and deep learning, Emovu analyzes the passengers in a car, determining from facial movement which of seven emotions each is feeling. Modar (JR) Alaoui, CEO of Eyeris, demonstrated the company's in-car technology during Nvidia's GTC developer conference, offering a few ideas on how monitoring drivers' emotions can lead to safer driving. The company used deep learning to train its Emovu software to recognize facial expressions.
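Eyeris has not published Emovu's architecture, but the general technique described here (deep learning over camera frames to classify a face into seven emotion categories) can be sketched. Below is a minimal, hypothetical PyTorch version; the emotion list, network shape, and input size are all assumptions, not Eyeris's actual design.

```python
# Hypothetical sketch of a seven-class facial-expression classifier.
# Eyeris has not disclosed Emovu's internals; this only illustrates the
# general technique: a small CNN over cropped face images.
import torch
import torch.nn as nn

EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]  # a common seven-class set

class EmotionNet(nn.Module):
    def __init__(self, num_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x):               # x: (batch, 1, 48, 48) face crops
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = EmotionNet().eval()
face = torch.randn(1, 1, 48, 48)        # stand-in for a detected face crop
probs = model(face).softmax(dim=1)
print(dict(zip(EMOTIONS, probs[0].tolist())))
```

In a real system the face would first be detected and cropped from the cabin camera feed, and the network would be trained on labeled expression data rather than run with random weights as above.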


Eyeris Brings Emotion Recognition to Automobiles

#artificialintelligence

At NVIDIA's recent GPU Technology Conference, we sat down with the CEO of Eyeris Technologies, Modar (JR) Alaoui, to discuss giving automobiles the ability to interpret human emotions. Eyeris is bringing to market a feature that has not been present in the 130-year history of the automobile: the ability of the car to monitor and register the emotional state of the driver. The product will be able to read emotions from the driver's face and perform age identification, gender identification, eye tracking, and gaze estimation for drivers and passengers. "Our goal is to put our software in the back of every camera in the world," said Alaoui.
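Since several of the outputs listed above are computed from the same camera (emotion, age, gender, gaze), a natural way to picture such a product is a single shared face encoder feeding several task-specific heads. The sketch below is purely illustrative; every layer size and output choice is an assumption, not Eyeris's design.

```python
# Hypothetical multi-task head: one shared face embedding drives the
# several outputs the article describes. Not Eyeris's actual design.
import torch
import torch.nn as nn

class FaceAnalyzer(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(           # shared face embedding
            nn.Flatten(), nn.Linear(48 * 48, feat_dim), nn.ReLU())
        self.emotion = nn.Linear(feat_dim, 7)   # seven emotion classes
        self.age     = nn.Linear(feat_dim, 1)   # regressed age in years
        self.gender  = nn.Linear(feat_dim, 2)   # binary classification
        self.gaze    = nn.Linear(feat_dim, 2)   # yaw/pitch gaze angles

    def forward(self, face):                    # face: (batch, 1, 48, 48)
        z = self.encoder(face)
        return {"emotion": self.emotion(z).softmax(-1),
                "age": self.age(z),
                "gender": self.gender(z).softmax(-1),
                "gaze": self.gaze(z)}

outputs = FaceAnalyzer()(torch.randn(1, 1, 48, 48))
print({k: v.shape for k, v in outputs.items()})
```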


Hey, Wake Up! Eye-Tracking Tech Nags Drivers to Stay Alert

WSJ.com: WSJD - Technology

Future buyers of General Motors Co.'s semiautonomous driving system will have to be comfortable with Big Brother sitting in the passenger seat. The nation's largest auto maker aims to release its Super Cruise system on a Cadillac next year; it will feature eye tracking in the cabin, a first for a U.S. car maker. GM will duel with Volvo Car Corp.'s Pilot Assist and Tesla Motors Inc.'s Autopilot, both driver-assistance systems that can control a moving vehicle. While Tesla's Autopilot requires periodic handling by the driver, GM's system is expected to go a step further in monitoring the alertness of human drivers. Super Cruise's 2017 launch will come amid heightened scrutiny of systems that use cameras, sensors or radar to let the car do much of the driving at higher speeds.
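GM has not disclosed how Super Cruise decides when to nag the driver, but the escalation logic such a system needs is easy to sketch: track how long the driver's eyes have been off the road and step up the warnings as that time grows. The thresholds and names below are invented for illustration.

```python
# Toy sketch of driver-alertness escalation. All thresholds and names
# are hypothetical; GM has not published Super Cruise's internals.
WARN_AFTER = 3.0    # seconds of eyes-off-road before a visual warning
ALERT_AFTER = 6.0   # seconds before an audible alert

class AttentionMonitor:
    def __init__(self):
        self.off_since = None

    def update(self, eyes_on_road, now):
        """Call once per camera frame; returns an action or None."""
        if eyes_on_road:
            self.off_since = None            # driver re-engaged: reset
            return None
        if self.off_since is None:
            self.off_since = now             # eyes just left the road
        elapsed = now - self.off_since
        if elapsed >= ALERT_AFTER:
            return "audible alert"
        if elapsed >= WARN_AFTER:
            return "visual warning"
        return None

# Example: simulated 10 Hz frames; the driver looks away after 1 second.
mon, last = AttentionMonitor(), None
for frame in range(80):
    action = mon.update(eyes_on_road=frame < 10, now=frame * 0.1)
    if action != last:
        print(f"t={frame * 0.1:.1f}s -> {action}")
        last = action
```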


Neural Attention: Machine Learning Meets Neuroscience

#artificialintelligence

Neural attention has been applied successfully to a variety of applications, including natural language processing, vision, and memory. An attractive aspect of these neural models is their ability to extract relevant features from data with minimal feature engineering.

Brian Cheung is a PhD student at UC Berkeley working with Professor Bruno Olshausen, as well as an intern at Google Brain. By drawing inspiration from the fields of neuroscience and machine learning, he hopes to create systems that can solve complex vision tasks using attention and memory. At the Deep Learning Summit in Singapore, Brian will share expertise on the fovea as an emergent property of visual attention, on ways to extend this ability to learning interpretable structural features of the attention window itself, and on finding conditions where these emergent properties are amplified or eliminated, providing clues to their function. I asked him a few questions ahead of the summit to learn more.
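As a concrete picture of what "extracting relevant features with minimal feature engineering" means here, the sketch below implements textbook soft visual attention: score every spatial location of a feature map against a query, softmax the scores, and take the weighted average. This is generic attention, not Cheung's specific foveal model; the shapes and query are invented for illustration.

```python
# Minimal soft visual attention over a convolutional feature map.
import numpy as np

rng = np.random.default_rng(0)
features = rng.standard_normal((7, 7, 64))   # conv feature map (H, W, C)
query = rng.standard_normal(64)              # task-dependent query vector

# One relevance score per spatial cell, scaled dot-product style.
scores = features.reshape(-1, 64) @ query / np.sqrt(64)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                     # softmax over the 49 locations

attended = weights @ features.reshape(-1, 64)  # (64,) attended feature
print(weights.reshape(7, 7).round(2))          # where the model "looks"
```

Foveal models of the kind Cheung studies go a step further: rather than softly averaging, they extract a high-resolution glimpse at the attended location, mimicking the eye's fovea.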