The European hair care market generated revenues of USD 18 billion in 2013 and is anticipated to reach USD 24 billion by 2018. Recently, L'Oréal presented its flagship connected beauty innovations at the Viva Technology Paris show, held at Porte de Versailles in Paris from 15–17 June 2017. Five of the Group's brands -- Lancôme, Kérastase, L'Oréal Paris, La Roche-Posay and L'Oréal Professionnel -- showcased how they leverage advanced digital technologies to create personalized services for consumers. L'Oréal also unveiled a new version of My UV Patch, a sun care innovation by La Roche-Posay, the Group's dermatological skincare brand. Designed as a wearable, it is the first stretchable skin sensor built to monitor exposure to UV radiation, helping users minimize sunburn and select the right sun protection for their skin type. This ultra-thin, self-adhesive patch is fitted with an electronic sensor and analyzes how much UV radiation the body receives. The wearer simply scans the patch with a smartphone to determine their daily sun exposure. Using a dedicated algorithm, the companion app presents graphs and statistics, takes hair and skin color into consideration, offers personalized recommendations on optimal sun protection, and alerts the user when that protection becomes insufficient.
Eventually, Mercedes plans to have the service recognize any Mercedes-Benz vehicle with the proper systems once it drives into a special valet zone in the parking garage. The parking system would communicate with the car, syncing with sensors built into the garage to complete the parking job. There are no headaches circling a cramped garage for spots, no handing the keys of expensive luxury cars over to strangers, no memorizing parking lot zones -- just a few taps on a smartphone, a quick ride, and patrons are free to explore the museum. Because the car communicates with the sensor system built into the parking garage itself, driving systems of varying sophistication should theoretically be able to navigate the space with equal precision.
Although we have seen large improvements in recognition accuracy as a result of Deep Neural Networks (DNNs), deep learning approaches face two well-known challenges: they require large amounts of labelled data for training, and they demand a type of compute that is not well suited to current general-purpose processor/memory architectures. The Holographic Processing Unit (HPU) is part of what makes HoloLens the world's first -- and still only -- fully self-contained holographic computer. It is responsible for processing the information coming from all of the on-board sensors, including Microsoft's custom time-of-flight depth sensor, the head-tracking cameras, the inertial measurement unit (IMU), and the infrared camera. The AI coprocessor is designed to work in the next version of HoloLens, running continuously, off the HoloLens battery.
Count Microsoft among the companies preparing to build specialized chips for artificial intelligence (AI): the next version of the company's HoloLens augmented reality headset will come with a chip capable of complex AI computation, Microsoft Research VP Harry Shum revealed at a computer vision conference Sunday. The next version of the headset's processing unit will incorporate an artificial intelligence co-processor, Shum said. This will make it possible to improve hand tracking on the device, as well as run object recognition and other computer vision tasks. Microsoft isn't the only company building custom chips for AI and similar tasks.
But in reality, those two seemingly different worlds are not actually worlds apart -- AI's virtual world has real-world applications. Much like our own human consciousness, AI can rely on senses similar to ours in order to connect its thought processes to our physical world. The Internet of Things (IoT) provides ready access to sensors that allow more meaningful sensory access to the physical world, thus enabling AI to "come to life." In many ways, these sensors can give AI "superhuman" capabilities, while AI supplies the perception and meaning behind these IoT-driven sensory inputs.
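To make the idea concrete, here is a minimal sketch of the kind of "perception" an AI layer can add on top of raw IoT sensor readings -- flagging values that deviate sharply from the norm. The temperature stream and the threshold are invented for illustration:

```python
import statistics

def detect_anomalies(readings, threshold=2.0):
    """Flag readings that deviate from the mean by more than
    `threshold` standard deviations -- a stand-in for the
    perception an AI layer adds on top of raw sensor data."""
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mean) / stdev > threshold]

# Simulated temperature stream from an IoT sensor (degrees C).
stream = [21.0, 21.2, 20.9, 21.1, 35.0, 21.0, 20.8]
print(detect_anomalies(stream))  # [4] -- the index of the spike
```

A real deployment would of course replace this simple statistical rule with a trained model, but the pipeline shape -- sensor stream in, interpreted events out -- is the same.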
We've already taken a look at neural networks and deep learning techniques in a previous post, so now it's time to address another major component of deep learning: data -- meaning the images, videos, emails, driving patterns, phrases, objects and so on that are used to train neural networks. For example, to train a neural network to identify pictures of apples or oranges, it needs to be fed images that are labeled as such. Fortunately, there is already a large number of free, publicly shared labeled data sets covering a mind-boggling array of categories (this Wikipedia page hosts links to dozens and dozens).
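As an illustration of what "labeled" means in practice, here is a toy perceptron trained on a handful of hand-labeled fruit examples. The features (redness and a size score, both scaled to 0–1) and their values are invented purely for illustration; real image classifiers learn from pixel data, not two hand-picked numbers:

```python
# Toy labeled dataset: (redness, size) -> 0 = apple, 1 = orange.
# Feature values are invented for illustration only.
data = [((0.9, 0.70), 0), ((0.8, 0.80), 0), ((0.85, 0.75), 0),
        ((0.3, 0.90), 1), ((0.2, 1.00), 1), ((0.25, 0.95), 1)]

w = [0.0, 0.0]   # one weight per feature
b = 0.0          # bias term

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: nudge the weights toward each labeled
# answer whenever the current prediction is wrong.
for _ in range(50):
    for x, label in data:
        error = label - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1, 1, 1]
```

The labels are what drive the weight updates -- without them, the model would have no signal telling it which corrections to make, which is why labeled data sets matter so much.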
To that end, retailers are increasingly turning to technologies such as artificial intelligence algorithms, messenger bots, and even robots, to gather data and improve the in-store experience for shoppers. First, there's the fact that different technologies measure different things: a beacon can track a customer's movement, but a sensor placed on a shelf might be able to see which item a customer picks up and measure how long they hold it for. Amazon's new grocery stores and bookstores show how technology can be seamlessly integrated into a retail space, improving the customer experience and also facilitating the collection of data on consumers. By requiring each shopper to set up an account with Amazon and equipping each store with technology that is expressly designed to track their movements, Amazon has the ability to collect mountains of data on each individual's shopping habits and behavior outside of online settings.
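The point that different technologies measure different things can be sketched as a simple data join: movement data from beacons and item-interaction data from shelf sensors, combined into one view of a shopper's visit. All field names and events here are hypothetical, invented for illustration:

```python
# Hypothetical event streams; field names are invented for illustration.
beacon_events = [{"customer": "c1", "zone": "aisle_3", "t": 10},
                 {"customer": "c1", "zone": "aisle_5", "t": 60}]
shelf_events = [{"customer": "c1", "item": "olive_oil",
                 "picked_up_t": 15, "put_down_t": 40}]

def visit_profile(customer):
    """Join movement data (beacons) with item-interaction data
    (shelf sensors) into one picture of a shopper's visit."""
    zones = [e["zone"] for e in beacon_events
             if e["customer"] == customer]
    holds = [(e["item"], e["put_down_t"] - e["picked_up_t"])
             for e in shelf_events if e["customer"] == customer]
    return {"zones_visited": zones, "items_held_seconds": holds}

print(visit_profile("c1"))
```

Each sensor type alone gives a partial view; it is the combination, keyed to an individual account, that yields the "mountains of data" described above.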
Robot Academy is an online platform that provides free-to-use undergraduate-level learning resources for robotics and robotic vision. Each lesson is rated for difficulty on a 5-point scale, and Robot Academy references videos on Khan Academy to help students get up to speed before tackling more advanced lessons. The platform's founder was previously a Senior Principal Research Scientist at the CSIRO ICT Centre, where he founded and led the Autonomous Systems laboratory, the Sensors and Sensor Networks research theme and the Sensors and Sensor Networks Transformational Capability Platform. He was Editor-in-Chief of the IEEE Robotics and Automation magazine, founding editor of the Journal of Field Robotics, and a member of the editorial boards of the International Journal of Robotics Research and the Springer STAR series.
Researchers at the University of Toronto (U of T) in Canada are using improved sensors and artificial intelligence to make electric wheelchairs self-driving. Rather than designing a new autonomous wheelchair from scratch, the researchers focused on retrofitting existing wheelchairs with sensors, controllers, and a small computer. The system could be a significant improvement over sip-and-puff (SNP) controllers, which steer a wheelchair by having the user sip or puff air into a plastic straw connected to a computer.
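For context on how an SNP interface works, here is a hedged sketch that maps pressure readings from the straw to drive commands. The thresholds and the exact command mapping are invented for illustration -- real systems calibrate both per user:

```python
def snp_command(pressure):
    """Map a sip (negative pressure) or puff (positive pressure)
    from the straw into an illustrative wheelchair command.
    Thresholds are arbitrary units, invented for this sketch."""
    if pressure > 0.6:
        return "forward"      # hard puff
    if pressure > 0.1:
        return "turn_right"   # soft puff
    if pressure < -0.6:
        return "stop"         # hard sip
    if pressure < -0.1:
        return "turn_left"    # soft sip
    return "idle"             # neutral breathing

print(snp_command(0.8))   # forward
print(snp_command(-0.3))  # turn_left
```

The coarseness of this command set -- a handful of discrete actions from a single pressure channel -- is exactly why an autonomous layer that handles low-level navigation could be such an improvement for users.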
South Australian tech firm Ailytic has developed an artificial intelligence (AI) program that significantly increases production efficiency by optimising machine use. Ailytic's list of clients includes world-renowned wine companies such as Pernod Ricard, Accolade Wines and Treasury Wine Estates. "Our algorithms work well for things like packaging, bottling, general manufacturing and sink manufacturing – the wine industry is where we are seeing a lot of appetite and the most uptake," he said. Ailytic's other clients are also based in South Australia and include Australia's lone sink manufacturer, Tasman Sinkware.
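Ailytic's actual algorithms are not public, but the general idea of optimising machine use can be sketched with a simple greedy heuristic: batch jobs for the same product together so a bottling line pays for fewer changeovers. This is an illustration of the concept, not the company's method, and the job data is invented:

```python
def schedule(jobs):
    """Greedy sketch: sort jobs by product so identical runs are
    batched together, reducing costly changeovers on one line."""
    return sorted(jobs, key=lambda j: j["product"])

def changeovers(order):
    """Count how many times the line must switch products."""
    return sum(1 for i in range(1, len(order))
               if order[i]["product"] != order[i - 1]["product"])

# Invented bottling jobs for a single line.
jobs = [{"product": "merlot", "cases": 40},
        {"product": "shiraz", "cases": 10},
        {"product": "merlot", "cases": 25},
        {"product": "shiraz", "cases": 30}]

print(changeovers(jobs))            # 3 changeovers in the given order
print(changeovers(schedule(jobs)))  # 1 after batching by product
```

Real production scheduling adds due dates, tank capacities, and sequence-dependent setup times, which is where optimisation software earns its keep.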