WASHINGTON D.C. [USA]: According to a recent study, a new artificial intelligence technology can accurately identify rare genetic disorders from a photograph of a patient's face. Named DeepGestalt, the AI outperformed clinicians in identifying a range of syndromes in three trials and could add value in personalised care, CNN reported. The study was published in the journal Nature Medicine.

According to the study, eight per cent of the population has a disease with a key genetic component, and many of these patients may have recognisable facial features. The study further adds that the technology could identify, for example, Angelman syndrome, a disorder affecting the nervous system with characteristic features such as a wide mouth with widely spaced teeth.

Speaking about the work, Yaron Gurovich, the chief technology officer at FDNA and lead researcher of the study, said, "It demonstrates how one can successfully apply state of the art algorithms, such as deep learning, to a challenging field where the available data is small, unbalanced in terms of available patients per condition, and where the need to support a large amount of conditions is great."
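Gurovich's point about small, unbalanced data hints at standard countermeasures such as weighting each condition's contribution to the training loss inversely to how many patients it has. The sketch below is a minimal, hypothetical illustration of that idea only; the function name, toy labels, and weighting scheme are assumptions and do not come from the DeepGestalt study.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class inversely to its frequency, so that rare
    syndromes contribute as much to the training loss as common ones.

    Weights are normalised so a perfectly balanced dataset would give
    every class a weight of 1.0.
    """
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}

# Toy label set (illustrative only): one common condition, two rare ones.
labels = ["angelman"] * 8 + ["noonan"] * 1 + ["cornelia_de_lange"] * 1
weights = inverse_frequency_weights(labels)
print(weights)  # the two rare conditions get much larger weights
```

In practice such weights would be passed to the loss function of a deep learning framework, so that a misclassified rare-syndrome example is penalised more heavily than a misclassified common one.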
Building on earlier work that taught computers to recognise emotions and expressions in human faces, the system detects the distinct parts of a sheep's face and compares them with a standardised measurement tool developed by veterinarians for diagnosing pain. The results will be presented today (1 June) at the 12th IEEE International Conference on Automatic Face and Gesture Recognition in Washington, DC.

Severe pain in sheep is associated with conditions such as foot rot, an extremely painful and contagious condition that causes the foot to rot away, or mastitis, an inflammation of the udder in ewes caused by injury or bacterial infection. Both of these conditions are common in large flocks, and early detection would lead to faster treatment and pain relief. Reliable and efficient pain assessment would also help with early diagnosis.
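Veterinary pain scales of the kind described above typically rate individual facial cues and sum them into an overall score. The toy sketch below illustrates that aggregation step only; the feature names, the 0–2 scoring, and the alert threshold are all made up for illustration and are not taken from the researchers' actual tool.

```python
def pain_score(features):
    """Sum per-feature scores (0 = cue absent, 1 = partially present,
    2 = clearly present), mirroring how standardised veterinary pain
    scales aggregate facial cues into one number."""
    return sum(features.values())

def needs_attention(features, threshold=4):
    """Flag an animal for inspection when its total score reaches a
    threshold. The threshold here is illustrative, not a real cutoff."""
    return pain_score(features) >= threshold

# Hypothetical observation of one sheep.
observation = {"ear_rotation": 2, "eye_narrowing": 1, "nostril_shape": 2}
print(pain_score(observation))      # 5
print(needs_attention(observation)) # True
```

An automated system would produce such per-feature scores from the detected facial regions, then use the aggregate to prioritise which animals a farmer should examine first.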
The fusiform gyrus is thought to play a role in recognising faces, something adults are better at than children. Brain scans of 47 people of different ages found, after taking into account the differing overall sizes of their brains, that adults had 12.6 per cent more solid brain matter in this area than children did. The team compared the growth of the face recognition region with that of a different area, one responsible for recognising places. Inadequate growth of the brain's face recognition areas might contribute to autism, Duchaine suggests, as well as to conditions that leave people unable to recognise faces.
In the evolution to humanize technology, Affectiva is carving a niche. Its software development kit (SDK) and cloud-based API allow developers to enrich digital experiences by adding "emotion awareness" to apps ranging from games to medical devices. That means machines can collect data and respond to users' emotions in real time, mostly based on facial recognition techniques; it's what the company calls Emotion AI. As noted in a recent Forbes article: "Affectiva's technology has proven transformative for industries like automotive, market research, robotics, education, and gaming, but also for use cases like teaching autistic children emotion recognition and nonverbal social cues."
In the US, over one million children suffer from autism. These children find it difficult to recognize emotions through facial expressions, making social interactions very challenging. While patients can gain an understanding through behavioral therapy, this can be both time-consuming and expensive. Now, Autism Glass, a wearable aid from researchers at Stanford University, uses Google Glass, machine learning and real-time social cues to give those on the autism spectrum another option. To use it, patients put on the glasses, which incorporate an outward-facing camera.