Click click snap: One look at a patient's face, and AI can identify rare genetic diseases

#artificialintelligence

WASHINGTON D.C. [USA]: According to a recent study, a new artificial intelligence technology can accurately identify rare genetic disorders from a photograph of a patient's face. Named DeepGestalt, the AI technology outperformed clinicians in identifying a range of syndromes in three trials and could add value in personalised care, CNN reported. The study was published in the journal Nature Medicine. According to the study, eight per cent of the population has a disease with key genetic components, and many of these conditions have recognisable facial features. The study adds that the technology could identify, for example, Angelman syndrome, a disorder affecting the nervous system with characteristic features such as a wide mouth and widely spaced teeth. Yaron Gurovich, the chief technology officer at FDNA and lead researcher of the study, said: "It demonstrates how one can successfully apply state of the art algorithms, such as deep learning, to a challenging field where the available data is small, unbalanced in terms of available patients per condition, and where the need to support a large amount of conditions is great."
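The article does not spell out DeepGestalt's internals, but the challenge Gurovich describes, deep learning on facial images with few patients per condition and many conditions to support, maps onto a familiar pattern: fine-tuning a pretrained vision backbone with class weighting. The sketch below is purely illustrative and is not FDNA's system; the number of syndromes, the per-class image counts and the ResNet-18 backbone are all placeholder assumptions.

```python
# Hypothetical sketch (not FDNA's DeepGestalt): fine-tune a pretrained CNN on a
# small, class-imbalanced set of facial images, using per-class weights to
# compensate for the few-patients-per-syndrome problem described in the study.
import torch
import torch.nn as nn
from torchvision import models

NUM_SYNDROMES = 200                                   # placeholder: number of target conditions
counts = torch.randint(5, 200, (NUM_SYNDROMES,))      # placeholder: images available per syndrome

# Start from a generic pretrained backbone and replace the classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SYNDROMES)

# Inverse-frequency class weights soften the imbalance between rare and
# better-represented syndromes during training.
class_weights = counts.sum() / (len(counts) * counts.float())
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)  # train the new head only

def train_step(images, labels):
    """One gradient step on a batch of aligned, cropped face images."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# One illustrative step on a random batch (stand-in for real, aligned face crops).
loss = train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_SYNDROMES, (4,)))
```

The inverse-frequency weights are one standard way to keep rarely represented syndromes from being drowned out by better-represented ones; the actual system may handle the imbalance quite differently.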


Facebook loses first round of court battle over 'unlawful' storing of users' biometric data

The Independent - Tech


Study shows face recognition experts perform better with AI as partner

#artificialintelligence

Experts at recognizing faces often play a crucial role in criminal cases. A photo from a security camera can mean prison or freedom for a defendant, and testimony from highly trained forensic face examiners informs the jury whether that image actually depicts the accused. Just how good are facial recognition experts? In work that combines forensic science with psychology and computer vision research, a team of scientists from the National Institute of Standards and Technology (NIST) and three universities has tested the accuracy of professional face identifiers, providing at least one revelation that surprised even the researchers: trained human beings perform best with a computer as a partner, not another person. "This is the first study to measure face identification accuracy for professional forensic facial examiners, working under circumstances that apply in real-world casework," said NIST electronic engineer P. Jonathon Phillips.
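The study's protocol is not described in this summary, but the "computer as a partner" finding amounts to fusing a human judgment with an algorithm's score before making the final same-person call. The snippet below is a generic, hypothetical illustration of that kind of score fusion, not the NIST study's method; the weighting and threshold are arbitrary placeholders.

```python
# Generic sketch of human + algorithm fusion for a "same person?" decision.
# Not the NIST study's protocol; it only illustrates combining two independent
# similarity judgments, each scaled to [0, 1], before thresholding.

def fuse_scores(human_score: float, algo_score: float, w_human: float = 0.5) -> float:
    """Weighted average of the examiner's and the algorithm's similarity scores."""
    return w_human * human_score + (1.0 - w_human) * algo_score

def same_person(human_score: float, algo_score: float, threshold: float = 0.6) -> bool:
    """Declare a match only if the fused score clears the decision threshold."""
    return fuse_scores(human_score, algo_score) >= threshold

# Example: examiner is fairly confident (0.7), algorithm is very confident (0.9).
print(same_person(0.7, 0.9))  # True with the placeholder 0.6 threshold
```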


It Ain't Me, Babe: Researchers Find Flaws In Police Facial Recognition

NPR Technology

Stephen Lamm, a supervisor with the ID fraud unit of the North Carolina Department of Motor Vehicles, looks through photos in a facial recognition system in 2009 in Raleigh, N.C. Nearly half of all American adults have been entered into law enforcement facial recognition databases, according to a recent report from Georgetown University's law school. But there are many problems with the accuracy of the technology that could have an impact on a lot of innocent people. There's a good chance your driver's license photo is in one of these databases.


It All Matters: Reporting Accuracy, Inference Time and Power Consumption for Face Emotion Recognition on Embedded Systems

arXiv.org Machine Learning

While several approaches to the face emotion recognition task have been proposed in the literature, none of them reports the power consumption or inference time required to run the system in an embedded environment. Without adequate knowledge of these factors, it is not clear whether accurate face emotion recognition is actually feasible in an embedded environment and, if not, how far we are from making it feasible and what the biggest bottlenecks are. The main goal of this paper is to answer these questions and to convey the message that, instead of reporting detection accuracy alone, power consumption and inference time should also be reported, since the real usability of the proposed systems and their adoption in human-computer interaction strongly depend on them. In this paper, we identify the state-of-the-art face emotion recognition methods that are potentially suitable for an embedded environment and the most frequently used datasets for this task. Our study shows that most of the reported experiments use datasets with posed expressions or with a particular experimental setup and special conditions for image collection. Since our goal is to evaluate the performance of the identified promising methods in a realistic scenario, we collect a new dataset with non-exaggerated emotions and use it, in addition to the publicly available datasets, to evaluate detection accuracy, power consumption and inference time on three frequently used embedded devices with different computational capabilities. Our results show that grayscale images are still more suitable for the embedded environment than color ones, and that for most of the analyzed systems either inference time or energy consumption, or both, are limiting factors for their adoption in real-life embedded applications.
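The paper's own measurement setup is not reproduced here, but the reporting it calls for, latency and energy alongside accuracy, can be sketched in a few lines. The benchmark below is a minimal, assumption-laden example: the tiny CNN, the 48x48 grayscale input size and the 2.5 W average board power are placeholders standing in for a real emotion recognition model and a power draw measured with an external meter.

```python
# Minimal benchmarking sketch (not the paper's exact protocol): time repeated
# forward passes of an emotion classifier and convert latency to an energy
# estimate from a measured average board power. All numbers are placeholders.
import time
import torch
import torch.nn as nn

NUM_EMOTIONS = 7            # e.g., the commonly used basic-emotion categories
AVG_BOARD_POWER_W = 2.5     # placeholder: average power measured with an external meter

# Stand-in model: a tiny CNN over 48x48 grayscale face crops
# (grayscale, in line with the paper's finding on gray vs. color input).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 12 * 12, NUM_EMOTIONS),
).eval()

def benchmark(model, runs: int = 100):
    """Report mean latency (ms) and estimated energy (mJ) per single-image inference."""
    x = torch.randn(1, 1, 48, 48)          # one grayscale face crop
    with torch.no_grad():
        for _ in range(10):                # warm-up iterations
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        latency_s = (time.perf_counter() - start) / runs
    energy_mj = AVG_BOARD_POWER_W * latency_s * 1000.0   # E = P * t
    print(f"latency: {latency_s * 1e3:.2f} ms/inference, "
          f"energy: {energy_mj:.2f} mJ/inference (at {AVG_BOARD_POWER_W} W)")

benchmark(model)
```

On a real embedded board the same loop would run on-device, with power sampled during the timed window rather than assumed as a constant.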