Machine learning app scans faces and listens to speech to quickly spot strokes
Researchers from Penn State University and Houston Methodist Hospital recently outlined their work on a machine learning tool that uses a smartphone camera to quickly gauge facial movements for signs of a stroke. The tool – which was presented as a virtual poster at this month's International Conference on Medical Image Computing and Computer Assisted Intervention – relies on computational facial motion analysis and natural language processing to spot sagging muscles, slurred speech or other stroke-like symptoms.

To build and train it, the researchers used an iPhone to record 80 Houston Methodist patients who were experiencing stroke symptoms as they performed a speech test. According to a Penn State release, the machine learning model performed with 79% accuracy when tested on that dataset, which the researchers said is roughly on par with emergency room diagnoses using CT scans.

"Currently, physicians have to use their past training and experience to determine at what stage a patient should be sent for a CT scan," James Wang, professor of information sciences and technology at Penn State, said in a release from the university.
Oct-31-2020, 23:55:09 GMT
- AI-Alerts:
- 2020 > 2020-11 > AAAI AI-Alert for Nov 3, 2020 (1.00)
- Industry:
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)