Amyotrophic Lateral Sclerosis (ALS)


Google Seeks People With Down Syndrome To Help Train AIs To Understand Human Speech

#artificialintelligence

The last half decade has ushered in the era of humans interacting with technology through speech, with Amazon's Alexa, Apple's Siri, and Google's AI rapidly becoming ubiquitous elements of the human experience. But while the migration from typing to voice has brought great convenience for some folks (and improved safety, in the case of people using technology while driving), it has not delivered on its potential for the people who might otherwise stand to benefit the most from it: those of us with disabilities.

For people with Down Syndrome, for example, voice-based control of technology offers the promise of increased independence – and even of some new, potentially life-saving products. Yet for this particular group of people, today's voice-recognizing AIs pose serious problems, the result of a combination of factors.

To address this issue, and as a step toward ensuring that people whose health conditions make their speech hard for AIs to understand can still use modern technology, Google is partnering with the Canadian Down Syndrome Society. Via an effort called Project Understood, Google hopes to obtain recordings of people with Down Syndrome reading simple phrases, and to use those recordings to help train its AI to understand the speech patterns common to those with Down Syndrome. This effort is an extension of Google's own Project Euphonia, which seeks to improve computers' ability to understand diverse speech patterns, including impaired speech, and which earlier this year began an effort to train AIs to recognize communication from people with the neurodegenerative condition ALS, commonly known as Lou Gehrig's Disease.


Helping the Disabled Live an Active Life with Robots & Exoskeletons

#artificialintelligence

In the House of Councillors election of July 2019, two new Diet members were elected, each with a severe physical disability. One is an Amyotrophic Lateral Sclerosis (ALS) patient and the other has cerebral palsy. Both are barely able to move their bodies and require large electric wheelchairs to get about. The assistance of a carer is also necessary. In particular, the ALS patient is dependent on an artificial respirator and is even unable to speak.


Disabled lawmaker first in Japan to use speech synthesizer during Diet session

The Japan Times

A lawmaker with severe physical disabilities attended his first parliamentary interpellation Thursday since being elected in July and became the first lawmaker in Japan ever to use an electronically generated voice during a Diet session. In the session of the education, culture and science committee, Yasuhiko Funago, who has amyotrophic lateral sclerosis, a condition also known as Lou Gehrig's disease, greeted the committee using a speech synthesizer. He also asked questions through a proxy speaker. "As a newcomer, I am still inexperienced, but with everyone's assistance, I will do my best to tackle (issues)," he said at the beginning of the session. An aide then posed questions on his behalf and expressed his desire to see improvements in the learning environment for disabled children.


r/MachineLearning - [D] How can I go about learning machine learning to help people with ALS, like Jason Becker?

#artificialintelligence

If you don't know him, it's this guy. Maybe someone else here might also be interested. I've had a semester of calculus, know some electronics theory, and am starting to learn C. Besides anatomy and neuroscience, what should I really be focusing on to learn how to give more mobility to this guy in the future? Any cutting-edge stuff that could help his brain communicate with his actual limbs and possibly get them to move again, or is it better to try to design full-on robotic arms that he could manipulate almost like Doc Ock?


Google devises conversational AI that works better for people with ALS and accents

#artificialintelligence

Google AI researchers working with the ALS Therapy Development Institute today shared details about Project Euphonia, a speech-to-text transcription service for people with speaking impairments. They also say their approach can improve automatic speech recognition for people with non-native English accents as well. People with amyotrophic lateral sclerosis (ALS) often have slurred speech, but existing AI systems are typically trained on voice data from speakers without any speech impairment or accent. The new approach succeeds primarily thanks to the introduction of small amounts of data representing people with accents and ALS. "We show that 71% of the improvement comes from only 5 minutes of training data," according to a paper published on arXiv July 31 titled "Personalizing ASR for Dysarthric and Accented Speech with Limited Data."
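The key idea — adapting a model trained on "typical" data to one speaker using only minutes of that speaker's recordings, by updating a small part of the model — can be illustrated with a toy sketch. This is a deliberately simplified illustration of personalization with frozen parameters, not Project Euphonia's actual architecture or training procedure; the linear model and numbers are invented:

```python
# Toy illustration of personalization with limited data: a "base" linear
# model is assumed already fit to generic data, then ONLY its bias term
# is fine-tuned on a handful of examples from one speaker whose data is
# systematically shifted. (Illustrative sketch, not Google's method.)

def mse(model, data):
    w, b = model
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Base model assumed already trained on generic data: y = 2x + 1.
base = (2.0, 1.0)

# Small personalized sample: same slope, but outputs shifted by +3.
personal = [(x, 2.0 * x + 4.0) for x in range(5)]

# Fine-tune only the bias with a few gradient steps (weight is frozen).
w, b = base
for _ in range(200):
    grad_b = sum(2 * (w * x + b - y) for x, y in personal) / len(personal)
    b -= 0.1 * grad_b
tuned = (w, b)

print(mse(base, personal), mse(tuned, personal))  # error drops sharply
```

Even this toy shows why a small personalized sample can deliver most of the improvement: when the mismatch is systematic, very few parameters (here, one) need to move to correct it.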


Predicting assisted ventilation in Amyotrophic Lateral Sclerosis using a mixture of experts and conformal predictors

arXiv.org Machine Learning

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease characterized by a rapid motor decline, leading to respiratory failure and subsequently to death. In this context, researchers have sought models to automatically predict disease progression to assisted ventilation in ALS patients. However, the clinical translation of such models is limited by the lack of insight 1) on the risk of error for predictions at patient level, and 2) on the most adequate time to administer non-invasive ventilation. To address these issues, we combine Conformal Prediction (a machine learning framework that complements predictions with confidence measures) and a mixture of experts into a prognostic model which not only predicts whether an ALS patient will suffer from respiratory insufficiency but also the most likely time window of occurrence, at a given reliability level. Promising results were obtained, with nearly 80% of predictions being correctly identified.


Comcast created an eye-control remote to help users with mobility challenges

USATODAY - Tech Top Stories

Jimmy Curran controls the TV with his eyes through this web-based Comcast remote. Most TV viewers take for granted the ability to change the channel from their couches with a remote control. That task may be near impossible for viewers with the most severe physical challenges. On Monday, Comcast launches a free web-based remote on tablets and computers that lets Xfinity X1 customers with spinal cord injuries, ALS (Lou Gehrig's disease) or other disabilities change channels on the TV, set recordings, launch the program guide and search for a show with their eyes. The free X1 eye control works with whatever eye gaze hardware and software system the customer is using, as well as "sip-and-puff" switches and other assistive technologies.


Researchers use AI to predict progression of neurodegenerative diseases

#artificialintelligence

Researchers with Ben-Gurion University of the Negev in Israel have created an artificial intelligence platform for tracking and predicting the progression of neurodegenerative diseases. The platform, developed by professor Boaz Lerner of the university's department of industrial engineering and management, will first be used for amyotrophic lateral sclerosis, also called Lou Gehrig's disease. ALS is a fatal neurodegenerative disease that causes the death of the motor neurons that control voluntary muscles. The resulting muscle atrophy leads to progressive weakness and paralysis, and difficulty speaking, swallowing and breathing. The researchers then plan to use the platform for Alzheimer's, Parkinson's and other neurodegenerative diseases.


Survival Forests under Test: Impact of the Proportional Hazards Assumption on Prognostic and Predictive Forests for ALS Survival

arXiv.org Machine Learning

We investigate the effect of the proportional hazards assumption on prognostic and predictive models of the survival time of patients suffering from amyotrophic lateral sclerosis (ALS). We theoretically compare the underlying model formulations of several variants of survival forests and implementations thereof, including random forests for survival, conditional inference forests, Ranger, and survival forests with $L_1$ splitting, as well as two novel variants, namely distributional and transformation survival forests. Theoretical considerations explain the low power of log-rank-based splitting in detecting patterns in non-proportional hazards situations in survival trees and corresponding forests. This limitation can potentially be overcome by the alternative split procedures suggested herein. We empirically investigate this effect using simulation experiments and a re-analysis of the PRO-ACT database of ALS survival, giving special emphasis to both prognostic and predictive models.
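The low power of log-rank-based splitting under non-proportional hazards can be seen in a minimal sketch: the standardized log-rank statistic nearly vanishes when two groups' hazards cross, even though their survival patterns clearly differ. This is pure Python with no censoring and invented event times (not PRO-ACT data), and it illustrates only the log-rank weakness the abstract describes, not the paper's proposed split procedures:

```python
import math

def logrank_z(times_a, times_b):
    """Standardized two-sample log-rank statistic (no censoring,
    for simplicity): sum of observed-minus-expected events in group A
    over event times, divided by its standard deviation."""
    obs_minus_exp = 0.0
    var = 0.0
    for t in sorted(set(times_a) | set(times_b)):
        n1 = sum(1 for x in times_a if x >= t)   # at risk in group A
        n2 = sum(1 for x in times_b if x >= t)   # at risk in group B
        n = n1 + n2
        if n < 2:
            continue                             # variance undefined
        d1 = times_a.count(t)                    # group A events at t
        d = d1 + times_b.count(t)                # all events at t
        obs_minus_exp += d1 - d * n1 / n
        var += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    return obs_minus_exp / math.sqrt(var)

# Proportional-style difference: group A fails uniformly earlier.
z_sep = logrank_z([1, 2, 3, 4], [5, 6, 7, 8])
# Crossing hazards: A fails early AND late, B fails in between.
z_cross = logrank_z([1, 2, 7, 8], [3, 4, 5, 6])
print(abs(z_sep) > abs(z_cross))  # True: log-rank barely "sees" crossing
```

In the crossing case the early excess of events in group A is cancelled by the late excess, so a log-rank-based splitting rule would find little evidence of a difference there — the motivation for the alternative split procedures the paper studies.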


What To Know About How Artificial Intelligence Is Shaping Healthcare

#artificialintelligence

There isn't any question that artificial intelligence is a transformative technology that will continue to change the way every human being operates in the modern world. One of the major industries where artificial intelligence has influence is healthcare. You can debate how and where you want your healthcare delivered, but artificial intelligence will make healthcare much more efficient and accessible for us all. In fact, artificial intelligence may be able to find congenital heart defects in children before they are born. Just think of all the lives that could be saved using this technology.