Amyotrophic Lateral Sclerosis (ALS)


Training AI To Transform Brain Activity Into Text

#artificialintelligence

Theoretical physicist Stephen Hawking "talked" for decades through a speech synthesizer program, originally run on an Apple II computer. He operated the system with a hand control, which became problematic as his Lou Gehrig's disease progressed, so around 2008 he upgraded to a new device, a "cheek switch," which detected when Hawking tensed the muscle in his cheek and let him speak, write emails, or surf the Web. Now, neuroscientists at the University of California, San Francisco have come up with a far more advanced technology: an artificial intelligence program that can turn brain activity into text. In time, it has the potential to help millions of people with speech disabilities communicate with ease.


Google Seeks People With Down Syndrome To Help Train AIs To Understand Human Speech

#artificialintelligence

The last half decade has ushered in the era of humans interacting with technology through speech, with Amazon's Alexa, Apple's Siri, and Google's AI rapidly becoming ubiquitous elements of the human experience. But while the migration from typing to voice has brought great convenience for some (and improved safety, in the case of people using technology while driving), it has not delivered on its potential for those who might otherwise stand to benefit the most from it: people with disabilities. For people with Down syndrome, for example, voice-based control of technology offers the promise of increased independence, and even of some new, potentially life-saving products. Yet for this particular group, today's voice-recognizing AIs pose serious problems, the result of a combination of factors. To address this issue, and as a step toward ensuring that people whose health conditions leave AIs unable to understand them can still use modern technology, Google is partnering with the Canadian Down Syndrome Society. Through an effort called Project Understood, Google hopes to obtain recordings of people with Down syndrome reading simple phrases, and to use those recordings to help train its AI to understand the speech patterns common to people with Down syndrome. This effort extends Google's own Project Euphonia, which seeks to improve computers' ability to understand diverse speech patterns, including impaired speech, and which earlier this year began training AIs to recognize communication from people with the neurodegenerative condition ALS, commonly known as Lou Gehrig's disease.


Helping the Disabled Live an Active Life with Robots & Exoskeletons

#artificialintelligence

In the House of Councillors election of July 2019, two new Diet members were elected who each have severe physical disabilities. One is an amyotrophic lateral sclerosis (ALS) patient and the other has cerebral palsy. Both are barely able to move their bodies and require large electric wheelchairs to get around, along with the assistance of a carer. The ALS patient, in particular, is dependent on an artificial respirator and is unable even to speak.


Disabled lawmaker first in Japan to use speech synthesizer during Diet session

The Japan Times

A lawmaker with severe physical disabilities attended his first parliamentary interpellation Thursday since being elected in July, becoming the first lawmaker in Japan ever to use an electronically generated voice during a Diet session. In the session of the education, culture and science committee, Yasuhiko Funago, who has amyotrophic lateral sclerosis, a condition also known as Lou Gehrig's disease, greeted the committee using a speech synthesizer. He also asked questions through a proxy speaker. "As a newcomer, I am still inexperienced, but with everyone's assistance, I will do my best to tackle (issues)," he said at the beginning of the session. An aide then posed questions on his behalf, expressing Funago's desire to see improvements in the learning environment for disabled children.


r/MachineLearning - [D] How can I go about learning machine learning to help people with ALS, like Jason Becker?

#artificialintelligence

If you don't know him, it's this guy. Maybe someone else here might also be interested. I've taken a semester of calculus, know some electronics theory, and am starting to learn C. Besides anatomy and neuroscience, what should I really be focusing on to learn how to give this guy more mobility in the future? Is there any cutting-edge work that could help his brain communicate with his actual limbs and possibly get them to move again, or is it better to try to design full-on robotic arms that he could manipulate almost like Doc Ock?


Google devises conversational AI that works better for people with ALS and accents

#artificialintelligence

Google AI researchers working with the ALS Therapy Development Institute today shared details about Project Euphonia, a speech-to-text transcription service for people with speaking impairments. They also say their approach can improve automatic speech recognition for people with non-native English accents. People with amyotrophic lateral sclerosis (ALS) often have slurred speech, but existing AI systems are typically trained on voice data from people without speech impairments or accents. The new approach succeeds primarily through the introduction of small amounts of data representing people with accents and ALS: "We show that 71% of the improvement comes from only 5 minutes of training data," according to a paper published on arXiv July 31 titled "Personalizing ASR for Dysarthric and Accented Speech with Limited Data."
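At its core, the paper's recipe is fine-tuning: start from a base ASR model trained on typical speech, then take a few gradient steps on minutes of recordings from the target speaker. Below is a minimal sketch of that personalization loop, using the open-source Hugging Face wav2vec2 CTC model as a stand-in; the paper itself fine-tunes Google's RNN-T models, and the layer-freezing and hyperparameter choices here are illustrative assumptions.

```python
# Sketch: personalizing a pretrained ASR model on a few minutes of
# speaker-specific audio. wav2vec2 stands in for the paper's RNN-T.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# Freeze the convolutional feature encoder; adapting only the upper
# layers is one way to learn from ~5 minutes of audio without overfitting.
model.freeze_feature_encoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def adaptation_step(waveform, transcript):
    """One gradient step on a single (16 kHz mono audio, text) pair.
    This checkpoint expects uppercase transcripts, e.g. "HELLO WORLD"."""
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss  # CTC loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

Looping this step over a handful of recorded phrases is the whole personalization procedure; the reported result is that most of the achievable gain arrives within the first few minutes of such data.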


Predicting assisted ventilation in Amyotrophic Lateral Sclerosis using a mixture of experts and conformal predictors

arXiv.org Machine Learning

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease characterized by a rapid motor decline, leading to respiratory failure and subsequently to death. In this context, researchers have sought models to automatically predict disease progression to assisted ventilation in ALS patients. However, the clinical translation of such models is limited by the lack of insight into 1) the risk of error for predictions at the patient level, and 2) the most adequate time to administer non-invasive ventilation. To address these issues, we combine Conformal Prediction (a machine learning framework that complements predictions with confidence measures) and a mixture of experts into a prognostic model which not only predicts whether an ALS patient will suffer from respiratory insufficiency but also the most likely time window of occurrence, at a given reliability level. Promising results were obtained, with nearly 80% of predictions correctly identified.
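For readers new to the framework: split conformal prediction wraps any underlying classifier and returns prediction sets that are guaranteed, on exchangeable data, to contain the true label at a chosen reliability level. A minimal sketch of those mechanics follows; the random-forest base model, the 80% level, and all names are illustrative assumptions, not the paper's mixture-of-experts setup.

```python
# Sketch of split conformal prediction for a binary classifier,
# illustrating the "prediction with a confidence measure" idea.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def conformal_prediction_sets(X_train, y_train, X_cal, y_cal, X_test,
                              alpha=0.2):
    """Prediction sets with coverage >= 1 - alpha (here 80%)."""
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Nonconformity score on the held-out calibration set:
    # 1 - probability assigned to the true class.
    cal_probs = clf.predict_proba(X_cal)
    scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

    # Conformal quantile with the finite-sample correction.
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")

    # A label joins a test point's set when its score fits under the
    # quantile; two-label sets flag predictions too uncertain to trust.
    test_probs = clf.predict_proba(X_test)
    return [set(np.flatnonzero(1.0 - p <= q_hat)) for p in test_probs]
```

The paper layers a mixture of experts on top of this kind of calibrated machinery so that, beyond the yes/no respiratory-insufficiency call, the model can commit to a time window only when the stated reliability level supports it.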


Comcast created an eye-control remote to help users with mobility challenges

USATODAY - Tech Top Stories

Most TV viewers take for granted the ability to change the channel from their couches with a remote control. That task may be nearly impossible for viewers with the most severe physical challenges. On Monday, Comcast launches a free web-based remote on tablets and computers that lets Xfinity X1 customers with spinal cord injuries, ALS (Lou Gehrig's disease), or other disabilities change channels on the TV, set recordings, launch the program guide, and search for a show with their eyes. The free X1 eye control works with whatever eye-gaze hardware and software system the customer is using, as well as "sip-and-puff" switches and other assistive technologies.


The sounds of silence: New device could create words out of thoughts

USATODAY - Tech Top Stories

[Photo: study author Gopala Anumanchipalli holds an example of the device.] Trapped inside their bodies, stroke patients may be able to think but not speak. Now, according to a new study, a device could one day literally give a voice to the voiceless. "For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual's brain activity," said study lead author Edward Chang, a professor of neurological surgery at the University of California at San Francisco. In fact, he said the technology could potentially restore the voices of people who have lost the ability to speak due to paralysis and other forms of neurological damage, such as from ALS (Lou Gehrig's disease).
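The study's decoder works in two stages, mapping cortical activity first to articulatory movements and then to acoustics. The PyTorch sketch below collapses this into a single hypothetical stage, mapping frames of neural features straight to mel-spectrogram frames, just to make the sequence-to-sequence shape of the problem concrete; all dimensions and names are illustrative assumptions, not the study's architecture.

```python
# Hypothetical one-stage sketch of decoding speech from brain signals:
# a recurrent network maps neural feature frames to spectrogram frames.
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    """Maps frames of brain-signal features to mel-spectrogram frames."""
    def __init__(self, n_channels=256, n_mels=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_channels, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_mels)

    def forward(self, ecog):              # ecog: (batch, time, n_channels)
        h, _ = self.rnn(ecog)
        return self.out(h)                # (batch, time, n_mels)

# Training would pair recorded cortical signals with spectrograms of the
# simultaneously spoken sentences; at inference, a vocoder renders the
# predicted spectrograms as audio.
```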


Researchers use AI to predict progression of neurodegenerative diseases

#artificialintelligence

Researchers with Ben-Gurion University of the Negev in Israel have created an artificial intelligence platform for tracking and predicting the progression of neurodegenerative diseases. The platform, developed by professor Boaz Lerner of the university's department of industrial engineering and management, will first be used for amyotrophic lateral sclerosis, also called Lou Gehrig's disease. ALS is a fatal neurodegenerative disease that causes the death of the motor neurons controlling voluntary muscles; the resulting muscle atrophy leads to progressive weakness and paralysis, and to difficulty speaking, swallowing, and breathing. The researchers then plan to apply the platform to Alzheimer's, Parkinson's, and other neurodegenerative diseases.