Amyotrophic Lateral Sclerosis (ALS)


How thoughts could one day control electronic prostheses, wirelessly

#artificialintelligence

The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient's brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or to someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig's disease. The current generation of these devices records enormous amounts of neural activity, then transmits those brain signals through wires to a computer. But when researchers have tried to make the link wireless, transmitting the data took so much power that the devices generated too much heat to be safe for the patient. Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, has shown how it would be possible to create a wireless device capable of gathering and transmitting accurate neural signals while using a tenth of the power required by current wired systems.
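A rough sense of why wireless transmission is the bottleneck comes from a back-of-envelope calculation. The sketch below compares the bit rate of a raw broadband neural stream with a hypothetical link that transmits only detected spike events; every number here is an illustrative assumption, not a figure from the Stanford study.

```python
# Illustrative estimate: raw broadband recording vs. a hypothetical
# wireless link that sends only detected spike events.

N_CHANNELS = 96        # assumed electrode count (e.g., a Utah-style array)
SAMPLE_RATE = 30_000   # assumed broadband sampling rate, Hz
BITS_PER_SAMPLE = 12   # assumed ADC resolution

raw_bps = N_CHANNELS * SAMPLE_RATE * BITS_PER_SAMPLE
print(f"Raw broadband stream: {raw_bps / 1e6:.1f} Mbit/s")

# If the implant detects spikes locally and sends only (channel, timestamp)
# events, the data rate scales with firing rate instead of sample rate.
MEAN_FIRING_RATE = 50  # assumed spikes/s per channel
BITS_PER_EVENT = 32    # assumed encoding of channel id + timestamp

event_bps = N_CHANNELS * MEAN_FIRING_RATE * BITS_PER_EVENT
print(f"Spike-event stream:   {event_bps / 1e3:.1f} kbit/s")
print(f"Reduction factor:     {raw_bps / event_bps:.0f}x")
```

Under these assumptions the event stream is a couple of hundred times smaller than the raw stream, which is why moving processing onto the implant cuts transmit power, and with it heat, so dramatically.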


#ICML2020 invited talk: Lester Mackey – "Doing some good with machine learning"

AIHub

There were three invited talks at this year's virtual ICML. The first was given by Lester Mackey, who highlighted his efforts to do good with machine learning. During the talk he also outlined several ways in which social good efforts can be organised, and described numerous social good problems that would benefit from the community's attention. Lester took the audience on a journey from his grad school days to the present, focusing on the social good projects he has been involved in along the way. His research in this area has included work on combating nuclear proliferation, climate forecasting, and COVID-19.


Stephen Hawking: Artificial Intelligence Could End Human Race

#artificialintelligence

The eminent British physicist Stephen Hawking warns that the development of intelligent machines could pose a major threat to humanity. "The development of full artificial intelligence (AI) could spell the end of the human race," Hawking told the BBC. The famed scientist's warnings about AI came in response to a question about his new voice system. Hawking has a form of the progressive neurological disease called amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease), and uses a voice synthesizer to communicate. Recently, he has been using a new system that employs artificial intelligence.


Training AI To Transform Brain Activity Into Text

#artificialintelligence

Back in 2008, theoretical physicist Stephen Hawking used a speech synthesizer program on an Apple II computer to "talk." He had to use hand controls to work the system, which became problematic as his Lou Gehrig's disease progressed. When he upgraded to a new device, called a "cheek switch," it detected when Hawking tensed the muscle in his cheek, helping him speak, write emails, or surf the Web. Now, neuroscientists at the University of California, San Francisco have come up with a far more advanced technology: an artificial intelligence program that can turn thoughts into text. In time, it has the potential to help millions of people with speech disabilities communicate with ease.
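The general recipe behind such systems is sequence-to-sequence learning: encode a time series of neural features, then decode a character sequence. The PyTorch sketch below is a minimal illustration of that idea; the shapes, layer choices, and vocabulary size are assumptions for illustration, not the UCSF team's published architecture.

```python
# Minimal sketch of neural-signal-to-text decoding as sequence-to-sequence
# learning. All dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class BrainToText(nn.Module):
    def __init__(self, n_features=256, hidden=512, vocab_size=32):
        super().__init__()
        # Encoder: summarize the neural feature time series.
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        # Decoder: emit one character token per step.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, neural, tokens):
        # neural: (batch, time, n_features); tokens: (batch, text_len)
        _, state = self.encoder(neural)      # final encoder state
        dec_in = self.embed(tokens)          # teacher forcing during training
        dec_out, _ = self.decoder(dec_in, state)
        return self.out(dec_out)             # (batch, text_len, vocab)

model = BrainToText()
neural = torch.randn(8, 100, 256)        # fake neural features
tokens = torch.randint(0, 32, (8, 20))   # fake character ids
logits = model(neural, tokens)
loss = nn.functional.cross_entropy(logits.transpose(1, 2), tokens)
```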


Google Seeks People With Down Syndrome To Help Train AIs To Understand Human Speech

#artificialintelligence

The last half decade has ushered in the era of humans interacting with technology through speech, with Amazon's Alexa, Apple's Siri, and Google's AI rapidly becoming ubiquitous elements of the human experience. But while the migration from typing to voice has brought great convenience for some (and improved safety, in the case of people using technology while driving), it has not delivered on its potential for the people who might otherwise stand to benefit the most: those of us with disabilities. For people with Down Syndrome, for example, voice-based control of technology offers the promise of increased independence, and even of some new, potentially life-saving products. Yet for this particular group, today's voice-recognizing AIs pose serious problems, owing to a combination of factors. To address this issue, and as a step toward ensuring that people whose health conditions cause AIs to misunderstand them can still use modern technology, Google is partnering with the Canadian Down Syndrome Society. Through an effort called Project Understood, Google hopes to obtain recordings of people with Down Syndrome reading simple phrases, and to use those recordings to help train its AI to understand the speech patterns common to those with Down Syndrome. This effort is an extension of Google's own Project Euphonia, which seeks to improve computers' ability to understand diverse speech patterns, including impaired speech, and which earlier this year began an effort to train AIs to recognize communication from people with the neurodegenerative condition ALS, commonly known as Lou Gehrig's Disease.


Helping the Disabled Live an Active Life with Robots & Exoskeletons

#artificialintelligence

In the House of Councillors election of July 2019, two new Diet members were elected, each with a severe physical disability. One is an amyotrophic lateral sclerosis (ALS) patient and the other has cerebral palsy. Both are barely able to move their bodies, require large electric wheelchairs to get about, and need the assistance of a carer. In particular, the ALS patient is dependent on an artificial respirator and is even unable to speak.


Disabled lawmaker first in Japan to use speech synthesizer during Diet session

The Japan Times

A lawmaker with severe physical disabilities attended his first parliamentary interpellation Thursday since being elected in July, becoming the first lawmaker in Japan ever to use an electronically generated voice during a Diet session. In the session of the education, culture and science committee, Yasuhiko Funago, who has amyotrophic lateral sclerosis, a condition also known as Lou Gehrig's disease, greeted the committee using a speech synthesizer. He also asked questions through a proxy speaker. "As a newcomer, I am still inexperienced, but with everyone's assistance, I will do my best to tackle (issues)," he said at the beginning of the session. An aide then posed questions on his behalf, expressing Funago's desire to see improvements in the learning environment for disabled children.


r/MachineLearning - [D] How can I go about learning machine learning to help people with ALS, like Jason Becker?

#artificialintelligence

If you don't know him, it's this guy. Maybe someone else here might also be interested. I've taken a semester of calculus, know some electronics theory, and am starting to learn C. Besides anatomy and neuroscience, what should I really be focusing on to learn how to give more mobility to this guy in the future? Is there any cutting-edge work that could help his brain communicate with his actual limbs and possibly get them to move again, or is it better to try to design full-on robotic arms that he could manipulate almost like Doc Ock?


Google devises conversational AI that works better for people with ALS and accents

#artificialintelligence

Google AI researchers working with the ALS Therapy Development Institute today shared details about Project Euphonia, a speech-to-text transcription service for people with speaking impairments. They also say their approach can improve automatic speech recognition for people with non-native English accents. People with amyotrophic lateral sclerosis (ALS) often have slurred speech, but existing AI systems are typically trained on voice data from people without speech impairments or accents. The new approach succeeds primarily through the introduction of small amounts of data representing people with accents and ALS. "We show that 71% of the improvement comes from only 5 minutes of training data," according to a paper published on arXiv July 31 titled "Personalizing ASR for Dysarthric and Accented Speech with Limited Data."
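The personalization recipe described amounts to fine-tuning a pretrained ASR model on a few minutes of one speaker's recordings. The PyTorch sketch below shows the general pattern, freezing most of the network and updating only its upper layers; `base_model`, its `top_layers` attribute, and the use of a CTC objective are assumptions for illustration rather than the paper's exact RNN-T setup.

```python
# Sketch: personalize a pretrained ASR model on scarce speaker data.
import torch

def personalize(base_model, speaker_batches, lr=1e-4, epochs=5):
    """Fine-tune only the top of a pretrained ASR model."""
    # Freeze everything, then unfreeze the upper layers: with only minutes
    # of audio, updating the whole network would overfit immediately.
    for p in base_model.parameters():
        p.requires_grad = False
    for p in base_model.top_layers.parameters():   # hypothetical attribute
        p.requires_grad = True

    opt = torch.optim.Adam(
        (p for p in base_model.parameters() if p.requires_grad), lr=lr)

    for _ in range(epochs):
        for feats, targets, target_lens in speaker_batches:
            # Assumed model output: log-probabilities, shape (time, batch, vocab).
            log_probs = base_model(feats)
            input_lens = torch.full((log_probs.size(1),), log_probs.size(0),
                                    dtype=torch.long)
            # CTC aligns the unsegmented audio with the reference transcript.
            loss = torch.nn.functional.ctc_loss(
                log_probs, targets, input_lens, target_lens)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return base_model
```

Updating only a small parameter subset is also what makes a finding like "71% of the improvement from 5 minutes of data" plausible: the base model already knows the language, and personalization only has to adapt it to one voice.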


Predicting assisted ventilation in Amyotrophic Lateral Sclerosis using a mixture of experts and conformal predictors

arXiv.org Machine Learning

Amyotrophic Lateral Sclerosis (ALS) is a neurodegenerative disease characterized by rapid motor decline, leading to respiratory failure and subsequently to death. In this context, researchers have sought models to automatically predict disease progression to assisted ventilation in ALS patients. However, the clinical translation of such models is limited by the lack of insight 1) on the risk of error for predictions at patient level, and 2) on the most adequate time to administer non-invasive ventilation. To address these issues, we combine Conformal Prediction (a machine learning framework that complements predictions with confidence measures) and a mixture of experts into a prognostic model which not only predicts whether an ALS patient will suffer from respiratory insufficiency but also the most likely time window of occurrence, at a given reliability level. Promising results were obtained, with nearly 80% of predictions correctly identified.
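Conformal prediction itself is simple to sketch: given any underlying classifier, a held-out calibration set yields a score threshold such that prediction sets cover the true label with probability at least 1 − ε. The snippet below is a generic split-conformal illustration for a binary outcome such as respiratory insufficiency; `model` is any classifier with a scikit-learn-style `predict_proba`, and none of this is the authors' mixture-of-experts code.

```python
# Generic split (inductive) conformal prediction for binary classification.
import numpy as np

def conformal_sets(model, X_cal, y_cal, X_test, epsilon=0.2):
    """Prediction sets covering the true label with prob. >= 1 - epsilon."""
    # Nonconformity score: one minus the probability of the true class.
    cal_probs = model.predict_proba(X_cal)
    cal_scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

    # Finite-sample-corrected quantile of the calibration scores.
    n = len(cal_scores)
    level = min(1.0, np.ceil((n + 1) * (1 - epsilon)) / n)
    q = np.quantile(cal_scores, level, method="higher")

    # A label joins a patient's set when its score falls under the threshold.
    test_probs = model.predict_proba(X_test)
    return [[label for label in (0, 1) if 1.0 - p[label] <= q]
            for p in test_probs]
```

A set containing both labels flags a patient the model is unsure about, which is precisely the patient-level risk insight the authors argue is missing from plain point predictions.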