Neurology


IBM's AI classifies seizures with 98.4% accuracy using EEG data

#artificialintelligence

In a paper published this week on the preprint server arXiv.org, IBM researchers describe SeizureNet, a machine learning framework that learns the features of seizures in order to classify their various types. They say that it achieves state-of-the-art classification accuracy on a popular data set, and that it also improves the classification accuracy of smaller networks suited to applications demanding low memory and faster inference. If the claims stand up to academic scrutiny, the framework could, for instance, help the over 3.4 million people with epilepsy better understand the factors that trigger their seizures. The World Health Organization estimates that up to 70% of people living with epilepsy could live seizure-free if properly diagnosed and treated. SeizureNet consists of individual classifiers (specifically convolutional neural networks) that learn the features of electroencephalograms (EEGs) -- i.e., tests that evaluate the electrical activity in the brain -- to predict seizure types.
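
The snippet describes an ensemble of convolutional networks whose combined output classifies seizure type from EEG data. As a rough illustration only, here is a minimal PyTorch sketch of that pattern; the class names, input shape, member count, and eight-way output are assumptions for illustration, not details from the paper.

    import torch
    import torch.nn as nn

    class SmallEEGNet(nn.Module):
        # One ensemble member: a small CNN over one EEG spectrogram window.
        def __init__(self, n_classes=8):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classify = nn.Linear(32, n_classes)

        def forward(self, x):
            return self.classify(self.features(x).flatten(1))

    class SeizureEnsemble(nn.Module):
        # Averages the class probabilities of several independently trained members.
        def __init__(self, n_members=3, n_classes=8):
            super().__init__()
            self.members = nn.ModuleList(SmallEEGNet(n_classes) for _ in range(n_members))

        def forward(self, x):
            probs = torch.stack([m(x).softmax(dim=-1) for m in self.members])
            return probs.mean(dim=0)

    # A batch of 4 single-channel spectrogram windows (64 frequency x 128 time bins).
    windows = torch.randn(4, 1, 64, 128)
    seizure_type = SeizureEnsemble()(windows).argmax(dim=-1)

The remark about smaller networks is consistent with knowledge distillation, in which an ensemble's averaged probabilities serve as soft targets for training a compact student model that fits low-memory, low-latency deployments.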


Artificial Intelligence Decodes Speech from Brain Activity: Study

#artificialintelligence

The readout of brain activity and audio of the spoken sentences were fed to an algorithm, which learned to recognize how the parts of speech were formed. The initial results were highly inaccurate; the system, for instance, interpreted brain activity from hearing the sentence "she wore warm fleecy woolen overalls" as "the oasis was a mirage." As the program learned over time, it made translations with far fewer errors, such as interpreting brain activity in response to hearing "the ladder was used to rescue the cat and the man" as "which ladder will be used to rescue the cat and the man."
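
The article does not name the model, but the behavior it describes (mapping a brain-activity time series to a word sequence, improving with training) is characteristic of a sequence-to-sequence network. Below is a minimal, hypothetical PyTorch sketch; every dimension, and the GRU encoder-decoder choice itself, is an assumption for illustration.

    import torch
    import torch.nn as nn

    class BrainToTextSeq2Seq(nn.Module):
        # Encode a neural-activity time series, then decode word tokens from it.
        def __init__(self, n_features=256, hidden=128, vocab=2000):
            super().__init__()
            self.encoder = nn.GRU(n_features, hidden, batch_first=True)
            self.embed = nn.Embedding(vocab, hidden)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, neural, prev_words):
            # neural: (batch, time, n_features); prev_words: (batch, n_words)
            _, state = self.encoder(neural)            # summarize the brain activity
            dec, _ = self.decoder(self.embed(prev_words), state)
            return self.out(dec)                       # next-word logits per position

    model = BrainToTextSeq2Seq()
    logits = model(torch.randn(2, 100, 256), torch.randint(0, 2000, (2, 12)))

Training such a model on paired recordings and transcripts is what would drive the improvement the article describes, from near-random guesses to mostly correct sentences.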


Brain Implants and AI Model Used To Translate Thought Into Text

#artificialintelligence

Researchers at the University of California, San Francisco have recently created an AI system that can produce text by analyzing a person's brain activity, essentially translating their thoughts into text. The AI takes neural signals from a user and decodes them, and it can decipher up to 250 words in real time based on a set of between 30 and 50 sentences. As reported by the Independent, the AI model was trained on neural signals collected from four women. The participants in the experiment had electrodes implanted in their brains to monitor for epileptic seizures. The participants were instructed to read sentences aloud, and their neural signals were fed to the AI model.
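
Concretely, each training example pairs a window of electrode recordings with the sentence read aloud during it. The sketch below shows one plausible way to organize such data in Python; the field names, shapes, and helper function are hypothetical, not the study's actual format.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Trial:
        # One read-aloud sentence and the neural activity recorded during it.
        neural: np.ndarray   # (time_steps, n_electrodes) signal window
        sentence: str        # transcript the participant read aloud

    def build_training_pairs(trials, vocab):
        # Map each trial to (neural window, word-index sequence) for a decoder.
        pairs = []
        for t in trials:
            word_ids = [vocab[w] for w in t.sentence.lower().split()]
            pairs.append((t.neural, word_ids))
        return pairs

    # Tiny example: one trial over a closed sentence set, 64 electrodes.
    vocab = {w: i for i, w in enumerate("the ladder was used to rescue cat and man".split())}
    trials = [Trial(np.random.randn(500, 64), "the ladder was used to rescue the cat and the man")]
    pairs = build_training_pairs(trials, vocab)

The small closed set of 30 to 50 sentences is what makes real-time decoding tractable: the decoder only ever has to produce word sequences drawn from a limited vocabulary.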


Artificial Intelligence (AI) Applications in 2020

#artificialintelligence

Let's take a detailed look. Narrow AI is the most common form of AI you'd find in the market now. These Artificial Intelligence systems are designed to solve one single problem and can execute a single task really well. By definition, they have narrow capabilities, like recommending a product to an e-commerce user or predicting the weather. This is the only kind of Artificial Intelligence that exists today. Such systems can come close to human functioning in very specific contexts, and even surpass it in many instances, but they excel only in very controlled environments with a limited set of parameters. AGI, by contrast, is still a theoretical concept. It's defined as AI with a human level of cognitive function across a wide variety of domains, such as language processing, image processing, computational functioning and reasoning, and so on.
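
To make the product-recommendation example concrete, here is a hypothetical few-line Python sketch of one narrow task done reasonably well: suggesting items that co-occur with a user's past purchases. The data and function are illustrative inventions, not any particular vendor's system.

    from collections import Counter

    def recommend(user_history, all_orders, k=3):
        # Narrow AI in miniature: rank items that co-occur with the user's purchases.
        counts = Counter()
        for order in all_orders:
            if set(order) & set(user_history):
                counts.update(item for item in order if item not in user_history)
        return [item for item, _ in counts.most_common(k)]

    orders = [["phone", "case"], ["phone", "charger"], ["case", "screen protector"]]
    print(recommend(["phone"], orders))  # ['case', 'charger']

The point of the example is its narrowness: the same code is useless for weather prediction or language processing, which is exactly the limitation that separates today's systems from AGI.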


Turning brain activity to text with AI: Time for nations to debate what is ethical, and what isn't

#artificialintelligence

A few years ago, when researchers successfully demonstrated the use of augmented reality (AR) to treat PTSD by examining which parts of the brain it impacted, no one would have thought that scientists would be able to use artificial intelligence (AI) to turn brain activity into text. According to The Guardian, scientists at the University of California (UC) have been able to do so using electrode arrays implanted in the brain. Although the results are not too revolutionary, and the AI makes mistakes more often than not, the fact that this is now possible is an achievement in itself. Whereas in the AR experiment scientists looked at which part of the brain is affected by certain images and videos to decode the neural response, in the UC experiment the AI converted brain activity into numbers related to aspects of speech. The AI, with great difficulty, could only do this for the 50 sentences on which it was trained.


Artificial intelligence decodes the facial expressions of mice

#artificialintelligence

Mice move their ears, cheeks and eyes to convey emotion (image credit: Getty). Researchers have used a machine-learning algorithm to decipher the seemingly inscrutable facial expressions of laboratory mice. The work could have implications for pinpointing neurons in the human brain that encode particular expressions. The study "is an important first step" in understanding some of the mysterious aspects of emotions and how they manifest in the brain, says neuroscientist David Anderson at the California Institute of Technology in Pasadena. Nearly 150 years ago, Charles Darwin proposed that facial expressions in animals might provide a window onto their emotions, as they do in humans. But researchers have only recently gained the tools -- such as powerful microscopes, cameras and genetic techniques -- to reliably capture and analyse facial movement, and investigate how emotions arise in the brain.
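
The snippet does not detail the algorithm, but a common recipe for classifying expressions in video frames pairs a fixed image descriptor with a classical classifier. Below is a hypothetical Python sketch using HOG features and a random forest; the descriptor, classifier, image size, and label scheme are all assumptions, not necessarily the study's pipeline.

    import numpy as np
    from skimage.feature import hog
    from sklearn.ensemble import RandomForestClassifier

    def frame_features(frames):
        # Describe each grayscale face frame with a HOG (edge-orientation) descriptor.
        return np.array([hog(f, pixels_per_cell=(16, 16)) for f in frames])

    # Hypothetical data: 64x64 face crops labeled by evoked state (0, 1, 2, ...).
    rng = np.random.default_rng(0)
    frames, labels = rng.random((40, 64, 64)), rng.integers(0, 3, 40)

    clf = RandomForestClassifier(n_estimators=100).fit(frame_features(frames), labels)
    predicted_state = clf.predict(frame_features(frames[:5]))

In practice the labels would come from controlled stimuli (for example, a sweet or bitter taste), letting the classifier learn which subtle facial movements track each evoked state.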


AI fuels research that could lead to positive impact on health care

#artificialintelligence

Brainstorm guest contributor Paul Fraumeni speaks with four York U researchers who are applying artificial intelligence to their research in ways that, ultimately, could lead to profound and positive impacts on health care in this country. Meet four York University researchers: Lauren Sergio and Doug Crawford have academic backgrounds in physiology; Shayna Rosenbaum has a PhD in psychology; Joel Zylberberg has a doctorate in physics. They have two things in common: they focus on neuroscience – the study of the brain and its functions – and they leverage advanced computing technology using artificial intelligence (AI) in their research. In a nondescript room in the Sherman Health Sciences Research Centre, Lauren Sergio sits down and places her right arm in a sleeve on an armrest. It's an odd-looking contraption; the lower part looks like a sling attached to a video game joystick.


Training AI To Transform Brain Activity Into Text

#artificialintelligence

Back in 2008, theoretical physicist Stephen Hawking used a speech synthesizer program on an Apple II computer to "talk." He had to use hand controls to work the system, which became problematic as his case of Lou Gehrig's disease progressed. When he upgraded to a new device, called a "cheek switch," it detected when Hawking tensed the muscle in his cheek, helping him speak, write emails, or surf the Web. Now, neuroscientists at the University of California, San Francisco have come up with a far more advanced technology -- an artificial intelligence program that can turn thoughts into text. In time, it has the potential to help millions of people with speech disabilities communicate with ease.


Artificial Intelligence turns a person's thoughts into text - Times of India

#artificialintelligence

Scientists have developed an artificial intelligence system that can translate a person's thoughts into text by analysing their brain activity. Researchers at the University of California developed the AI to decipher up to 250 words in real time from a set of between 30 and 50 sentences. The algorithm was trained using the neural signals of four women with electrodes implanted in their brains, which were already in place to monitor epileptic seizures. The volunteers repeatedly read sentences aloud while the researchers fed the brain data to the AI to unpick patterns that could be associated with individual words. Across repeated sentence sets, the average word error rate was as low as 3%.
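
Word error rate, the figure quoted above, is the word-level edit distance between the decoded text and the spoken sentence, divided by the number of words actually spoken. Here is a minimal Python implementation of this standard metric (not the study's code), demonstrated on the example sentence pair quoted earlier in this digest.

    def word_error_rate(reference: str, hypothesis: str) -> float:
        # Levenshtein distance over words, normalized by reference length.
        ref, hyp = reference.split(), hypothesis.split()
        # dist[i][j] = edits turning the first i reference words into the first j hypothesis words
        dist = [[max(i, j) if 0 in (i, j) else 0 for j in range(len(hyp) + 1)]
                for i in range(len(ref) + 1)]
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                dist[i][j] = min(dist[i - 1][j] + 1,                              # deletion
                                 dist[i][j - 1] + 1,                              # insertion
                                 dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))  # substitution
        return dist[len(ref)][len(hyp)] / len(ref)

    # 3 word-level edits over an 11-word reference -> WER of about 0.27 (27%).
    print(word_error_rate("the ladder was used to rescue the cat and the man",
                          "which ladder will be used to rescue the cat and the man"))

By this measure, a 3% average error rate means the decoded sentences were near-verbatim, with roughly one wrong, missing, or extra word per 30 or so words spoken.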