
Epilepsy


New research uses artificial intelligence to identify epileptic seizures in real-time - Mental Daily

#artificialintelligence

New research in Scientific Reports from Washington University shows that treating brain activity as a network, rather than as individual electroencephalography (EEG) readings, provides more accurate identification of epileptic seizures in real time. The study, which combines machine learning with systems theory, was led by first author Walter Bomela. "Our technique allows us to get raw data, process it and extract a feature that's more informative for the machine learning model to use," Bomela stated in a news release. "The major advantage of our approach is to fuse signals from 23 electrodes to one parameter that can be efficiently processed with much less computing resources." As the researchers explain, on an EEG, epileptic seizures appear as irregular brain activity in the form of spikes and waves in the measured electrical output.
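The article doesn't spell out which network parameter the team extracts, but the idea of fusing 23 electrodes into a single scalar per time window can be illustrated with a simple synchrony measure. The sketch below is an assumption for illustration, not the authors' method: it averages the absolute pairwise channel correlations over a one-second window.

```python
# Minimal sketch (not the authors' exact method): fuse a 23-channel EEG window
# into one network-level feature by averaging the absolute pairwise correlations
# between channels. Seizure-like increases in synchrony push this scalar upward,
# giving a downstream classifier a single compact input per window.
import numpy as np

def network_feature(window: np.ndarray) -> float:
    """window: array of shape (23, n_samples) holding one EEG segment."""
    corr = np.corrcoef(window)                     # 23 x 23 channel correlation matrix
    upper = corr[np.triu_indices_from(corr, k=1)]  # unique channel pairs only
    return float(np.mean(np.abs(upper)))           # one parameter per window

# Example: sliding one-second windows over a recording sampled at 256 Hz
fs, win = 256, 256
eeg = np.random.randn(23, 60 * fs)                 # placeholder for real EEG data
features = [network_feature(eeg[:, s:s + win])
            for s in range(0, eeg.shape[1] - win, win)]
```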


Hope grows for targeting the brain with ultrasound

Science

As a way to see inside the body, revealing a tumor or a fetus, ultrasound is tried and true. But neuroscientists have a newer ambition for the technology: tinkering with the brain. At frequencies lower than those of a sonogram but still beyond the range of human hearing, ultrasound can penetrate the skull and boost or suppress brain activity. If researchers can prove that ultrasound safely and predictably changes human brain function, it could become a powerful, noninvasive research tool and a new means of treating brain disorders.

How ultrasound works on the brain remains mysterious. But recent experiments have offered reassurance about safety, and small studies hint at meaningful effects in humans—dampening pain, for example, or subtly enhancing perception. “I've seen a lot of tantalizing data,” says Mark Cohen, a neuroscientist at the University of California, Los Angeles (UCLA). “While the challenges are very large, the potential of this thing is so much larger that we really have to pursue it.”

Scientists can already modulate the brain noninvasively by delivering electric current or magnetic pulses across the skull. The U.S. Food and Drug Administration (FDA) has approved transcranial magnetic stimulation (TMS) to treat depression, migraine pain, and obsessive-compulsive disorder (OCD). But unlike magnetic or electric fields, sound waves can be focused—like light through a magnifying glass—on a point deep in the brain without affecting shallower tissue. For now, that combination of depth and focus is possible only with a surgically implanted wire. But ultrasound could temporarily disrupt a deep human brain region—the almond-shaped amygdala, a driver of emotional responses, for example, or the thalamus, a relay station for pain and regulator of alertness—to test its function or treat disease.

Results in animals are encouraging. Experiments in the 1950s first showed ultrasound waves could suppress neural activity in a visual region of the cat brain. In rodents, aiming ultrasound at motor regions has triggered movements such as a twitch of a paw or whisker. And focusing it on a frontal region of monkey brains can change how the animals perform at eye movement tasks. But it's technically tricky to aim ultrasound through thick, dense skull bone and to show its energy has landed at the intended point. And ultrasound's effects on the brain can be hard to predict. How much it boosts or suppresses neural activity depends on many parameters, including the timing and intensity of ultrasound pulses, and even characteristics of the targeted neurons themselves. “I have tremendous excitement about the potential,” says Sarah Hollingsworth Lisanby, a psychiatrist at the National Institute of Mental Health who studies noninvasive neuromodulation. “We also need to acknowledge that there's a lot we have to learn,” she says.

For one thing, researchers are largely in the dark about how sound waves and brain cells interact. “That's the million-dollar question in this field,” says Mikhail Shapiro, a biochemical engineer at the California Institute of Technology. At high intensities, ultrasound can heat up and kill brain cells—a feature neurosurgeons have exploited to burn away sections of brain responsible for tremors. Even at intensities that don't significantly increase temperature, ultrasound exerts a mechanical force on cells. Some studies suggest this force alters ion channels on neurons, changing the cells' likelihood of firing a signal to neighbors.

If ultrasound works primarily via ion channels, "That's great news," Shapiro says, "because that means we can look at where those channels are expressed and make some predictions about what cell types will be excited." In a preprint on bioRxiv last month, Shapiro's team reported that exposing mouse neurons in a dish to ultrasound opens a particular set of calcium ion channels to render certain cells more excitable. But these channels alone won't explain ultrasound's effects, says Seung-Schik Yoo, a neuroscientist at Harvard University. He notes that ultrasound also appears to affect receptors on nonneuronal brain cells called glia. "It's very hard to [develop] any unifying theory about the exact mechanism" of ultrasound, he says.

Regardless of mechanism, ultrasound is starting to show clear, if subtle, effects in humans. In 2014, a team at Virginia Polytechnic Institute and State University showed focused ultrasound could increase electrical activity in a sensory processing region of the human brain and improve participants' ability to discern the number of points being touched on their fingers. Neurologist Christopher Butler at the University of Oxford and colleagues have tested ultrasound during a more complex sensory task: judging the motion of drifting, jiggling dots on a screen. Last month at the Cognitive Neuroscience Society's annual meeting online, he reported that stimulating a motion-processing visual region called MT improved subjects' ability to judge which way the majority of the dots drifted.

Ultrasound's effects have so far been subtler than those of TMS, says Mark George, a psychiatrist at the Medical University of South Carolina, who helped develop and refine that technology. With TMS, "you put it on your head and turn it on and your thumb moves," he says. But the ultrasound experiments that prompted paw twitches in mice used intensities "so, so, so much higher than what we're being allowed to use in humans." Regulators have limited human studies in part because ultrasound has the potential to cook the brain or cause damage through cavitation—the creation of tiny bubbles in tissue. In 2015, Yoo and colleagues found microbleeds, a sign of blood vessel damage, in sheep brains repeatedly exposed to ultrasound. "This was a huge speed bump," says Kim Butts Pauly, a biophysicist at Stanford University. But in February in Brain Stimulation, her group reported microbleeds in control animals as well, suggesting this damage might result from dissection of the brains. Butts Pauly and Yoo now say they're confident the technology can be used safely.

Cohen and collaborators recently tested safety in people by aiming ultrasound at regions slated for surgical removal to treat epilepsy. With FDA's OK, they used intensities up to eight times as high as the limit for diagnostic ultrasound. As they reported in a preprint on medRxiv in April, they found no significant damage to brain tissue or blood vessels. However, to find the limit of safety, researchers will likely need to go all the way to levels that damage tissue, Cohen says.

Several teams are cautiously moving into tests of ultrasound as treatment. In 2016, UCLA neuroscientist Martin Monti and colleagues reported that a man in a minimally conscious state regained consciousness following ultrasound stimulation of his thalamus. Monti is preparing a publication on a follow-up study of three people with chronically impaired states of consciousness. After ultrasound, they showed increased responsiveness over a period of days—much faster than expected, Monti says, although the study included no control group. That research and the tests in epilepsy patients used an ultrasound device developed by BrainSonix Corporation. Its founder, UCLA neuropsychiatrist Alexander Bystritsky, hopes ultrasound can disrupt neural circuits that drive symptoms of OCD. A team at Massachusetts General Hospital and Baylor College of Medicine is planning a study in humans using the BrainSonix device, he says.

Columbia University biomedical engineer Elisa Konofagou hopes to use ultrasound to treat Alzheimer's disease. Before COVID-19 interrupted participant recruitment, she and colleagues were preparing a pilot study to inject tiny gas-filled bubbles into the bloodstream of six people with Alzheimer's and use pulses of ultrasound to oscillate the microbubbles in blood vessels lining the brain. The mechanical force of those vibrations can temporarily pull apart the cells lining these vessels. The researchers hope opening this blood-brain barrier will help the brain clear toxic proteins. (Konofagou's team and others are also exploring this ultrasound-microbubble combination to deliver drugs to the brain.)

In his first test of ultrasound after years of studying TMS, George looked to reduce pain. His team applied increasing heat to the arms of 19 participants, who tended to become more sensitive over repeated tests, reporting pain at lower temperatures by the last test. But if, between the first and last test, they had pulses of ultrasound aimed at the thalamus, their pain threshold dipped half as much. "This is definitely a double green light" to keep pursuing the technology, George says. George regularly treats depressed patients with TMS and has seen the technology save lives. "But everybody wonders if we could go deep with a different technology—that would be a game changer," he says. "Ultrasound holds that promise, but the question is can it really deliver?"


Human brains use dreams to replay recent events and help form memories, study finds

Daily Mail - Science & tech

Human brains use dreams to replay recent events and help form memories -- and experts have gotten their first glimpse of this process in action, a study has reported. When we sleep, our brains replay the firing patterns our neurons underwent while awake -- a process that experts refer to as 'offline replay'. It is thought that offline replay underlies so-called memory consolidation, the way that recent memories acquire a more permanent representation in the brain. Although replay had previously been observed in animals, it had not been witnessed before in humans. Using implanted electrodes, US researchers were able to show that people's brains replayed the neuron activity of a memory game while they slept.


Artificial Intelligence May Speed up Epilepsy Diagnoses, Study...

#artificialintelligence

A newly developed artificial intelligence (AI) system could help expedite the diagnosis of epileptic conditions such as Dravet syndrome. The AI system was described in a study, titled "A propositional AI system for supporting epilepsy diagnosis based on the 2017 epilepsy classification: Illustrated by Dravet syndrome," in the journal Epilepsy & Behavior. Epilepsy is a broad disease category for many different conditions that involve seizures. Properly diagnosing epileptic conditions can be a challenge, especially given their different causes and symptoms. For example, mutations in the SCN1A gene are the most common cause of Dravet syndrome, but not all people with Dravet syndrome have such mutations, and SCN1A mutations can also be associated with other conditions, such as febrile seizures plus.
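The summary doesn't describe the system's actual knowledge base, but a propositional, rule-driven diagnostic aid of the kind the paper's title refers to can be sketched with a toy example. The findings and rules below are invented for illustration only; the SCN1A overlap simply mirrors the point above that one finding can support several candidate diagnoses.

```python
# Purely illustrative rules, not the paper's actual knowledge base: a propositional,
# rule-based check of how overlapping findings (e.g., an SCN1A mutation) can support
# several candidate diagnoses at once, which is why automated support is attractive.
FINDINGS = {"scn1a_mutation", "prolonged_febrile_seizures", "onset_in_first_year"}

RULES = {
    # diagnosis -> supporting findings (hypothetical, heavily simplified)
    "Dravet syndrome": {"scn1a_mutation", "prolonged_febrile_seizures", "onset_in_first_year"},
    "Genetic epilepsy with febrile seizures plus": {"scn1a_mutation", "prolonged_febrile_seizures"},
    "Focal epilepsy": {"focal_onset_seizures"},
}

def candidate_diagnoses(findings):
    """Rank diagnoses by the fraction of their supporting findings that are present."""
    scores = {dx: len(req & findings) / len(req) for dx, req in RULES.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for dx, score in candidate_diagnoses(FINDINGS):
    print(f"{dx}: support {score:.2f}")
```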


IBM's AI classifies seizures with 98.4% accuracy using EEG data

#artificialintelligence

In a paper published on the preprint server arXiv.org this week, IBM researchers describe SeizureNet, a machine learning framework that learns the features of seizures in order to classify their various types. They say it achieves state-of-the-art classification accuracy on a popular data set, and that it helps improve the classification accuracy of smaller networks for applications that need a small memory footprint and fast inference. If the claims stand up to academic scrutiny, the framework could, for instance, help the over 3.4 million people with epilepsy better understand the factors that trigger their seizures. The World Health Organization estimates that up to 70% of people living with epilepsy could live seizure-free if properly diagnosed and treated. SeizureNet is a machine learning framework consisting of individual classifiers (specifically convolutional neural networks) that learn the features of electroencephalograms (EEGs) -- i.e., tests that evaluate the electrical activity in the brain -- to predict seizure types.
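The article doesn't reproduce SeizureNet's architecture, but the core idea it describes (an ensemble of convolutional networks that each learn EEG features and jointly vote on a seizure type) can be sketched roughly as below. The class count, input size, and network shape are illustrative assumptions, not SeizureNet's actual design.

```python
# Hedged sketch (not IBM's actual SeizureNet code): an ensemble of small CNNs
# that classify EEG-derived spectrogram "images" into seizure types, with
# predictions fused by averaging the softmax outputs of the ensemble members.
import torch
import torch.nn as nn

NUM_CLASSES = 8  # assumed number of seizure types; the real label set may differ

class SmallSeizureCNN(nn.Module):
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                      # x: (batch, 1, freq_bins, time_bins)
        return self.classifier(self.features(x).flatten(1))

def ensemble_predict(models, spectrograms):
    """Average the softmax outputs of several independently trained CNNs."""
    with torch.no_grad():
        probs = torch.stack([m(spectrograms).softmax(dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)

# Example with untrained models and random tensors standing in for EEG spectrograms
models = [SmallSeizureCNN().eval() for _ in range(3)]
fake_batch = torch.randn(4, 1, 64, 64)
print(ensemble_predict(models, fake_batch))
```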


Binary and Multiclass Classifiers based on Multitaper Spectral Features for Epilepsy Detection

arXiv.org Machine Learning

Epilepsy is one of the most common neurological disorders and can be diagnosed through the electroencephalogram (EEG), in which the following epileptic events can be observed: pre-ictal, ictal, post-ictal, and interictal. In this paper, we present a novel method for epilepsy detection in two classification contexts: binary and multiclass. For feature extraction, a total of 105 measures were extracted from the power spectrum, spectrogram, and bispectrogram. For classifier building, eight different machine learning algorithms were used. Our method was applied to a widely used EEG database. As a result, the random forest and the backpropagation-trained multilayer perceptron reached the highest accuracy for the binary (98.75%) and multiclass (96.25%) classification problems, respectively. Subsequent statistical tests did not find a model that achieved significantly better performance than the other classifiers. In the evaluation based on confusion matrices, it was also not possible to identify a classifier that stands out from the other models for EEG classification. Even so, our results are promising and competitive with the findings in the literature.
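A rough sketch of the pipeline described above (multitaper spectral features feeding standard classifiers) follows. The band features, sampling rate, and random labels are stand-ins for illustration, not the paper's 105 measures or its EEG database.

```python
# Illustrative pipeline: multitaper (DPSS) power spectra summarized into band
# powers, then fed to a random forest, one of the classifiers the paper evaluates.
import numpy as np
from scipy.signal import windows
from sklearn.ensemble import RandomForestClassifier

def multitaper_psd(segment, fs=256, nw=2.5, k=4):
    """Average periodograms computed with k DPSS (Slepian) tapers."""
    tapers = windows.dpss(len(segment), nw, Kmax=k)            # shape (k, n)
    spectra = np.abs(np.fft.rfft(tapers * segment, axis=1)) ** 2
    return spectra.mean(axis=0) / fs

def band_features(segment, fs=256):
    psd = multitaper_psd(segment, fs)
    freqs = np.fft.rfftfreq(len(segment), 1 / fs)
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 70)]    # delta..gamma
    return [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# Toy binary problem: random segments labelled 0/1 stand in for interictal/ictal EEG
rng = np.random.default_rng(0)
X = np.array([band_features(rng.standard_normal(512)) for _ in range(200)])
y = rng.integers(0, 2, size=200)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.score(X, y))
```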


Now Artificial Intelligence Can Convert Brain Thoughts into Text - The Hacker Noon

#artificialintelligence

Have you ever wondered about mind reading? It might sound like a myth, but a team of scientists led by neurosurgeon Edward Chang of the University of California, San Francisco (UCSF) has developed an artificial intelligence (AI) system that can convert someone's thoughts into text. Their study was published in Nature Neuroscience. In the study, four patients with epilepsy wore implants to monitor seizures caused by their medical condition, and the UCSF team ran a side experiment: having the participants read and repeat a number of set sentences aloud while the electrodes recorded their brain activity during the exercise.


This bracelet detects epileptic seizures through artificial intelligence - KAWA

#artificialintelligence

To brave the unpredictability of epileptic seizures, three young Tunisian entrepreneurs engineered a bracelet that lets users monitor their own condition and, most crucially, automatically contacts their caregivers within seconds of a fit. Since its launch in 2017, Epilert has continued to prevent epileptic fatalities and hopes to advance medical treatment by working closely with Tunisian doctors.


EEG-GRAPH: A Factor-Graph-Based Model for Capturing Spatial, Temporal, and Observational Relationships in Electroencephalograms

Neural Information Processing Systems

This paper presents a probabilistic-graphical model that can be used to infer characteristics of instantaneous brain activity by jointly analyzing spatial and temporal dependencies observed in electroencephalograms (EEG). Specifically, we describe a factor-graph-based model with customized factor-functions defined based on domain knowledge, to infer pathologic brain activity with the goal of identifying seizure-generating brain regions in epilepsy patients. We utilize an inference technique based on the graph-cut algorithm to exactly solve graph inference in polynomial time. We validate the model by using clinically collected intracranial EEG data from 29 epilepsy patients to show that the model correctly identifies seizure-generating brain regions. Our results indicate that our model outperforms two conventional approaches used for seizure-onset localization (5-7% better AUC: 0.72, 0.67, 0.65) and that the proposed inference technique provides 3-10% gain in AUC (0.72, 0.62, 0.69) compared to sampling-based alternatives.
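To make the graph-cut inference step above concrete, here is a toy example of the same family of technique: exact MAP labelling of EEG channels as pathologic or normal via a minimum s-t cut over unary evidence and pairwise spatial smoothness. The channel names, evidence scores, and adjacency are invented for illustration and are not the paper's factor functions.

```python
# Illustrative only: binary "pathologic vs. normal" channel labelling solved
# exactly as a minimum s-t cut (the graph-cut style of inference the paper uses).
import networkx as nx

channels = ["C1", "C2", "C3", "C4"]
# Hypothetical per-channel evidence that the channel is seizure-generating (0..1)
evidence = {"C1": 0.9, "C2": 0.7, "C3": 0.2, "C4": 0.1}
# Hypothetical spatial neighbours that should tend to share a label
neighbours = [("C1", "C2"), ("C2", "C3"), ("C3", "C4")]
SMOOTHNESS = 0.3

G = nx.DiGraph()
for ch in channels:
    # Cutting s->ch means labelling ch "normal"; that costs more when evidence is high.
    G.add_edge("s", ch, capacity=evidence[ch])
    # Cutting ch->t means labelling ch "pathologic"; that costs more when evidence is low.
    G.add_edge(ch, "t", capacity=1.0 - evidence[ch])
for a, b in neighbours:
    # Separating two neighbours into different labels costs the smoothness weight.
    G.add_edge(a, b, capacity=SMOOTHNESS)
    G.add_edge(b, a, capacity=SMOOTHNESS)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
pathologic = sorted(ch for ch in channels if ch in source_side)
print("pathologic channels:", pathologic, "energy:", round(cut_value, 2))
```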


Epileptic Seizure Classification with Symmetric and Hybrid Bilinear Models

arXiv.org Machine Learning

Epilepsy affects nearly 1% of the global population; about two thirds of cases can be treated with anti-epileptic drugs and a much smaller fraction with surgery. Diagnostic procedures for epilepsy and monitoring are highly specialized and labour-intensive. The accuracy of the diagnosis is also complicated by overlapping medical symptoms, varying levels of experience, and inter-observer variability among clinical professionals. This paper proposes a novel hybrid bilinear deep learning network applied to the clinical procedure of epilepsy classification and diagnosis, where the use of surface electroencephalogram (sEEG) and audiovisual monitoring is standard practice. Hybrid bilinear models based on two types of feature extractors, namely Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), are trained using the Short-Time Fourier Transform (STFT) of one-second sEEG segments. In the proposed hybrid models, CNNs extract spatio-temporal patterns, while RNNs focus on the characteristics of temporal dynamics over relatively longer intervals of the same input data. Second-order features, based on interactions between these spatio-temporal features, are further explored by bilinear pooling and used for epilepsy classification. Our proposed methods obtain an F1-score of 97.4% on the Temple University Hospital Seizure Corpus and 97.2% on the EPILEPSIAE dataset, comparing favourably with existing benchmarks for sEEG-based seizure type classification. The open-source implementation of this study is available at https://github.com/NeuroSyd/Epileptic-Seizure-Classification
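A minimal sketch of the hybrid bilinear idea follows; it is not the authors' released model (see their GitHub repository for that). A CNN branch and an RNN branch each embed a one-second STFT segment, and their outer product (bilinear pooling) supplies second-order features to the seizure-type classifier. All layer sizes and the class count are placeholder assumptions.

```python
# Hedged sketch of a hybrid bilinear CNN+RNN classifier over STFT inputs.
import torch
import torch.nn as nn

class HybridBilinear(nn.Module):
    def __init__(self, freq_bins=64, cnn_dim=32, rnn_dim=32, num_classes=8):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, cnn_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                         # -> (batch, cnn_dim, 1, 1)
        )
        self.rnn = nn.GRU(freq_bins, rnn_dim, batch_first=True)
        self.classifier = nn.Linear(cnn_dim * rnn_dim, num_classes)

    def forward(self, stft):                                 # stft: (batch, freq_bins, time_bins)
        a = self.cnn(stft.unsqueeze(1)).flatten(1)           # spatial summary, (batch, cnn_dim)
        _, h = self.rnn(stft.transpose(1, 2))                # temporal summary over time steps
        b = h.squeeze(0)                                     # (batch, rnn_dim)
        bilinear = torch.einsum("bi,bj->bij", a, b).flatten(1)  # outer-product (second-order) features
        return self.classifier(bilinear)

model = HybridBilinear()
logits = model(torch.randn(2, 64, 100))                      # two fake one-second STFT windows
print(logits.shape)                                          # torch.Size([2, 8])
```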