If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
New research in Scientific Reports conducted at Washington University shows that modeling brain activity as a network, rather than relying on raw electroencephalography (EEG) readings alone, provides more accurate identification of epileptic seizures in real time. The study, which combines machine learning with systems theory, was led by author Walter Bomela. "Our technique allows us to get raw data, process it and extract a feature that's more informative for the machine learning model to use," Bomela stated in a news release. "The major advantage of our approach is to fuse signals from 23 electrodes to one parameter that can be efficiently processed with much less computing resources." As the researchers explain, an EEG reveals epileptic seizures as irregular brain activity, in the form of spikes and waves in the measured electrical output.
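The "fuse signals from 23 electrodes to one parameter" idea can be sketched with a toy network feature. This is a minimal sketch only: the mean absolute pairwise correlation used below is a hypothetical stand-in for the paper's actual fused parameter, which comes from its network/systems analysis and is not reproduced here.

```python
import numpy as np

def network_feature(eeg_window: np.ndarray) -> float:
    """Collapse a (23, n_samples) EEG window into one network-level scalar.

    Here: mean absolute pairwise correlation between electrodes --
    a hypothetical stand-in for the paper's fused parameter.
    """
    corr = np.corrcoef(eeg_window)                  # 23 x 23 correlation matrix
    upper = corr[np.triu_indices_from(corr, k=1)]   # unique electrode pairs
    return float(np.mean(np.abs(upper)))

rng = np.random.default_rng(0)
window = rng.standard_normal((23, 256))  # 1 s of 23-channel EEG at 256 Hz
print(f"fused feature: {network_feature(window):.3f}")
```

A downstream classifier then sees one scalar per window instead of 23 raw channels, which is what makes the approach cheap to run in real time.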
By now, it's almost old news that artificial intelligence (AI) will have a transformative role in medicine. Algorithms have the potential to work tirelessly, at faster rates and now with potentially greater accuracy than clinicians. In 2016, it was predicted that 'machine learning will displace much of the work of radiologists and anatomical pathologists'. In the same year, a University of Toronto professor controversially announced that 'we should stop training radiologists now'. But is it really the beginning of the end for some medical specialties?
The robot-assisted radical prostatectomy was segmented into 12 steps, and for each step, 41 validated automated performance metrics were reported. The predictive models were trained with three data sets: 1) 492 automated performance metrics; 2) 16 clinicopathological variables (for example, prostate volume and Gleason score); 3) automated performance metrics plus clinicopathological variables. The authors used a random forest model (800 trees) to predict continence recovery (no pads or one safety pad) at three and six months after surgery. Prediction accuracy was estimated through a 10-fold cross-validation process and reported as the area under the curve (AUC) with its standard error (SE). Finally, the out-of-bag Gini index was used to rank variable importance.
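The pipeline described above can be sketched with scikit-learn on synthetic data. Everything below is a stand-in: the feature matrix is random noise shaped to match the counts in the text (492 metrics + 16 variables), and scikit-learn's impurity-based importances substitute for the study's out-of-bag Gini index.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 100 surgical cases, 492 automated performance
# metrics plus 16 clinicopathological variables, binary continence outcome.
# (The real study's data are not public; only the shapes mirror the text.)
rng = np.random.default_rng(42)
X = rng.standard_normal((100, 492 + 16))
y = rng.integers(0, 2, size=100)

# Random forest with 800 trees, as in the study
model = RandomForestClassifier(n_estimators=800, random_state=42)

# 10-fold cross-validated AUC, with the standard error of the fold scores
auc_scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
se = auc_scores.std(ddof=1) / len(auc_scores) ** 0.5
print(f"mean AUC: {auc_scores.mean():.3f} (SE: {se:.3f})")

# Gini-based importances rank the variables (in-bag impurity decrease here,
# standing in for the study's out-of-bag Gini index)
model.fit(X, y)
ranking = np.argsort(model.feature_importances_)[::-1]
```

On random labels the AUC hovers near 0.5; the point of the sketch is the structure (forest, 10-fold CV, AUC ± SE, importance ranking), not the numbers.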
Deep learning (DL) models are known for capturing nonlinearities in data that traditional estimators such as logistic regression cannot. However, doubts remain about the increased use of computationally intensive DL for simple classification tasks. To find out whether DL really outperforms shallow models significantly, researchers from the University of Pennsylvania compared three ML pipelines, involving traditional methods, AutoML and DL, in a paper titled 'Is Deep Learning Necessary For Simple Classification Tasks'. The UPenn researchers noted that a support-vector machine (SVM) model might predict susceptibility to a certain complex genetic disease more accurately than a gradient boosting model trained on the same dataset. Moreover, choosing different hyperparameters within that SVM model can yield different performance.
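The SVM-versus-gradient-boosting comparison, including the effect of SVM hyperparameters, can be illustrated with a small scikit-learn sketch on synthetic genotype-like data. The data and hyperparameter choices below are invented for illustration and are not from the UPenn paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic genotype-like data: 200 samples, 50 variants coded 0/1/2,
# with a binary phenotype driven by the first two variants plus noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 50)).astype(float)
y = (X[:, 0] + X[:, 1] + rng.standard_normal(200) > 2).astype(int)

# Two SVM configurations (to show hyperparameter sensitivity) vs. boosting
results = {}
for name, clf in [
    ("SVM (rbf, C=1)", SVC(kernel="rbf", C=1.0)),
    ("SVM (linear, C=10)", SVC(kernel="linear", C=10.0)),
    ("gradient boosting", GradientBoostingClassifier(random_state=0)),
]:
    results[name] = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {results[name]:.3f}")
```

Which model wins depends on the data, and the gap between the two SVM rows shows how much hyperparameters alone can move performance, which is the researchers' point.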
As a way to see inside the body, revealing a tumor or a fetus, ultrasound is tried and true. But neuroscientists have a newer ambition for the technology: tinkering with the brain. At frequencies lower than those of a sonogram but still beyond the range of human hearing, ultrasound can penetrate the skull and boost or suppress brain activity. If researchers can prove that ultrasound safely and predictably changes human brain function, it could become a powerful, noninvasive research tool and a new means of treating brain disorders. How ultrasound works on the brain remains mysterious. But recent experiments have offered reassurance about safety, and small studies hint at meaningful effects in humans—dampening pain, for example, or subtly enhancing perception. “I've seen a lot of tantalizing data,” says Mark Cohen, a neuroscientist at the University of California, Los Angeles (UCLA). “While the challenges are very large, the potential of this thing is so much larger that we really have to pursue it.” Scientists can already modulate the brain noninvasively by delivering electric current or magnetic pulses across the skull. The U.S. Food and Drug Administration (FDA) has approved transcranial magnetic stimulation (TMS) to treat depression, migraine pain, and obsessive-compulsive disorder (OCD). But unlike magnetic or electric fields, sound waves can be focused—like light through a magnifying glass—on a point deep in the brain without affecting shallower tissue. For now, that combination of depth and focus is possible only with a surgically implanted wire. But ultrasound could temporarily disrupt a deep human brain region—the almond-shaped amygdala, a driver of emotional responses, for example, or the thalamus, a relay station for pain and regulator of alertness—to test its function or treat disease. Results in animals are encouraging. Experiments in the 1950s first showed ultrasound waves could suppress neural activity in a visual region of the cat brain. 
In rodents, aiming ultrasound at motor regions has triggered movements such as a twitch of a paw or whisker. And focusing it on a frontal region of monkey brains can change how the animals perform at eye movement tasks. But it's technically tricky to aim ultrasound through thick, dense skull bone and to show its energy has landed at the intended point. And ultrasound's effects on the brain can be hard to predict. How much it boosts or suppresses neural activity depends on many parameters, including the timing and intensity of ultrasound pulses, and even characteristics of the targeted neurons themselves. “I have tremendous excitement about the potential,” says Sarah Hollingsworth Lisanby, a psychiatrist at the National Institute of Mental Health who studies noninvasive neuromodulation. “We also need to acknowledge that there's a lot we have to learn,” she says. For one thing, researchers are largely in the dark about how sound waves and brain cells interact. “That's the million-dollar question in this field,” says Mikhail Shapiro, a biochemical engineer at the California Institute of Technology. At high intensities, ultrasound can heat up and kill brain cells—a feature neurosurgeons have exploited to burn away sections of brain responsible for tremors. Even at intensities that don't significantly increase temperature, ultrasound exerts a mechanical force on cells. Some studies suggest this force alters ion channels on neurons, changing the cells' likelihood of firing a signal to neighbors. If ultrasound works primarily via ion channels, “That's great news,” Shapiro says, “because that means we can look at where those channels are expressed and make some predictions about what cell types will be excited.” In a preprint on bioRxiv last month, Shapiro's team reported that exposing mouse neurons in a dish to ultrasound opens a particular set of calcium ion channels to render certain cells more excitable. 
But these channels alone won't explain ultrasound's effects, says Seung-Schik Yoo, a neuroscientist at Harvard University. He notes that ultrasound also appears to affect receptors on nonneuronal brain cells called glia. “It's very hard to [develop] any unifying theory about the exact mechanism” of ultrasound, he says. Regardless of mechanism, ultrasound is starting to show clear, if subtle, effects in humans. In 2014, a team at Virginia Polytechnic Institute and State University showed focused ultrasound could increase electrical activity in a sensory processing region of the human brain and improve participants' ability to discern the number of points being touched on their fingers. Neurologist Christopher Butler at the University of Oxford and colleagues have tested ultrasound during a more complex sensory task: judging the motion of drifting, jiggling dots on a screen. Last month at the Cognitive Neuroscience Society's annual meeting online, he reported that stimulating a motion-processing visual region called MT improved subjects' ability to judge which way the majority of the dots drifted. Ultrasound's effects have so far been subtler than those of TMS, says Mark George, a psychiatrist at the Medical University of South Carolina, who helped develop and refine that technology. With TMS, “you put it on your head and turn it on and your thumb moves,” he says. But the ultrasound experiments that prompted paw twitches in mice used intensities “so, so, so much higher than what we're being allowed to use in humans.” Regulators have limited human studies in part because ultrasound has the potential to cook the brain or cause damage through cavitation—the creation of tiny bubbles in tissue. In 2015, Yoo and colleagues found microbleeds, a sign of blood vessel damage, in sheep brains repeatedly exposed to ultrasound. “This was a huge speed bump,” says Kim Butts Pauly, a biophysicist at Stanford University. 
But in February in Brain Stimulation, her group reported microbleeds in control animals as well, suggesting this damage might result from dissection of the brains. Butts Pauly and Yoo now say they're confident the technology can be used safely. Cohen and collaborators recently tested safety in people by aiming ultrasound at regions slated for surgical removal to treat epilepsy. With FDA's OK, they used intensities up to eight times as high as the limit for diagnostic ultrasound. As they reported in a preprint on medRxiv in April, they found no significant damage to brain tissue or blood vessels. However, to find the limit of safety, researchers will likely need to go all the way to levels that damage tissue, Cohen says. Several teams are cautiously moving into tests of ultrasound as treatment. In 2016, UCLA neuroscientist Martin Monti and colleagues reported that a man in a minimally conscious state regained consciousness following ultrasound stimulation of his thalamus. Monti is preparing a publication on a follow-up study of three people with chronically impaired states of consciousness. After ultrasound, they showed increased responsiveness over a period of days—much faster than expected, Monti says, although the study included no control group. That research and the tests in epilepsy patients used an ultrasound device developed by BrainSonix Corporation. Its founder, UCLA neuropsychiatrist Alexander Bystritsky, hopes ultrasound can disrupt neural circuits that drive symptoms of OCD. A team at Massachusetts General Hospital and Baylor College of Medicine is planning a study in humans using the BrainSonix device, he says. Columbia University biomedical engineer Elisa Konofagou hopes to use ultrasound to treat Alzheimer's disease.
Before COVID-19 interrupted participant recruitment, she and colleagues were preparing a pilot study to inject tiny gas-filled bubbles into the bloodstream of six people with Alzheimer's and use pulses of ultrasound to oscillate the microbubbles in blood vessels lining the brain. The mechanical force of those vibrations can temporarily pull apart the cells lining these vessels. The researchers hope opening this blood-brain barrier will help the brain clear toxic proteins. (Konofagou's team and others are also exploring this ultrasound-microbubble combination to deliver drugs to the brain.) In his first test of ultrasound after years of studying TMS, George looked to reduce pain. His team applied increasing heat to the arms of 19 participants, who tended to become more sensitive over repeated tests, reporting pain at lower temperatures by the last test. But if, between the first and last test, they had pulses of ultrasound aimed at the thalamus, their pain threshold dipped half as much. “This is definitely a double green light” to keep pursuing the technology, George says. George regularly treats depressed patients with TMS and has seen the technology save lives. “But everybody wonders if we could go deep with a different technology—that would be a game changer,” he says. “Ultrasound holds that promise, but the question is can it really deliver?”
Small startups and big companies alike are recognizing that modern biotech R&D is as much a data problem as a science problem. Cloud technologies offer a way to bring together massive amounts of complex data to improve the way we feed, fuel, heal, and build our world with biology. Here's why: in the past decade, the exploding field of synthetic biology has done an incredible job solving the scientific challenges of making biology easier to engineer. I have written about how tools like gene editing, synthesis, sequencing, and automation are changing for the better the way we feed, fuel, heal, and build our world with biology.
Human brains use dreams to replay recent events and help form memories -- and experts have gotten the first glimpse of this process in action, a study has reported. When we sleep, our brains replay the firing patterns our neurons underwent while awake -- a process that experts refer to as 'offline replay'. It is thought that offline replay underlies so-called memory consolidation, the way that recent memories acquire a more permanent representation in the brain. Although replay had previously been observed in animals, it had not been witnessed before in humans. Using implanted electrodes, US researchers were able to show that people's brains replayed the neuron activity of a memory game while they slept.
The world of genomics has made rapid strides in the past several years, with the first CRISPR-edited babies being born just a few weeks ago. Using advanced CRISPR technology, scientist He Jiankui announced that twin girls with an edited gene that reduces the risk of contracting HIV 'came crying into this world as healthy as any other babies a few weeks ago.' The announcement was met with great backlash, sparking 'outrage from many researchers and ethicists who say implanting edited embryos to create babies is premature and exposes the children to unnecessary health risks. Opponents also fear the creation of "designer babies," children edited to enhance their intelligence, athleticism or other traits.' CRISPR technology is used to edit human genomes.
A newly developed artificial intelligence (AI) system could help expedite the diagnosis of epileptic conditions such as Dravet syndrome. The AI system was described in a study, titled "A propositional AI system for supporting epilepsy diagnosis based on the 2017 epilepsy classification: Illustrated by Dravet syndrome," in the journal Epilepsy & Behavior. Epilepsy is a broad disease category for many different conditions that involve seizures. Properly diagnosing epileptic conditions can be a challenge, especially given their different causes and symptoms. For example, mutations in the SCN1A gene are the most common cause of Dravet syndrome, but not all people with Dravet syndrome have such mutations, and SCN1A mutations can also be associated with other conditions, such as febrile seizures plus.
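A propositional (rule-based) support system like the one described combines logical statements over clinical findings rather than learning from raw data. The toy rule below is invented purely for illustration, with made-up feature names and a made-up threshold; it is NOT the published system's actual logic, but it shows the shape of such a rule, including the article's caveat that an SCN1A mutation is supportive but neither necessary nor sufficient.

```python
# Hypothetical, greatly simplified propositional rule in the spirit of a
# diagnostic support system. Names and the >= 2 threshold are invented.
def dravet_supported(findings: set) -> bool:
    """Return True if enough supportive findings co-occur.

    An SCN1A mutation counts as supportive but is neither necessary
    nor sufficient on its own, mirroring the article's caveats.
    """
    supportive = {"scn1a_mutation", "prolonged_febrile_seizures",
                  "onset_in_first_year"}
    return len(supportive & findings) >= 2

print(dravet_supported({"scn1a_mutation"}))                # → False
print(dravet_supported({"prolonged_febrile_seizures",
                        "onset_in_first_year"}))           # → True
```

Real systems of this kind encode many such propositions, derived from the 2017 epilepsy classification, and chain them to narrow down a diagnosis.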
In a paper published on the preprint server arXiv.org this week, IBM researchers describe SeizureNet, a machine learning framework that learns the features of seizures in order to classify their various types. They say it achieves state-of-the-art classification accuracy on a popular data set, and that it helps improve the classification accuracy of smaller networks for applications that require low memory and fast inference. If the claims stand up to academic scrutiny, the framework could, for instance, help the more than 3.4 million people with epilepsy better understand the factors that trigger their seizures. The World Health Organization estimates that up to 70% of people living with epilepsy could live seizure-free if properly diagnosed and treated. SeizureNet is a machine learning framework consisting of individual classifiers (specifically convolutional neural networks) that learn the features of electroencephalograms (EEGs) -- tests that evaluate the electrical activity of the brain -- to predict seizure types.
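The ensemble step of such a framework can be sketched in isolation: each member network emits class scores for an EEG window, and the ensemble averages their probabilities before picking a seizure type. The convolutional members are stubbed out below with invented logits; only the fusion logic is shown.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax, numerically stabilized."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical logits from three ensemble members for one EEG window
# over four seizure classes (values invented; SeizureNet's members are
# convolutional networks, which are not implemented here).
member_logits = np.array([
    [2.0, 0.5, 0.1, -1.0],
    [1.5, 1.0, 0.0, -0.5],
    [2.2, 0.3, 0.2, -1.2],
])

# Average the members' class probabilities, then take the top class
ensemble_probs = softmax(member_logits).mean(axis=0)
predicted_class = int(np.argmax(ensemble_probs))
print(predicted_class)  # → 0
```

Averaging probabilities rather than raw logits keeps each member's vote on the same scale, which is a common design choice for heterogeneous ensembles.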