
Hope grows for targeting the brain with ultrasound


As a way to see inside the body, revealing a tumor or a fetus, ultrasound is tried and true. But neuroscientists have a newer ambition for the technology: tinkering with the brain. At frequencies lower than those of a sonogram but still beyond the range of human hearing, ultrasound can penetrate the skull and boost or suppress brain activity. If researchers can prove that ultrasound safely and predictably changes human brain function, it could become a powerful, noninvasive research tool and a new means of treating brain disorders. How ultrasound works on the brain remains mysterious. But recent experiments have offered reassurance about safety, and small studies hint at meaningful effects in humans—dampening pain, for example, or subtly enhancing perception. “I've seen a lot of tantalizing data,” says Mark Cohen, a neuroscientist at the University of California, Los Angeles (UCLA). “While the challenges are very large, the potential of this thing is so much larger that we really have to pursue it.” Scientists can already modulate the brain noninvasively by delivering electric current or magnetic pulses across the skull. The U.S. Food and Drug Administration (FDA) has approved transcranial magnetic stimulation (TMS) to treat depression, migraine pain, and obsessive-compulsive disorder (OCD). But unlike magnetic or electric fields, sound waves can be focused—like light through a magnifying glass—on a point deep in the brain without affecting shallower tissue. For now, that combination of depth and focus is possible only with a surgically implanted wire. But ultrasound could temporarily disrupt a deep human brain region—the almond-shaped amygdala, a driver of emotional responses, for example, or the thalamus, a relay station for pain and regulator of alertness—to test its function or treat disease. Results in animals are encouraging. Experiments in the 1950s first showed ultrasound waves could suppress neural activity in a visual region of the cat brain. 
In rodents, aiming ultrasound at motor regions has triggered movements such as a twitch of a paw or whisker. And focusing it on a frontal region of monkey brains can change how the animals perform at eye movement tasks. But it's technically tricky to aim ultrasound through thick, dense skull bone and to show its energy has landed at the intended point. And ultrasound's effects on the brain can be hard to predict. How much it boosts or suppresses neural activity depends on many parameters, including the timing and intensity of ultrasound pulses, and even characteristics of the targeted neurons themselves. “I have tremendous excitement about the potential,” says Sarah Hollingsworth Lisanby, a psychiatrist at the National Institute of Mental Health who studies noninvasive neuromodulation. “We also need to acknowledge that there's a lot we have to learn,” she says. For one thing, researchers are largely in the dark about how sound waves and brain cells interact. “That's the million-dollar question in this field,” says Mikhail Shapiro, a biochemical engineer at the California Institute of Technology. At high intensities, ultrasound can heat up and kill brain cells—a feature neurosurgeons have exploited to burn away sections of brain responsible for tremors. Even at intensities that don't significantly increase temperature, ultrasound exerts a mechanical force on cells. Some studies suggest this force alters ion channels on neurons, changing the cells' likelihood of firing a signal to neighbors. If ultrasound works primarily via ion channels, “That's great news,” Shapiro says, “because that means we can look at where those channels are expressed and make some predictions about what cell types will be excited.” In a preprint on bioRxiv last month, Shapiro's team reported that exposing mouse neurons in a dish to ultrasound opens a particular set of calcium ion channels to render certain cells more excitable. 
But these channels alone won't explain ultrasound's effects, says Seung-Schik Yoo, a neuroscientist at Harvard University. He notes that ultrasound also appears to affect receptors on nonneuronal brain cells called glia. “It's very hard to [develop] any unifying theory about the exact mechanism” of ultrasound, he says. Regardless of mechanism, ultrasound is starting to show clear, if subtle, effects in humans. In 2014, a team at Virginia Polytechnic Institute and State University showed focused ultrasound could increase electrical activity in a sensory processing region of the human brain and improve participants' ability to discern the number of points being touched on their fingers. Neurologist Christopher Butler at the University of Oxford and colleagues have tested ultrasound during a more complex sensory task: judging the motion of drifting, jiggling dots on a screen. Last month at the Cognitive Neuroscience Society's annual meeting online, he reported that stimulating a motion-processing visual region called MT improved subjects' ability to judge which way the majority of the dots drifted. Ultrasound's effects have so far been subtler than those of TMS, says Mark George, a psychiatrist at the Medical University of South Carolina, who helped develop and refine that technology. With TMS, “you put it on your head and turn it on and your thumb moves,” he says. But the ultrasound experiments that prompted paw twitches in mice used intensities “so, so, so much higher than what we're being allowed to use in humans.” Regulators have limited human studies in part because ultrasound has the potential to cook the brain or cause damage through cavitation—the creation of tiny bubbles in tissue. In 2015, Yoo and colleagues found microbleeds, a sign of blood vessel damage, in sheep brains repeatedly exposed to ultrasound. “This was a huge speed bump,” says Kim Butts Pauly, a biophysicist at Stanford University. 
But in February in Brain Stimulation, her group reported microbleeds in control animals as well, suggesting this damage might result from dissection of the brains. Butts Pauly and Yoo now say they're confident the technology can be used safely. Cohen and collaborators recently tested safety in people by aiming ultrasound at regions slated for surgical removal to treat epilepsy. With FDA's OK, they used intensities up to eight times as high as the limit for diagnostic ultrasound. As they reported in a preprint on medRxiv in April, they found no significant damage to brain tissue or blood vessels. However, to find the limit of safety, researchers will likely need to go all the way to levels that damage tissue, Cohen says. Several teams are cautiously moving into tests of ultrasound as treatment. In 2016, UCLA neuroscientist Martin Monti and colleagues reported that a man in a minimally conscious state regained consciousness following ultrasound stimulation of his thalamus. Monti is preparing a publication on a follow-up study of three people with chronically impaired states of consciousness. After ultrasound, they showed increased responsiveness over a period of days—much faster than expected, Monti says, although the study included no control group. That research and the tests in epilepsy patients used an ultrasound device developed by BrainSonix Corporation. Its founder, UCLA neuropsychiatrist Alexander Bystritsky, hopes ultrasound can disrupt neural circuits that drive symptoms of OCD. A team at Massachusetts General Hospital and Baylor College of Medicine is planning a study in humans using the BrainSonix device, he says. Columbia University biomedical engineer Elisa Konofagou hopes to use ultrasound to treat Alzheimer's disease.
Before COVID-19 interrupted participant recruitment, she and colleagues were preparing a pilot study to inject tiny gas-filled bubbles into the bloodstream of six people with Alzheimer's and use pulses of ultrasound to oscillate the microbubbles in blood vessels lining the brain. The mechanical force of those vibrations can temporarily pull apart the cells lining these vessels. The researchers hope opening this blood-brain barrier will help the brain clear toxic proteins. (Konofagou's team and others are also exploring this ultrasound-microbubble combination to deliver drugs to the brain.) In his first test of ultrasound after years of studying TMS, George looked to reduce pain. His team applied increasing heat to the arms of 19 participants, who tended to become more sensitive over repeated tests, reporting pain at lower temperatures by the last test. But if, between the first and last test, they had pulses of ultrasound aimed at the thalamus, their pain threshold dipped half as much. “This is definitely a double green light” to keep pursuing the technology, George says. George regularly treats depressed patients with TMS and has seen the technology save lives. “But everybody wonders if we could go deep with a different technology—that would be a game changer,” he says. “Ultrasound holds that promise, but the question is can it really deliver?”

Artificial Intelligence turns a person's thoughts into text - Times of India


Scientists have developed an artificial intelligence system that can translate a person's thoughts into text by analysing their brain activity. Researchers at the University of California developed the AI to decipher up to 250 words in real time from a set of between 30 and 50 sentences. The algorithm was trained using the neural signals of four women with electrodes implanted in their brains, which were already in place to monitor epileptic seizures. The volunteers repeatedly read sentences aloud while the researchers fed the brain data to the AI to unpick patterns that could be associated with individual words. The average word error rate across repeated sets of sentences was as low as 3%.
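The 3% figure is a word error rate (WER), the standard metric for speech and language decoding: the word-level edit distance (substitutions, insertions, deletions) between the decoded sentence and the reference, divided by the number of reference words. A minimal Python sketch of the metric (not the study's code; the example sentences are invented for illustration):

```python
def word_error_rate(reference, hypothesis):
    """WER: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# Two substitutions against a five-word reference: 2/5.
print(word_error_rate("the brain signals were decoded",
                      "the brain signal was decoded"))  # → 0.4
```

A 3% WER on a closed set of 30 to 50 sentences means roughly one word in 33 decoded incorrectly, far better than earlier brain-decoding attempts but still within a heavily constrained vocabulary.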

Now Artificial Intelligence Can Convert Brain Thoughts into Text - The Hacker Noon


Have you ever wondered about the concept of mind reading? It might sound like a myth. But a team of scientists led by neurosurgeon Edward Chang of the University of California, San Francisco (UCSF) has developed an artificial intelligence (AI) system that can convert someone's thoughts into text. Their study was published in Nature Neuroscience. In the study, four patients with epilepsy wore implants to monitor seizures caused by their medical condition, and the UCSF team ran a side experiment: having the participants read and repeat a number of set sentences aloud while the electrodes recorded their brain activity during the exercise.

Health State Estimation Artificial Intelligence

Life's most valuable asset is health. Continuously understanding the state of our health and modeling how it evolves is essential if we wish to improve it. Given that people live with more data about their lives today than at any other time in history, the challenge rests in interweaving this data with the growing body of knowledge to continually compute and model the health state of an individual. This dissertation presents an approach to building a personal model and dynamically estimating the health state of an individual by fusing multi-modal data and domain knowledge. The system is stitched together from four essential abstraction elements: 1. the events in our life, 2. the layers of our biological systems (from the molecular to the organism), 3. the functional utilities that arise from biological underpinnings, and 4. how we interact with these utilities in the reality of daily life. Connecting these four elements via graph network blocks forms the backbone by which we instantiate a digital twin of an individual. Edges and nodes in this graph structure are then regularly updated with learning techniques as data is continuously digested. Experiments demonstrate the use of dense, heterogeneous real-world data from a variety of personal and environmental sensors to monitor individual cardiovascular health state. State estimation and individual modeling are the fundamental basis for departing from disease-oriented approaches toward a total health continuum paradigm. Precision in predicting health requires understanding state trajectory. By encasing this estimation within a navigational approach, a systematic guidance framework can plan actions to transition a current state toward a desired one. This work concludes by presenting this framework of combining the health state and personal graph model to perpetually plan and assist us in living life toward our goals.
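The core idea, a personal graph whose nodes hold state estimates and whose edges carry learned influences updated as sensor data streams in, can be sketched in a few lines. This is a toy illustration only, not the dissertation's system; the node names, update rule, and weights are invented for the example:

```python
from collections import defaultdict

class PersonalHealthGraph:
    """Toy digital twin: nodes carry a state estimate, edges carry
    an influence weight; observations update a node and propagate
    a fraction of the change to linked nodes."""
    def __init__(self):
        self.state = {}                    # node -> current estimate
        self.weights = defaultdict(float)  # (src, dst) -> influence

    def add_node(self, name, initial=0.0):
        self.state[name] = initial

    def link(self, src, dst, weight):
        self.weights[(src, dst)] = weight

    def observe(self, node, value, rate=0.3):
        """Blend a new sensor reading into a node's estimate,
        then push a weighted share of the update downstream."""
        delta = value - self.state[node]
        self.state[node] += rate * delta
        for (src, dst), w in self.weights.items():
            if src == node:
                self.state[dst] += rate * w * delta

# Hypothetical mini-graph: activity and heart rate feed a
# cardiovascular state node.
g = PersonalHealthGraph()
for n in ["daily_steps", "resting_hr", "cardio_state"]:
    g.add_node(n, 0.0)
g.link("daily_steps", "cardio_state", 0.5)
g.link("resting_hr", "cardio_state", -0.5)
g.observe("daily_steps", 1.0)
print(round(g.state["cardio_state"], 3))  # prints 0.15
```

In the dissertation's framing, the learning techniques that adjust these edges and states would be far richer than this exponential-smoothing toy, but the graph-of-estimates structure is the same backbone.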

Affective Computing Market to Witness Pronounced Growth During 2017 to 2025 – Market Research Sheets


The global affective computing market is envisioned to create high growth prospects on the back of the rising deployment of machine and human interaction technologies. With enabling technologies already making a mark through their adoption across a range of industry verticals, it could be said that the market has started to evolve. Strong demand for facial feature extraction software in recent years is expected to augur well for the deployment of cameras in affective computing systems. Detection of psychological disorders, facial expression recognition for dyslexia, autism, and other disorders in specially-abled children, and various other applications could increase the use of affective computing technology. Life sciences and healthcare are projected to show a promising rise in demand for affective computing.

C. Light Launches AI-Driven Retinal Eye-Tracking; Predicts Neurological Health in 10 Seconds


C. Light Technologies, a Berkeley, CA-based neurotech and AI company participating in UC Berkeley's premier accelerator SkyDeck, is introducing the world's first retinal eye-tracking technology paired with machine learning to assess and predict neurological health. The technology is fast (10 seconds), non-invasive and objective. Eye motion has been used for decades to quickly triage brain health; now the company is measuring it down to the cellular level to monitor and track neurological diseases in seconds and determine how well medications are working. Neurological disorders such as multiple sclerosis, Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis (ALS), and concussions affect millions of lives around the world.

Learning to See Analogies: A Connectionist Exploration Artificial Intelligence

This dissertation explores the integration of learning and analogy-making through the development of a computer program, called Analogator, that learns to make analogies by example. By "seeing" many different analogy problems, along with possible solutions, Analogator gradually develops an ability to make new analogies. That is, it learns to make analogies by analogy. This approach stands in contrast to most existing research on analogy-making, which typically assumes the a priori existence of analogical mechanisms within a model. The present research extends standard connectionist methodologies by developing a specialized associative training procedure for a recurrent network architecture. The network is trained to divide input scenes (or situations) into appropriate figure and ground components. Seeing one scene in terms of a particular figure and ground provides the context for seeing another in an analogous fashion. After training, the model is able to make new analogies between novel situations. Analogator has much in common with lower-level perceptual models of categorization and recognition; it thus serves as a unifying framework encompassing both high-level analogical learning and low-level perception. This approach is compared and contrasted with other computational models of analogy-making. The model's training and generalization performance is examined, and its limitations are discussed.
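The central move, learning an analogical mapping from example pairs rather than building the mechanism in a priori, can be illustrated with a deliberately small sketch. This is not Analogator's recurrent figure/ground network; it is a toy in which the "analogy" is a hidden linear transformation fitted from example scene pairs and then applied to a novel scene:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden ground-truth relation: shift each 1-D "scene" one cell
# to the right (a simple structural transformation).
T_true = np.roll(np.eye(6), 1, axis=0)

A = rng.random((20, 6))   # source scenes (training examples)
B = A @ T_true.T          # their analogues under the hidden relation

# Learn the transformation from examples alone (least squares),
# never being told what the relation is.
T_fit, *_ = np.linalg.lstsq(A, B, rcond=None)

# Apply the learned mapping to a novel scene: a figure at cell 0.
novel = np.array([1.0, 0, 0, 0, 0, 0])
print(np.round(novel @ T_fit, 2))  # ≈ [0, 1, 0, 0, 0, 0]: shifted right
```

A linear map fitted by least squares is of course far weaker than a trained recurrent network; the point is only the shared shape of the problem: generalize a transformation from solved analogy examples to novel inputs.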

Artificial Intelligence for Social Good: A Survey Artificial Intelligence

AI's impact is drastic and real: YouTube's AI-driven recommendation system would present sports videos for days if one happens to watch a live baseball game on the platform [1]; email writing becomes much faster with machine learning (ML) based auto-completion [2]; and many businesses have adopted natural language processing based chatbots as part of their customer services [3]. AI has also greatly advanced human capabilities in complex decision-making processes, ranging from determining how to allocate security resources to protect airports [4] to games such as poker [5] and Go [6]. All such tangible and stunning progress suggests that an "AI summer" is happening. As some put it, "AI is the new electricity" [7]. Meanwhile, in the past decade, an emerging theme in the AI research community has been the so-called "AI for social good" (AI4SG): researchers aim to develop AI methods and tools to address problems at the societal level and improve the wellbeing of society.

Data Scientist - IoT BigData Jobs


Master's (or PhD, preferred) in computer science, electrical engineering or related fields (statistics, applied math, computational neuroscience). Intel prohibits discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.

Brain circuit that controls compulsive drinking of alcohol has been discovered in mice

Daily Mail - Science & tech

A brain circuit that controls the compulsive drinking of alcohol has been discovered in mice, offering hope of one day finding a cure for alcoholism in humans. Scientists have long sought to understand why some people are prone to develop drinking problems while others are not. The team's discovery in mice, if translated to humans, may give doctors a way to reveal whether someone is likely to become a compulsive drinker later in life. Alcoholism is a chronic brain disease in which an individual drinks compulsively, often accompanied by negative emotions. Whereas previous studies have focused on examining the brain after a drinking disorder develops, the researchers from the Salk Institute in California set out to show that brain circuits can make some people more likely to become alcoholics.