New computational algorithms make it possible to build neural networks with many input nodes and many layers; it is this depth that distinguishes "deep learning" from previous work on artificial neural nets.
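As a minimal sketch of what "many layers" means in practice, the toy network below stacks several affine-plus-nonlinearity layers in NumPy. All sizes and weights here are illustrative placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Illustrative sizes: 64 input nodes, three hidden layers, one output.
layer_sizes = [64, 32, 32, 32, 1]

# Random weights stand in for trained parameters.
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    # Each hidden layer applies an affine map followed by a ReLU;
    # stacking several such layers is what makes the network "deep".
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]

x = rng.normal(size=64)
y = forward(x)
print(y.shape)  # one output value per input vector
```

Training such a network (e.g., by backpropagation) is what the new algorithms referred to above make tractable at scale.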
Elon Musk said on Wednesday that he expects a brain chip developed by his neurotechnology company Neuralink to begin human trials in the next six months. During a company presentation, Musk gave updates on the wireless brain chip and, in addition to forecasting clinical trials, said he plans to get one of the chips himself. "We want to be extremely careful and certain that it will work well before putting a device into a human," said Musk, according to Reuters. Neuralink says it is developing brain-chip interfaces that could restore a person's vision, even in those who were born blind, and restore "full body functionality", including movement and verbal communication, for people with severed spinal cords, reported CNBC.
Elon Musk's Neuralink is set to host its annual 'Show and Tell' event tonight at 9 pm ET, where it is expected to share a progress update on its brain-machine interface. The neuroscience startup shared a teaser for the event on its Twitter account, showing a short video that spelled out the message 'please join us for a show and tell', and some users speculate the world will see a person with Neuralink's chip type on a screen. The goal is to develop a fully implanted brain-computer interface (BCI) for people with paralysis, allowing them to operate computers and mobile devices using their thoughts. The first Show and Tell event, held in 2020, demonstrated the technology with a pig, and last year Musk revealed an update with a monkey that played a video game using only its mind.
Elon Musk plans to hold a 'Show and Tell' event for his brain chip company Neuralink on November 30, but a group of physicians claims the firm is 'mutilating and killing monkeys' to create a 'brain-machine interface.' Musk announced the event, which the company holds each year to showcase its latest updates, on Twitter. The first Show and Tell in 2020 demonstrated the brain implant in a pig, and in 2021 the world saw it used by a monkey that died months after receiving the implant. The Physicians Committee for Responsible Medicine (PCRM) recently launched a website detailing the gruesome stories of monkeys said to have suffered from sloppy experiments conducted at the University of California, Davis (UC Davis). PCRM shared lab notes with DailyMail.com
The link between chronic pain and a loss of appetite may finally be understood – in mice, at least. Zhi Zhang at the University of Science and Technology of China in Hefei and his colleagues injected mice with bacteria that provoke chronic pain. Ten days later, these mice were eating less frequently and for shorter periods than control mice that had been injected with saline. When the mice in pain were later given pain medication, they ate normally, the researchers wrote in a paper published in Nature Metabolism. To better understand the neuronal activity responsible for this change in behaviour, the researchers analysed the brains of these mice while the animals were in chronic pain.
A new study presents a neurocomputational model of the human brain that might shed light on how the brain develops complex cognitive skills and advance neural artificial intelligence research. An international team of scientists from the Institut Pasteur and Sorbonne University in Paris, the CHU Sainte-Justine, Mila – Quebec Artificial Intelligence Institute, and the University of Montreal conducted the study. The model emphasizes the interaction between two fundamental types of learning: Hebbian learning, associated with statistical regularity (i.e., repetition), or, as neuropsychologist Donald Hebb put it, "neurons that fire together, wire together"; and reinforcement learning, associated with reward and the neurotransmitter dopamine. This interaction provides insights into the fundamental mechanisms underlying cognition. The model solves three tasks of increasing complexity, from visual recognition to cognitive manipulation of conscious percepts. Each time, the team introduced a new core mechanism to enable it to progress.
According to Mila and IVADO researchers, the model might also help bridge the gap between understanding AI and the biological mechanisms underlying mental disorders.
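The two learning families contrasted above can be sketched in a few lines. The toy update rules below are not the authors' model; they are a minimal illustration of a correlational Hebbian step and a reward-gated (dopamine-like) variant, with all sizes and signals invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

n_pre, n_post = 8, 4
W = np.zeros((n_pre, n_post))   # synaptic weights from pre- to post-neurons
eta = 0.1                       # learning rate

def hebbian_step(W, pre, post):
    # "Neurons that fire together, wire together": the weight change is
    # proportional to the product of pre- and post-synaptic activity.
    return W + eta * np.outer(pre, post)

def reward_modulated_step(W, pre, post, reward):
    # Reinforcement-style variant: the same correlational term, but
    # gated by a scalar reward signal (a stand-in for dopamine).
    return W + eta * reward * np.outer(pre, post)

pre = rng.random(n_pre)
post = rng.random(n_post)

W = hebbian_step(W, pre, post)                        # driven by repetition
W = reward_modulated_step(W, pre, post, reward=1.0)   # reward strengthens
W = reward_modulated_step(W, pre, post, reward=-0.5)  # punishment weakens
```

The Hebbian step needs no feedback signal at all, while the reward-modulated step changes nothing unless a reward arrives; coordinating the two is the kind of interaction the study explores.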
Claire talked to Dr Paul Dominick Baniqued from The University of Manchester about brain-computer interface technology and rehabilitation robotics. Paul Dominick Baniqued received his PhD in robotics and immersive technologies at the University of Leeds, where his research tackled the integration of a brain-computer interface with virtual reality and hand exoskeletons for motor rehabilitation and skills learning. He is currently working as a postdoctoral researcher on cyber-physical systems and digital twins at the Robotics for Extreme Environments Group at the University of Manchester. Sean Katagiri is a robotics engineer who has the pleasure of being surrounded by and working with robots for a living.
Summary: A new 3D electrode array allows researchers to map the activity and location of up to 1 million potential synaptic links in a living brain. How human thoughts and dreams emerge from electrical pulses in the brain's estimated 100 trillion synapses remains a mystery, and Rice University neuroengineer Chong Xie dreams of changing that by creating a system that can record all the electrical activity in a living brain. In a recently published study in Nature Biomedical Engineering, Xie and colleagues described their latest step toward that goal: a 3D electrode array that lets them map the locations and activity of up to 1 million potential synaptic links in a living brain, based on recordings of the millisecond-scale evolution of electrical pulses in tens of thousands of neurons within a cubic millimeter of brain tissue. "The thing that is novel about this work is the recording density," said Xie, an associate professor of electrical and computer engineering at Rice and a core member of the Rice Neuroengineering Initiative. "Microcircuits in the brain are very mysterious. We don't have many ways to map their activity, especially volumetrically. We want to deliver very dense recordings of the cortex because those are important, scientifically, for understanding how brain circuits work."
Machine learning is a rapidly emerging brain-modeling tool for mental health research, psychiatry, neuroscience, genomics, pharmaceuticals, life sciences, and biotechnology. In a new peer-reviewed study, scientists identify potential weak spots in AI brain models and offer solutions for preventing bias. The research team, led by Abigail Greene at Yale School of Medicine with co-authors affiliated with Yale University, Brigham and Women's Hospital, Harvard Medical School, the University of Washington, and Columbia University Irving Medical Center's Department of Psychiatry, points out the need to identify why AI algorithms for brain models do not work for everyone when seeking to understand brain-phenotype relationships without bias. "Individual differences in brain functional organization track a range of traits, symptoms and behaviors," wrote the scientists. "So far, work modelling linear brain–phenotype relationships has assumed that a single such relationship generalizes across all individuals, but models do not work equally well in all participants."
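The failure mode the authors describe, a single linear model that does not fit all participants equally well, can be illustrated with a toy example. Everything below is synthetic: the "connectivity features", the two subgroups, and the ridge-regression fit are invented for illustration and are not the study's actual data or method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "connectivity features" for two participant subgroups whose
# true brain-phenotype relationships differ.
n_per_group, n_features = 100, 20
X_a = rng.normal(size=(n_per_group, n_features))
X_b = rng.normal(size=(n_per_group, n_features))
w_a = rng.normal(size=n_features)
w_b = -w_a                                # group B follows the opposite rule
y_a = X_a @ w_a + 0.1 * rng.normal(size=n_per_group)
y_b = X_b @ w_b + 0.1 * rng.normal(size=n_per_group)

# Fit one ridge-regression model to the pooled data, as if a single linear
# brain-phenotype relationship generalized across all individuals.
X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
lam = 1.0
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

# A model fitted on group A alone serves as the per-subgroup baseline.
w_hat_a = np.linalg.solve(X_a.T @ X_a + lam * np.eye(n_features), X_a.T @ y_a)
print("pooled model on group A:", mse(X_a, y_a, w_hat))
print("group-A model on group A:", mse(X_a, y_a, w_hat_a))
```

Because the two subgroups' true weights cancel, the pooled model's error on group A is far larger than the subgroup model's, which is the kind of hidden per-participant failure the study argues must be checked for.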