New computational algorithms make it possible to build neural networks with many input nodes and many layers; the term "deep learning" distinguishes work on these networks from previous work on artificial neural nets.
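The "many layers" idea can be sketched in a few lines of NumPy: a deep network is just a repeated application of a linear map followed by a nonlinearity. This is a minimal illustration, not any particular system's architecture; the layer sizes are arbitrary.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied after each layer
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weights, bias) layers."""
    for W, b in layers:
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)
sizes = [64, 128, 128, 10]  # many input nodes, several hidden layers
layers = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

out = forward(rng.standard_normal(64), layers)
print(out.shape)  # (10,)
```

Adding depth is just appending more entries to `sizes`; the "deep" in deep learning refers to the length of this stack.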
The Helmholtz International BigBrain Analytics and Learning Laboratory (HIBALL) is a collaboration between McGill University and Forschungszentrum Jülich to develop next-generation high-resolution human brain models using cutting-edge machine and deep learning methods and high-performance computing. HIBALL is based on the high-resolution BigBrain model first published by the Jülich and McGill teams in 2013. Over the next five years, the lab will be funded with a total of up to 6 million euros by the German Helmholtz Association, Forschungszentrum Jülich, and Healthy Brains, Healthy Lives at McGill University. In 2003, when Jülich neuroscientist Katrin Amunts and her Canadian colleague Alan Evans began scanning 7,404 histological sections of a human brain, it was completely unclear whether it would ever be possible to reconstruct this brain on the computer in three dimensions. At that time, the technology to cope with such a huge amount of data simply did not exist.
The device that would allow the human brain to connect to a computer could be implanted in a person for the first time later this year, announced Elon Musk, the tycoon who founded the neurotechnology company Neuralink. Last year, Musk's Neuralink introduced a special microchip and flexible fiber electrodes intended to allow the human brain to connect to computers or machines. At the same time, he announced that he would like the electrodes to be implanted with a laser in the future, as it is better suited than a mechanical drill for making holes in the skull. This crazy project of Elon Musk and his startup seems to be going well: Musk said on Twitter that Neuralink is working on an "awesome" new version of the company's signature device.
The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant's worth of electricity and racks of chips to learn. That's not to slander machine learning, but nature may have a tip or two to improve the situation. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket. The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors: chip components that can mimic their natural counterparts in the brain.
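Part of the efficiency claim comes from how a memristor crossbar computes: each device's conductance stores a synaptic weight, and applying input voltages to the rows yields output currents on the columns that physically sum to a vector-matrix product, in one analog step rather than many digital multiply-accumulates. The sketch below simulates this idea; the conductance range and voltage values are illustrative assumptions, not specs of any real chip.

```python
import numpy as np

# Hypothetical crossbar: rows = input voltage lines, columns = output current lines.
# Each memristor's conductance G[i, j] acts as a stored synaptic weight.
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances in siemens (assumed range)
v = np.array([0.2, 0.0, 0.1, 0.3])        # input voltages in volts

# Ohm's law gives I = V * G per device; Kirchhoff's current law sums each
# column, so the column currents are exactly the product v @ G.
i_out = v @ G
print(i_out.shape)  # (3,)
```

In a physical crossbar this product happens in parallel at the speed of the analog circuit, which is why such arrays are attractive for low-power inference.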
In the summer of 2009, the Israeli neuroscientist Henry Markram strode onto the TED stage in Oxford, England, and made an immodest proposal: Within a decade, he said, he and his colleagues would build a complete simulation of the human brain inside a supercomputer. They'd already spent years mapping the cells in the neocortex, the supposed seat of thought and perception. "It's a bit like going and cataloging a piece of the rain forest," Markram explained. "How many trees does it have? What shapes are the trees?"
Neuralink, the AI-brain-chip company spearheaded by Tesla and SpaceX CEO Elon Musk, could be ready to put a version of its implant in a person "within a year," Musk said when recently speaking on Joe Rogan's podcast. Neuralink, which was founded in 2016, is Musk's neural technology company that's developing an implant designed to interface directly with the human brain. The initial goal of the implant, says Musk, is to use it as a means to treat brain injury and trauma. "There's still a lot of work to do," Musk said when speaking with Rogan. "So when I say, you know, we've probably got a shot at putting it in a person, you know, within a year, I think that's exactly what I mean. I think we have a chance of putting it in a person, having them be healthy, and restoring some functionality that they've lost."
"We're taking subperceptual touch events and boosting them into conscious perception," says first author Patrick Ganzer, a principal research scientist at Battelle. "When we did this, we saw several functional improvements. It was a big eureka moment when we first restored the participant's sense of touch." The participant in this study is Ian Burkhart, a 28-year-old man who suffered a spinal cord injury during a diving accident in 2010. Since 2014, Burkhart has been working with investigators on a project called NeuroLife that aims to restore function to his right arm.
A paralysed man can play Guitar Hero again after having his sense of touch restored with a brain-computer interface (BCI) that provides sensory feedback. Ian Burkhart, 28, suffered a severe spinal cord injury during a diving accident in 2010, which caused him to lose his sense of touch. US researchers found that, although Burkhart had almost no sensation in his hand, when they stimulated his skin, a small neural signal still reached his brain. They have since used their BCI to restore sensation in his hand by rerouting these tiny signals from the brain to the muscle, bypassing his damaged spinal cord.
This article surveys engineering and neuroscientific models of planning as a cognitive function, which is regarded as a typical function of fluid intelligence in discussions of general intelligence. It aims to present existing planning models as references for realizing the planning function in brain-inspired AI or artificial general intelligence (AGI). It also proposes themes for the research and development of brain-inspired AI from the viewpoint of tasks and architecture.
Neuroscientific data analysis has traditionally relied on linear algebra and stochastic process theory. However, the tree-like shapes of neurons cannot be described easily as points in a vector space (the subtraction of two neuronal shapes is not a meaningful operation), and methods from computational topology are better suited to their analysis. Here we introduce methods from Discrete Morse (DM) Theory to extract the tree-skeletons of individual neurons from volumetric brain image data, and to summarize collections of neurons labelled by tracer injections. Since individual neurons are topologically trees, it is sensible to summarize the collection of neurons using a consensus tree-shape that provides a richer information summary than the traditional regional 'connectivity matrix' approach. The conceptually elegant DM approach lacks hand-tuned parameters and captures global properties of the data as opposed to previous approaches which are inherently local. For individual skeletonization of sparsely labelled neurons we obtain substantial performance gains over state-of-the-art non-topological methods (over 10% improvements in precision and faster proofreading). The consensus-tree summary of tracer injections incorporates the regional connectivity matrix information, but in addition captures the collective collateral branching patterns of the set of neurons connected to the injection site, and provides a bridge between single-neuron morphology and tracer-injection data.
A brain-computer interface (BCI) allows users to "communicate" with a computer without using their muscles. BCIs based on sensorimotor rhythms use imagined motor tasks, such as moving the right or left hand, to send control signals. The performance of a BCI can vary greatly across users but also depends on the tasks used, making appropriate task selection an important issue. This study presents a new procedure to automatically select, as quickly as possible, a discriminant motor task for a brain-controlled button. For this purpose, we develop an adaptive algorithm, UCB-classif, based on stochastic bandit theory.
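The bandit framing treats each candidate motor task as an "arm" whose reward is whether a trial is classified correctly; the details of UCB-classif are not given in this excerpt, so the sketch below uses the standard UCB1 rule as an illustration, with made-up per-task accuracy rates simulated as Bernoulli draws.

```python
import math
import random

def ucb1_select(n_trials, accuracies, seed=0):
    """Standard UCB1 over candidate motor tasks; reward = 1 if the trial
    is classified correctly (simulated here as a Bernoulli draw with the
    given hypothetical per-task accuracy)."""
    rng = random.Random(seed)
    k = len(accuracies)
    counts = [0] * k
    rewards = [0.0] * k
    for t in range(1, n_trials + 1):
        if t <= k:  # play each task once to initialize the estimates
            arm = t - 1
        else:       # pick the task with the highest upper confidence bound
            arm = max(range(k), key=lambda a: rewards[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        counts[arm] += 1
        rewards[arm] += rng.random() < accuracies[arm]
    # Return the task with the best empirical accuracy so far
    return max(range(k), key=lambda a: rewards[a] / counts[a])

# Hypothetical classification rates for three imagined tasks
# (e.g. left hand, right hand, feet)
best = ucb1_select(500, [0.55, 0.75, 0.60])
print(best)  # tends to settle on the most discriminant task
```

The point of the confidence-bound term is that poorly performing tasks are sampled less and less often, so the calibration session converges on a discriminant task faster than testing each task a fixed number of times.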