Neuroscience


From internal models toward metacognitive AI

arXiv.org Artificial Intelligence

In several papers published in Biological Cybernetics in the 1980s and 1990s, Kawato and colleagues proposed computational models explaining how internal models are acquired in the cerebellum. These models were later supported by neurophysiological experiments in monkeys and neuroimaging experiments in humans. These early studies influenced neuroscience from basic sensory-motor control to higher cognitive functions. One of the most perplexing enigmas related to internal models is understanding the neural mechanisms that enable animals to learn high-dimensional problems from so few trials. Consciousness and metacognition, the ability to monitor one's own thoughts, may be part of the solution to this enigma. Based on literature reviews of the past 20 years, here we propose a computational neuroscience model of metacognition. The model comprises a modular, hierarchical reinforcement-learning architecture of parallel and layered generative-inverse model pairs. In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" (CRMN) orchestrates conscious involvement of generative-inverse model pairs in perception and action. Based on mismatches between computations by generative and inverse models, as well as on reward prediction errors, the CRMN computes a "responsibility signal" that gates the selection and learning of pairs in perception, action, and reinforcement learning. A high responsibility signal is given to the pairs that best capture the external world, that are competent in movements (small mismatch), and that are capable of reinforcement learning (small reward prediction error). The CRMN selects pairs with higher responsibility signals as objects of metacognition, and consciousness is determined by the entropy of responsibility signals across all pairs.
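A minimal sketch of how such a responsibility signal and its entropy could be computed, assuming a softmax over per-pair error terms; the equal weighting of the two error terms, the temperature, and the function names are illustrative assumptions rather than the authors' implementation:

# Illustrative sketch, not the authors' code: responsibility signals as a
# softmax over per-pair error terms, plus the entropy across all pairs.
import numpy as np

def responsibility_signals(model_mismatch, reward_pred_error, temperature=1.0):
    """Softmax over negative error: pairs with small generative-inverse
    mismatch and small reward prediction error receive high responsibility."""
    cost = np.asarray(model_mismatch) + np.asarray(reward_pred_error)  # assumed equal weighting
    logits = -cost / temperature
    logits -= logits.max()                      # numerical stability
    r = np.exp(logits)
    return r / r.sum()

def responsibility_entropy(r, eps=1e-12):
    """Entropy across all pairs; low entropy means one pair clearly dominates."""
    r = np.clip(r, eps, 1.0)
    return -np.sum(r * np.log(r))

# Example with three generative-inverse model pairs
mismatch = [0.2, 1.5, 0.9]   # disagreement between generative and inverse models
rpe      = [0.1, 0.8, 0.4]   # reward prediction errors
r = responsibility_signals(mismatch, rpe)
print(r, responsibility_entropy(r))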



Identifying partial mouse brain microscopy images from Allen reference atlas using a contrastively learned semantic space

arXiv.org Machine Learning

Precise identification of mouse brain microscopy images is a crucial first step when anatomical structures in the mouse brain are to be registered to a reference atlas. Practitioners usually rely on manual comparison of images or tools that assume the presence of complete images. This work explores Siamese networks as the method for finding corresponding 2D reference atlas plates for given partial 2D mouse brain images. Siamese networks are a class of convolutional neural networks (CNNs) that use weight-shared paths to obtain low-dimensional embeddings of pairs of input images. The correspondence between the partial mouse brain image and the reference atlas plate is determined based on the distance between low-dimensional embeddings of brain slices and atlas plates that are obtained from Siamese networks using contrastive learning. Experiments showed that Siamese CNNs can precisely identify brain slices using the Allen mouse brain atlas when training and testing images come from the same source. They achieved top-1 and top-5 accuracy of 25% and 100%, respectively, taking only 7.2 seconds to identify 29 images.
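A hedged sketch of the general recipe described above, not the paper's code: a weight-shared CNN embeds both brain slices and atlas plates, a contrastive loss pulls matching pairs together and pushes non-matching pairs apart, and the corresponding plate is retrieved by nearest embedding distance. The architecture, margin, and embedding size are illustrative assumptions, written here with PyTorch.

# Weight-shared (Siamese) embedding network with contrastive learning,
# followed by nearest-neighbor retrieval of the atlas plate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    def __init__(self, dim=64):                 # embedding size is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, dim)

    def forward(self, x):
        z = self.head(self.features(x).flatten(1))
        return F.normalize(z, dim=1)             # unit-length embeddings

def contrastive_loss(z1, z2, same, margin=1.0):
    """same = 1 for matching slice/plate pairs, 0 otherwise."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

def identify(slice_img, atlas_plates, net, k=5):
    """Return indices of the k atlas plates closest to a single brain slice."""
    with torch.no_grad():
        zs = net(slice_img.unsqueeze(0))          # (1, dim)
        za = net(atlas_plates)                    # (n_plates, dim)
        dist = torch.cdist(zs, za).squeeze(0)     # distances to every plate
    return torch.topk(dist, k, largest=False).indices

Both branches of the Siamese pair run through the same EmbeddingNet instance, which is what gives the weight sharing; at test time only the retrieval step in identify is needed.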


'Neurograins' Could be the Next Brain-Computer Interfaces

WIRED

A team at Brown University has developed a system that uses dozens of silicon microchips to record and transmit brain activity to a computer. Dubbed "neurograins," the chips--each about the size of a grain of salt--are designed to be sprinkled across the brain's surface or throughout its tissue to collect neural signals from more areas than currently possible with other brain implants. "Each grain has enough micro-electronics stuffed into it so that, when embedded in neural tissue, it can listen to neuronal activity on the one hand, and then can also transmit it as a tiny little radio to the outside world," says lead author Arto Nurmikko, a neuroengineer at Brown who led the development of the neurograins. The system, known as a brain-computer interface, is described in a paper published August 12 in Nature Electronics. Alongside other Brown researchers, as well as collaborators from Baylor University, the University of California at San Diego, and Qualcomm, Nurmikko began working on the neurograins four years ago with initial funding from the Defense Advanced Research Projects Agency.


Silicon Valley's race to develop a brain-computer interface

#artificialintelligence

Entrepreneur Bryan Johnson says he wanted to become very rich in order to do something great for humankind. Last year Johnson, founder of the online payments company Braintree, started making news when he threw $100 million behind Kernel, a startup he founded to enhance human intelligence by developing brain implants capable of linking people's thoughts to computers. Johnson isn't alone in believing that "neurotechnology" could be the next big thing. To many in Silicon Valley, the brain looks like an unconquered frontier whose importance dwarfs any achievement made in computing or the Web. According to neuroscientists, several figures from the tech sector are currently scouring labs across the U.S. for technology that might fuse human and artificial intelligence.


Mercedes unveils wearable brain-computer interface for Avatar-inspired autonomous concept car

Daily Mail - Science & tech

Mercedes-Benz's latest autonomous concept car, the Vision AVTR, boasts a 'brain-computer interface' that allows passengers to control various features with their thoughts. To operate it, users focus on lights on the digital dashboard, and the car's artificial intelligence recognizes their choice and begins a preset function. This could include changing a radio station, opening a window, answering a phone call, or, eventually, sending the car on a predetermined route. 'The BCI device measures the neuronal activity at the cortex in real-time. It analyzes the measured brainwaves and recognizes on which light points the user directs [their] focus and full attention,' Mercedes said in a release.


Mercedes-Benz has a car that can read your mind

Mashable

Are you annoyed by constantly going through the menus on your car's touchscreen? Mercedes-Benz has a very futuristic solution. On Monday, at the IAA Mobility 2021 show in Munich, Germany, the company displayed the next iteration of its Vision AVTR concept car, first shown at CES 2020. According to Mercedes-Benz, the car now has tech that lets you perform certain tasks just by thinking about them. It's based on visual perception -- light dots are projected on the car's digital dashboard, and a BCI (brain-computer interface) device with wearable electrodes is attached to the back of the user's head. After a short calibration period, the device can record and measure brain activity, so when the user focuses on a specific light on the dashboard, the device can detect that and perform a certain task.
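Neither article says which BCI paradigm Mercedes uses; one common way to detect "which light is the user focusing on" is SSVEP-style frequency tagging, in which each light flickers at its own frequency and the dominant frequency in the recorded signal selects the target. The sketch below is purely illustrative under that assumption; the frequencies, sampling rate, and action names are invented.

# Illustrative SSVEP-style focus detection, assumed rather than confirmed:
# each dashboard light flickers at a distinct frequency, and the frequency
# with the most spectral power in the signal window picks the car action.
import numpy as np

FS = 250                                   # sampling rate in Hz (assumed)
LIGHTS = {8.0: "change_radio_station",     # flicker frequency -> car action
          10.0: "open_window",
          12.0: "answer_phone_call"}

def detect_action(signal_window):
    """Pick the flicker frequency with the most spectral power."""
    spectrum = np.abs(np.fft.rfft(signal_window))
    freqs = np.fft.rfftfreq(len(signal_window), d=1.0 / FS)
    powers = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in LIGHTS}
    return LIGHTS[max(powers, key=powers.get)]

# Example: a synthetic 2-second window dominated by a 10 Hz component
t = np.arange(2 * FS) / FS
signal = np.sin(2 * np.pi * 10.0 * t) + 0.3 * np.random.randn(t.size)
print(detect_action(signal))               # expected: "open_window"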


Meta-brain Models: biologically-inspired cognitive agents

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) systems based solely on neural networks or symbolic computation present a representational complexity challenge. While minimal representations can produce behavioral outputs like locomotion or simple decision-making, more elaborate internal representations might offer a richer variety of behaviors. We propose that these issues can be addressed with a computational approach we call meta-brain models. Meta-brain models are embodied hybrid models that include layered components featuring varying degrees of representational complexity. We will propose combinations of layers composed using specialized types of models. Rather than using a generic black-box approach to unify each component, this relationship mimics systems such as the neocortical-thalamic relationship in the mammalian brain, which uses both feedforward and feedback connectivity to facilitate functional communication. Importantly, the relationship between layers can be made anatomically explicit. This allows for structural specificity that can be incorporated into the model's function in interesting ways. We will propose several types of layers that might be functionally integrated into agents that perform unique types of tasks, from agents that simultaneously perform morphogenesis and perception, to agents that undergo morphogenesis and the acquisition of conceptual representations simultaneously. Our approach to meta-brain models involves creating models with different degrees of representational complexity, creating a layered meta-architecture that mimics the structural and functional heterogeneity of biological brains, and an input/output methodology flexible enough to accommodate cognitive functions, social interactions, and adaptive behaviors more generally. We will conclude by proposing next steps in the development of this flexible and open-source approach.
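A minimal sketch of the layered idea under stated assumptions, not the authors' specification: a fast reactive layer maps sensations plus feedback to actions, a slower generative layer maintains a prediction of sensations, and the generative layer's prediction error is fed back to bias the reactive layer, loosely echoing the feedforward/feedback coupling mentioned above. The layer contents, class names, and feedback gain are invented for illustration.

# Two layers of different representational complexity coupled by
# feedforward (sensation up) and feedback (prediction error down) signals.
from dataclasses import dataclass
import numpy as np

@dataclass
class ReactiveLayer:
    weights: np.ndarray
    def act(self, sensation, feedback):
        return np.tanh(self.weights @ (sensation + feedback))

@dataclass
class GenerativeLayer:
    prediction: np.ndarray
    rate: float = 0.1
    def update(self, sensation):
        error = sensation - self.prediction       # prediction error
        self.prediction += self.rate * error      # slow internal model update
        return error

@dataclass
class MetaBrainAgent:
    reactive: ReactiveLayer
    generative: GenerativeLayer
    feedback_gain: float = 0.5
    def step(self, sensation):
        error = self.generative.update(sensation)  # feedforward to top layer
        feedback = self.feedback_gain * error      # feedback to bottom layer
        return self.reactive.act(sensation, feedback)

agent = MetaBrainAgent(ReactiveLayer(np.eye(3)), GenerativeLayer(np.zeros(3)))
print(agent.step(np.array([0.2, -0.1, 0.4])))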


How big science failed to unlock the mysteries of the human brain

MIT Technology Review

In fact, a few years earlier, Henry Markram, a neuroscientist at the École Polytechnique Fédérale de Lausanne in Switzerland, had set an even loftier goal: to make a computer simulation of a living human brain. Markram wanted to build a fully digital, three-dimensional model at the resolution of the individual cell, tracing all of those cells' many connections. "We can do it within 10 years," he boasted during a 2009 TED talk. In January 2013, a few months before the American project was announced, the EU awarded Markram $1.3 billion to build his brain model. The US and EU projects sparked similar large-scale research efforts in countries including Japan, Australia, Canada, China, South Korea, and Israel.


Google AI and Harvard University map a portion of the cerebral cortex in three dimensions - Actu IA

#artificialintelligence

At the beginning of June, Google announced, in collaboration with Harvard University's Lichtman laboratory, a three-dimensional map of part of the human cerebral cortex. In parallel, a 1.4-petabyte (1 petabyte = 10¹⁵ bytes) dataset was released: it includes imaging data covering about 1 mm³ of brain tissue as well as tens of thousands of reconstructed neurons. This resource should help advance studies of neural networks in the human brain, which are currently difficult to carry out. Thanks to the collaboration between Harvard University's Lichtman Laboratory and Google AI, researchers have successfully mapped a portion of the human cerebral cortex in 3D. In January 2020, Google had already released a database providing information about the morphological structure and synaptic connectivity of half of a fly's brain. Building on that research on the fly brain, the teams then turned to the human brain.