Mind-Controlled Robots: A Step Closer To Realization

#artificialintelligence

Survivors of severe brain or spinal cord injuries are often left with lifelong disabilities. A common consequence is permanent paralysis caused by damage to the nervous system. The most severe form is tetraplegia, in which people lose control of both their arms and legs. Researchers have spent years building devices that tetraplegic patients can control with their thoughts, letting them perform certain activities independently, and several institutions and organizations are now working on seamless mind-controlled robots for a range of tasks.


Neuroscience-inspired perception-action in robotics: applying active inference for state estimation, control and self-perception

arXiv.org Artificial Intelligence

Unlike robots, humans learn, adapt and perceive their bodies by interacting with the world. Discovering how the brain represents the body and generates actions is of major importance for robotics and artificial intelligence. Here we discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics. In particular, we discuss how active inference, a mathematical formulation of how the brain resists a natural tendency to disorder, provides a unified recipe that could potentially solve some of the major challenges in robotics, such as adaptation, robustness, flexibility, generalization and safe interaction. This paper summarizes experiments and lessons learned from developing such a computational model on real embodied platforms, i.e., humanoid and industrial robots. She wakes up, looks in the mirror and reflects, "Is this me?", then opens the tap and lets the water pour out: "Did I do it?"
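
A minimal sketch can make the idea concrete. Assuming a one-dimensional toy setup (the visual generative model g_v, the precisions pi_p and pi_v, and all constants below are illustrative, not the paper's model), active-inference state estimation reduces to descending the gradient of the free energy, i.e., of the precision-weighted prediction errors:

```python
import numpy as np

# Minimal sketch of active-inference state estimation (not the authors' code):
# a robot's joint angle mu is estimated by descending the free-energy gradient,
# i.e. precision-weighted prediction errors for proprioception and vision.
# g_v (the visual generative model) and all constants are illustrative.

def g_v(mu):
    """Hypothetical visual prediction: maps joint angle to a 1-D feature."""
    return np.sin(mu)

def dg_v(mu):
    return np.cos(mu)

def estimate(s_p, s_v, mu0=0.0, lr=0.05, steps=200, pi_p=1.0, pi_v=1.0):
    """Infer the joint angle that best explains proprioceptive (s_p)
    and visual (s_v) observations by free-energy gradient descent."""
    mu = mu0
    for _ in range(steps):
        e_p = s_p - mu          # proprioceptive prediction error
        e_v = s_v - g_v(mu)     # visual prediction error
        dF = -(pi_p * e_p + pi_v * e_v * dg_v(mu))  # dF/dmu
        mu -= lr * dF           # descend the free energy
    return mu

true_angle = 0.8
mu_hat = estimate(s_p=true_angle + 0.1, s_v=np.sin(true_angle))
print(f"estimated angle: {mu_hat:.3f}")
```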


Hippocampal formation-inspired probabilistic generative model

arXiv.org Artificial Intelligence

We constructed a hippocampal formation (HPF)-inspired probabilistic generative model (HPF-PGM) using the structure-constrained interface decomposition method. By modeling brain regions with PGMs, this model is positioned as a module that can be integrated into a whole-brain PGM. We discuss the relationship between simultaneous localization and mapping (SLAM) in robotics and findings on the HPF in neuroscience, and we survey models of the HPF and various computational approaches, including brain-inspired SLAM, spatial concept formation, and deep generative models. In contrast to typical conventional SLAM models, the HPF-PGM is highly consistent with the anatomical structure and functions of the HPF. By referencing the brain, we highlight the importance of integrating egocentric and allocentric information from the entorhinal cortex to the hippocampus, and of using discrete-event queues.
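
The generative-model view of localization can be illustrated with a generic sketch (this is not the HPF-PGM itself; the grid, motion model, and observation model below are hypothetical): a belief over discrete positions is propagated by a motion prior and corrected by an observation likelihood, the same predict/update structure shared by SLAM-style filters:

```python
import numpy as np

# A generic sketch of probabilistic localization as inference in a generative
# model (illustrative only; not the paper's HPF-PGM). The robot tracks a
# belief over discrete grid positions and fuses a motion model (prediction)
# with an observation model (correction).

N = 10                                  # grid cells
belief = np.full(N, 1.0 / N)            # uniform prior over position

def predict(belief, move=1, noise=0.1):
    """Motion update: shift the belief, leaking some mass to all cells."""
    shifted = np.roll(belief, move)
    return (1 - noise) * shifted + noise / N

def update(belief, obs, landmark_map, p_hit=0.8, p_miss=0.2):
    """Measurement update: reweight by the likelihood of the observation."""
    likelihood = np.where(landmark_map == obs, p_hit, p_miss)
    posterior = belief * likelihood
    return posterior / posterior.sum()

landmarks = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # toy map
for obs in [1, 0, 0, 1]:                # sensed landmark present/absent
    belief = predict(belief)
    belief = update(belief, obs, landmarks)
print("most likely cell:", int(belief.argmax()))
```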


End-to-End Pixel-Based Deep Active Inference for Body Perception and Action

arXiv.org Artificial Intelligence

We present a pixel-based deep active inference algorithm (PixelAI), inspired by human body perception and validated on robot body perception and action as a use case. Our algorithm combines the free energy principle from neuroscience, rooted in variational inference, with deep convolutional decoders, scaling the approach to deal directly with image inputs and to provide online adaptive inference. The approach enables a robot to perform 1) dynamic estimation of its arm configuration using only raw monocular camera images, and 2) autonomous reaching toward "imagined" arm poses in visual space. We statistically analyzed the algorithm's performance on a simulated and a real Nao robot. Results show how the same algorithm handles both perception and action, modelled as an inference optimization problem.
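
As a rough illustration of the perceptual-inference step (not the authors' implementation; a fixed random linear map stands in for the deep convolutional decoder), the latent body state is updated by backpropagating the pixel prediction error through the decoder:

```python
import numpy as np

# Minimal sketch of PixelAI-style perceptual inference (illustrative, not the
# authors' implementation): a decoder g maps a latent body state mu to a
# predicted image; perception updates mu by descending the pixel-wise
# prediction error backpropagated through the decoder. Here g is a fixed
# random linear map standing in for the deep convolutional decoder.

rng = np.random.default_rng(0)
latent_dim, n_pixels = 4, 64
W = rng.normal(size=(n_pixels, latent_dim))   # stand-in "decoder"

def g(mu):
    return W @ mu                              # predicted image

true_state = rng.normal(size=latent_dim)
observed = g(true_state) + 0.01 * rng.normal(size=n_pixels)

mu = np.zeros(latent_dim)                      # initial belief
for _ in range(500):
    error = observed - g(mu)                   # pixel prediction error
    mu += 0.001 * W.T @ error                  # gradient step on the error
print("state error:", np.linalg.norm(mu - true_state))
```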


Questions to Guide the Future of Artificial Intelligence Research

arXiv.org Artificial Intelligence

The field of machine learning has focused primarily on discretized sub-problems of intelligence (e.g., vision, speech, natural language), while neuroscience tends to be observation-heavy and provides few guiding theories. It is unlikely that artificial intelligence will emerge through only one of these disciplines; instead, it is likely to be some amalgamation of their algorithmic and observational findings. As a result, a number of problems should be addressed in order to select the beneficial aspects of both fields. In this article, we propose leading questions to guide the future of artificial intelligence research. There are clear computational principles on which the brain operates; the problem is finding these computational needles in a haystack of biological complexity. Biology has clear constraints, but by not using it as a guide we constrain ourselves.


A neural network, connected to a human brain, could mean more advanced prosthetics

#artificialintelligence

In the future, some researchers hope, people who lose the use of limbs will be able to control robotic prostheses using brain-computer interfaces -- as Luke Skywalker did effortlessly in "Star Wars." The problem is that brain signals are tricky to decode, meaning that existing brain-computer interfaces that control robotic limbs are often slow or clumsy. But that could be changing. Last week, a team of doctors and neuroscientists published a paper in the journal Nature Medicine describing a brain-computer interface that uses a neural network to decode brain signals into precise movements of a lifelike, mind-controlled robotic arm. The researchers took data from a 27-year-old quadriplegic man who had an array of microelectrodes implanted in his brain and fed it into a series of neural nets: artificial intelligence systems, loosely modeled on the brain's circuitry, that excel at finding patterns in large sets of information.
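
To make the decoding idea concrete, here is a toy sketch (entirely synthetic data and illustrative sizes; not the model from the Nature Medicine paper): a small neural network learns to map binned spike counts from simulated microelectrode channels to intended 2-D velocities:

```python
import numpy as np

# Toy sketch of the decoding idea (not the published model): a small neural
# network maps binned spike counts from implanted microelectrodes to intended
# 2-D velocities. Data here are synthetic; all sizes are illustrative.

rng = np.random.default_rng(1)
n_channels, hidden, T = 96, 32, 2000

# Synthetic "recordings": spike counts correlated with a hidden velocity.
true_vel = rng.normal(size=(T, 2))
mixing = rng.normal(size=(2, n_channels))
spikes = np.maximum(true_vel @ mixing + rng.normal(size=(T, n_channels)), 0)

W1 = rng.normal(scale=0.1, size=(n_channels, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, 2))

for epoch in range(300):                     # plain batch gradient descent
    h = np.tanh(spikes @ W1)                 # hidden layer
    pred = h @ W2                            # decoded velocity
    err = pred - true_vel
    dW2 = h.T @ err / T
    dh = err @ W2.T * (1 - h**2)             # backprop through tanh
    dW1 = spikes.T @ dh / T
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2

mse = np.mean((np.tanh(spikes @ W1) @ W2 - true_vel) ** 2)
print(f"training MSE: {mse:.3f}")
```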


Neuroprosthetic decoder training as imitation learning

arXiv.org Machine Learning

Neuroprosthetic brain-computer interfaces function via an algorithm which decodes neural activity of the user into movements of an end effector, such as a cursor or robotic arm. In practice, the decoder is often learned by updating its parameters while the user performs a task. When the user's intention is not directly observable, recent methods have demonstrated value in training the decoder against a surrogate for the user's intended movement. We describe how training a decoder in this way is a novel variant of an imitation learning problem, where an oracle or expert is employed for supervised training in lieu of direct observations, which are not available. Specifically, we describe how a generic imitation learning meta-algorithm, dataset aggregation (DAgger, [1]), can be adapted to train a generic brain-computer interface. By deriving existing learning algorithms for brain-computer interfaces in this framework, we provide a novel analysis of regret (an important metric of learning efficacy) for brain-computer interfaces. This analysis allows us to characterize the space of algorithmic variants and bounds on their regret rates. Existing approaches for decoder learning have been performed in the cursor control setting, but the available design principles for these decoders are such that it has been impossible to scale them to naturalistic settings. Leveraging our findings, we then offer an algorithm that combines imitation learning with optimal control, which should allow for training of arbitrary effectors for which optimal control can generate goal-oriented control. We demonstrate this novel and general BCI algorithm with simulated neuroprosthetic control of a 26 degree-of-freedom model of an arm, a sophisticated and realistic end effector.
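
The DAgger adaptation can be sketched as follows (an illustrative toy, not the paper's exact algorithm): the current decoder drives the effector, an oracle supplies surrogate intention labels (here, the unit vector toward the target) at the states the decoder actually visits, and the decoder is refit on the aggregated dataset:

```python
import numpy as np

# Sketch of DAgger-style decoder training for a BCI (illustrative; follows
# the meta-algorithm, not the paper's implementation). Neural features are
# simulated; the "oracle" is a surrogate for user intent: a unit vector
# from the current cursor position toward the target.

rng = np.random.default_rng(2)
n_units = 40
enc = rng.normal(size=(2, n_units))          # hidden neural encoding of intent

def neural_features(intent):
    return intent @ enc + 0.3 * rng.normal(size=n_units)

def oracle(pos, target):
    d = target - pos
    return d / (np.linalg.norm(d) + 1e-9)    # intended direction

W = np.zeros((n_units, 2))                   # linear decoder, D(x) = x @ W
X, Y = [], []                                # aggregated dataset

for iteration in range(5):                   # DAgger outer loop
    pos, target = np.zeros(2), rng.uniform(-1, 1, 2)
    for t in range(50):                      # roll out the CURRENT decoder
        intent = oracle(pos, target)
        x = neural_features(intent)
        X.append(x)                          # oracle labels visited states
        Y.append(intent)
        pos += 0.05 * (x @ W)                # decoder, not oracle, drives
    A, B = np.array(X), np.array(Y)
    W = np.linalg.lstsq(A, B, rcond=None)[0] # refit on aggregated data

resid = np.mean((np.array(X) @ W - np.array(Y)) ** 2)
print(f"decoder fit residual: {resid:.3f}")
```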


Neural Decoding of Cursor Motion Using a Kalman Filter

Neural Information Processing Systems

The direct neural control of external devices such as computer displays or prosthetic limbs requires the accurate decoding of neural activity representing continuous movement. We develop a real-time control system using the spiking activity of approximately 40 neurons recorded with an electrode array implanted in the arm area of primary motor cortex. In contrast to previous work, we develop a control-theoretic approach that explicitly models the motion of the hand and the probabilistic relationship between this motion and the mean firing rates of the cells in 70ms bins. We focus on a realistic cursor control task in which the subject must move a cursor to "hit" randomly placed targets on a computer monitor. Encoding and decoding of the neural data is achieved with a Kalman filter, which has a number of advantages over previous linear filtering techniques. In particular, the Kalman filter reconstructions of hand trajectories in off-line experiments are more accurate than previously reported results, and the model provides insights into the nature of the neural coding of movement.
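
A compact sketch of the decoding recursion (with simulated firing rates and illustrative parameters, not the paper's fitted models) shows the structure: hand kinematics form the state, binned firing rates are linear-Gaussian observations, and decoding is the standard Kalman predict/update:

```python
import numpy as np

# Minimal sketch of Kalman-filter neural decoding in the spirit of the paper
# (illustrative parameters, simulated firing rates). State x holds hand
# position and velocity; observations z are binned firing rates modeled as
# z = H x + noise. Decoding runs the standard predict/update recursion.

rng = np.random.default_rng(3)
n_cells, T, dt = 40, 300, 0.07               # ~40 neurons, 70ms bins

A = np.eye(4)                                # state: [px, py, vx, vy]
A[0, 2] = A[1, 3] = dt                       # constant-velocity dynamics
Q = 0.01 * np.eye(4)                         # process noise
H = rng.normal(size=(n_cells, 4))            # tuning (observation) matrix
R = 0.5 * np.eye(n_cells)                    # firing-rate noise

# Simulate a hand trajectory and the corresponding firing rates.
x_true = np.zeros(4)
xs, zs = [], []
for _ in range(T):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(4), Q)
    zs.append(H @ x_true + rng.multivariate_normal(np.zeros(n_cells), R))
    xs.append(x_true)

# Decode: Kalman predict/update per bin.
x, P = np.zeros(4), np.eye(4)
decoded = []
for z in zs:
    x, P = A @ x, A @ P @ A.T + Q                       # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    x = x + K @ (z - H @ x)                             # update
    P = (np.eye(4) - K @ H) @ P
    decoded.append(x)

err = np.mean(np.linalg.norm(np.array(decoded) - np.array(xs), axis=1))
print(f"mean state error: {err:.3f}")
```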