Continual One-Shot Learning of Hidden Spike-Patterns with Neural Network Simulation Expansion and STDP Convergence Predictions

This paper presents a constructive algorithm that achieves successful one-shot learning of hidden spike-patterns in a competitive detection task. It has previously been shown (Masquelier et al., 2008) that spike-timing-dependent plasticity (STDP) and lateral inhibition can result in neurons competitively tuned to repeating spike-patterns concealed in high rates of overall presynaptic activity. One-shot construction of neurons with synapse weights calculated as estimates of converged STDP outcomes results in immediate selective detection of hidden spike-patterns. The capability of continual learning is demonstrated through the successful one-shot detection of new sets of spike-patterns introduced after long intervals in the simulation time. Simulation expansion (Lightheart et al., 2013) has been proposed as an approach to the development of constructive algorithms that are compatible with simulations of biological neural networks. A simulation of a biological neural network may have orders of magnitude fewer neurons and connections than the related biological neural systems; therefore, simulated neural networks can be assumed to be a subset of a larger neural system. The constructive algorithm is developed using simulation expansion concepts to perform an operation equivalent to the exchange of neurons between the simulation and the larger hypothetical neural system. The dynamic selection of neurons to simulate within a larger neural system (hypothetical or stored in memory) may be a starting point for a wide range of developments and applications in machine learning and the simulation of biology.
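The abstract builds on pair-based STDP, in which a synapse is strengthened when a presynaptic spike shortly precedes a postsynaptic spike and weakened when the order is reversed. As a minimal sketch of that rule (the parameter names and values below are illustrative assumptions, not taken from the paper):

```python
import math

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change.

    w  : current synaptic weight in [0, 1]
    dt : t_post - t_pre in ms (dt > 0 means pre fired before post)

    Potentiation decays exponentially with the pre-before-post delay;
    depression decays with the post-before-pre delay.
    """
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)
    else:
        w -= a_minus * math.exp(dt / tau)
    return min(1.0, max(0.0, w))  # clip to valid weight range
```

Repeated presentations of a hidden spike-pattern drive weights of pattern-correlated synapses toward the upper bound and others toward zero; the constructive algorithm in the paper short-circuits this by estimating the converged weights in one shot.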

Late to the party


Throughout life, new neurons are added to the brain. Just like people arriving late to a cocktail party, the newcomers need to figure out how to integrate with those already embroiled in conversations. The zebrafish brain, already capable of complex visual processing at larval stages, accepts new neurons throughout the fish's life span. Boulanger-Weill et al. tracked the location, movement, and functional integration of single newborn neurons in developing zebrafish larvae. Following their own developmental trajectories, newborn neurons began with limited dendritic arbors, no neurotransmitter identity, and spontaneous, but not directed, activity.

World first as artificial neurons developed to cure chronic diseases


Bioelectronic medicine is driving the need for neuromorphic microcircuits that integrate raw nervous stimuli and respond identically to biological neurons. However, designing such circuits remains a challenge. Here we estimate the parameters of highly nonlinear conductance models and derive the ab initio equations of intracellular currents and membrane voltages embodied in analog solid-state electronics. By configuring individual ion channels of solid-state neurons with parameters estimated from large-scale assimilation of electrophysiological recordings, we successfully transfer the complete dynamics of hippocampal and respiratory neurons in silico. The solid-state neurons are found to respond nearly identically to biological neurons under stimulation by a wide range of current injection protocols.
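The study described above fits conductance-model parameters so that equations for membrane voltage can be run in analog hardware. A much simpler model family illustrates the underlying idea of integrating a membrane-voltage equation under a current-injection protocol; this leaky integrate-and-fire sketch (all parameter values are illustrative assumptions, not the paper's fitted hippocampal or respiratory models):

```python
def simulate_lif(i_inj, dt=0.1, tau_m=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire membrane driven by an injected current.

    i_inj : sequence of injected current values (nA), one per time step
    dt    : time step (ms)

    Integrates dV/dt = (v_rest - V + r_m * I) / tau_m with forward Euler;
    when V crosses v_thresh, a spike is recorded and V resets.
    Returns the voltage trace and the spike time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for k, i in enumerate(i_inj):
        v += dt * (v_rest - v + r_m * i) / tau_m
        if v >= v_thresh:
            spikes.append(k)
            v = v_reset
        trace.append(v)
    return trace, spikes
```

Biologically detailed conductance models replace the single leak term with a sum of voltage-gated ionic currents, which is what the solid-state neurons implement channel by channel.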

Every single neuron in an animal mapped out for the first time

New Scientist

A COMPLETE map of all the neurons and their connections in both sexes of an animal has been described for the first time. This "connectome" will not only help us understand how neurons work, but could also improve our understanding of human mental-health problems. The tiny soil-dwelling nematode worm Caenorhabditis elegans has long been used for research because it has so few neurons.

Machine Learning and Deep Learning


The idea of an artificial neuron is rather vague. In a CPU, a logic device is created using transistors; if the neural network were 'fixed' or 'hard-wired', this might also be possible, but generally the point of using a neural network is its 'learning capability'. This means that a single neuron's response to its inputs must be able to change as it learns. This is achieved through the 'weights' of a neuron, which place more or less emphasis on different inputs to generate the desired output. This is more easily achieved in software than hardware, so a neuron is usually a mathematical function relating inputs to outputs.
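The mathematical function described above is typically a weighted sum of the inputs passed through a nonlinearity. A minimal sketch (the sigmoid choice and parameter names are illustrative, not tied to any particular framework):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    squashed through a sigmoid nonlinearity into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Learning then amounts to adjusting `weights` and `bias` so the output moves toward the desired value; a neuron that should ignore an input simply drives that input's weight toward zero.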