Spiking neurons can discover predictive features by aggregate-label learning

To implement aggregate-label learning, I calculated how neurons should modify their synaptic efficacies in order to most effectively adjust their number of output spikes. Because a neuron's discrete number of spikes does not provide a direction of gradual improvement, I derived the multi-spike tempotron learning rule in an abstract space of continuous spike threshold variables. In this space, changes in synaptic efficacies are directed along the steepest path, reducing the discrepancy between a neuron's fixed biological spike threshold and the closest hypothetical threshold at which the neuron would fire a desired number of spikes. With the resulting synaptic learning rule, aggregate-label learning enabled simple neuron models to solve the temporal credit assignment problem. Neurons reliably identified all clues whose occurrences contributed to a delayed feedback signal.