Equivalence of approximation by networks of single- and multi-spike neurons
Dold, Dominik, Petersen, Philipp Christian
In a spiking neural network, is it enough for each neuron to spike at most once? In recent work, approximation bounds for spiking neural networks have been derived, quantifying how well they can fit target functions. However, these results are only valid for neurons that spike at most once, which is commonly thought to be a strong limitation. Here, we show that the opposite is true for a large class of spiking neuron models, including the commonly used leaky integrate-and-fire model with subtractive reset: for every approximation bound that is valid for a set of multi-spike neural networks, there is an equivalent set of single-spike neural networks, with only linearly more neurons (in the maximum number of spikes), for which the bound holds. The same holds in the reverse direction, showing that, with respect to their approximation capabilities in general machine learning tasks, single-spike and multi-spike neural networks are equivalent. Consequently, many approximation results in the literature for single-spike neural networks also hold for the multi-spike case.
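The abstract's central object, the leaky integrate-and-fire neuron with subtractive reset, can emit several spikes in response to sustained input. A minimal discrete-time sketch (all parameter values here are illustrative assumptions, not taken from the paper) shows why the multi-spike regime arises: after each spike the threshold is subtracted from the membrane potential rather than the potential being clamped to rest, so residual charge carries over.

```python
def lif_subtractive_reset(input_current, tau=20.0, threshold=1.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron with subtractive reset.

    Forward-Euler discretisation of dv/dt = -v/tau + I(t). On each spike
    the membrane potential is reduced by the threshold instead of being
    reset to zero, so a strong input drives repeated spikes (the
    multi-spike regime discussed in the abstract).
    """
    v = 0.0
    spike_times = []
    for t, i_t in enumerate(input_current):
        # Leaky integration step.
        v += dt * (-v / tau + i_t)
        if v >= threshold:
            spike_times.append(t * dt)
            v -= threshold  # subtractive reset
    return spike_times

# A constant supra-threshold input yields a regular multi-spike train.
spikes = lif_subtractive_reset([0.2] * 100)
```

Under the paper's result, the function computed by such a multi-spike neuron could equivalently be realised by a set of single-spike neurons whose size grows only linearly in the maximum number of spikes.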
c4e5f4de1b3cfc838eec6484d0b85378-Supplemental-Conference.pdf
During training, RNN-based learning algorithms are forced to unfold through the time axis, as explained in Figure 1 and Section 2. As a result, the corresponding number of operations is at least O(TMN), where T is the total number of time steps and M, N are the numbers of input and output neurons. On the other hand, event-based learning algorithms only have to deal with cases where a certain neuron fires a spike, and record the relevant information.
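The asymmetry in operation counts can be made concrete with a back-of-the-envelope sketch (the sizes and spike count below are illustrative assumptions, chosen only to show the scaling, not figures from the paper):

```python
def unrolled_ops(T, M, N):
    # Unrolling through time touches every input-output weight
    # at every time step: at least T * M * N operations.
    return T * M * N

def event_based_ops(spike_count, N):
    # An event-driven rule only does work when a presynaptic neuron
    # actually fires; each spike fans out to its N output targets.
    return spike_count * N

T, M, N = 1000, 100, 100
sparse_spikes = 500  # assumed: 0.5% of the T*M possible firing events

dense = unrolled_ops(T, M, N)          # 10_000_000
sparse = event_based_ops(sparse_spikes, N)  # 50_000
```

For sparse spike trains the event-based count is smaller by roughly the spike-sparsity factor, which is the advantage the paragraph above is pointing at.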