Meilijson, Isaac
Splitting matters: how monotone transformation of predictor variables may improve the predictions of decision tree models
Galili, Tal, Meilijson, Isaac
It is widely believed that the prediction accuracy of decision tree models is invariant under any strictly monotone transformation of the individual predictor variables. However, this statement may be false when predicting new observations whose values were not seen in the training set and lie close to the location of the split point of a tree rule. The sensitivity of the prediction error to the split point interpolation is high when the split point is estimated from very few observations, reaching 9% misclassification error when only 10 observations are used to construct a split, and shrinking to 1% when relying on 100 observations. This study compares the performance of alternative methods for split point interpolation and concludes that the best choice is to take the midpoint between the two training points closest to the split point of the tree. Furthermore, if the (continuous) distribution of the predictor variable is known, then using its probability integral to transform the variable ("quantile transformation") reduces the model's interpolation error by up to about a half on average. Accordingly, this study provides guidelines for both developers and users of decision tree models (including bagging and random forest).
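For illustration only (a minimal sketch, not code from the paper; the function names and the use of scipy.stats.norm as the known CDF are assumptions): the midpoint rule places a learned split halfway between the two training values that straddle it, and the quantile transformation passes a predictor through its known continuous CDF before the tree is grown.

import numpy as np
from scipy.stats import norm

def midpoint_split(x_train, raw_split):
    """Interpolate a tree split as the midpoint of the two training
    values closest to it on either side (assumed rule for this sketch)."""
    x = np.sort(np.asarray(x_train, dtype=float))
    below = x[x <= raw_split]
    above = x[x > raw_split]
    if below.size == 0 or above.size == 0:
        return raw_split  # split lies outside the observed training range
    return 0.5 * (below.max() + above.min())

def quantile_transform(x, cdf=norm.cdf):
    """Probability-integral transform: map a predictor through its
    (assumed known) continuous CDF before fitting the tree."""
    return cdf(np.asarray(x, dtype=float))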
Who Does What? A Novel Algorithm to Determine Function Localization
Aharonov-Barki, Ranit, Meilijson, Isaac, Ruppin, Eytan
We introduce a novel algorithm, termed PPA (Performance Prediction Algorithm), that quantitatively measures the contributions of elements of a neural system to the tasks it performs. The algorithm identifies the neurons or areas that participate in a cognitive or behavioral task, given data about the performance decrease in a small set of lesions. It also allows the accurate prediction of performance under multi-element lesions. The effectiveness of the new algorithm is demonstrated in two models of recurrent neural networks with complex interactions among the elements. The algorithm is scalable and applicable to the analysis of large neural networks. Given the recent advances in reversible inactivation techniques, it has the potential to contribute significantly to the understanding of the organization of biological nervous systems, and to shed light on the long-standing debate about local versus distributed computation in the brain.
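As a hedged illustration of the setting only: the sketch below assumes a simple linear contribution model fitted by least squares from binary lesion masks, which conveys the flavor of predicting multi-element lesion performance from a small set of measured lesions. It is not the PPA itself, whose exact form is not reproduced here; all names and the baseline convention are assumptions.

import numpy as np

def fit_contributions(lesion_masks, performances, baseline=1.0):
    """Least-squares estimate of how much each element's removal costs,
    given binary masks (1 = element lesioned) and measured performances."""
    drops = baseline - np.asarray(performances, dtype=float)
    coeffs, *_ = np.linalg.lstsq(np.asarray(lesion_masks, dtype=float),
                                 drops, rcond=None)
    return coeffs

def predict_performance(mask, coeffs, baseline=1.0):
    """Predicted performance after lesioning the elements flagged in mask."""
    return baseline - float(np.dot(np.asarray(mask, dtype=float), coeffs))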
Effective Learning Requires Neuronal Remodeling of Hebbian Synapses
Chechik, Gal, Meilijson, Isaac, Ruppin, Eytan
This paper revisits the classical neuroscience paradigm of Hebbian learning. We find that a necessary requirement for effective associative memory learning is that the efficacies of the incoming synapses should be uncorrelated. This requirement is difficult to achieve in a robust manner by Hebbian synaptic learning, since it depends on network-level information. Effective learning can nevertheless be obtained by a neuronal process that maintains a zero sum of the incoming synaptic efficacies. This normalization drastically improves the memory capacity of associative networks, from an essentially bounded capacity to one that scales linearly with the network's size. It also enables the effective storage of patterns with heterogeneous coding levels in a single network.
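The zero-sum constraint is simple to state operationally: after each Hebbian update, shift every neuron's incoming efficacies so that they sum to zero. A minimal sketch, assuming the weights are held in a matrix with one row of incoming synapses per neuron (the row convention is an assumption for this example):

import numpy as np

def zero_sum_normalize(W):
    """Subtract each row's mean so that the incoming synaptic efficacies
    of every neuron (one row of W) sum to zero after a Hebbian update."""
    W = np.asarray(W, dtype=float)
    return W - W.mean(axis=1, keepdims=True)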
Distributed Synchrony of Spiking Neurons in a Hebbian Cell Assembly
Horn, David, Levy, Nir, Meilijson, Isaac, Ruppin, Eytan
We investigate the behavior of a Hebbian cell assembly of spiking neurons formed via a temporal synaptic learning curve. This learning function is based on recent experimental findings. It includes potentiation for short time delays between pre- and post-synaptic neuronal spiking, and depression for spiking events occurring in the reverse order. The coupling between the dynamics of the synaptic learning and of the neuronal activation leads to interesting results. We find that the cell assembly can fire asynchronously, but may also function in complete synchrony, or in distributed synchrony.
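The temporal learning curve described above is of the spike-timing-dependent kind: potentiation when the pre-synaptic spike shortly precedes the post-synaptic one, depression in the reverse order. A minimal sketch of such an asymmetric window; the exponential shape and the constants are illustrative assumptions, not the paper's fitted curve.

import numpy as np

def learning_window(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Synaptic change as a function of dt = t_post - t_pre (ms).
    Positive dt (pre before post) gives potentiation; negative dt gives
    depression. Shape and constants are assumptions for this sketch."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))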
Neuronal Regulation Implements Efficient Synaptic Pruning
Chechik, Gal, Meilijson, Isaac, Ruppin, Eytan