[Report] Imprinting and recalling cortical ensembles

Science

Donald Hebb's hypothesis that coactivation of neurons leads to the formation of ensembles of neurons has inspired neuroscientists for decades. The experimental creation of such ensembles has been technically challenging. Using two-photon optogenetic stimulation with single-cell resolution, Carrillo-Reid et al. discovered that recurrent activation of a group of neurons creates an ensemble that is imprinted in the brain circuitry. Activation of a single neuron can lead to recall of the entire ensemble in a phenomenon called pattern completion. The artificial ensemble persists over days and can be reactivated at later time points without interfering with endogenous circuitry.


Developing parsimonious ensembles using ensemble diversity within a reinforcement learning framework

arXiv.org Machine Learning

Heterogeneous ensembles built from the predictions of a large and diverse collection of base predictors are a potent approach to building predictive models for problems where the ideal individual predictor is not obvious. Ensemble selection is an especially promising approach here, not only for improving prediction performance but also for its ability to select a collectively predictive, and often relatively small, subset of the base predictors. In this paper, we present a set of algorithms that explicitly incorporate ensemble diversity, a known factor influencing the predictive performance of ensembles, into a reinforcement learning framework for ensemble selection. We rigorously tested these approaches on several challenging problems and their associated data sets, finding that several of them produced more accurate ensembles than approaches that do not explicitly consider diversity. More importantly, the diversity-incorporating ensembles were much smaller, i.e., more parsimonious, than the latter type. This parsimony can ultimately aid the interpretation or reverse engineering of the predictive models assimilated into the resulting ensemble(s).
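The paper couples diversity with reinforcement learning; as a rough, self-contained illustration of the underlying objective only (not the authors' algorithm), the sketch below greedily selects base predictors by trading off majority-vote accuracy against pairwise disagreement. The function name and the `alpha` tradeoff weight are illustrative assumptions.

```python
import numpy as np

def diversity_aware_selection(preds, y, k, alpha=0.5):
    """Illustrative sketch, not the paper's RL algorithm.

    Greedily pick k base predictors, trading off majority-vote
    accuracy against diversity (mean pairwise disagreement).

    preds: (n_models, n_samples) array of 0/1 predictions
    y:     (n_samples,) array of 0/1 labels
    """
    selected, remaining = [], list(range(len(preds)))
    while len(selected) < k and remaining:
        best, best_score = None, -np.inf
        for m in remaining:
            trial = selected + [m]
            # Accuracy of the trial ensemble's majority vote.
            vote = (preds[trial].mean(axis=0) >= 0.5).astype(int)
            acc = (vote == y).mean()
            # Diversity: average pairwise disagreement within the trial.
            if len(trial) > 1:
                pairs = [(i, j) for i in trial for j in trial if i < j]
                div = np.mean([(preds[i] != preds[j]).mean()
                               for i, j in pairs])
            else:
                div = 0.0
            score = (1 - alpha) * acc + alpha * div
            if score > best_score:
                best, best_score = m, score
        selected.append(best)
        remaining.remove(best)
    return selected
```

In a reinforcement learning formulation like the paper's, each addition of a predictor would be an action, and an accuracy/diversity tradeoff of this kind would shape the reward.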


Apple Researchers Propose A Method For Reconstructing Training Data From Diverse Machine Learning Models By Ensemble Inversion

#artificialintelligence

Model inversion (MI), in which an adversary abuses access to a trained machine learning (ML) model to infer sensitive information about the model's original training data, has received considerable attention in recent years. In MI, the trained model under attack is typically frozen and used to direct the training of a generator, such as a Generative Adversarial Network, to reconstruct the distribution of the model's original training data. Scrutiny of the capabilities of MI techniques is therefore essential for the development of appropriate defenses. Reconstructing high-quality training data from a single model is difficult, and the existing MI literature does not consider targeting multiple models simultaneously, even though doing so could offer the adversary additional information and viewpoints.
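As a minimal sketch of the general idea only (not Apple's actual method), assuming a PyTorch generator and a list of frozen classifiers, one optimization step of an ensemble inversion could look like the following; the function name and the averaged cross-entropy loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_inversion_step(generator, classifiers, target_class, z, optimizer):
    """Illustrative sketch of ensemble inversion, not the paper's method.

    One optimization step that nudges the generator toward images
    the whole (frozen) ensemble classifies as `target_class`.

    generator:   maps a latent batch z -> an image batch
    classifiers: list of frozen models, each mapping images -> logits
    """
    optimizer.zero_grad()
    images = generator(z)
    target = torch.full((images.size(0),), target_class, dtype=torch.long)
    # Average the per-model cross-entropy so the reconstruction must
    # satisfy every member of the ensemble, not just a single model.
    loss = torch.stack([
        F.cross_entropy(clf(images), target) for clf in classifiers
    ]).mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```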


Ensemble methods: bagging, boosting and stacking

#artificialintelligence

This post was co-written with Baptiste Rocca. The old idea that there is strength in numbers captures quite well the principle behind the very powerful "ensemble methods" in machine learning. Roughly speaking, ensemble learning methods, which frequently occupy the top rankings of many machine learning competitions (including Kaggle's), are based on the hypothesis that combining multiple models can often produce a much more powerful model. The purpose of this post is to introduce various notions of ensemble learning, giving the reader the keys needed to understand and use the related methods and to design adapted solutions when needed.
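As a quick taste of the three families the post covers, here is a minimal sketch using scikit-learn's built-in implementations; the synthetic dataset and hyperparameters are arbitrary placeholders, not recommendations from the post.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier,
                              GradientBoostingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Placeholder synthetic data for illustration only.
X, y = make_classification(n_samples=500, random_state=0)

models = {
    # Bagging: many trees fit on bootstrap resamples, votes aggregated.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50),
    # Boosting: trees fit sequentially, each correcting its predecessors.
    "boosting": GradientBoostingClassifier(n_estimators=50),
    # Stacking: a meta-learner combines the base models' predictions.
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier()),
                    ("lr", LogisticRegression(max_iter=1000))],
        final_estimator=LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```

The key design difference: bagging trains its members independently and in parallel, boosting trains them sequentially on the residual errors, and stacking learns how to weight heterogeneous members with a second-level model.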


Using Ensembles in Kaggle Data Science Competitions - Part 3

@machinelearnbot

Earlier, we showed how to create stacked ensembles with stacked generalization and out-of-fold predictions. Now we'll learn how to implement various stacking techniques.
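As a reminder of the mechanics behind stacked generalization, here is a minimal sketch of leak-free out-of-fold predictions, the level-1 features on which a stacking meta-model is then trained. It assumes numpy arrays and a scikit-learn-style binary classifier with `predict_proba`; the function name is illustrative.

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold

def out_of_fold_predictions(model, X, y, n_splits=5):
    """Build leak-free level-1 features: every row is predicted by a
    model that never saw that row during training."""
    oof = np.zeros(len(X))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, val_idx in kf.split(X):
        m = clone(model)  # fresh copy, so folds never share state
        m.fit(X[train_idx], y[train_idx])
        # Probability of the positive class as the stacking feature.
        oof[val_idx] = m.predict_proba(X[val_idx])[:, 1]
    return oof
```

Stacking one such column per base model side by side yields the training matrix for the meta-model, while the base models are refit on the full training set to produce the corresponding test-time features.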