
Collaborating Authors: Parga, Néstor


Emergent Computations in Trained Artificial Neural Networks and Real Brains

arXiv.org Artificial Intelligence

New computational techniques [1, 2, 3, 4, 5, 6, 7] enable neural networks to be trained on tasks similar to those used in experiments with behaving animals [8, 9, 10, 11, 12, 13, 14, 15]. Before these techniques became available, a researcher would hypothesize what computations the network should perform to execute the task and build a network architecture capable of carrying them out. Numerical simulations of the model, or mean-field approximations, then made it possible to verify whether the proposed network performed the task as desired. This approach is unsatisfactory: it cannot identify how a neural network could solve these tasks, since the resulting models only reflect the researcher's intuitions about how the tasks might be performed. In contrast, trained networks provide a valuable tool for investigating the mechanisms that networks could use to perform the tasks [16, 17, 18, 19, 6, 7, 20, 13, 21, 22, 23].
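As a rough illustration of the training approach the abstract describes (a minimal sketch, assuming PyTorch; the task design and all hyperparameters are illustrative, not those of any specific experiment), a small recurrent network can be trained end to end on a delayed-comparison trial and its hidden states analyzed afterwards, much like neural recordings:

```python
# Minimal sketch (assumptions: PyTorch; task and parameters are illustrative):
# train a small RNN to report which of two pulses, separated by a delay,
# was larger -- a cognitive-style task in the spirit of animal experiments.
import torch
import torch.nn as nn

torch.manual_seed(0)
T, delay = 30, 10            # trial length and delay between the two stimuli

def make_batch(n=64):
    f1, f2 = torch.rand(n), torch.rand(n)
    x = torch.zeros(n, T, 1)
    x[:, 2, 0] = f1           # first stimulus
    x[:, 2 + delay, 0] = f2   # second stimulus, after the delay
    y = (f1 > f2).long()      # decision: was the first stimulus larger?
    return x, y

class Net(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.RNN(1, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 2)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.readout(h[:, -1])   # decide at the end of the trial

net = Net()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(2000):
    x, y = make_batch()
    loss = loss_fn(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
# After training, the trajectories in net.rnn's hidden state can be probed
# to ask how the network actually solves the task, rather than assuming it.
```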


A Recurrent Model of the Interaction Between Prefrontal and Inferotemporal Cortex in Delay Tasks

Neural Information Processing Systems

A very simple model of two reciprocally connected attractor neural networks is studied analytically in situations similar to those encountered in delay match-to-sample tasks with intervening stimuli and in tasks of memory-guided attention. The model qualitatively reproduces many of the experimental data on these types of tasks and provides a framework for understanding the experimental observations in the context of the attractor neural network scenario.
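To make the attractor mechanism concrete (a minimal sketch only: a single Hopfield-style network stands in for each of the paper's two reciprocally connected modules, and synchronous updates are used for brevity), a cued pattern persists as a stable attractor across a delay:

```python
# Minimal sketch (assumption: one Hopfield network as a stand-in for each
# module of the two-network model): a stored pattern, cued with noise,
# is retrieved and then persists through the delay as an attractor state.
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5
patterns = rng.choice([-1, 1], size=(P, N))   # stored memories
W = patterns.T @ patterns / N                 # Hebbian weight matrix
np.fill_diagonal(W, 0)

# Cue with a noisy version of pattern 0, then let the delay dynamics run.
s = patterns[0] * np.where(rng.random(N) < 0.85, 1, -1)
for _ in range(20):
    s = np.sign(W @ s + 1e-12)                # tiny offset avoids sign(0)

overlap = (s @ patterns[0]) / N
print(f"overlap with cued pattern after delay: {overlap:.2f}")  # close to 1
```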




Self-similarity Properties of Natural Images

Neural Information Processing Systems

Scale invariance is a fundamental property of ensembles of natural images [1]. Their non-Gaussian properties [15, 16] are less well understood, but they indicate the existence of a rich statistical structure. In this work we present a detailed study of the marginal statistics of a variable related to the edges in the images. A numerical analysis shows that it exhibits extended self-similarity [3, 4, 5]. This is a scaling property stronger than self-similarity: all its moments can be expressed as a power of any given moment. More interestingly, all the exponents can be predicted in terms of a multiplicative log-Poisson process. This is the same model that was recently used to predict the correct exponents of the structure functions of turbulent flows [6]. These results allow us to study the underlying multifractal singularities. In particular, we find that the most singular structures are one-dimensional: the most singular manifold consists of sharp edges.
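As a rough illustration of what extended self-similarity means here (a sketch using one common log-Poisson parametrization; the paper's own conventions and parameter values may differ), the moments $S_p(r)$ of the edge variable at scale $r$ scale as powers of one another, and the cascade model fixes every exponent from two parameters:

```latex
% Extended self-similarity: every moment is a power of any other moment,
% so ratios of scaling exponents are independent of scale.
\[
  S_p(r) \propto S_q(r)^{\rho(p,q)}, \qquad \rho(p,q) = \frac{\tau_p}{\tau_q},
\]
% One common log-Poisson parametrization (illustrative; conventions vary):
% beta in (0,1) and the codimension C of the most singular manifold
% determine the full set of exponents.
\[
  \tau_p = -\Delta\, p + C\left(1 - \beta^{\,p}\right),
  \qquad \Delta = C\,(1 - \beta).
\]
% The constraint Delta = C(1 - beta) enforces tau_1 = 0 (mean conservation).
% For 2D images whose most singular structures are one-dimensional sharp
% edges, as the abstract states, the codimension is C = 2 - 1 = 1.
```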

