
 Poggio, Tomaso


Incremental and Decremental Support Vector Machine Learning

Neural Information Processing Systems

An online recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the Kuhn-Tucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decremental "unlearning" offers an efficient method to exactly evaluate leave-one-out generalization performance.
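The abstract above turns on maintaining the Kuhn-Tucker conditions for every training point as vectors are added or removed. As a hedged illustration only (not the paper's incremental algorithm), the following Python sketch trains a standard SVM with scikit-learn and partitions the training set into the three KKT sets that such bookkeeping tracks: margin support vectors, bounded "error" vectors, and the remaining well-classified points. The data and threshold `eps` are illustrative choices.

```python
# Illustrative KKT partition for a trained SVM (not the incremental
# algorithm itself). Each training point falls into one of three sets
# depending on its dual coefficient alpha_i:
#   alpha == 0      -> margin y_i f(x_i) >= 1 (well-classified "rest" set)
#   0 < alpha < C   -> margin y_i f(x_i) == 1 (margin support vectors)
#   alpha == C      -> margin y_i f(x_i) <= 1 (bounded "error" vectors)
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (40, 2)), rng.normal(1, 1, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)

C = 1.0
clf = SVC(kernel="linear", C=C).fit(X, y)

# Recover the dual coefficients alpha_i (dual_coef_ stores y_i * alpha_i
# for the support vectors only; all other alphas are zero).
alphas = np.zeros(len(X))
alphas[clf.support_] = np.abs(clf.dual_coef_[0])

eps = 1e-6  # numerical tolerance for "equal to 0" and "equal to C"
rest = int(np.sum(alphas < eps))
margin = int(np.sum((alphas > eps) & (alphas < C - eps)))
bounded = int(np.sum(alphas > C - eps))
print(rest, margin, bounded)
```

The incremental procedure in the paper keeps this partition valid analytically as each new example perturbs the solution; reversing the steps removes a point exactly, which is what makes exact leave-one-out evaluation cheap.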


Machine Learning, Machine Vision, and the Brain

AI Magazine

The problem of learning is arguably at the very core of the problem of intelligence, both biological and artificial. In this article, we review our work over the last 10 years in the area of supervised learning, focusing on three interlinked directions of research -- (1) theory, (2) engineering applications (making intelligent software), and (3) neuroscience (understanding the brain's mechanisms of learning) -- that contribute to and complement each other.


Machine Learning, Machine Vision, and the Brain

AI Magazine

The figure shows an ideal continuous loop from theory to feasibility to understanding the problem of intelligence. In reality, the interactions--as one might expect--are less clean. For example, in 1990, ideas from the mathematics of learning theory--radial basis function networks--suggested a model for biological object recognition that led to the physiological experiments in cortex described later in the article. It was only later that the same idea found its way into the computer graphics applications described. Learning is also becoming a key to the study of artificial and biological vision. In recent years, both computer vision--which attempts to build machines that see--and visual neuroscience--which aims to understand how our visual system works--are undergoing a fundamental change in their approaches. Visual neuroscience is beginning to focus on the mechanisms that allow the cortex to adapt its circuitry and learn a new task. In this article, we concentrate on one aspect of learning: supervised learning.


Just One View: Invariances in Inferotemporal Cell Tuning

Neural Information Processing Systems

In macaque inferotemporal cortex (IT), neurons have been found to respond selectively to complex shapes while showing broad tuning ("invariance") with respect to stimulus transformations such as translation and scale changes and a limited tuning to rotation in depth.




3D Object Recognition: A Model of View-Tuned Neurons

Neural Information Processing Systems

Recognition of specific objects, such as recognition of a particular face, can be based on representations that are object centered, such as 3D structural models. Alternatively, a 3D object may be represented for the purpose of recognition in terms of a set of views. This latter class of models is biologically attractive because model acquisition - the learning phase - is simpler and more natural. A simple model for this strategy of object recognition was proposed by Poggio and Edelman (Poggio and Edelman, 1990). They showed that, with few views of an object used as training examples, a classification network, such as a Gaussian radial basis function network, can learn to recognize novel views of that object.

[Figure 1: (a) Schematic representation of the architecture of the Poggio-Edelman model. The shaded circles correspond to the view-tuned units, each tuned to a view of the object, while the open circle corresponds to the view-invariant, object-specific output unit. (b) Response as a function of view angle.]
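The architecture described above can be sketched in a few lines: Gaussian radial-basis units, each tuned to one stored training view, are pooled by a view-invariant output unit. This is a hedged toy illustration, not the Poggio-Edelman implementation; the feature vectors, noise level, sigma, and uniform pooling weights are all illustrative assumptions.

```python
# Toy sketch of a view-based recognition network: each hidden RBF unit
# is tuned to one stored training view; a single output unit pools them.
import numpy as np

def rbf_output(stored_views, weights, x, sigma=1.0):
    # Each hidden unit responds maximally when x matches its stored view.
    acts = np.exp(-np.sum((stored_views - x) ** 2, axis=1) / (2 * sigma ** 2))
    return weights @ acts

rng = np.random.default_rng(1)
prototype = rng.normal(size=5)                 # the object's "true" features
stored_views = prototype + 0.1 * rng.normal(size=(4, 5))  # 4 training views
weights = np.ones(len(stored_views))           # uniform pooling to the output

novel_view = prototype + 0.1 * rng.normal(size=5)  # unseen view, same object
unrelated = rng.normal(size=5)                     # a different object

resp_novel = rbf_output(stored_views, weights, novel_view)
resp_other = rbf_output(stored_views, weights, unrelated)
```

Because the hidden units are broadly tuned Gaussians, a novel view of the trained object lands near several stored views and drives the output harder than an unrelated input, which is the generalization property the model is meant to capture.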

