
Collaborating Authors

 Zemel, Richard S.


Directional-Unit Boltzmann Machines

Neural Information Processing Systems

University of Toronto, Toronto, ONT M5S 1A4; University of Toronto, Toronto, ONT M5S 1A4; University of Colorado, Boulder, CO 80309-0430

Abstract: We present a general formulation for a network of stochastic directional units. This formulation is an extension of the Boltzmann machine in which the units are not binary, but take on values in a cyclic range, between 0 and 2π radians. The conditional distribution of a unit's stochastic state is a circular version of the Gaussian probability distribution, known as the von Mises distribution. This combination of a value and a certainty provides additional representational power in a unit. Many kinds of information can naturally be represented in terms of angular, or directional, variables.
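For reference, the von Mises density mentioned above has the standard form p(θ | μ, κ) = exp(κ cos(θ − μ)) / (2π I0(κ)), where μ is the mean direction and κ the concentration (the "certainty"). The sketch below is a minimal Python illustration of how such a stochastic directional unit could be sampled, under the assumption (ours, not a detail taken from the abstract) that a unit's incoming activity is accumulated as a complex-valued resultant whose angle sets the mean direction and whose magnitude sets the concentration; all function and variable names are illustrative.

```python
import numpy as np

def sample_directional_unit(states, weights, rng=np.random.default_rng()):
    """Sample one stochastic directional unit.

    states  : angles (radians) of the other units, shape (n,)
    weights : real-valued connection weights into this unit, shape (n,)

    Assumption (not taken verbatim from the paper): incoming activity is
    accumulated as a complex resultant; its angle gives the von Mises mean
    direction and its magnitude the concentration.
    """
    resultant = np.sum(weights * np.exp(1j * states))  # complex-valued net input
    mu = np.angle(resultant)                           # mean direction
    kappa = np.abs(resultant)                          # concentration (certainty)
    return rng.vonmises(mu, kappa)                     # circular "Gaussian" sample

# Example: three neighbouring units pointing roughly east pull the sample east.
angles = np.array([0.1, -0.2, 0.05])
w = np.array([1.0, 0.8, 1.2])
print(sample_directional_unit(angles, w))
```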


Learning to Segment Images Using Dynamic Feature Binding

Neural Information Processing Systems

Despite the fact that complex visual scenes contain multiple, overlapping objects, people perform object recognition with ease and accuracy. One operation that facilitates recognition is an early segmentation process in which features of objects are grouped and labeled according to the object to which they belong. Current computational systems that perform this operation are based on predefined grouping heuristics.


Discovering Viewpoint-Invariant Relationships That Characterize Objects

Neural Information Processing Systems

Richard S. Zemel and Geoffrey E. Hinton, Department of Computer Science, University of Toronto, Toronto, ONT M5S 1A4

Abstract: Using an unsupervised learning procedure, a network is trained on an ensemble of images of the same two-dimensional object at different positions, orientations and sizes. Each half of the network "sees" one fragment of the object, and tries to produce as output a set of 4 parameters that have high mutual information with the 4 parameters output by the other half of the network. Given the ensemble of training patterns, the 4 parameters on which the two halves of the network can agree are the position, orientation, and size of the whole object, or some recoding of them. After training, the network can reject instances of other shapes by using the fact that the predictions made by its two halves disagree. If two competing networks are trained on an unlabelled mixture of images of two objects, they cluster the training cases on the basis of the objects' shapes, independently of the position, orientation, and size.

1 INTRODUCTION

A difficult problem for neural networks is to recognize objects independently of their position, orientation, or size.
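To make the "agreement" objective above concrete, here is a minimal sketch of a mutual-information-style score between the two halves' 4-parameter outputs, in the spirit of Becker and Hinton's IMAX criterion: treating each output dimension of the two halves as noisy versions of a common underlying signal gives I ≈ ½ log(Var(a + b) / Var(a − b)). This simplified per-dimension form and all names are our assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def imax_agreement(a, b, eps=1e-8):
    """Per-dimension mutual-information-style agreement score.

    a, b : outputs of the two network halves, shape (batch, 4).
    Under the simplifying assumption that a and b are noisy versions of the
    same signal, I ~= 0.5 * log(Var(a + b) / Var(a - b)) per dimension
    (Becker & Hinton's IMAX criterion); higher means the two halves agree
    on something that varies across the ensemble.
    """
    var_sum = np.var(a + b, axis=0) + eps
    var_diff = np.var(a - b, axis=0) + eps
    return 0.5 * np.log(var_sum / var_diff)

# Toy check: shared signal plus small independent noise -> high agreement.
rng = np.random.default_rng(0)
signal = rng.normal(size=(256, 4))              # e.g. position, orientation, size
a = signal + 0.1 * rng.normal(size=signal.shape)
b = signal + 0.1 * rng.normal(size=signal.shape)
print(imax_agreement(a, b))                          # large positive values
print(imax_agreement(a, rng.normal(size=a.shape)))   # near zero for unrelated outputs
```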


TRAFFIC: Recognizing Objects Using Hierarchical Reference Frame Transformations

Neural Information Processing Systems

We describe a model that can recognize two-dimensional shapes in an unsegmented image, independent of their orientation, position, and scale. The model, called TRAFFIC, efficiently represents the structural relation between an object and each of its component features by encoding the fixed viewpoint-invariant transformation from the feature's reference frame to the object's in the weights of a connectionist network. Using a hierarchy of such transformations, with increasing complexity of features at each successive layer, the network can recognize multiple objects in parallel. An implementation of TRAFFIC is described, along with experimental results demonstrating the network's ability to recognize constellations of stars in a viewpoint-invariant manner.

1 INTRODUCTION

A key goal of machine vision is to recognize familiar objects in an unsegmented image, independent of their orientation, position, and scale. Massively parallel models have long been used for lower-level vision tasks, such as primitive feature extraction and stereo depth. Models addressing "higher-level" vision have generally been restricted to pattern matching types of problems, in which much of the inherent complexity of the domain has been eliminated or ignored.
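As a rough illustration of the reference-frame arithmetic described above, the sketch below represents 2D frames (position, orientation, scale) as homogeneous similarity matrices: each feature's observed frame, composed with a fixed feature-to-object transform, yields a prediction of the object's frame, and agreement between the predictions from different features signals the object's presence. The matrices, names, and agreement check are illustrative assumptions, not the TRAFFIC implementation.

```python
import numpy as np

def frame(x, y, theta, scale):
    """Homogeneous 3x3 matrix for a 2D similarity transform (a reference frame)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[scale * c, -scale * s, x],
                     [scale * s,  scale * c, y],
                     [0.0,        0.0,       1.0]])

# The object's frame in the image (unknown to the recognizer): some viewpoint.
object_frame = frame(x=4.0, y=2.0, theta=0.8, scale=1.5)

# Fixed, viewpoint-invariant transforms giving each feature's frame relative to
# the object's frame (the kind of quantity the abstract says is encoded in the
# weights; the numbers here are made up).
feature_in_object = {
    "feature_a": frame(1.0,  0.5,  0.3, 0.5),
    "feature_b": frame(-0.7, 1.2, -1.0, 0.8),
}

# What the image actually shows: each feature's frame under this viewpoint.
observed = {name: object_frame @ T for name, T in feature_in_object.items()}

# Recognition direction: each observed feature predicts the object's frame by
# composing with the inverse (feature-to-object) transform. If the predictions
# from different features agree, the object is present at that frame.
predictions = [F @ np.linalg.inv(feature_in_object[name]) for name, F in observed.items()]
print(np.allclose(predictions[0], predictions[1]))   # True: both features agree
print(np.round(predictions[0], 3))                   # recovers object_frame
```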

