The Computation of Stereo Disparity for Transparent and for Opaque Surfaces
Madarasmi, Suthep, Kersten, Daniel, Pong, Ting-Chuen
The classical computational model for stereo vision incorporates a uniqueness inhibition constraint to enforce a one-to-one feature match, thereby sacrificing the ability to handle transparency. Critics of the model disregard the uniqueness constraint and argue that the smoothness constraint can provide the excitation support required for transparency computation. However, this modification fails in neighborhoods with sparse features. We propose a Bayesian approach to stereo vision with priors favoring cohesive over transparent surfaces. The disparity and its segmentation into a multi-layer "depth planes" representation are simultaneously computed. The smoothness constraint propagates support within each layer, providing mutual excitation for non-neighboring transparent or partially occluded regions. Test results for various random-dot and other stereograms are presented.
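To make the role of the smoothness prior concrete, here is a minimal sketch (a hypothetical toy setup, not the authors' multi-layer Bayesian algorithm): MAP disparity estimation along one scanline of a 1-D random-dot stereogram, with a data term per candidate disparity and a prior that charges `lam` per unit of disparity change between neighbouring pixels, solved exactly by dynamic programming.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D random-dot stereogram: the right scanline is the left one
# circularly shifted by a known true disparity.
n, true_d, max_d = 60, 2, 4
left = rng.integers(0, 2, n).astype(float)
right = np.roll(left, true_d)

# Data term: squared intensity difference for each candidate disparity d,
# matching left[i] against right[i + d].
cost = np.stack([(left - np.roll(right, -d)) ** 2 for d in range(max_d + 1)], axis=1)

# MAP estimate under a smoothness prior: dynamic programming over the
# scanline, paying `lam` per unit of disparity change between neighbours.
lam = 0.5
D = cost.copy()
for i in range(1, n):
    for d in range(max_d + 1):
        D[i, d] = cost[i, d] + min(D[i - 1, p] + lam * abs(d - p)
                                   for p in range(max_d + 1))

# Backtrack the optimal disparity labelling.
disp = np.empty(n, dtype=int)
disp[-1] = int(np.argmin(D[-1]))
for i in range(n - 2, -1, -1):
    disp[i] = int(np.argmin([D[i, p] + lam * abs(disp[i + 1] - p)
                             for p in range(max_d + 1)]))

print("recovered disparity (first 10 pixels):", disp[:10], " true:", true_d)
```

A single smooth surface is the easy case; the paper's point is that one such smoothness term cannot represent transparency, which is why the disparities are segmented into separate layers with the prior propagating support within each layer.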
Competitive Anti-Hebbian Learning of Invariants
Schraudolph, Nicol N., Sejnowski, Terrence J.
Although the detection of invariant structure in a given set of input patterns is vital to many recognition tasks, connectionist learning rules tend to focus on directions of high variance (principal components). The prediction paradigm is often used to reconcile this dichotomy; here we suggest a more direct approach to invariant learning based on an anti-Hebbian learning rule. An unsupervised two-layer network implementing this method in a competitive setting learns to extract coherent depth information from random-dot stereograms.
1 INTRODUCTION: LEARNING INVARIANT STRUCTURE
Many connectionist learning algorithms share with principal component analysis (Jolliffe, 1986) the strategy of extracting the directions of highest variance from the input. A single Hebbian neuron, for instance, will come to encode the input's first principal component (Oja and Karhunen, 1985); various forms of lateral interaction can be used to force a layer of such nodes to differentiate and span the principal component subspace - cf. (Sanger, 1989; Kung, 1990; Leen, 1991), and others. The same type of representation also develops in the hidden layer of backpropagation autoassociator networks (Baldi and Hornik, 1989).
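The anti-Hebbian idea can be sketched with a single neuron (an illustrative example, not the paper's competitive two-layer network): where a normalized Hebbian neuron converges to the first principal component, flipping the sign of the update drives the weight vector toward the direction of least variance, i.e. the invariant structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: high variance along [1, 1], low variance along [1, -1].
# The low-variance (invariant) direction is what anti-Hebbian learning finds.
basis = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
coeffs = rng.normal(size=(5000, 2)) * np.array([3.0, 0.3])
X = coeffs @ basis

w = rng.normal(size=2)
w /= np.linalg.norm(w)

lr = 0.01
for x in X:
    y = w @ x
    w -= lr * y * x          # anti-Hebbian: weaken weights on correlated activity
    w /= np.linalg.norm(w)   # keep |w| = 1 so the weights don't collapse to zero

invariant_dir = basis[1]     # the low-variance direction
print(f"|cos(w, minor component)| = {abs(w @ invariant_dir):.3f}")
```

This is the mirror image of Oja's rule: the same normalized Hebbian dynamics with the sign of the update reversed, so variance is minimized rather than maximized.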
Learning to Make Coherent Predictions in Domains with Discontinuities
Becker, Suzanna, Hinton, Geoffrey E.
We have previously described an unsupervised learning procedure that discovers spatially coherent properties of the world by maximizing the information that parameters extracted from different parts of the sensory input convey about some common underlying cause. When given random dot stereograms of curved surfaces, this procedure learns to extract surface depth because that is the property that is coherent across space. It also learns how to interpolate the depth at one location from the depths at nearby locations (Becker and Hinton).
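For jointly Gaussian module outputs a and b, the information they convey about a common cause can be measured by 0.5 * log(Var(a + b) / Var(a - b)). A toy sketch (hypothetical data, not the original experiments) shows this objective rewarding units that extract the coherent component of their inputs:

```python
import numpy as np

rng = np.random.default_rng(2)

def coherence(a, b):
    # Gaussian mutual-information objective between two module outputs:
    # 0.5 * log( Var(a + b) / Var(a - b) ).
    return 0.5 * np.log(np.var(a + b) / np.var(a - b))

# A common underlying cause ("depth") seen through two noisy input patches;
# each patch also contains an independent pure-noise component.
s = rng.normal(size=10000)
patch1 = np.stack([s + 0.3 * rng.normal(size=s.size), rng.normal(size=s.size)], 1)
patch2 = np.stack([s + 0.3 * rng.normal(size=s.size), rng.normal(size=s.size)], 1)

# Units that read out the coherent component score much higher than
# units that read out the independent noise.
good = coherence(patch1 @ np.array([1.0, 0.0]), patch2 @ np.array([1.0, 0.0]))
bad = coherence(patch1 @ np.array([0.0, 1.0]), patch2 @ np.array([0.0, 1.0]))
print(f"coherent units: {good:.2f}   noise units: {bad:.2f}")
```

Gradient ascent on this quantity with respect to the modules' weights is what drives both modules toward the shared property (here s, standing in for surface depth).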
Stereopsis by a Neural Network Which Learns the Constraints
Khotanzad, Alireza, Lee, Ying-Wung
This paper presents a neural network (NN) approach to the problem of stereopsis. The correspondence problem (finding the correct matches between the pixels of the epipolar lines of the stereo pair from amongst all the possible matches) is posed as a non-iterative many-to-one mapping. A two-layer feed-forward NN architecture is developed to learn and code this nonlinear and complex mapping using the back-propagation learning rule and a training set. The important aspect of this technique is that none of the typical constraints such as uniqueness and continuity are explicitly imposed. All the applicable constraints are learned and internally coded by the NN, enabling it to be more flexible and more accurate than the existing methods. The approach is successfully tested on several random-dot stereograms. It is shown that the net can generalize its learned mapping to cases outside its training set. Advantages over the Marr-Poggio algorithm are discussed and it is shown that the NN performance is superior.
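The basic machinery, a two-layer feed-forward network trained with back-propagation to code a non-linear mapping, can be sketched on a toy problem (XOR here, purely illustrative; the paper's inputs are windows of epipolar-line pixels and its outputs are disparity matches):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy non-linear mapping (XOR) learned by a two-layer sigmoid network
# with plain batch back-propagation on squared error.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    H = sigmoid(X @ W1 + b1)            # hidden layer
    Y = sigmoid(H @ W2 + b2)            # output layer
    dY = (Y - T) * Y * (1 - Y)          # output delta (squared-error loss)
    dH = (dY @ W2.T) * H * (1 - H)      # back-propagated hidden delta
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print("predictions:", pred.ravel())
```

Nothing in this training loop encodes uniqueness or continuity; as in the paper, any such regularities present in the training pairs end up implicitly coded in the learned weights.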