Goto

Sensing and Signal Processing



Oriented Non-Radial Basis Functions for Image Coding and Analysis

Neural Information Processing Systems

We introduce oriented non-radial basis function networks (ONRBF) as a generalization of Radial Basis Function networks (RBF), wherein the Euclidean distance metric in the exponent of the Gaussian is replaced by a more general polynomial. This permits the definition of more general regions, in particular hyper-ellipses with orientations. In the case of hyper-surface estimation this scheme requires a smaller number of hidden units and alleviates the "curse of dimensionality" associated with kernel-type approximators. In the case of an image, the hidden units correspond to features in the image, and the parameters associated with each unit correspond to the rotation, scaling and translation properties of that particular "feature". In the context of the ONRBF scheme, this means that an image can be represented by a small number of features. Since transformation of an image by rotation, scaling and translation corresponds to identical transformations of the individual features, the ONRBF scheme can be used to considerable advantage for the purposes of image recognition and analysis.
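A minimal sketch of the distinction described above, assuming a two-dimensional input and a common way of realizing the oriented hyper-ellipse in the exponent through a rotation, per-axis scales, and a translation (the parameter names and values are illustrative, not the paper's):

# A sketch, not the paper's code: a 2-D oriented non-radial unit versus a standard
# radial unit. The parameterization (rotation, per-axis scales, center) is an
# assumed, common way to produce an oriented hyper-ellipse in the exponent.
import numpy as np

def rbf_unit(x, center, width):
    """Standard radial unit: response depends only on Euclidean distance to the center."""
    d = x - center
    return np.exp(-np.dot(d, d) / (2.0 * width ** 2))

def oriented_unit(x, center, angle, scales):
    """Oriented non-radial unit: the input is translated, rotated into the unit's
    local frame, and scaled per axis before the Gaussian, so the level sets are
    oriented hyper-ellipses rather than hyper-spheres."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])                # orientation of the local frame
    u = R.T @ (x - center) / np.asarray(scales)    # translate, rotate, scale
    return np.exp(-0.5 * np.dot(u, u))

x = np.array([1.0, 2.0])
print(rbf_unit(x, np.zeros(2), 1.0))
print(oriented_unit(x, np.zeros(2), np.pi / 6, [2.0, 0.5]))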


Model Based Image Compression and Adaptive Data Representation by Interacting Filter Banks

Neural Information Processing Systems

To achieve high-rate image data compression while maintaining a high-quality reconstructed image, a good image model and an efficient way to represent the specific data of each image must be introduced. Based on physiological knowledge of the multi-channel characteristics of the human visual system and the inhibitory interactions among those channels, a mathematically coherent parallel architecture for image data compression is proposed that uses a Markov random field image model and interactions among a vast number of filter banks.
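As a rough illustration of the multi-channel idea, the sketch below decomposes an image with a small bank of oriented Gabor-like filters; the filter shapes, sizes, and the use of scipy are assumptions for the sketch, not the architecture proposed in the paper:

# Illustrative only: a small bank of oriented channels applied to an image.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta):
    """Real-valued Gabor filter at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * (size / 4.0) ** 2))
    return envelope * np.cos(2.0 * np.pi * x_rot / wavelength)

def filter_bank_responses(image, n_orientations=4):
    """Convolve the image with each oriented filter; each response is one channel."""
    responses = []
    for k in range(n_orientations):
        kern = gabor_kernel(9, 4.0, k * np.pi / n_orientations)
        responses.append(convolve2d(image, kern, mode="same", boundary="symm"))
    return responses

image = np.random.rand(32, 32)
channels = filter_bank_responses(image)
print(len(channels), channels[0].shape)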


TRAFFIC: Recognizing Objects Using Hierarchical Reference Frame Transformations

Neural Information Processing Systems

We describe a model that can recognize two-dimensional shapes in an unsegmented image, independent of their orientation, position, and scale. The model, called TRAFFIC, efficiently represents the structural relation between an object and each of its component features by encoding the fixed viewpoint-invariant transformation from the feature's reference frame to the object's in the weights of a connectionist network. Using a hierarchy of such transformations, with increasing complexity of features at each successive layer, the network can recognize multiple objects in parallel. An implementation of TRAFFIC is described, along with experimental results demonstrating the network's ability to recognize constellations of stars in a viewpoint-invariant manner.

1 INTRODUCTION

A key goal of machine vision is to recognize familiar objects in an unsegmented image, independent of their orientation, position, and scale. Massively parallel models have long been used for lower-level vision tasks, such as primitive feature extraction and stereo depth. Models addressing "higher-level" vision have generally been restricted to pattern-matching types of problems, in which much of the inherent complexity of the domain has been eliminated or ignored.
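The fixed feature-to-object relation can be pictured as a similarity transform composed along the hierarchy; the sketch below assumes 2-D homogeneous coordinates and uses made-up matrices purely for illustration (it is not the TRAFFIC implementation):

# A minimal sketch, not TRAFFIC itself: each reference-frame relation is modeled
# as a fixed 2-D similarity transform (rotation, uniform scale, translation) in
# homogeneous coordinates, and relations compose up the hierarchy.
import numpy as np

def similarity(angle, scale, tx, ty):
    """3x3 homogeneous matrix: rotate by angle, scale uniformly, then translate."""
    c, s = np.cos(angle) * scale, np.sin(angle) * scale
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# Feature frame -> part frame -> object frame: the feature-to-object relation is
# the product of the fixed transforms along the hierarchy (values here are made up).
feature_to_part = similarity(np.pi / 4, 0.5, 1.0, 0.0)
part_to_object = similarity(-np.pi / 8, 2.0, 0.0, 3.0)
feature_to_object = part_to_object @ feature_to_part

point_in_feature_frame = np.array([1.0, 1.0, 1.0])   # homogeneous 2-D point
print(feature_to_object @ point_in_feature_frame)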


Comparing the Performance of Connectionist and Statistical Classifiers on an Image Segmentation Problem

Neural Information Processing Systems

In the development of an image segmentation system for real-time image processing applications, we apply the classical decision analysis paradigm by viewing image segmentation as a pixel classification problem.
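A toy sketch of the "segmentation as pixel classification" view, with assumed per-pixel features and a trivial stand-in classifier (neither is taken from the paper):

# Toy sketch: describe each pixel by a small feature vector, label pixels
# independently, and treat the label image as the segmentation.
import numpy as np

def pixel_features(image):
    """Assumed per-pixel features: raw intensity plus a 3x3 local mean."""
    padded = np.pad(image, 1, mode="edge")
    local_mean = sum(padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
                     for dy in range(3) for dx in range(3)) / 9.0
    return np.stack([image, local_mean], axis=-1)

def classify_pixels(features, thresholds=(0.5, 0.5)):
    """Trivial stand-in classifier: label a pixel 1 if any feature exceeds its threshold."""
    return ((features > np.asarray(thresholds)).sum(axis=-1) >= 1).astype(int)

image = np.random.rand(16, 16)
segmentation = classify_pixels(pixel_features(image))   # the label image is the segmentation
print(segmentation.shape, np.unique(segmentation))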


Review of Pattern Recognition

AI Magazine

Pattern Recognition (New York: John Wiley and Sons, 1987, 144 pages, ISBN 0-471-61120-4) by Mike James is a concise survey of the practice of image recognition.


A Network for Image Segmentation Using Color

Neural Information Processing Systems

A vision system that relies on color for object recognition requires some form of color constancy; otherwise it might ascribe different characteristics to the same object under different lights. But the first step in using color for recognition, segmenting the scene into regions of different colors, does not require color constancy.
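As a rough illustration of segmenting by color without color constancy, one can group pixels by chromaticity alone; the two-cluster k-means below is an assumed stand-in for the grouping step, not the network described in the paper:

# Illustrative sketch: segment into regions of similar color by clustering
# chromaticity (brightness factored out), without attempting color constancy.
import numpy as np

def chromaticity(rgb_image):
    """Factor out overall brightness: (r, g) / (r + g + b)."""
    total = rgb_image.sum(axis=-1, keepdims=True) + 1e-8
    return (rgb_image / total)[..., :2]

def kmeans_labels(points, k=2, iters=20, seed=0):
    """Plain k-means on the chromaticity points; returns a cluster label per point."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.stack([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

rgb = np.random.rand(8, 8, 3)
labels = kmeans_labels(chromaticity(rgb).reshape(-1, 2)).reshape(8, 8)
print(labels)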