Technology
An Integrated Vision Sensor for the Computation of Optical Flow Singular Points
Higgins, Charles M., Koch, Christof
A robust, integrative algorithm is presented for computing the position of the focus of expansion or axis of rotation (the singular point) in optical flow fields such as those generated by self-motion. Measurements are shown of a fully parallel CMOS analog VLSI motion sensor array which computes the direction of local motion (sign of optical flow) at each pixel and can directly implement this algorithm. The flow field singular point is computed in real time with a power consumption of less than 2 mW. Computation of the singular point for more general flow fields requires measures of field expansion and rotation, which, it is shown, can also be computed in real-time hardware, again using only the sign of the optical flow field. These measures, along with the location of the singular point, provide robust real-time self-motion information for the visual guidance of a moving platform such as a robot. 1 INTRODUCTION Visually guided navigation of autonomous vehicles requires robust measures of self-motion in the environment. The heading direction, which corresponds to the focus of expansion in the visual scene for a fixed viewing angle, is one of the primary sources of guidance information.
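As a rough illustration of the integrative idea (not the chip's analog implementation), the singular point of an expanding flow field can be recovered from the sign of flow alone: in each row the number of leftward-moving pixels equals the column of the focus of expansion, and averaging these counts over the whole field gives a robust estimate. A minimal NumPy sketch, with the function name and the synthetic flow field assumed for illustration:

```python
import numpy as np

def singular_point_from_signs(sign_u, sign_v):
    """Estimate the focus of expansion from the sign of optical flow only.

    sign_u, sign_v: 2-D arrays with values in {-1, +1}, the sign of the
    horizontal and vertical flow at each pixel.  For a pure expansion,
    pixels left of the singular point move left (sign_u = -1) and pixels
    right of it move right, so the number of left-moving pixels in a row
    equals the column of the singular point; likewise for rows.
    """
    # Averaging the counts over all rows/columns integrates the whole
    # field, which keeps the estimate robust to noisy individual pixels.
    x_foe = np.mean(np.sum(sign_u < 0, axis=1))
    y_foe = np.mean(np.sum(sign_v < 0, axis=0))
    return x_foe, y_foe

# Synthetic expanding flow on a 32 x 48 grid, singular point near (20, 12)
yy, xx = np.mgrid[0:32, 0:48]
sign_u = np.sign(xx - 19.5)
sign_v = np.sign(yy - 11.5)
print(singular_point_from_signs(sign_u, sign_v))  # approx. (20.0, 12.0)
```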
Vertex Identification in High Energy Physics Experiments
Dror, Gideon, Abramowicz, Halina, Horn, David
In High Energy Physics experiments one has to sort through a high flux of events, at a rate of tens of MHz, and select the few that are of interest. One of the key factors in making this decision is the location of the vertex where the interaction that led to the event took place. Here we present a novel solution to the problem of finding the location of the vertex, based on two feedforward neural networks with fixed architectures, whose parameters are chosen so as to obtain a high accuracy. The system is tested on simulated data sets, and is shown to perform better than conventional algorithms. 1 Introduction An event in High Energy Physics (HEP) is the experimental result of an interaction during the collision of particles in an accelerator. The result of this interaction is the production of tens of particles, each of which is ejected with a different direction and energy. Due to the quantum mechanical effects involved, the events differ from one another in the number of particles produced, the types of particles, and their energies. The trajectories of produced particles are detected by a very large and sophisticated detector.
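As a hedged sketch of the general approach (the paper's two fixed-architecture networks and their detector inputs are not reproduced here), the snippet below trains a single small feedforward network by gradient descent to regress a vertex coordinate from a fixed-length vector of simulated measurements; the data-generating rule, the dimensions and all parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the task: regress a longitudinal vertex coordinate z
# from a fixed-length vector of track measurements.  The 10-dimensional
# inputs and the data-generating rule below are illustrative only.
def make_events(n=2000, d=10):
    x = rng.normal(size=(n, d))
    z = np.tanh(x @ rng.normal(size=d) * 0.3)   # hidden "true" vertex position
    return x, z[:, None]

X, Z = make_events()
W1 = rng.normal(scale=0.1, size=(10, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));  b2 = np.zeros(1)

lr = 0.05
for epoch in range(200):
    H = np.tanh(X @ W1 + b1)          # hidden layer
    pred = H @ W2 + b2                # linear output: estimated vertex z
    err = pred - Z
    # Backpropagation of the squared-error loss
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Z) ** 2)))
```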
Classification in Non-Metric Spaces
Weinshall, Daphna, Jacobs, David W., Gdalyahu, Yoram
A key question in vision is how to represent our knowledge of previously encountered objects to classify new ones. The answer depends on how we determine the similarity of two objects. Similarity tells us how relevant each previously seen object is in determining the category to which a new object belongs.
A V1 Model of Pop Out and Asymmetry in Visual Search
Unique features of targets enable them to pop out against the background, while targets defined by a lack of features or by conjunctions of features are more difficult to spot. It is known that the ease of target detection can change when the roles of figure and ground are switched. The mechanisms underlying the ease of pop out and the asymmetry in visual search have been elusive. This paper shows that a model of segmentation in V1 based on intracortical interactions can explain many of the qualitative aspects of visual search. 1 Introduction Visual search is closely related to visual segmentation, and therefore can be used to diagnose the mechanisms of visual segmentation. For instance, a red dot can pop out against a background of green distractor dots instantaneously, suggesting that only pre-attentive mechanisms are necessary (Treisman et al., 1990).
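The following toy sketch illustrates why iso-feature suppression of the kind mediated by intracortical interactions can produce pop out: items surrounded by similar neighbours suppress one another, while an item with a unique feature keeps a high response. The salience function, feature coding and parameter values are invented for illustration and are not the paper's V1 model:

```python
import numpy as np

def salience(features, positions, suppression=0.05, radius=2.0):
    """Toy iso-feature suppression: each item's response is reduced by
    every nearby item sharing its feature, so a uniquely-featured target
    keeps a high response (pops out) while items in a uniform background
    are mutually suppressed."""
    resp = np.ones(len(features))
    for i in range(len(features)):
        for j in range(len(features)):
            if i != j and features[i] == features[j] \
                    and np.linalg.norm(positions[i] - positions[j]) < radius:
                resp[i] -= suppression
    return resp

# A single odd item (feature 1) in a 5 x 5 array of identical distractors
positions = np.array([[x, y] for x in range(5) for y in range(5)], float)
features = np.zeros(25, int)
features[12] = 1                    # the central item carries the unique feature
s = salience(features, positions)
print("target:", s[12], "mean distractor:", s[features == 0].mean())
```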
Learning from Dyadic Data
Hofmann, Thomas, Puzicha, Jan, Jordan, Michael I.
Dyadic data refers to a domain with two finite sets of objects in which observations are made for dyads, i.e., pairs with one element from either set. This type of data arises naturally in many applications ranging from computational linguistics and information retrieval to preference analysis and computer vision. In this paper, we present a systematic, domain-independent framework for learning from dyadic data by statistical mixture models. Our approach covers different models with flat and hierarchical latent class structures. We propose an annealed version of the standard EM algorithm for model fitting which is empirically evaluated on a variety of data sets from different domains. 1 Introduction Over the past decade learning from data has become a highly active field of research distributed over many disciplines like pattern recognition, neural computation, statistics, machine learning, and data mining.
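As an illustrative sketch of one member of this model family, the code below fits a flat latent-class (aspect) model to a dyadic count table with a tempered E-step, in the spirit of annealed EM; the function name, the inverse temperature beta and the synthetic counts are assumptions, not the paper's exact formulation:

```python
import numpy as np

def annealed_em_aspect(N, K=4, beta=0.8, iters=100, seed=0):
    """Annealed EM for a flat latent-class (aspect) model of dyadic counts.

    N[i, j] is the number of observations of the dyad (x_i, y_j).  The model
    is P(x, y) = sum_k P(k) P(x|k) P(y|k); in the annealed E-step the class
    posteriors are raised to the power beta < 1, which smooths the
    likelihood surface (beta -> 1 recovers standard EM).
    """
    rng = np.random.default_rng(seed)
    I, J = N.shape
    Pk = np.full(K, 1.0 / K)
    Px_k = rng.dirichlet(np.ones(I), size=K).T   # shape (I, K)
    Py_k = rng.dirichlet(np.ones(J), size=K).T   # shape (J, K)
    for _ in range(iters):
        # E-step: tempered posterior over classes for every dyad (i, j)
        post = (Pk[None, None, :] * Px_k[:, None, :] * Py_k[None, :, :]) ** beta
        post /= post.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate multinomial parameters from expected counts
        Nk = np.einsum('ij,ijk->k', N, post)
        Px_k = np.einsum('ij,ijk->ik', N, post) / (Nk + 1e-12)
        Py_k = np.einsum('ij,ijk->jk', N, post) / (Nk + 1e-12)
        Pk = Nk / N.sum()
    return Pk, Px_k, Py_k

# Small synthetic co-occurrence table, e.g. word-by-document counts
N = np.random.default_rng(1).poisson(2.0, size=(20, 30)).astype(float)
Pk, Px_k, Py_k = annealed_em_aspect(N)
print(Pk)
```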
Global Optimisation of Neural Network Models via Sequential Sampling
Freitas, João F. G. de, Niranjan, Mahesan, Doucet, Arnaud, Gee, Andrew H.
We propose a novel strategy for training neural networks using sequential sampling-importance-resampling algorithms. This global optimisation strategy allows us to learn the probability distribution of the network weights in a sequential framework. It is well suited to applications involving online, nonlinear, non-Gaussian or non-stationary signal processing. 1 INTRODUCTION This paper addresses sequential training of neural networks using powerful sampling techniques. Sequential techniques are important in many applications of neural networks involving real-time signal processing, where data arrival is inherently sequential. Furthermore, one might wish to adopt a sequential training strategy to deal with non-stationarity in signals, so that information from the recent past is lent more credence than information from the distant past.
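A minimal sketch of the underlying idea, treating the network weights as the state of a state-space model and tracking them with a generic sampling-importance-resampling (particle) filter; the tiny network, the random-walk transition and the noise levels are illustrative assumptions rather than the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(w, x, hidden=5):
    """Tiny 1-input, 1-output MLP whose weights are packed in the vector w."""
    W1 = w[:hidden].reshape(1, hidden); b1 = w[hidden:2*hidden]
    W2 = w[2*hidden:3*hidden].reshape(hidden, 1); b2 = w[3*hidden]
    return np.tanh(x @ W1 + b1) @ W2 + b2

n_w, n_particles = 16, 500
particles = rng.normal(scale=0.5, size=(n_particles, n_w))
weights = np.full(n_particles, 1.0 / n_particles)
sigma_obs, sigma_drift = 0.2, 0.02

# Stream of (x_t, y_t) pairs arriving sequentially
for t in range(200):
    x_t = rng.uniform(-2, 2, size=(1, 1))
    y_t = np.sin(x_t) + rng.normal(scale=sigma_obs)
    # 1. Propagate: random-walk transition on the network weights
    particles += rng.normal(scale=sigma_drift, size=particles.shape)
    # 2. Weight: Gaussian likelihood of the new observation
    preds = np.array([mlp(w, x_t)[0, 0] for w in particles])
    weights *= np.exp(-0.5 * ((y_t[0, 0] - preds) / sigma_obs) ** 2)
    weights /= weights.sum()
    # 3. Resample when the effective sample size gets small
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

x_test = np.array([[0.5]])
est = np.sum(weights * np.array([mlp(w, x_test)[0, 0] for w in particles]))
print("posterior-mean prediction at x=0.5:", est, "target:", np.sin(0.5))
```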
Finite-Dimensional Approximation of Gaussian Processes
Ferrari-Trecate, Giancarlo, Williams, Christopher K. I., Opper, Manfred
Gaussian process (GP) prediction suffers from O(n^3) scaling with the data set size n. By using a finite-dimensional basis to approximate the GP predictor, the computational complexity can be reduced. We derive optimal finite-dimensional predictors under a number of assumptions, and show the superiority of these predictors over the Projected Bayes Regression method (which is asymptotically optimal). We also show how to calculate the minimal model size for a given n. The calculations are backed up by numerical experiments.
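For intuition, the sketch below approximates the GP posterior mean with a finite set of m kernel basis functions (a subset-of-regressors style approximation), so the linear system solved is m x m rather than n x n; the kernel, the basis centres and the noise level are illustrative choices, not the paper's optimal predictors:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
n, m, noise = 500, 25, 0.1                 # n data points, m << n basis centres
X = rng.uniform(-3, 3, n)
y = np.sin(X) + noise * rng.normal(size=n)
centres = np.linspace(-3, 3, m)

# Represent the predictor in the span of m kernel basis functions, so the
# dominant cost is O(n m^2) instead of the O(n^3) of exact GP regression.
Knm = rbf(X, centres)                      # n x m
Kmm = rbf(centres, centres)                # m x m
A = Knm.T @ Knm + noise ** 2 * Kmm         # m x m system instead of n x n
alpha = np.linalg.solve(A + 1e-8 * np.eye(m), Knm.T @ y)

X_test = np.linspace(-3, 3, 7)
mean_pred = rbf(X_test, centres) @ alpha
print(np.c_[X_test, mean_pred, np.sin(X_test)])
```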
Probabilistic Image Sensor Fusion
Sharma, Ravi K., Leen, Todd K., Pavel, Misha
We present a probabilistic method for fusion of images produced by multiple sensors. The approach is based on an image formation model in which the sensor images are noisy, locally linear functions of an underlying, true scene. A Bayesian framework then provides for maximum likelihood or maximum a posteriori estimates of the true scene from the sensor images. Maximum likelihood estimates of the parameters of the image formation model involve (local) second order image statistics, and thus are related to local principal component analysis. We demonstrate the efficacy of the method on images from visible-band and infrared sensors. 1 Introduction Advances in sensing devices have fueled the deployment of multiple sensors in several computational vision systems [1, for example].
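The connection to local principal component analysis can be illustrated with the simplified fusion rule below: within each block, the two co-registered sensor values at a pixel are projected onto the leading eigenvector of their local 2x2 covariance. This is only a sketch of the PCA flavour of the approach, not the paper's maximum likelihood or MAP estimator; the block size and the synthetic images are assumptions:

```python
import numpy as np

def fuse_local_pca(img_a, img_b, block=8):
    """Fuse two co-registered sensor images by local principal components.

    In each block the two sensor values at every pixel form a 2-vector;
    projecting onto the leading eigenvector of the local 2x2 covariance
    yields a single fused value that captures most of the local signal
    variance.
    """
    fused = np.zeros_like(img_a, dtype=float)
    H, W = img_a.shape
    for i in range(0, H, block):
        for j in range(0, W, block):
            a = img_a[i:i+block, j:j+block].astype(float)
            b = img_b[i:i+block, j:j+block].astype(float)
            stacked = np.stack([a.ravel() - a.mean(), b.ravel() - b.mean()])
            cov = stacked @ stacked.T / stacked.shape[1]
            vals, vecs = np.linalg.eigh(cov)
            w = vecs[:, -1]                  # leading eigenvector
            w = w if w.sum() >= 0 else -w    # fix the sign convention
            fused[i:i+block, j:j+block] = w[0] * a + w[1] * b
    return fused

# Toy example: a visible-band-like and an infrared-like rendering of a scene
rng = np.random.default_rng(0)
scene = rng.normal(size=(64, 64))
visible = 0.8 * scene + 0.1 * rng.normal(size=scene.shape)
infrared = -0.6 * scene + 0.2 * rng.normal(size=scene.shape)
print(fuse_local_pca(visible, infrared).shape)
```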