Sensing and Signal Processing
Hypothesis Testing in Speckled Data with Stochastic Distances
Nascimento, Abraão D. C., Cintra, Renato J., Frery, Alejandro C.
Images obtained with coherent illumination, as is the case of sonar, ultrasound-B, laser, and Synthetic Aperture Radar (SAR), are affected by speckle noise, which reduces the ability to extract information from the data. Specialized techniques are required to deal with such imagery, which has been modeled by the G0 distribution; under this model, regions with different degrees of roughness and mean brightness can be characterized by two parameters, while a third parameter, the number of looks, is related to the overall signal-to-noise ratio. Assessing distances between samples is an important step in image analysis: they provide grounds for evaluating separability and, therefore, the performance of classification procedures. This work derives and compares eight stochastic distances and assesses the performance of hypothesis tests that employ them together with maximum likelihood estimation. We conclude that tests based on the triangular distance have the empirical size closest to the theoretical one, while those based on the arithmetic-geometric distance have the best power. Since the power of tests based on the triangular distance is close to optimal, we conclude that the safest choice is to use this distance for hypothesis testing, even when compared with classical distances such as Kullback-Leibler and Bhattacharyya.
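As a point of reference for the kind of quantity involved (the notation below is generic and not necessarily the paper's), the triangular distance between two probability densities f_X and f_Y admits the standard form

    d_T(f_X, f_Y) = \int \frac{\left( f_X(x) - f_Y(x) \right)^2}{f_X(x) + f_Y(x)} \, \mathrm{d}x,

and the tests studied here are obtained by evaluating such distances between G0 models fitted by maximum likelihood to the two samples, then comparing a suitably scaled version of the result against its asymptotic null distribution.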
Vision-Based Navigation I: A navigation filter for fusing DTM/correspondence updates
Kupervasser, Oleg, Voronov, Vladimir
An algorithm for pose and motion estimation using corresponding features in images and a digital terrain map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables recovering the absolute position and orientation of the camera. To do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. The utilization of DTM data is shown to improve the robustness and accuracy of the inertial navigation algorithm. An extended Kalman filter is used to combine the results of the inertial navigation algorithm and the proposed vision-based navigation algorithm. The feasibility of these algorithms is established through numerical simulations.
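A minimal sketch of the fusion step, assuming a generic Kalman-style measurement update in Python (the paper's state vector, measurement model, and DTM constraint are richer than what is shown here):

    import numpy as np

    def ekf_update(x_pred, P_pred, z, H, R):
        """Fuse an inertial prediction (x_pred, P_pred) with a vision-derived
        measurement z (e.g., pose information recovered from DTM/feature
        correspondences) using the standard Kalman measurement update."""
        y = z - H @ x_pred                   # innovation
        S = H @ P_pred @ H.T + R             # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
        x_upd = x_pred + K @ y
        P_upd = (np.eye(len(x_pred)) - K @ H) @ P_pred
        return x_upd, P_upd

The actual filter linearizes a nonlinear measurement model around the current estimate; the linear update above only illustrates how the two information sources are combined.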
Nonparametric Edge Detection in Speckled Imagery
Girón, Edwin, Frery, Alejandro C., Cribari-Neto, Francisco
We address the issue of edge detection in Synthetic Aperture Radar imagery. In particular, we propose nonparametric methods for edge detection and numerically compare them to an alternative method that has been recently proposed in the literature. Our results show that some of the proposed methods perform better and are computationally simpler than the existing method. An application to real (not simulated) data is presented and discussed.
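A minimal sketch of one possible nonparametric edge-point detector, here using a rank-based two-sample test along a transect of pixel intensities (the statistics compared in the paper may differ; this only illustrates the general idea of locating a transition point without assuming a parametric speckle model):

    import numpy as np
    from scipy.stats import mannwhitneyu

    def edge_point(strip, min_size=5):
        """Return the split index along a 1-D intensity transect whose two
        sides differ most under the Mann-Whitney rank test."""
        best_j, best_p = None, 1.0
        for j in range(min_size, len(strip) - min_size):
            _, p = mannwhitneyu(strip[:j], strip[j:], alternative="two-sided")
            if p < best_p:
                best_j, best_p = j, p
        return best_j, best_p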
Generalized Statistical Complexity of SAR Imagery
de Almeida, Eliana S., de Medeiros, Antonio Carlos, Rosso, Osvaldo A., Frery, Alejandro C.
A new generalized Statistical Complexity Measure (SCM) was proposed by Rosso et al. in 2010. It is a functional that captures the notions of order/disorder and of distance to an equilibrium distribution. The former is computed by a measure of entropy, while the latter depends on the definition of a stochastic divergence. When the scene is illuminated by coherent radiation, image data are corrupted by speckle noise, as is the case of ultrasound-B, sonar, laser, and Synthetic Aperture Radar (SAR) sensors. In the amplitude and intensity formats, this noise is multiplicative and non-Gaussian, thus requiring specialized techniques for image processing and understanding. One of the most successful families of models for describing these images is the Multiplicative Model, which leads, among other probability distributions, to the G0 law. This distribution has been validated in the literature as an expressive and tractable model, deserving the "universal" denomination for its ability to describe most types of targets. In order to compute the statistical complexity of a site in an image corrupted by speckle noise, we assume that the equilibrium distribution is that of fully developed speckle, namely the Gamma law in intensity format, which appears in areas with little or no texture. We use the Shannon entropy along with the Hellinger distance to measure the statistical complexity of intensity SAR images, and we show that it is an expressive feature capable of identifying many types of targets.
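A minimal sketch of the entropy-times-distance form of the complexity measure, written for discrete histograms (the paper works with continuous G0 and Gamma models and the Hellinger distance between them; the function names below are illustrative):

    import numpy as np

    def shannon_entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def hellinger(p, q):
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    def statistical_complexity(p, q_equilibrium):
        """Normalized Shannon entropy of p times its Hellinger distance to
        the equilibrium distribution q_equilibrium."""
        h = shannon_entropy(p) / np.log(len(p))  # normalized to [0, 1]
        return h * hellinger(p, q_equilibrium)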
Decentralized Data Fusion and Active Sensing with Mobile Sensors for Modeling and Predicting Spatiotemporal Traffic Phenomena
Chen, Jie, Low, Kian Hsiang, Tan, Colin Keng-Yan, Oran, Ali, Jaillet, Patrick, Dolan, John M., Sukhatme, Gaurav S.
The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications such as detecting and forecasting congestion hotspots. This paper presents a decentralized data fusion and active sensing (D2FAS) algorithm for mobile sensors to actively explore the road network to gather and assimilate the most informative data for predicting the traffic phenomenon. We analyze the time and communication complexity of D2FAS and demonstrate that it can scale well with a large number of observations and sensors. We provide a theoretical guarantee that its predictive performance is equivalent to that of a sophisticated centralized sparse approximation of the Gaussian process (GP) model: the computation of such a sparse approximate GP model can thus be parallelized and distributed among the mobile sensors (in a Google-like MapReduce paradigm), thereby achieving efficient and scalable prediction. We also provide a theoretical guarantee on its active sensing performance, which improves under various practical environmental conditions. Empirical evaluation on real-world urban road network data shows that our D2FAS algorithm is significantly more time-efficient and scalable than state-of-the-art centralized algorithms while achieving comparable predictive performance.
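A minimal sketch of the centralized Gaussian process prediction that underlies the approach (the decentralized fusion and active-sensing machinery of D2FAS, and the sparse approximation it parallelizes, are not shown; the RBF kernel below is only a placeholder):

    import numpy as np

    def rbf(A, B, lengthscale=1.0, signal_var=1.0):
        """Squared-exponential kernel between row-wise input sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

    def gp_predict(X, y, X_star, noise_var=0.1):
        """Exact GP posterior mean and variance at test inputs X_star given
        observations (X, y)."""
        K = rbf(X, X) + noise_var * np.eye(len(X))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        K_s = rbf(X_star, X)
        mean = K_s @ alpha
        v = np.linalg.solve(L, K_s.T)
        var = rbf(X_star, X_star).diagonal() - np.sum(v * v, axis=0)
        return mean, var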
Modeling Images using Transformed Indian Buffet Processes
Zhai, Ke, Hu, Yuening, Williamson, Sinead, Boyd-Graber, Jordan
Latent feature models are attractive for image modeling, since images generally contain multiple objects. However, many latent feature models ignore that objects can appear at different locations or require pre-segmentation of images. While the transformed Indian buffet process (tIBP) provides a method for modeling transformation-invariant features in unsegmented binary images, its current form is inappropriate for real images because of its computational cost and modeling assumptions. We combine the tIBP with likelihoods appropriate for real images and develop an efficient inference scheme, based on the cross-correlation between images and features, that is theoretically and empirically faster than existing inference techniques. Our method discovers reasonable components and achieves effective image reconstruction in natural images.
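A minimal sketch of the cross-correlation trick that makes inference over translations efficient: a feature is scored against an image at every shift in one FFT-based operation rather than once per transformation (the full model, with its latent feature assignments and real-image likelihoods, is not shown):

    import numpy as np
    from scipy.signal import fftconvolve

    def translation_scores(image, feature):
        """Cross-correlate `feature` with `image`; flipping the feature turns
        convolution into correlation, so every translation is scored at once."""
        return fftconvolve(image, feature[::-1, ::-1], mode="same")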
Deep Lambertian Networks
Tang, Yichuan, Salakhutdinov, Ruslan, Hinton, Geoffrey
Visual perception is a challenging problem in part due to illumination variations. A possible solution is to first estimate an illumination-invariant representation before using it for recognition. The object albedo and surface normals are examples of such representations. In this paper, we introduce a multilayer generative model whose latent variables include the albedo, surface normals, and the light source. Combining Deep Belief Nets with the Lambertian reflectance assumption, our model can learn good priors over the albedo from 2D images. Illumination variations can be explained by changing only the lighting latent variable in our model. By transferring learned knowledge from similar objects, our model can estimate the albedo and surface normals from a single image. Experiments demonstrate that our model is able to generalize as well as to improve over standard baselines in one-shot face recognition.
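A minimal sketch of the Lambertian reflectance assumption the model builds on, I(x, y) = albedo(x, y) * max(0, n(x, y) . l), where n is the surface normal and l the light-source direction (the generative model places learned priors over all three quantities; this is only the rendering step):

    import numpy as np

    def lambertian_render(albedo, normals, light):
        """albedo: (H, W); normals: (H, W, 3) unit vectors; light: (3,).
        Returns the (H, W) image predicted by the Lambertian model."""
        shading = np.clip(normals @ light, 0.0, None)  # clamp back-facing points
        return albedo * shading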
Leaf vein segmentation using Odd Gabor filters and morphological operations
Leaf veins form the basis of leaf characterization and classification, and different species have different leaf vein patterns. Leaf vein segmentation makes it possible to maintain a record of leaves according to their specific vein patterns, providing an effective way to store and retrieve information on plant species in a database, as well as a means to characterize plants on the basis of their leaf vein structure, which is unique to every species. This work proposes a new way of segmenting leaf veins using odd Gabor filters, with morphological operations applied to improve the output. The odd Gabor filter produces an efficient output and is more robust and scalable than existing techniques, as it detects the fine, fiber-like veins present in leaves much more effectively.
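A minimal sketch of the filtering pipeline, assuming a sine-phase (odd) Gabor kernel followed by a simple threshold and morphological closing (the parameter values and the exact morphological operations used in the paper may differ):

    import numpy as np
    from scipy import ndimage

    def odd_gabor_kernel(size, sigma, theta, wavelength, gamma=0.5):
        """Antisymmetric (sine-phase) Gabor kernel at orientation theta."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
        return envelope * np.sin(2 * np.pi * xr / wavelength)

    def segment_veins(gray, thetas, size=21, sigma=3.0, wavelength=8.0):
        """gray: float-valued grayscale image.  Take the maximum odd-Gabor
        response over orientations, threshold it, and clean the binary vein
        map with a morphological closing."""
        responses = [np.abs(ndimage.convolve(gray, odd_gabor_kernel(size, sigma, t, wavelength)))
                     for t in thetas]
        strength = np.max(responses, axis=0)
        veins = strength > strength.mean() + strength.std()
        return ndimage.binary_closing(veins, structure=np.ones((3, 3)))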
Quantitative Concept Analysis
Formal Concept Analysis (FCA) begins from a context, given as a binary relation between some objects and some attributes, and derives a lattice of concepts, where each concept is given as a set of objects and a set of attributes, such that the first set consists of all objects that satisfy all attributes in the second, and vice versa. Many applications, though, provide contexts with quantitative information, telling not just whether an object satisfies an attribute, but also quantifying this satisfaction. Contexts in this form arise as rating matrices in recommender systems, as occurrence matrices in text analysis, as pixel intensity matrices in digital image processing, etc. Such applications have attracted a lot of attention, and several numeric extensions of FCA have been proposed. We propose the framework of proximity sets (proxets), which subsume partially ordered sets (posets) as well as metric spaces. One feature of this approach is that it extracts from quantified contexts quantified concepts, and thus allows full use of the available information. Another feature is that the categorical approach allows analyzing any universal properties that the classical FCA and the new versions may have, and thus provides structural guidance for aligning and combining the approaches.
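A minimal sketch of the classical, binary derivation operators that the proxet framework generalizes to quantified contexts (the quantitative machinery itself is not shown; the context representation below is illustrative):

    def common_attributes(objs, context):
        """Attributes shared by every object in objs (the FCA derivation
        operator); context is a dict with 'objects', 'attributes', and a set
        'incidence' of (object, attribute) pairs."""
        return {a for a in context["attributes"]
                if all((o, a) in context["incidence"] for o in objs)}

    def common_objects(attrs, context):
        """Objects satisfying every attribute in attrs."""
        return {o for o in context["objects"]
                if all((o, a) in context["incidence"] for a in attrs)}

    # (objs, attrs) is a formal concept exactly when
    # common_attributes(objs, context) == attrs and
    # common_objects(attrs, context) == objs.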
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Bioucas-Dias, José M., Plaza, Antonio, Dobigeon, Nicolas, Parente, Mario, Du, Qian, Gader, Paul, Chanussot, Jocelyn
Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.
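A minimal sketch of abundance estimation under the linear mixing model y ≈ M a with nonnegative abundances, using an augmented nonnegative least-squares problem to softly enforce the sum-to-one constraint (a common fully constrained least-squares device; the geometrical, statistical, and sparse-regression algorithms surveyed in the paper go well beyond this):

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(M, y, delta=1e3):
        """M: (bands, p) endmember signature matrix; y: (bands,) observed
        spectrum.  Returns the (p,) nonnegative abundance vector, with
        sum-to-one enforced softly via a heavily weighted row of ones."""
        M_aug = np.vstack([M, delta * np.ones(M.shape[1])])
        y_aug = np.append(y, delta)
        abundances, _ = nnls(M_aug, y_aug)
        return abundances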