Crowd Saliency Detection via Global Similarity Structure

arXiv.org Machine Learning

It is common for CCTV operators to overlook interesting events taking place within a crowd due to the large number of people in the crowded scene (e.g., a marathon or rally). Thus, there is a dire need to automate the detection of salient crowd regions requiring immediate attention, for more effective and proactive surveillance. This paper proposes a novel framework to identify and localize salient regions in a crowd scene by transforming low-level features extracted from the crowd motion field into a global similarity structure. The global similarity structure representation allows the discovery of the intrinsic manifold of the motion dynamics, which could not be captured by the low-level representation. Ranking is then performed on the global similarity structure to identify a set of extrema. The proposed approach is unsupervised, so the learning stage is eliminated. Experimental results on public datasets demonstrate the effectiveness of exploiting such extrema in identifying salient regions in various crowd scenarios that exhibit crowding, local irregular motion, and unique motion areas such as sources and sinks.
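
To make the pipeline concrete, here is a minimal Python sketch of one possible reading of it: per-region motion descriptors are turned into a pairwise similarity graph, a spectral embedding stands in for the "global similarity structure", and regions are ranked by how extreme they are in that embedding. The descriptor layout, kernel choice, and scoring rule are all assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def salient_regions(motion_feats, n_eig=3, top_k=5):
    """Toy sketch: rank crowd regions via a global similarity structure.

    motion_feats: (n_regions, d) array of low-level motion descriptors
    (e.g., mean optical-flow magnitude/orientation per region).
    Returns indices of the top_k most 'extreme' regions.
    """
    # Pairwise affinities (Gaussian kernel on descriptor distances).
    d2 = ((motion_feats[:, None, :] - motion_feats[None, :, :]) ** 2).sum(-1)
    sigma2 = np.median(d2) + 1e-12
    W = np.exp(-d2 / sigma2)

    # Spectral embedding of the similarity graph: one simple notion
    # of a "global similarity structure".
    D = np.diag(W.sum(1))
    L = D - W
    vals, vecs = np.linalg.eigh(L)
    embed = vecs[:, 1:1 + n_eig]          # skip the trivial eigenvector

    # Rank by distance from the bulk of the embedding; extrema = salient.
    score = np.linalg.norm(embed - embed.mean(0), axis=1)
    return np.argsort(score)[::-1][:top_k]

# Example on random descriptors (a stand-in for real crowd motion features).
feats = np.random.rand(50, 4)
print(salient_regions(feats))
```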


A Fusion Approach for Efficient Human Skin Detection

arXiv.org Machine Learning

A reliable human skin detection method that is adaptable to different human skin colours and illumination conditions is essential for better human skin segmentation. Even though different human skin colour detection solutions have been successfully applied, they are prone to false skin detection and are not able to cope with the variety of human skin colours across different ethnic groups. Moreover, existing methods incur high computational cost. In this paper, we propose a novel human skin detection approach that combines a smoothed 2-D histogram and a Gaussian model for automatic human skin detection in colour images. In our approach, an eye detector is used to refine the skin model for a specific person. The proposed approach reduces computational cost, as no training is required, and it improves the accuracy of skin detection despite wide variation in ethnicity and illumination. To the best of our knowledge, this is the first method to employ a fusion strategy for this purpose. Qualitative and quantitative results on three standard public datasets, and a comparison with state-of-the-art methods, show the effectiveness and robustness of the proposed approach.
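
A minimal sketch of the fusion idea follows, assuming a normalised r-g chromaticity space, a product rule for fusing the two cues, and a set of known skin pixels; the paper's eye-detector refinement step is omitted and all parameter choices are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def skin_probability(img_rgb, skin_samples, bins=64):
    """Toy fusion sketch: smoothed 2-D histogram x Gaussian model.

    img_rgb:      (H, W, 3) uint8 image
    skin_samples: (N, 3) uint8 RGB pixels known to be skin
    Returns an (H, W) map of per-pixel skin scores.
    """
    def to_rg(px):                       # normalised r-g chromaticity
        px = px.astype(float) + 1e-6
        return (px / px.sum(-1, keepdims=True))[..., :2]

    rg_skin = to_rg(skin_samples)
    rg_img = to_rg(img_rgb.reshape(-1, 3))

    # Cue 1: smoothed 2-D histogram of skin chromaticities.
    hist, _, _ = np.histogram2d(rg_skin[:, 0], rg_skin[:, 1],
                                bins=bins, range=[[0, 1], [0, 1]])
    hist = gaussian_filter(hist, sigma=1.0)
    hist /= hist.max() + 1e-12
    xi = np.clip((rg_img[:, 0] * bins).astype(int), 0, bins - 1)
    yi = np.clip((rg_img[:, 1] * bins).astype(int), 0, bins - 1)
    p_hist = hist[xi, yi]

    # Cue 2: single Gaussian over skin chromaticities.
    mu, cov = rg_skin.mean(0), np.cov(rg_skin.T) + 1e-6 * np.eye(2)
    diff = rg_img - mu
    p_gauss = np.exp(-0.5 * np.sum(diff @ np.linalg.inv(cov) * diff, axis=1))

    # Fusion: combine both cues (here a simple product).
    return (p_hist * p_gauss).reshape(img_rgb.shape[:2])
```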


Scene Image is Non-Mutually Exclusive - A Fuzzy Qualitative Scene Understanding

arXiv.org Artificial Intelligence

Ambiguity or uncertainty is a pervasive element of many real-world decision-making processes. Variation in decisions is the norm when the same problem is posed to different subjects. Psychological and metaphysical research has shown that human decision making is subjective; it is influenced by many factors such as experience, age, and background. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax the problem by assuming that scene images are mutually exclusive and therefore focus on developing different approaches to perform binary classification tasks. In this paper, we show that scene images are non-mutually exclusive, and we propose the Fuzzy Qualitative Rank Classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of a binary decision. Qualitative and quantitative evaluations on large and challenging public scene datasets show the effectiveness of our proposed method in modeling non-mutually exclusive scene images.
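
The central output difference (a ranking over classes instead of a hard label) can be illustrated with a minimal sketch: however per-class relevance scores are obtained, they are normalised into graded, non-exclusive memberships and returned as an ordered list. This is only an illustration of the ranking interpretation, not the FQRC inference itself.

```python
import numpy as np

def rank_interpretation(class_scores, class_names):
    """Toy sketch: ranking interpretation instead of a binary decision.

    class_scores: non-negative relevance scores, one per scene class.
    Returns a ranked list of (class, membership degree) pairs rather
    than a single winning label.
    """
    s = np.asarray(class_scores, dtype=float)
    degrees = s / (s.sum() + 1e-12)           # graded, non-exclusive memberships
    order = np.argsort(degrees)[::-1]
    return [(class_names[i], round(float(degrees[i]), 3)) for i in order]

# A street scene may legitimately belong to several categories at once.
print(rank_interpretation([0.7, 0.6, 0.1], ["street", "inside_city", "forest"]))
```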


Sparsity Based Poisson Denoising with Dictionary Learning

arXiv.org Machine Learning

The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist to convert the Poisson noise into additive i.i.d. Gaussian noise, for which many effective algorithms are available. However, in the low-SNR regime these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. A recent work by Salmon et al. took this route, proposing a patch-based exponential image representation model based on a GMM (Gaussian mixture model) and leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.
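
For context, the variance-stabilising route the abstract contrasts itself with can be sketched in a few lines: apply the Anscombe transform, run any Gaussian denoiser, and invert. The placeholder mean filter below is purely illustrative; the paper's sparse-coding scheme is not shown.

```python
import numpy as np

def anscombe(x):
    """Variance-stabilising transform: Poisson counts -> approx. unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (biased at low counts, which is exactly
    where the abstract says such transforms break down)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def denoise_poisson_high_snr(noisy_counts, gaussian_denoiser):
    """Baseline pipeline: stabilise -> Gaussian denoiser -> invert."""
    return inverse_anscombe(gaussian_denoiser(anscombe(noisy_counts)))

# Example with a placeholder 'denoiser' (a crude local mean filter).
rng = np.random.default_rng(0)
clean = np.full((64, 64), 40.0)                  # high-count (high-SNR) image
noisy = rng.poisson(clean).astype(float)
blur = lambda z: (z + np.roll(z, 1, 0) + np.roll(z, -1, 0)
                    + np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 5.0
print(np.abs(denoise_poisson_high_snr(noisy, blur) - clean).mean())
```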


Presence-absence reasoning for evolutionary phenotypes

arXiv.org Artificial Intelligence

Nearly invariably, phenotypes are reported in the scientific literature in meticulous detail, utilizing the full expressivity of natural language. Often it is particularly these detailed observations (facts) that are of interest, and thus specific to the research questions that motivated observing and reporting them. However, research aiming to synthesize or integrate phenotype data across many studies or even fields is often faced with the need to abstract from detailed observations so as to construct phenotypic concepts that are common across many datasets rather than specific to a few. Yet, observations or facts that would fall under such abstracted concepts are typically not directly asserted by the original authors, usually because they are "obvious" according to common domain knowledge, and thus asserting them would be deemed redundant by anyone with sufficient domain knowledge. For example, a phenotype describing the length of a manual digit for an organism implicitly means that the organism must have had a hand, and thus a forelimb; the presence or absence of a forelimb may have supporting data across a far wider range of taxa than the length of a particular manual digit. Here we describe how within the Phenoscape project we use a pipeline of OWL axiom generation and reasoning steps to infer taxon-specific presence/absence of anatomical entities from anatomical phenotypes. Although presence/absence is all but one, and a seemingly simple way to abstract phenotypes across data sources, it can nonetheless be powerful for linking genotype to phenotype, and it is particularly relevant for constructing synthetic morphological supermatrices for comparative analysis; in fact presence/absence is one of the prevailing character observation types in published character matrices.
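
The digit-to-forelimb example amounts to a transitive inference over parthood axioms. The toy sketch below illustrates that inference in plain Python over an assumed parthood chain; it is not the project's OWL pipeline, and the edges listed are illustrative, not Phenoscape data.

```python
# Toy stand-in for the OWL reasoning step: if a phenotype mentions an
# anatomical entity, infer presence of everything it is (transitively) part of.
PART_OF = {
    "manual digit 1": "manus",      # a digit is part of the hand (manus)
    "manus": "forelimb",            # the hand is part of the forelimb
    "forelimb": "appendicular skeleton",
}

def inferred_presences(mentioned_entity):
    """Return all entities whose presence is implied by mentioning one entity."""
    present = [mentioned_entity]
    while present[-1] in PART_OF:
        present.append(PART_OF[present[-1]])
    return present

# A phenotype about the length of manual digit 1 implies all of these are present:
print(inferred_presences("manual digit 1"))
```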


Computational Understanding and Manipulation of Symmetries

arXiv.org Artificial Intelligence

For natural and artificial systems with some symmetry structure, computational understanding and manipulation can be achieved without learning by exploiting the algebraic structure. Here we describe this algebraic coordinatization method and apply it to permutation puzzles. Coordinatization yields a structural understanding, not just solutions for the puzzles.
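
As a small illustration of the raw material such an approach works on, the sketch below treats puzzle moves as permutations and composes them; the hierarchical coordinatization itself is beyond this sketch and is only named here as the technique the abstract describes.

```python
# Toy illustration: puzzle moves as permutations of positions.
# Composing moves = multiplying permutations; the algebraic coordinatization
# described above operates on exactly this kind of structure.
def compose(p, q):
    """Permutation composition: apply q first, then p (tuples of indices)."""
    return tuple(p[q[i]] for i in range(len(p)))

identity = (0, 1, 2, 3)
swap01   = (1, 0, 2, 3)          # a 'move' exchanging positions 0 and 1
cycle    = (1, 2, 3, 0)          # a 'move' cycling all four positions

scramble = compose(swap01, cycle)
print(scramble)                   # state after applying cycle, then swap01
print(compose(swap01, swap01))    # a move of order 2 returns to the identity
```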


Efficient Modeling and Forecasting of the Electricity Spot Price

arXiv.org Machine Learning

The increasing importance of renewable energy, especially solar and wind power, has led to new forces in the formation of electricity prices. Hence, this paper introduces an econometric model for the hourly time series of electricity prices of the European Power Exchange (EPEX) which incorporates specific features like renewable energy. The model consists of several sophisticated and established approaches and can be regarded as a periodic VAR-TARCH with wind power, solar power, and load as influences on the time series. It is able to map the distinct and well-known features of electricity prices in Germany. An efficient iteratively reweighted lasso approach is used for the estimation. Moreover, it is shown that several existing models are outperformed by the procedure developed in this paper.
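
A minimal sketch of the general modelling idea (regressing hourly prices on their own lags plus wind, solar, and load under a lasso penalty) is given below on synthetic data. It uses scikit-learn's plain Lasso and hand-picked lags as assumptions; it is not the paper's periodic VAR-TARCH model or its iteratively reweighted estimator.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T = 24 * 200                                   # synthetic hourly sample
wind, solar, load = rng.random((3, T))
price = np.zeros(T)
for t in range(24, T):                         # toy data-generating process
    price[t] = (0.5 * price[t - 1] + 0.2 * price[t - 24]
                - 1.0 * wind[t] - 0.8 * solar[t] + 1.5 * load[t]
                + 0.1 * rng.standard_normal())

# Design matrix: own lags (1 h and 24 h) plus renewable infeed and load.
X = np.column_stack([price[23:-1], price[:-24], wind[24:], solar[24:], load[24:]])
y = price[24:]

model = Lasso(alpha=0.01).fit(X, y)            # sparse linear price model
print(dict(zip(["lag1", "lag24", "wind", "solar", "load"],
               np.round(model.coef_, 2))))
```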


Propagation Kernels

arXiv.org Machine Learning

We introduce propagation kernels, a general graph-kernel framework for efficiently measuring the similarity of structured data. Propagation kernels are based on monitoring how information spreads through a set of given graphs. They leverage early-stage distributions from propagation schemes such as random walks to capture structural information encoded in node labels, attributes, and edge information. This has two benefits. First, off-the-shelf propagation schemes can be used to naturally construct kernels for many graph types, including labeled, partially labeled, unlabeled, directed, and attributed graphs. Second, by leveraging existing efficient and informative propagation schemes, propagation kernels can be considerably faster than state-of-the-art approaches without sacrificing predictive performance. We will also show that if the graphs at hand have a regular structure, for instance when modeling image or video data, one can exploit this regularity to scale the kernel computation to large databases of graphs with thousands of nodes. We support our contributions by exhaustive experiments on a number of real-world graphs from a variety of application domains.
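
The core mechanism can be sketched compactly: propagate node label distributions for a few iterations, bin each intermediate distribution, and compare the resulting count vectors across graphs. In the sketch below, crude rounding stands in for the locality-sensitive hashing used in practice, and the random-walk scheme and iteration count are assumptions.

```python
import numpy as np
from collections import Counter

def propagation_kernel(graphs, t_max=3, decimals=1):
    """Toy propagation kernel between graphs.

    graphs: list of (A, labels) with A an (n, n) adjacency matrix and
    labels an (n, k) one-hot (or soft) node-label matrix.
    Returns the (len(graphs), len(graphs)) kernel matrix.
    """
    feature_counts = [Counter() for _ in graphs]
    states = []
    for A, labels in graphs:
        P = A / np.maximum(A.sum(1, keepdims=True), 1e-12)   # row-stochastic walk
        states.append((P, labels.astype(float)))

    for t in range(t_max):
        for i, (P, L) in enumerate(states):
            # Bin each node's label distribution (crude stand-in for LSH).
            for row in np.round(L, decimals):
                feature_counts[i][tuple(row)] += 1
            states[i] = (P, P @ L)                            # one propagation step

    K = np.zeros((len(graphs), len(graphs)))
    for i, ci in enumerate(feature_counts):
        for j, cj in enumerate(feature_counts):
            K[i, j] = sum(ci[f] * cj[f] for f in ci)          # count-vector dot product
    return K
```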


Neural Hypernetwork Approach for Pulmonary Embolism diagnosis

arXiv.org Machine Learning

This work introduces an integrative approach that combines Q-analysis with machine learning. The new approach, called a Neural Hypernetwork, has been applied to a case study of pulmonary embolism diagnosis. The objective of applying the neural hypernetwork to pulmonary embolism (PE) is to improve diagnosis and thereby reduce the number of CT angiographies needed. Hypernetworks, based on topological simplicial complexes, generalize the concept of a two-body relation to many-body relations. Furthermore, hypernetworks provide a significant generalization of network theory, enabling the integration of relational structure, logic, and analytic dynamics. Another important result is that Q-analysis stays close to the data, while other approaches manipulate the data, projecting them into metric spaces or applying filtering functions to highlight the intrinsic relations. A pulmonary embolism is a blockage of the main artery of the lung or one of its branches, and it is frequently fatal. Our study uses data on 28 diagnostic features of 1,427 people considered to be at risk of PE. The resulting neural hypernetwork correctly recognized 94% of those developing a PE. This is better than previous results obtained with other methods (statistical selection of features, partial least squares regression, topological data analysis in a metric space).
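
One building block of Q-analysis that the abstract relies on is q-nearness between simplices: two simplices are q-near if they share at least q+1 vertices. The sketch below illustrates this with patients represented by the sets of features they exhibit; the feature names are illustrative, not the study's 28 diagnostic features, and this is not the full neural hypernetwork.

```python
# Toy Q-analysis building block: q-nearness between simplices.
def q_nearness(simplex_a, simplex_b):
    """Two simplices are q-near iff they share a q-dimensional face,
    i.e. at least q+1 common vertices. Returns that q (-1 if disjoint)."""
    return len(set(simplex_a) & set(simplex_b)) - 1

patient_1 = {"dyspnea", "tachycardia", "high_d_dimer", "chest_pain"}
patient_2 = {"dyspnea", "tachycardia", "high_d_dimer"}
patient_3 = {"fever", "cough"}

print(q_nearness(patient_1, patient_2))   # 2 -> they share a 2-dimensional face
print(q_nearness(patient_1, patient_3))   # -1 -> not connected at any q
```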


PAC-Bayesian AUC classification and scoring

arXiv.org Machine Learning

We develop a scoring and classification procedure based on the PAC-Bayesian approach and the AUC (Area Under Curve) criterion. We focus initially on the class of linear score functions. We derive PAC-Bayesian non-asymptotic bounds for two types of prior for the score parameters: a Gaussian prior, and a spike-and-slab prior; the latter makes it possible to perform feature selection. One important advantage of our approach is that it is amenable to powerful Bayesian computational tools. We derive in particular a Sequential Monte Carlo algorithm, as an efficient method which may be used as a gold standard, and an Expectation-Propagation algorithm, as a much faster but approximate method. We also extend our method to a class of non-linear score functions, essentially leading to a nonparametric procedure, by considering a Gaussian process prior.
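
The quantity being targeted for a linear score function is the AUC, i.e. the fraction of (positive, negative) pairs the score orders correctly. The sketch below computes that empirical criterion on synthetic data; the PAC-Bayesian bounds, priors, and SMC/EP algorithms are not shown, and the data-generating choices are assumptions.

```python
import numpy as np

def empirical_auc(theta, X, y):
    """Empirical AUC of the linear score s(x) = theta . x:
    fraction of (positive, negative) pairs ranked in the right order."""
    scores = X @ theta
    pos, neg = scores[y == 1], scores[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
theta_true = np.array([1.0, -2.0, 0.0, 0.0, 0.5])     # a sparse 'true' score
y = (X @ theta_true + 0.5 * rng.standard_normal(200) > 0).astype(int)

print(empirical_auc(theta_true, X, y))                 # near-optimal ordering
print(empirical_auc(rng.standard_normal(5), X, y))     # a random score does worse
```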