Goto

 Information Technology


A stochastic model of human visual attention with a dynamic Bayesian network

arXiv.org Machine Learning

Recent studies in human vision science suggest that human responses to the stimuli on a visual display are non-deterministic: people may attend to different locations on the same visual input at the same time. Based on this knowledge, we propose a new stochastic model of visual attention that introduces a dynamic Bayesian network to predict the likelihood of where humans typically focus on a video scene. The proposed model is composed of a dynamic Bayesian network with four layers. Our model provides a framework that simulates and combines the visual saliency response and the cognitive state of a person to estimate the most probable attended regions. Sample-based inference with a Markov chain Monte Carlo (MCMC)-based particle filter and stream processing with multi-core processors enable us to estimate human visual attention in near real time. Experimental results demonstrate that our model predicts human visual attention significantly better than previous deterministic models.
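
A minimal sketch of the sample-based inference idea, assuming a bootstrap particle filter over attended (row, col) locations driven by a per-frame saliency map; the dynamics and observation model here are simple stand-ins for the paper's four-layer network, not the authors' implementation:

    import numpy as np

    def particle_filter_attention(saliency_frames, n_particles=500, sigma=10.0, rng=None):
        """Track a distribution over attended (row, col) locations across frames.

        saliency_frames: iterable of 2-D saliency maps (one per video frame).
        Returns the weighted mean attended location for each frame.
        """
        rng = rng or np.random.default_rng(0)
        frames = iter(saliency_frames)
        first = next(frames)
        h, w = first.shape
        # Initialise particles uniformly over the image plane.
        particles = rng.uniform([0, 0], [h - 1, w - 1], size=(n_particles, 2))
        estimates = []
        for sal in [first, *frames]:
            # Predict: Gaussian random-walk dynamics for the attended point.
            particles += rng.normal(0.0, sigma, particles.shape)
            particles[:, 0] = particles[:, 0].clip(0, h - 1)
            particles[:, 1] = particles[:, 1].clip(0, w - 1)
            # Update: weight each particle by the saliency at its location.
            ij = particles.astype(int)
            weights = sal[ij[:, 0], ij[:, 1]] + 1e-12
            weights /= weights.sum()
            estimates.append(weights @ particles)
            # Resample to concentrate particles on likely attended regions.
            particles = particles[rng.choice(n_particles, n_particles, p=weights)]
        return np.array(estimates)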


Inference with Transposable Data: Modeling the Effects of Row and Column Correlations

arXiv.org Machine Learning

We consider the problem of large-scale inference on the row or column variables of data in the form of a matrix. Often these data are transposable, meaning that both the row variables and the column variables are of potential interest. An example of this scenario is detecting significant genes in microarrays when the samples or arrays may be dependent because of underlying relationships. We study the effect of both row and column correlations on commonly used test statistics, null distributions, and multiple-testing procedures by explicitly modeling the covariances with the matrix-variate normal distribution. Using this model, we give both theoretical and simulation results revealing the problems associated with applying standard statistical methodology to transposable data. We solve these problems by estimating the row and column covariances simultaneously, with transposable regularized covariance models, and de-correlating, or sphering, the data as a pre-processing step. Under reasonable assumptions, our method gives test statistics that follow the scaled theoretical null distribution and are approximately independent. Simulations based on various models with structured and observed covariances from real microarray data reveal that our method offers substantial improvements in two areas: 1) increased statistical power and 2) correct estimation of false discovery rates.
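
As a rough illustration of the pre-processing step, assuming the row and column covariances have already been estimated (in the paper, jointly via transposable regularized covariance models), sphering left- and right-multiplies the data matrix by the inverse square roots of those covariances:

    import numpy as np

    def inv_sqrt(cov, eps=1e-8):
        """Inverse matrix square root via the symmetric eigendecomposition."""
        vals, vecs = np.linalg.eigh(cov)
        return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T

    def sphere(X, row_cov, col_cov):
        """De-correlate rows and columns: Sigma_r^{-1/2} X Sigma_c^{-1/2}.

        After sphering, test statistics computed on the result are
        approximately independent under the matrix-variate normal model.
        """
        return inv_sqrt(row_cov) @ X @ inv_sqrt(col_cov)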


On the Schoenberg Transformations in Data Analysis: Theory and Illustrations

arXiv.org Machine Learning

The class of Schoenberg transformations, which embed Euclidean distances into higher-dimensional Euclidean spaces, is presented and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles, and curvature of the transformations are proposed and visualized on artificial data sets by classical multidimensional scaling. A simple distance-based discriminant algorithm illustrates the theory, which is intimately connected to the Gaussian kernels of machine learning.
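
A small numerical sketch of one standard member of the family (our own illustration, not from the paper): applying phi(d) = (1 - exp(-a d))/a elementwise to squared Euclidean distances, the transformation associated with the Gaussian kernel, yields distances that are again Euclidean-embeddable, which the double-centering test of classical multidimensional scaling confirms:

    import numpy as np

    def schoenberg_gaussian(D2, a=1.0):
        """Apply phi(d) = (1 - exp(-a d)) / a elementwise to squared distances.
        This member of the Schoenberg family corresponds to the Gaussian kernel."""
        return (1.0 - np.exp(-a * D2)) / a

    def is_euclidean(D2, tol=1e-9):
        """Euclidean embeddability check: -0.5 * H @ D2 @ H must be positive
        semi-definite, where H is the centering matrix of classical MDS."""
        n = D2.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        return np.linalg.eigvalsh(-0.5 * H @ D2 @ H).min() >= -tol

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 2))
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    assert is_euclidean(D2) and is_euclidean(schoenberg_gaussian(D2, a=2.0))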


LEXSYS: Architecture and Implication for Intelligent Agent systems

arXiv.org Artificial Intelligence

LEXSYS (Legume Expert System) was a project conceived at IITA (International Institute of Tropical Agriculture), Ibadan, Nigeria. It was initiated by COMBS (Collaborative Group on Maize-Based Systems Research) in the 1990s and was meant to provide a general framework for characterizing on-farm testing for technology design in sustainable cereal-based cropping systems. LEXSYS is not a true expert system, as the name would imply, but simply a user-friendly information system. This work is an attempt to give a formal representation of the existing system and then present areas where intelligent agents can be applied.


Incorporating Side Information in Probabilistic Matrix Factorization with Gaussian Processes

arXiv.org Machine Learning

Probabilistic matrix factorization (PMF) is a powerful method for modeling data associated with pairwise relationships, finding use in collaborative filtering, computational biology, and document analysis, among other areas. In many domains, there is additional information that can assist in prediction. For example, when modeling movie ratings, we might know when the rating occurred, where the user lives, or what actors appear in the movie. It is difficult, however, to incorporate this side information into the PMF model. We propose a framework for incorporating side information by coupling together multiple PMF problems via Gaussian process priors. We replace scalar latent features with functions that vary over the space of side information. The GP priors on these functions require them to vary smoothly and share information. We successfully use this new method to predict the scores of professional basketball games, where side information about the venue and date of the game is relevant to the outcome.
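
A toy generative sketch of the core idea, with hypothetical dimensions and kernel choices rather than the authors' model: each latent feature becomes a GP-distributed function of the side information, so rows with nearby side information share latent structure and hence predictions:

    import numpy as np

    def rbf(s, t, length=1.0):
        """Squared-exponential kernel over 1-D side information (e.g., dates)."""
        return np.exp(-0.5 * ((s[:, None] - t[None, :]) / length) ** 2)

    rng = np.random.default_rng(0)
    n_rows, n_cols, k = 20, 15, 3
    s_rows = rng.uniform(0, 10, n_rows)    # side information attached to rows
    s_cols = rng.uniform(0, 10, n_cols)    # side information attached to columns

    # Instead of free scalar factors, each of the k latent features is a
    # smooth GP-distributed function of the side information.
    K_r = rbf(s_rows, s_rows) + 1e-6 * np.eye(n_rows)
    K_c = rbf(s_cols, s_cols) + 1e-6 * np.eye(n_cols)
    U = rng.multivariate_normal(np.zeros(n_rows), K_r, size=k).T   # n_rows x k
    V = rng.multivariate_normal(np.zeros(n_cols), K_c, size=k).T   # n_cols x k

    R_mean = U @ V.T   # expected entries vary smoothly in the side information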


Large Margin Boltzmann Machines and Large Margin Sigmoid Belief Networks

arXiv.org Artificial Intelligence

Current statistical models for structured prediction make simplifying assumptions about the underlying output graph structure, such as assuming a low-order Markov chain, because exact inference becomes intractable as the tree-width of the underlying graph increases. Approximate inference algorithms, on the other hand, force one to trade off representational power against computational efficiency. In this paper, we propose two new types of probabilistic graphical models, large margin Boltzmann machines (LMBMs) and large margin sigmoid belief networks (LMSBNs), for structured prediction. LMSBNs in particular allow a very fast inference algorithm for arbitrary graph structures that runs in polynomial time with high probability. This probability is data-distribution-dependent and is maximized during learning. The new approach overcomes the representation-efficiency trade-off in previous models and allows fast structured prediction with complicated graph structures. We present results from applying a fully connected model to multi-label scene classification and demonstrate that the proposed approach can yield significant performance gains over current state-of-the-art methods.
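
For context, a generic sketch of prediction in a sigmoid belief network (not the paper's large-margin learning or its high-probability inference procedure): each binary output has a logistic conditional given its parents, so a simple decoder can fix nodes greedily in topological order:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def ancestral_decode(parents, weights, bias, x_feat):
        """Greedy ancestral decoding in a sigmoid belief network.

        parents[i] -- indices j < i feeding node i (nodes in topological order)
        weights[i] -- one weight per parent edge of node i
        bias, x_feat -- per-node bias and input-feature contribution
        """
        y = np.zeros(len(parents))
        for i, pa in enumerate(parents):
            z = bias[i] + x_feat[i] + sum(w * y[j] for j, w in zip(pa, weights[i]))
            y[i] = float(sigmoid(z) > 0.5)   # MAP decision given decoded parents
        return y

    # Tiny chain y0 -> y1 -> y2 with made-up parameters:
    print(ancestral_decode(parents=[[], [0], [1]], weights=[[], [2.0], [-1.5]],
                           bias=[0.2, -0.1, 0.3], x_feat=[1.0, 0.0, 0.5]))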


Investigating Output Accuracy for a Discrete Event Simulation Model and an Agent Based Simulation Model

arXiv.org Artificial Intelligence

In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and an Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is better suited to modelling human reactive behaviour in the retail sector. In order to study the output accuracy of both models, we carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment used a large UK department store as a case study, with the task of determining an efficient implementation of the management policy in the store's fitting room using DES and ABS. Overall, we found that both simulation models were a good representation of the real system when modelling human reactive behaviour.
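
To make the contrast concrete, a toy sketch with made-up parameters (not the study's models): a DES advances a simulated clock from event to event, whereas an ABS would give every customer an explicit state machine updated each time step:

    import random

    def des_fitting_room(n_customers=100, mean_service=2.0, mean_gap=1.0, seed=0):
        """Discrete-event view of a single FIFO fitting room.

        The clock jumps between arrival events; an agent-based counterpart
        would instead step each customer through browsing/queueing/fitting
        states and let waiting behaviour emerge from individual decisions.
        Returns the mean customer waiting time.
        """
        rng = random.Random(seed)
        clock, free_at, waits = 0.0, 0.0, []
        for _ in range(n_customers):
            clock += rng.expovariate(1.0 / mean_gap)    # next arrival event
            start = max(clock, free_at)                 # serve now or queue
            waits.append(start - clock)
            free_at = start + rng.expovariate(1.0 / mean_service)
        return sum(waits) / len(waits)

    print(des_fitting_room())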


Malicious Code Execution Detection and Response Immune System inspired by the Danger Theory

arXiv.org Artificial Intelligence

The analysis of system calls is one method employed by anomaly detection systems to recognise malicious code execution. Similarities can be drawn between this process and the behaviour of certain cells of the human immune system, and these parallels can be applied to construct an artificial immune system. A recently developed hypothesis in immunology, the Danger Theory, states that our immune system responds to the presence of intruders by sensing molecules belonging to those invaders, plus signals generated by the host indicating danger and damage. We propose incorporating this concept into a responsive intrusion detection system, where behavioural information about the system and its running processes is combined with information regarding individual system calls.
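
A schematic sketch of the proposed combination, with invented signal names and thresholds: a per-process system-call novelty score is escalated to an active response only when corroborated by host-generated danger signals:

    def syscall_anomaly(trace, normal_profile, n=3):
        """Fraction of length-n system-call sequences absent from the normal
        profile (a simple sliding-window novelty score)."""
        grams = [tuple(trace[i:i + n]) for i in range(len(trace) - n + 1)]
        return sum(g not in normal_profile for g in grams) / max(len(grams), 1)

    def respond(trace, normal_profile, danger_signals, threshold=0.3):
        """Danger Theory style decision: anomalous behaviour alone raises an
        alert; anomaly plus danger/damage signals triggers a response."""
        score = syscall_anomaly(trace, normal_profile)
        danger = danger_signals.get("cpu_spike", False) or \
                 danger_signals.get("file_corruption", False)
        if score > threshold and danger:
            return "terminate process"
        return "raise alert" if score > threshold else "allow"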


Mimicking the Behaviour of Idiotypic AIS Robot Controllers Using Probabilistic Systems

arXiv.org Artificial Intelligence

Previous work has shown that robot navigation systems that employ an architecture based upon the idiotypic network theory of the immune system have an advantage over control techniques that rely on reinforcement learning only. This is thought to be a result of intelligent behaviour selection on the part of the idiotypic robot. In this paper an attempt is made to imitate idiotypic dynamics by creating controllers that combine reinforcement with a number of different probabilistic schemes for selecting robot behaviour. The aims are to show that the idiotypic system is not merely performing some kind of periodic random behaviour selection, and to gain further insight into the processes that govern the idiotypic mechanism. Trials are carried out using simulated Pioneer robots that undertake navigation exercises. Results show that a scheme that boosts the probability of selecting highly ranked alternative behaviours to 50% during stall conditions comes closest to achieving the properties of the idiotypic system, but remains unable to match it in terms of all-round performance.
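
A compact sketch of that best-performing scheme, with a hypothetical behaviour table: the top-ranked behaviour is chosen greedily under normal conditions, but during a stall the next highly ranked alternatives are selected with probability 0.5:

    import random

    def select_behaviour(ranked, stalled, rng=random):
        """ranked: behaviours sorted best-first by reinforcement score."""
        if stalled and rng.random() < 0.5:
            # Boosted 50% chance of a highly ranked alternative during a stall.
            alternatives = ranked[1:3] or ranked[:1]
            return rng.choice(alternatives)
        return ranked[0]   # otherwise exploit the top-ranked behaviour

    behaviours = ["wander", "wall-follow", "turn-left", "reverse"]
    print(select_behaviour(behaviours, stalled=True))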


Development of a Cargo Screening Process Simulator: A First Approach

arXiv.org Artificial Intelligence

The efficiency of current cargo screening processes at sea and air ports is largely unknown, as few benchmarks exist against which they could be measured. Some manufacturers provide benchmarks for individual sensors, but we found no benchmarks that take a holistic view of the overall screening procedures and none that take operator variability into account. Simply adding up the resources and manpower used is not an effective way of assessing systems where human decision-making and operator compliance with rules play a vital role. Our aim is to develop a decision support tool (a cargo-screening system simulator) that will map the right technology and manpower to the right commodity-threat combination in order to maximise detection rates. In this paper we present our ideas for developing such a system and highlight the research challenges we have identified. We then introduce our first case study and report on the progress we have made so far.