Algebraic Statistics in Model Selection

arXiv.org Machine Learning

We develop the necessary theory in computational algebraic geometry to place Bayesian networks into the realm of algebraic statistics. We present an algebra-statistics dictionary focused on statistical modeling. In particular, we link the notion of effective dimension of a Bayesian network with the notion of algebraic dimension of a variety. We also obtain the independence and non-independence constraints on the distributions over the observable variables implied by a Bayesian network with hidden variables, via a generating set of an ideal of polynomials associated to the network. These results extend previous work on the subject. Finally, the relevance of these results for model selection is discussed.
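
As a standard textbook illustration (not taken from the paper) of how independence statements become polynomial constraints: for two binary observable variables X and Y with joint probabilities p_{ij} = P(X=i, Y=j), marginal independence holds exactly when a single quadratic polynomial vanishes,

    X \perp Y \iff p_{00} p_{11} - p_{01} p_{10} = 0,

so the independence model is the variety cut out by this determinant; the generating sets of ideals described above extend this picture to the constraints a network with hidden variables imposes on its observable margin.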


PAC-learning bounded tree-width Graphical Models

arXiv.org Machine Learning

We show that the class of strongly connected graphical models with treewidth at most k can be properly and efficiently PAC-learnt with respect to the Kullback-Leibler divergence. Previous approaches to this problem, such as those of Chow ([1]) and Höffgen ([7]), have shown that this class is PAC-learnable by reducing it to a combinatorial optimization problem. However, for k > 1, this problem is NP-complete ([15]), and so unless P=NP, these approaches require exponential time. Our approach differs significantly: it first finds approximate conditional independencies by solving polynomially many submodular optimization problems, and then uses a dynamic programming formulation to combine the approximate conditional independence information into a graphical model whose underlying graph has the specified tree-width. This yields a PAC-learning algorithm that requires only a polynomial number of samples from the true distribution and runs in time polynomial in the number of random variables.
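
For orientation, here is a minimal sketch of the tree-width-1 baseline attributed to Chow ([1]) above — a maximum-weight spanning tree over pairwise empirical mutual information — which the submodular-plus-dynamic-programming approach generalizes to tree-width k. The function names and the use of NumPy/NetworkX are illustrative choices, not from the paper.

    import numpy as np
    import networkx as nx

    def empirical_mutual_info(samples, i, j):
        """Plug-in estimate of I(X_i; X_j) from an (n x d) array of discrete samples."""
        xi, xj = samples[:, i], samples[:, j]
        mi = 0.0
        for a in np.unique(xi):
            for b in np.unique(xj):
                p_ab = np.mean((xi == a) & (xj == b))
                p_a, p_b = np.mean(xi == a), np.mean(xj == b)
                if p_ab > 0:
                    mi += p_ab * np.log(p_ab / (p_a * p_b))
        return mi

    def chow_liu_tree(samples):
        """Return the maximum-weight spanning tree over pairwise mutual information."""
        d = samples.shape[1]
        g = nx.Graph()
        for i in range(d):
            for j in range(i + 1, d):
                g.add_edge(i, j, weight=empirical_mutual_info(samples, i, j))
        return nx.maximum_spanning_tree(g)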


Factored Latent Analysis for far-field tracking data

arXiv.org Machine Learning

This paper uses Factored Latent Analysis (FLA) to learn a factorized, segmental representation for observations of tracked objects over time. Factored Latent Analysis is latent class analysis in which the observation space is subdivided and each aspect of the original space is represented by a separate latent class model. One could simply treat these factors as completely independent and ignore their interdependencies, or one could concatenate them and attempt to learn latent class structure for the complete observation space. Alternatively, FLA allows the interdependencies to be exploited in estimating an effective model, which is also capable of representing a factored latent state. In this paper, FLA is used to learn a set of factored latent classes to represent different modalities of observations of tracked objects. Different characteristics of the state of tracked objects are each represented by separate latent class models, including normalized size, normalized speed, normalized direction, and position. The model also enables effective temporal segmentation of these sequences. The method is data-driven and unsupervised, using only pairwise observation statistics. This data-driven, unsupervised activity classification technique exhibits good performance in multiple challenging environments.
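
For reference, a generic statement of the latent class analysis that FLA factorizes (standard LCA, not the paper's specific model): each observation vector x = (x_1, ..., x_M) is explained by one discrete latent class c, with

    p(x) = \sum_c \pi_c \prod_{m=1}^{M} p(x_m \mid c),

whereas FLA gives each subdivided aspect of the observation space its own latent class variable and models the dependencies among those factored latent states.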


An Integrated, Conditional Model of Information Extraction and Coreference with Applications to Citation Matching

arXiv.org Machine Learning

Although information extraction and coreference resolution appear together in many applications, most current systems perform them as independent steps. This paper describes an approach to integrated inference for extraction and coreference based on conditionally-trained undirected graphical models. We discuss the advantages of conditional probability training, and of a coreference model structure based on graph partitioning. On a data set of research paper citations, we show significant reduction in error by using extraction uncertainty to improve coreference citation matching accuracy, and using coreference to improve the accuracy of the extracted fields.
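
For readers less familiar with conditionally-trained undirected models, the generic form in question (standard conditional random field notation, not this paper's particular factor structure) is

    p(y \mid x) = \frac{1}{Z(x)} \exp\Big( \sum_k \lambda_k f_k(x, y) \Big), \qquad
    Z(x) = \sum_{y'} \exp\Big( \sum_k \lambda_k f_k(x, y') \Big),

so all parameters are trained to maximize the probability of the labels given the observed citation text, rather than the joint probability of both.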


Variational Chernoff Bounds for Graphical Models

arXiv.org Machine Learning

Recent research has made significant progress on the problem of bounding log partition functions for exponential family graphical models. Such bounds have associated dual parameters that are often used as heuristic estimates of the marginal probabilities required in inference and learning. However, these variational estimates do not give rigorous bounds on marginal probabilities, nor do they give estimates for probabilities of more general events than simple marginals. In this paper we build on this recent work by deriving rigorous upper and lower bounds on event probabilities for graphical models. Our approach is based on the use of generalized Chernoff bounds to express bounds on event probabilities in terms of convex optimization problems; these optimization problems, in turn, require estimates of generalized log partition functions. Simulations indicate that this technique can result in useful, rigorous bounds to complement the heuristic variational estimates, with comparable computational cost.
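
Schematically, for an exponential family p_\theta(x) = \exp(\theta^\top \phi(x) - A(\theta)) and a half-space event \{a^\top \phi(x) \ge b\}, the classical Chernoff argument already expresses the bound through log partition functions (generic notation, not the paper's):

    P_\theta\big( a^\top \phi(X) \ge b \big) \le \inf_{\lambda \ge 0} \exp\big( A(\theta + \lambda a) - A(\theta) - \lambda b \big).

The contribution described above is to pair such bounds with variational upper and lower bounds on A(\cdot), so the resulting event-probability bounds remain rigorous while staying computable.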


The Minimum Information Principle for Discriminative Learning

arXiv.org Machine Learning

Exponential models of distributions are widely used in machine learning for classification and modelling. It is well known that they can be interpreted as maximum entropy models under empirical expectation constraints. In this work, we argue that for classification tasks, mutual information is a more suitable information-theoretic measure to be optimized. We show how the principle of minimum mutual information generalizes that of maximum entropy, and provides a comprehensive framework for building discriminative classifiers. A game-theoretic interpretation of our approach is then given, and several generalization bounds provided. We present iterative algorithms for solving the minimum information problem and its convex dual, and demonstrate their performance on various classification tasks. The results show that minimum information classifiers outperform the corresponding maximum entropy models.
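
A schematic statement of the optimization, in generic notation since the abstract fixes none: among joint distributions consistent with the data, choose

    \min_{p(x, y)} I(X; Y)
    \text{ s.t. } E_{p(x \mid y)}[\phi(X)] \text{ matches the empirical class-conditional feature expectations for each class } y,
    \text{ and } p(y) \text{ equals the empirical class prior,}

and it is this problem and its convex dual that the iterative algorithms mentioned above address.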


Similarity-Driven Cluster Merging Method for Unsupervised Fuzzy Clustering

arXiv.org Machine Learning

In this paper, a similarity-driven cluster merging method is proposed for unsupervised fuzzy clustering. The cluster merging method is used to resolve the problem of cluster validation. Starting with an overspecified number of clusters in the data, pairs of similar clusters are merged based on the proposed similarity-driven cluster merging criterion. The similarity between clusters is calculated by a fuzzy cluster similarity matrix, while an adaptive threshold is used for merging. In addition, a modified generalized objective function is used for prototype-based fuzzy clustering. The function includes the p-norm distance measure as well as principal components of the clusters. The number of principal components is determined automatically from the data being clustered. The properties of this unsupervised fuzzy clustering algorithm are illustrated by several experiments.
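
A minimal sketch of the merging step, assuming a Jaccard-style similarity between fuzzy membership columns and a fixed threshold; the paper's actual fuzzy cluster similarity matrix and adaptive threshold are not reproduced here.

    import numpy as np

    def merge_similar_clusters(U, threshold=0.5):
        """U: (n_points, n_clusters) fuzzy membership matrix, rows summing to 1.
        Greedily merge the most similar pair of clusters while their similarity
        exceeds `threshold`, summing the merged membership columns."""
        U = U.copy()
        while U.shape[1] > 1:
            k = U.shape[1]
            best, best_pair = 0.0, None
            for i in range(k):
                for j in range(i + 1, k):
                    s = np.minimum(U[:, i], U[:, j]).sum() / np.maximum(U[:, i], U[:, j]).sum()
                    if s > best:
                        best, best_pair = s, (i, j)
            if best_pair is None or best < threshold:
                break
            i, j = best_pair
            merged = U[:, i] + U[:, j]
            keep = [c for c in range(k) if c not in (i, j)]
            U = np.column_stack([U[:, keep], merged])
        return U

Starting from an overspecified number of clusters, this reduces the cluster count until no remaining pair is similar enough, mirroring the validation-by-merging strategy described above.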


A Generative Bayesian Model for Aggregating Experts' Probabilities

arXiv.org Machine Learning

In order to improve forecasts, a decision maker often combines probabilities given by various sources, such as human experts and machine learning classifiers. When few training data are available, aggregation can be improved by incorporating prior knowledge about the event being forecasted and about salient properties of the experts. To this end, we develop a generative Bayesian aggregation model for probabilistic classification. The model includes an event-specific prior, measures of individual experts' bias, calibration, and accuracy, and a measure of dependence between experts. Rather than requiring absolute measures, we show that aggregation may be expressed in terms of relative accuracy between experts. The model results in a weighted logarithmic opinion pool (LogOps) that satisfies consistency criteria such as the external Bayesian property. We derive analytic solutions for independent and for exchangeable experts. Empirical tests demonstrate the model's use, comparing its accuracy with other aggregation methods.
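
The weighted logarithmic opinion pool referred to above has the standard form

    p(y) \propto p_0(y)^{w_0} \prod_{i=1}^{N} p_i(y)^{w_i},

where p_0 is the event-specific prior, p_i is expert i's stated probability, and the weights w_i stand in generically for the quantities the model derives from the experts' relative accuracy and dependence. Logarithmic pools are the classic example of externally Bayesian operators: updating the pooled distribution on shared evidence gives the same result as pooling the individually updated experts.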


Applying Discrete PCA in Data Analysis

arXiv.org Machine Learning

Methods for analysis of principal components in discrete data have existed for some time under various names such as grade of membership modelling, probabilistic latent semantic analysis, and genotype inference with admixture. In this paper we explore a number of extensions to the common theory, and present some applications of these methods to common statistical tasks. We show that these methods can be interpreted as a discrete version of ICA. We develop a hierarchical version yielding components at different levels of detail, and additional techniques for Gibbs sampling. We compare the algorithms against support vector machines on a text prediction task, and apply them to information retrieval.
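
The shared core of these methods (grade of membership, PLSA, admixture models) is an admixture of discrete components; in generic notation rather than the paper's, each data vector carries its own mixing proportions m on the simplex, and

    p(\text{observation} = j \mid m, \Omega) = \sum_{k=1}^{K} m_k \, \Omega_{k j},

where the row \Omega_{k \cdot} is component k's distribution over the discrete observation values. The discrete-ICA interpretation and the hierarchical and Gibbs-sampling extensions described above all build on this shared form.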


Propositional and Relational Bayesian Networks Associated with Imprecise and Qualitative Probabilistic Assessments

arXiv.org Artificial Intelligence

This paper investigates a representation language with flexibility inspired by probabilistic logic and compactness inspired by relational Bayesian networks. The goal is to handle propositional and first-order constructs together with precise, imprecise, indeterminate and qualitative probabilistic assessments. The paper shows how this can be achieved through the theory of credal networks. New exact and approximate inference algorithms based on multilinear programming and iterated/loopy propagation of interval probabilities are presented; their performance is shown empirically to be superior to that of existing algorithms.
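
For concreteness, an imprecise assessment in this setting is an interval constraint such as

    \underline{\alpha} \le P(A \mid B) \le \overline{\alpha},

and a credal network attaches to each variable a set of conditional distributions satisfying such constraints. Inference then asks for lower and upper bounds, for example \underline{P}(X = x \mid e) taken as the minimum over all distributions consistent with the constraints, which is what the multilinear programming and interval propagation algorithms compute; the notation here is generic rather than the paper's.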