Causal models have no complete axiomatic characterization
Markov networks and Bayesian networks are effective graphical representations of the dependencies embedded in probabilistic models. It is well known that the independencies captured by Markov networks (called graph-isomorphs) have a finite axiomatic characterization. This paper, however, shows that the independencies captured by Bayesian networks (called causal models) admit no axiomatization by even countably many Horn or disjunctive clauses. The reason is that a sub-independency model of a causal model may not be causal, whereas graph-isomorphs are closed under sub-models.
A constructive proof of the existence of Viterbi processes
Hidden Markov models (HMMs), in use since the early days of digital communication, are now also routinely applied in speech recognition, natural language processing, image analysis, and bioinformatics. In an HMM $(X_i,Y_i)_{i\ge 1}$, the observations $X_1,X_2,...$ are assumed to be conditionally independent given an ``explanatory'' Markov process $Y_1,Y_2,...$, which itself is not observed; moreover, the conditional distribution of $X_i$ depends solely on $Y_i$. Central to the theory and applications of HMMs is the Viterbi algorithm, which finds a {\em maximum a posteriori} (MAP) estimate $q_{1:n}=(q_1,q_2,...,q_n)$ of $Y_{1:n}$ given observed data $x_{1:n}$. Maximum {\em a posteriori} paths are also known as Viterbi paths or alignments. Recently, attempts have been made to study the behavior of Viterbi alignments as $n\to \infty$; in particular, it has been shown that in some special cases a well-defined limiting Viterbi alignment exists. While innovative, these attempts have relied on rather strong assumptions, and the proofs involved are existential. This work proves the existence of infinite Viterbi alignments in a more constructive manner and for a very general class of HMMs.
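For readers unfamiliar with the MAP estimate mentioned above, the finite-horizon Viterbi algorithm it refers to can be sketched as follows for a discrete HMM. This is a generic textbook formulation, not the paper's construction; all names and the toy model are illustrative.

```python
def viterbi(obs, states, log_init, log_trans, log_emit):
    """MAP state path q_{1:n} for observations x_{1:n} in a discrete HMM.

    log_init[s], log_trans[s][t], and log_emit[s][x] are log-probabilities;
    working in log space avoids numerical underflow for long sequences.
    """
    # delta[s] = max log-probability of any state path ending in s so far
    delta = {s: log_init[s] + log_emit[s][obs[0]] for s in states}
    back = []  # back[t][s] = best predecessor of state s at step t+1
    for x in obs[1:]:
        ptr, new_delta = {}, {}
        for s in states:
            best_prev = max(states, key=lambda r: delta[r] + log_trans[r][s])
            ptr[s] = best_prev
            new_delta[s] = (delta[best_prev] + log_trans[best_prev][s]
                            + log_emit[s][x])
        back.append(ptr)
        delta = new_delta
    # backtrack from the best final state
    q = [max(states, key=lambda s: delta[s])]
    for ptr in reversed(back):
        q.append(ptr[q[-1]])
    return list(reversed(q))
```

On a two-state toy chain where state A tends to emit 0 and state B tends to emit 1, the observation sequence 0, 0, 1, 1 yields the alignment A, A, B, B, as one would expect.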
From Qualitative to Quantitative Proofs of Security Properties Using First-Order Conditional Logic
A first-order conditional logic is considered, with semantics given by a variant of epsilon-semantics, where p -> q means that Pr(q | p) approaches 1 super-polynomially, that is, faster than any inverse polynomial. This type of convergence is needed for reasoning about security protocols. A complete axiomatization is provided for this semantics, and it is shown how a qualitative proof of the correctness of a security protocol can be automatically converted to a quantitative proof appropriate for reasoning about concrete security.
Towards Physarum robots: computing and manipulating on water surface
The plasmodium of Physarum polycephalum is an ideal biological substrate for implementing concurrent and parallel computation, including combinatorial geometry and optimization on graphs. We report results of scoping experiments on Physarum computing under conditions of minimal friction, on the water surface. We show that the plasmodium of Physarum is capable of computing basic spanning trees and of manipulating light-weight objects. We speculate that our results pave the way towards the design and implementation of amorphous biological robots.
Belief Propagation and Loop Series on Planar Graphs
Chertkov, Michael, Chernyak, Vladimir Y., Teodorescu, Razvan
We discuss a generic model of Bayesian inference with binary variables defined on the edges of a planar graph. The Loop Calculus approach of [1, 2] is used to evaluate the resulting series expansion for the partition function. We show that, for planar graphs, truncating the series at single-connected loops reduces, via a map reminiscent of the Fisher transformation [3], to evaluating the partition function of the dimer-matching model on an auxiliary planar graph. Thus, the truncated series can easily be re-summed using the Pfaffian formula of Kasteleyn [4]. This allows us to identify a large class of computationally tractable planar models that are reducible to a dimer model via the Belief Propagation (gauge) transformation. The Pfaffian representation can also be extended to the full Loop Series, in which case the expansion becomes a sum of Pfaffian contributions, each associated with dimer matchings on an extension of a subgraph of the original graph. Algorithmic consequences of the Pfaffian representation, as well as relations to quantum and non-planar models, are discussed.
The Choquet integral for the aggregation of interval scales in multicriteria decision making
Labreuche, Christophe, Grabisch, Michel
This paper addresses the question of which models fit with information concerning the preferences of the decision maker over each attribute and his preferences about the aggregation of criteria (interacting criteria). We show that the conditions induced by this information, plus some intuitive conditions, lead to a unique possible aggregation operator: the Choquet integral.
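For concreteness, the discrete Choquet integral singled out by the abstract can be computed with the standard sorted-increments formula. This is the textbook definition, not the paper's axiomatic derivation; the capacity values in the example are purely illustrative.

```python
def choquet(x, mu):
    """Discrete Choquet integral of x (criterion -> score) w.r.t. capacity mu.

    mu maps frozensets of criteria to [0, 1], with mu(empty set) = 0,
    mu(all criteria) = 1, and mu monotone under set inclusion.
    """
    crits = sorted(x, key=lambda c: x[c])      # criteria by ascending score
    total, prev = 0.0, 0.0
    for i, c in enumerate(crits):
        coalition = frozenset(crits[i:])        # criteria scoring >= x[c]
        total += (x[c] - prev) * mu[coalition]  # increment weighted by capacity
        prev = x[c]
    return total
```

With two criteria a and b, a capacity mu({a}) = 0.3, mu({b}) = 0.5, mu({a, b}) = 1, and scores x = {a: 10, b: 6}, the integral is 6 * 1 + 4 * 0.3 = 7.2. When the capacity is additive (mu({a, b}) = mu({a}) + mu({b})), the Choquet integral reduces to the ordinary weighted sum; non-additive capacities are what model interacting criteria.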
An $O(\log m)$, deterministic, polynomial-time computable approximation of Lewis Carroll's scoring rule
Covey, Jason, Homan, Christopher
We provide deterministic, polynomial-time computable voting rules that approximate Dodgson's and (the ``minimization version'' of) Young's scoring rules to within a logarithmic factor. Our approximation of Dodgson's rule is tight up to a constant factor, as Dodgson's rule is $\NP$-hard to approximate to within some logarithmic factor. The ``maximization version'' of Young's rule is known to be $\NP$-hard to approximate to within any constant factor. Both approximations are simple, and natural as rules in their own right: Given a candidate we wish to score, we can regard either its Dodgson or Young score as the edit distance between a given set of voter preferences and one in which the candidate to be scored is the Condorcet winner. (The difference between the two scoring rules is the type of edits allowed.) We define the marginal cost of a sequence of edits as the number of edits divided by the number of reductions (in the candidate's deficit in the pairwise race against any of its opponents) that the edits yield. Over a series of rounds, our scoring rules greedily choose a sequence of edits that modifies exactly one voter's preferences and whose marginal cost is no greater than that of any other such single-vote-modifying sequence.
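The pairwise deficits that the greedy edits reduce can be made concrete with a short sketch. This computes only the deficit quantity referred to above, not the paper's full greedy rule; the profile representation (one ranking list per voter) is an assumption for illustration.

```python
def pairwise_deficits(profile, c):
    """Votes candidate c still lacks against each rival to win every
    pairwise race outright, i.e. to be the Condorcet winner.

    profile is a list of strict rankings (best first), one per voter.
    """
    candidates = set(profile[0])
    needed = len(profile) // 2 + 1           # strict pairwise majority
    deficits = {}
    for d in candidates - {c}:
        support = sum(1 for ranking in profile
                      if ranking.index(c) < ranking.index(d))
        deficits[d] = max(0, needed - support)
    return deficits
```

In the three-voter Condorcet cycle a > b > c, b > c > a, c > a > b, candidate a already beats b but trails c by one vote, so a single edit promoting a past c in one ballot would make a the Condorcet winner.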
On the underestimation of model uncertainty by Bayesian K-nearest neighbors
Su, Wanhua, Chipman, Hugh, Zhu, Mu
When using the K-nearest neighbors method, one often ignores uncertainty in the choice of K. To account for such uncertainty, Holmes and Adams (2002) proposed a Bayesian framework for K-nearest neighbors (KNN). Their Bayesian KNN (BKNN) approach uses a pseudo-likelihood function, and standard Markov chain Monte Carlo (MCMC) techniques to draw posterior samples. Holmes and Adams (2002) focused on the performance of BKNN in terms of misclassification error but did not assess its ability to quantify uncertainty. We present some evidence to show that BKNN still significantly underestimates model uncertainty.
On the Influence of Selection Operators on Performances in Cellular Genetic Algorithms
Simoncini, David, Collard, Philippe, Verel, Sébastien, Clergue, Manuel
In this paper, we study the influence of selective pressure on the performance of cellular genetic algorithms. Cellular genetic algorithms are genetic algorithms in which the population is embedded on a toroidal grid. This structure slows down the propagation of the best-so-far individual and allows potentially good solutions to be kept in the population. We present two strategies for reducing the selective pressure in order to slow down the propagation of the best solution even further. We test these strategies on a hard optimization problem, the quadratic assignment problem, and we show that for both strategies there is a value of the control parameter that gives the best performance. This optimal value cannot be explained by selective pressure alone, whether measured by takeover time or by diversity evolution. This study leads us to conclude that tools other than selective pressure measures alone are needed to explain the performance of cellular genetic algorithms.
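The grid-structured selection described above can be sketched on a toy problem. This is a generic cellular GA on OneMax, not the paper's algorithm, problem, or selection-pressure-reducing strategies; all parameters are illustrative.

```python
import random

def cellular_ga(n_bits=32, grid=8, gens=50, p_mut=None, seed=1):
    """Toy cellular GA on OneMax: one individual per cell of a toroidal
    grid; mating partners come only from the von Neumann neighbourhood,
    which slows the spread of the best-so-far individual."""
    rng = random.Random(seed)
    p_mut = p_mut if p_mut is not None else 1.0 / n_bits
    fit = lambda ind: sum(ind)                      # OneMax fitness
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(grid * grid)]
    for _ in range(gens):
        new_pop = []
        for i in range(grid * grid):
            r, c = divmod(i, grid)
            nbrs = [((r - 1) % grid) * grid + c,     # toroidal wrap-around
                    ((r + 1) % grid) * grid + c,
                    r * grid + (c - 1) % grid,
                    r * grid + (c + 1) % grid]
            # binary tournament restricted to the local neighbourhood
            mate = max(rng.sample(nbrs, 2), key=lambda j: fit(pop[j]))
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = pop[i][:cut] + pop[mate][cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]
            # replace-if-better keeps selective pressure local and mild
            new_pop.append(child if fit(child) >= fit(pop[i]) else pop[i])
        pop = new_pop                                # synchronous update
    return max(fit(ind) for ind in pop)
```

Because each cell mates only with its four grid neighbours, a high-fitness individual needs many generations to sweep the torus, which is the diversity-preserving effect the abstract's strategies push further.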