Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs
Allahverdyan, Armen, Galstyan, Aram
We present an asymptotic analysis of Viterbi Training (VT) and contrast it with the more conventional Maximum Likelihood (ML) approach to parameter estimation in Hidden Markov Models. While the ML estimator works by (locally) maximizing the likelihood of the observed data, VT seeks to maximize the probability of the most likely hidden state sequence. We develop an analytical framework based on a generating function formalism and illustrate it on an exactly solvable HMM with one unambiguous symbol. For this particular model the ML objective function is continuously degenerate, whereas the VT objective is shown to have only finite degeneracy. Furthermore, VT converges faster and results in sparser (simpler) models, thus realizing an automatic Occam's razor for HMM learning. In more general scenarios VT can perform worse than ML, but it is still capable of correctly recovering most of the parameters.
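As a rough illustration of the VT idea (not the paper's generating-function analysis), the sketch below decodes the single most likely hidden path and then re-estimates the HMM parameters from hard counts along that path, in contrast to ML/Baum-Welch, which averages over all paths. The model sizes and smoothing constant are illustrative choices, not taken from the paper.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state path, via dynamic programming in the log domain."""
    T, K = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]      # best log-score ending in each state
    back = np.zeros((T, K), dtype=int)        # backpointers for path recovery
    for t in range(1, T):
        scores = delta[:, None] + logA        # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

def viterbi_training_step(obs, pi, A, B, n_symbols, eps=1e-12):
    """One VT iteration: decode the MAP path, then re-estimate transition and
    emission matrices from hard counts along that path (eps avoids zero rows)."""
    path = viterbi(obs, pi, A, B)
    K = len(pi)
    A_new = np.full((K, K), eps)
    B_new = np.full((K, n_symbols), eps)
    for t in range(len(obs) - 1):
        A_new[path[t], path[t + 1]] += 1
    for t, o in enumerate(obs):
        B_new[path[t], o] += 1
    return (A_new / A_new.sum(axis=1, keepdims=True),
            B_new / B_new.sum(axis=1, keepdims=True))
```

Because each iteration only touches one path, the re-estimated matrices tend to concentrate mass on few entries, which is one way to see the sparsity (Occam's razor) effect discussed in the abstract.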
Replicator Dynamics of Coevolving Networks
Galstyan, Aram (University of Southern California) | Kianercy, Ardeshir (University of Southern California) | Allahverdyan, Armen (Yerevan Physics Institute)
We propose a simple model of network co-evolution in a game-dynamical system of interacting agents that play repeated games with their neighbors and adapt both their behaviors and their network links based on the outcomes of those games. The adaptation is achieved through a simple reinforcement learning scheme. We show that the collective evolution of such a system can be described by appropriately defined replicator dynamics equations. In particular, we suggest an appropriate factorization of the agents' strategies that results in a coupled system of equations characterizing the evolution of both strategies and network structure, and we illustrate the framework on two simple examples.
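For intuition, here is a minimal sketch of plain (non-networked) replicator dynamics, which the abstract generalizes to coevolving strategies and links: a strategy's share grows when its fitness exceeds the population mean. The Euler step size and the Hawk-Dove payoff matrix below are illustrative assumptions, not the paper's coupled equations.

```python
import numpy as np

def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator equation
    dx_i/dt = x_i * ((P x)_i - x^T P x):
    strategies with above-average fitness expand at the others' expense."""
    f = payoff @ x          # fitness of each pure strategy
    mean_f = x @ f          # population-mean fitness
    return x + dt * x * (f - mean_f)

# Hawk-Dove payoff with resource V = 2, fight cost C = 4 (illustrative values):
# the mixed equilibrium is a Hawk fraction of V/C = 0.5.
hawk_dove = np.array([[-1.0, 2.0],
                      [ 0.0, 1.0]])
```

Note that the Euler step preserves the simplex constraint exactly, since the increments sum to zero whenever the shares sum to one.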
On Maximum a Posteriori Estimation of Hidden Markov Processes
Allahverdyan, Armen, Galstyan, Aram
We present a theoretical analysis of Maximum a Posteriori (MAP) sequence estimation for binary symmetric hidden Markov processes. We reduce the MAP estimation to the energy minimization of an appropriately defined Ising spin model, and focus on the performance of MAP as characterized by its accuracy and the number of solutions corresponding to a typical observed sequence. It is shown that for a finite range of sufficiently low noise levels, the solution is uniquely related to the observed sequence, while the accuracy degrades linearly with increasing noise strength. For intermediate noise values, the accuracy is nearly noise-independent, but now there are exponentially many solutions to the estimation problem, which is reflected in the non-zero ground-state entropy of the Ising model. Finally, for even larger noise intensities, the number of solutions reduces again, but the accuracy is poor. It is shown that these regimes are different thermodynamic phases of the Ising model, related to each other via first-order phase transitions.
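The MAP-to-Ising reduction can be sketched concretely: for a binary symmetric Markov chain with flip probability q observed through a binary symmetric channel with error probability eps, maximizing the posterior over spin paths s_t in {-1, +1} is equivalent to minimizing a one-dimensional Ising energy E(s) = -J sum s_t s_{t+1} - h sum y_t s_t, with J and h given by log-odds ratios. The dynamic-programming ground-state solver below is a generic sketch of this equivalence, not the paper's asymptotic analysis.

```python
import math

def map_decode_ising(y, q, eps):
    """Ground state of E(s) = -J sum s_t s_{t+1} - h sum y_t s_t by dynamic
    programming, i.e. MAP decoding of a binary symmetric HMM with transition
    flip probability q and observation error probability eps."""
    J = 0.5 * math.log((1 - q) / q)      # ferromagnetic coupling from transitions
    h = 0.5 * math.log((1 - eps) / eps)  # local field aligned with observations
    states = (-1, 1)
    best = {s: -h * y[0] * s for s in states}   # min energy of prefix ending in s
    back = []
    for t in range(1, len(y)):
        new, choices = {}, {}
        for s in states:
            cands = {sp: best[sp] - J * sp * s for sp in states}
            sp_star = min(cands, key=cands.get)
            choices[s] = sp_star
            new[s] = cands[sp_star] - h * y[t] * s
        back.append(choices)
        best = new
    s_last = min(best, key=best.get)
    path = [s_last]
    for choices in reversed(back):
        path.append(choices[path[-1]])
    return path[::-1]
```

At low observation noise the field dominates and the decoded path follows the data; when the coupling dominates, an isolated flipped observation is smoothed away, matching the low-noise regime described in the abstract.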