Continuous Time Markov Chains


Just Another Method to Compute MTTF from Continuous Time Markov Chain

Vasconcelos, Eduardo M.

arXiv.org Artificial Intelligence

The Mean Time To Failure (MTTF) is a statistic used for system analysis in several knowledge areas. It represents the average time until the system enters one of its possible fault states, without considering system repairs. Although MTTF is typically used to analyze systems with fault states, it can also be applied to processes: since a process can be represented by a state-machine model, MTTF gives the mean time until the process finishes. This work presents a method to compute the MTTF from Continuous Time Markov Chain (CTMC) models. There is no argument that this method performs better than other methods, but it has a simpler implementation and is intuitive. It also yields the absorption probabilities and the average holding time of each state without additional steps.
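The standard way this computation is set up (the paper's own method may differ in detail) is to restrict the generator matrix to the transient states and solve a linear system: if Q_T is the sub-generator over transient states, the expected times to absorption satisfy Q_T t = -1. A minimal sketch with an illustrative 3-state chain:

```python
import numpy as np

# Hypothetical 3-state CTMC: states 0 and 1 are transient, state 2 is the
# absorbing fault state. Rates in the generator Q are purely illustrative
# (each row of a generator sums to zero).
Q = np.array([
    [-3.0,  2.0, 1.0],
    [ 1.0, -2.0, 1.0],
    [ 0.0,  0.0, 0.0],
])

transient = [0, 1]
Q_T = Q[np.ix_(transient, transient)]   # sub-generator over transient states

# Expected time to absorption from each transient state:
# solve Q_T @ t = -1 (equivalently, t = -Q_T^{-1} @ 1).
t = np.linalg.solve(Q_T, -np.ones(len(transient)))

# MTTF for an initial distribution concentrated on state 0.
pi0 = np.array([1.0, 0.0])
mttf = pi0 @ t
print(mttf)  # expected time to failure starting from state 0
```

With these example rates, both transient states happen to have an expected absorption time of 1.0. The same factorization of Q_T also gives absorption probabilities (via -Q_T^{-1} times the columns of Q into absorbing states), which matches the abstract's remark that these come out without extra steps.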


Probabilistic Optimal Transport based on Collective Graphical Models

Akagi, Yasunori, Tanaka, Yusuke, Iwata, Tomoharu, Kurashima, Takeshi, Toda, Hiroyuki

arXiv.org Machine Learning

Optimal Transport (OT) is widely used in fields such as machine learning and computer vision, as it is a powerful tool for measuring the similarity between probability distributions and histograms. In previous studies, OT has been defined as the minimum cost to transport probability mass from one probability distribution to another. In this study, we propose a new framework in which OT is considered a maximum a posteriori (MAP) solution of a probabilistic generative model. With the proposed framework, we show that OT with entropic regularization is equivalent to maximizing the posterior probability of a probabilistic model called the Collective Graphical Model (CGM), which describes aggregated statistics of multiple samples generated from a graphical model. Interpreting OT as a MAP solution of a CGM has two advantages: (i) we can calculate the discrepancy between noisy histograms by modeling noise distributions, and since various distributions can be used for noise modeling, the noise distribution can be chosen flexibly to suit the situation; (ii) we can construct a new method for interpolation between histograms, an important application of OT. The proposed method allows for intuitive modeling based on the probabilistic interpretation, and a simple and efficient estimation algorithm is available. Experiments using synthetic and real-world spatio-temporal population datasets show the effectiveness of the proposed interpolation method.
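The entropically regularized OT problem that the paper reinterprets as a CGM MAP problem is commonly solved with the Sinkhorn algorithm. A minimal sketch (illustrative histograms and cost matrix, not the paper's estimation algorithm):

```python
import numpy as np

# Entropic OT between two histograms via Sinkhorn iterations.
# Histograms a, b and the cost matrix C are illustrative assumptions.
n = 5
a = np.ones(n) / n                      # source histogram (uniform)
b = np.ones(n) / n                      # target histogram (uniform)
x = np.arange(n, dtype=float)
C = (x[:, None] - x[None, :]) ** 2      # squared-distance ground cost
eps = 0.1                               # entropic regularization strength

K = np.exp(-C / eps)                    # Gibbs kernel
u = np.ones(n)
for _ in range(500):                    # alternate scaling updates
    v = b / (K.T @ u)
    u = a / (K @ v)

P = np.diag(u) @ K @ np.diag(v)         # regularized transport plan
cost = np.sum(P * C)                    # regularized transport cost
print(P.sum(), cost)
```

The fixed point P has marginals a and b, which is exactly the constraint set of the OT problem; the entropic term is what makes the simple alternating scaling converge.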


Property-driven State-Space Coarsening for Continuous Time Markov Chains

Michaelides, Michalis, Milios, Dimitrios, Hillston, Jane, Sanguinetti, Guido

arXiv.org Machine Learning

Dynamical systems with large state-spaces are often expensive to thoroughly explore experimentally. Coarse-graining methods aim to define simpler systems which are more amenable to analysis and exploration; most current methods, however, focus on a priori state aggregation based on similarities in transition rates, which is not necessarily reflected in similar behaviours at the level of trajectories. We propose a way to coarsen the state-space of a system which optimally preserves the satisfaction of a set of logical specifications about the system's trajectories. Our approach is based on Gaussian Process emulation and Multi-Dimensional Scaling, a dimensionality reduction technique which optimally preserves distances in non-Euclidean spaces. We show how to obtain low-dimensional visualisations of the system's state-space from the perspective of properties' satisfaction, and how to define macro-states which behave coherently with respect to the specifications. Our approach is illustrated on a non-trivial running example, showing promising performance and high computational efficiency.
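The pipeline described above (distances between states derived from property satisfaction, an MDS embedding, then macro-states that group nearby states) can be sketched as follows. Everything here is an illustrative stand-in: the satisfaction probabilities are random rather than coming from a Gaussian Process emulator, and the grouping step is a crude nearest-centroid assignment, not the paper's procedure.

```python
import numpy as np

np.random.seed(1)
n_states, n_props = 8, 3

# Hypothetical per-state satisfaction probabilities for n_props logical
# properties (in the paper these would come from GP emulation).
sat = np.random.rand(n_states, n_props)

# Pairwise distances between states in "property-satisfaction space".
D = np.linalg.norm(sat[:, None, :] - sat[None, :, :], axis=-1)

# Classical MDS: double-centre the squared distance matrix, eigendecompose,
# and keep the two leading coordinates for a low-dimensional visualisation.
J = np.eye(n_states) - np.ones((n_states, n_states)) / n_states
B = -0.5 * J @ (D ** 2) @ J
w, V = np.linalg.eigh(B)
idx = np.argsort(w)[::-1][:2]
emb = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Macro-states: assign each state to the nearer of two centroids chosen from
# the embedded points (a one-step k-means-like grouping, for illustration).
centroids = emb[np.random.choice(n_states, 2, replace=False)]
labels = np.argmin(np.linalg.norm(emb[:, None] - centroids[None], axis=-1),
                   axis=1)
print(emb.shape, labels)
```

States that land in the same group behave similarly with respect to the properties, which is the sense in which the coarsened macro-states "behave coherently with respect to the specifications."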