
Simone Villa and Fabio Stella (2016) Learning Continuous Time Bayesian Networks in Non-stationary Domains

Journal of Artificial Intelligence Research

Non-stationary continuous time Bayesian networks are introduced. They allow the parent set of each node to change over continuous time. Three settings are developed for learning non-stationary continuous time Bayesian networks from data: known transition times, known number of epochs, and unknown number of epochs. A score function for each setting is derived and the corresponding learning algorithm is developed. A set of numerical experiments on synthetic data is used to compare the effectiveness of non-stationary continuous time Bayesian networks to that of non-stationary dynamic Bayesian networks. Furthermore, the performance achieved by non-stationary continuous time Bayesian networks is compared to that achieved by state-of-the-art algorithms on four real-world datasets, namely Drosophila, Saccharomyces cerevisiae, songbird, and macroeconomics.
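
As a rough illustration of what such a score might look like in the known-transition-times setting, the sketch below scores one node epoch by epoch, using the standard Gamma-prior form of the CTBN marginal likelihood for exit intensities (as in Nodelman et al.'s Bayesian score). The function names, priors, and the omission of the multinomial jump part are simplifications assumed here, not the paper's actual algorithm.

```python
# Illustrative sketch (not the authors' code): scoring one node of a
# non-stationary CTBN when the epoch transition times are known, so the
# trajectory splits into epochs, each with its own parent set.
import math

def node_epoch_score(T, M, alpha=1.0, tau=1.0):
    """Log marginal likelihood (intensity part only) for one node in one epoch.

    T[(x, u)]    -- total time the node spent in state x under parent state u
    M[(x, y, u)] -- count of observed x -> y transitions under parent state u
    alpha, tau   -- Gamma(alpha, tau) prior hyperparameters (assumed values)
    """
    score = 0.0
    for (x, u), t in T.items():
        # total number of transitions out of state x under parent state u
        m = sum(c for (xs, y, us), c in M.items() if xs == x and us == u)
        # Gamma-prior marginal for the exit intensity q_{x|u}
        score += (math.lgamma(alpha + m) - math.lgamma(alpha)
                  + alpha * math.log(tau) - (alpha + m) * math.log(tau + t))
    return score

def nonstationary_node_score(epoch_stats):
    """With known transition times, the score decomposes as a sum over
    epochs, each scored with its own parent set's sufficient statistics."""
    return sum(node_epoch_score(T, M) for (T, M) in epoch_stats)
```

The other two settings (known and unknown number of epochs) would additionally have to search over where the epoch boundaries fall, which this sketch does not attempt.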


Constraint-Based Learning for Non-Parametric Continuous Bayesian Networks

AAAI Conferences

Modeling high-dimensional multivariate distributions is a computationally challenging task. Bayesian networks have been used successfully to reduce the complexity of the problem for discrete variables; however, no comparably general model exists for continuous variables. To overcome this limitation, Elidan (2010) proposed copula Bayesian networks (CBNs), which reparametrize Bayesian networks with conditional copula functions. We propose a new learning algorithm for CBNs based on the PC algorithm and the conditional independence test proposed by Bouezmarni, Rombouts, and Taamouti (2009). Because this test is non-parametric, no model assumptions are made, keeping the approach as general as possible. The algorithm is compared on generated data with the score-based method proposed by Elidan (2010). Not only does it prove to be faster, but it also generalizes well on data generated from distributions far from the Gaussian model.
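
For concreteness, here is a minimal sketch of the skeleton phase of a PC-style algorithm with a pluggable conditional-independence test. The callable `ci_test` is a hypothetical stand-in for the non-parametric copula-based test of Bouezmarni, Rombouts, and Taamouti (2009), which is not implemented here; any test with the same interface would slot in.

```python
# Illustrative PC-skeleton sketch (not the paper's implementation).
from itertools import combinations

def pc_skeleton(variables, ci_test, max_cond=3):
    """Learn an undirected skeleton by deleting the edge x - y whenever
    x and y test as independent given some subset S of x's neighbours.

    ci_test(x, y, S) -> bool  -- True if x is independent of y given S
                                 (hypothetical stand-in for the
                                 non-parametric test used in the paper)
    """
    adj = {v: set(variables) - {v} for v in variables}
    for k in range(max_cond + 1):            # grow the conditioning-set size
        for x in variables:
            for y in list(adj[x]):           # copy: we mutate adj[x] below
                others = adj[x] - {y}
                if len(others) < k:
                    continue
                for S in combinations(sorted(others), k):
                    if ci_test(x, y, S):     # x independent of y given S
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return adj
```

Because the independence test carries all the distributional assumptions, swapping in the non-parametric test is what makes the overall learner model-free.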


Scalable Structure Learning of Continuous-Time Bayesian Networks from Incomplete Data

Neural Information Processing Systems

Continuous-time Bayesian Networks (CTBNs) represent a compact yet powerful framework for understanding multivariate time-series data. Given complete data, parameters and structure can be estimated efficiently in closed form. However, if data is incomplete, the latent states of the CTBN have to be estimated by laboriously simulating the intractable dynamics of the assumed CTBN. This is a problem especially for structure learning tasks, where it has to be done for each element of a super-exponentially growing set of possible structures. In order to circumvent this notorious bottleneck, we develop a novel gradient-based approach to structure learning. Instead of sampling and scoring all possible structures individually, we assume the generator of the CTBN to be composed as a mixture of generators stemming from different structures. In this framework, structure learning can be performed via a gradient-based optimization of mixture weights. We combine this approach with a novel variational method that allows for the calculation of the marginal likelihood of a mixture in closed form. We demonstrate the scalability of our method by learning structures of previously inaccessible sizes from synthetic and real-world data.
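
As a conceptual sketch of the mixture idea, the snippet below performs gradient ascent on softmax-parameterized mixture weights over a node's candidate parent sets. Here `mixture_mll` is a hypothetical stand-in for the paper's closed-form variational marginal likelihood, and all names are assumptions made for illustration only.

```python
# Conceptual sketch (assumed details, not the authors' method verbatim):
# each candidate parent set gets a mixture weight, the node's generator is
# the weighted mixture of per-structure generators, and structure learning
# becomes continuous optimization of the weights.
import torch

def learn_mixture_weights(candidate_parent_sets, mixture_mll,
                          steps=200, lr=0.05):
    """Optimize softmax-parameterized mixture weights over parent sets.

    mixture_mll(pi) -> scalar tensor  -- hypothetical stand-in for a
    closed-form variational marginal log-likelihood of the mixture.
    """
    logits = torch.zeros(len(candidate_parent_sets), requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        pi = torch.softmax(logits, dim=0)   # mixture weights on the simplex
        loss = -mixture_mll(pi)             # ascend the (variational) bound
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(logits, dim=0).detach()
```

The appeal of this formulation is that the super-exponential discrete search over structures is replaced by a single continuous optimization; a concrete structure can then be read off from the weights, e.g. as the highest-weighted parent set per node.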

