Approximate Learning of Dynamic Models
Neural Information Processing Systems
Inference is a key component in learning probabilistic models from partially observable data. When learning temporal models, each of the many inference phases requires a traversal over an entire long data sequence; furthermore, the data structures manipulated are exponentially large, making this process computationally expensive. In [2], we describe an approximate inference algorithm for monitoring stochastic processes, and prove bounds on its approximation error. In this paper, we apply this algorithm as an approximate forward propagation step in an EM algorithm for learning temporal Bayesian networks. We provide a related approximation for the backward step, and prove error bounds for the combined algorithm.
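The core idea in the forward step, which the abstract summarizes, can be sketched in a toy setting: propagate the belief state exactly through one time slice, then project the joint belief onto a product of per-chain marginals so that it stays compactly representable. This is an illustrative reconstruction, not code from the paper; the two-chain model, transition matrix, and observation likelihoods below are all made-up assumptions.

```python
import numpy as np

# Hypothetical toy process: joint state = (chain A, chain B), each binary,
# giving 4 joint states. The numbers are illustrative only.
rng = np.random.default_rng(0)

# Random row-stochastic transition matrix T[i, j] = P(next = j | cur = i).
T = rng.random((4, 4))
T /= T.sum(axis=1, keepdims=True)

# Likelihood of one fixed observed symbol given each joint state (assumed).
obs_lik = np.array([0.9, 0.2, 0.4, 0.7])

def exact_forward(belief, T, obs_lik):
    """One exact forward (monitoring) step: predict, then condition on the
    observation, and renormalize."""
    b = (belief @ T) * obs_lik
    return b / b.sum()

def project_factored(belief):
    """Approximation step: project the joint belief over (A, B) onto the
    product of its per-chain marginals, discarding the correlation."""
    b = belief.reshape(2, 2)            # axis 0 = chain A, axis 1 = chain B
    pa, pb = b.sum(axis=1), b.sum(axis=0)
    return np.outer(pa, pb).ravel()     # factored (product-form) belief

belief = np.full(4, 0.25)               # uniform prior over joint states
belief = project_factored(exact_forward(belief, T, obs_lik))
```

The projection keeps each chain's marginal exact while dropping the cross-chain dependence, which is what makes repeated application over a long sequence tractable; the error introduced at each step is what the contraction-style bounds in [2] control.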
Dec-31-1999