Learning Bayesian Networks: A Unification for Discrete and Gaussian Domains
arXiv.org Artificial Intelligence
At last year's conference, we presented approaches for learning Bayesian networks from a combination of prior knowledge and statistical data. These approaches were presented in two papers: one addressing domains containing only discrete variables (Heckerman et al., 1994), and the other addressing domains containing continuous variables related by an unknown multivariate-Gaussian distribution (Geiger and Heckerman, 1994). Unfortunately, these presentations were substantially different, making the parallels between the two methods difficult to appreciate. In this paper, we unify the two approaches. In particular, we abstract our previous assumptions of likelihood equivalence, parameter modularity, and parameter independence such that they are appropriate for discrete and Gaussian domains (as well as other domains). Using these assumptions, we derive a domain-independent Bayesian scoring metric. We then use this general metric in combination with well-known statistical facts about the Dirichlet and normal-Wishart distributions to derive our metrics for discrete and Gaussian domains. In addition, we provide simple proofs that these assumptions are consistent for both domains.
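The abstract's discrete-domain metric combines Dirichlet priors with counts from data. As a minimal illustrative sketch (not the paper's exact derivation), the following computes the log marginal likelihood of one node family under a uniform Dirichlet (BDeu-style) prior; the function name and the uniform-prior choice are assumptions for illustration:

```python
import math
from itertools import product

def bdeu_family_score(data, child, parents, states, alpha=1.0):
    """Log marginal likelihood for one node family under a
    uniform Dirichlet prior with equivalent sample size alpha.

    data:   list of dicts {variable: value}
    states: dict {variable: list of possible values}
    """
    r = len(states[child])                         # child cardinality
    parent_configs = list(product(*(states[p] for p in parents)))
    q = len(parent_configs)
    a_j, a_jk = alpha / q, alpha / (q * r)         # prior pseudo-counts

    score = 0.0
    for cfg in parent_configs:
        # counts N_ij and N_ijk for this parent configuration
        rows = [row for row in data
                if tuple(row[p] for p in parents) == cfg]
        n_j = len(rows)
        score += math.lgamma(a_j) - math.lgamma(a_j + n_j)
        for k in states[child]:
            n_jk = sum(1 for row in rows if row[child] == k)
            score += math.lgamma(a_jk + n_jk) - math.lgamma(a_jk)
    return score
```

Summing such family scores over all nodes gives the network score; the Gaussian case replaces the Dirichlet calculation with the analogous normal-Wishart marginal likelihood.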
May-13-2021