## Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables (1996)

### Download Links

- [research.microsoft.com]
- [www.research.microsoft.com]
- [ftp.research.microsoft.com]
- DBLP

### Other Repositories/Bibliography

Venue: Machine Learning

Citations: 176 (10 self)

### BibTeX

@INPROCEEDINGS{Chickering96efficientapproximations,
  author    = {David Maxwell Chickering and David Heckerman},
  title     = {Efficient Approximations for the Marginal Likelihood of Bayesian Networks with Hidden Variables},
  booktitle = {Machine Learning},
  year      = {1996},
  pages     = {181--212}
}

### Abstract

We discuss Bayesian methods for model averaging and model selection among Bayesian-network models with hidden variables. In particular, we examine large-sample approximations for the marginal likelihood of naive-Bayes models in which the root node is hidden. Such models are useful for clustering or unsupervised learning. We consider a Laplace approximation and the less accurate but more computationally efficient approximation known as the Bayesian Information Criterion (BIC), which is equivalent to Rissanen's (1987) Minimum Description Length (MDL). Also, we consider approximations that ignore some off-diagonal elements of the observed information matrix and an approximation proposed by Cheeseman and Stutz (1995). We evaluate the accuracy of these approximations using a Monte-Carlo gold standard. In experiments with artificial and real examples, we find that (1) none of the approximations are accurate when used for model averaging, (2) all of the approximations, with the exception of BI...
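The BIC approximation mentioned in the abstract replaces the intractable marginal likelihood with the maximized log-likelihood minus a penalty that grows with the number of free parameters. A minimal sketch of that standard formula, log p(D | m) ≈ log p(D | θ̂, m) − (d/2) log N; the function name and the example numbers below are illustrative, not taken from the paper:

```python
import math

def bic_score(log_likelihood, num_params, num_samples):
    """BIC approximation to the log marginal likelihood:
    log p(D | m) ~ log p(D | theta_hat, m) - (d/2) * log(N),
    where d is the number of free parameters and N the sample size."""
    return log_likelihood - 0.5 * num_params * math.log(num_samples)

# Hypothetical comparison of two models fit to the same N = 500 data set:
# the richer model fits slightly better but pays a larger parameter penalty.
simple = bic_score(log_likelihood=-1200.0, num_params=10, num_samples=500)
rich = bic_score(log_likelihood=-1180.0, num_params=40, num_samples=500)
print(simple > rich)  # here the penalty outweighs the fit improvement
```

Because the penalty term scales as (d/2) log N, BIC favors smaller models as the sample grows unless the extra parameters buy a proportionally larger likelihood gain, which is why it is cheap but, as the abstract notes, less accurate than the Laplace approximation.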