## An Optimal Approximation Algorithm For Bayesian Inference (1997)


### Download Links

- [www.icsi.berkeley.edu]
- [bmir.stanford.edu]
- DBLP

### Other Repositories/Bibliography

Venue: Artificial Intelligence

Citations: 48 (2 self)

### BibTeX

@ARTICLE{Dagum97anoptimal,
  author  = {Paul Dagum and Michael Luby},
  title   = {An Optimal Approximation Algorithm For Bayesian Inference},
  journal = {Artificial Intelligence},
  year    = {1997},
  volume  = {93},
  pages   = {1--27}
}

### Abstract

Approximating the inference probability Pr[X = x | E = e] in any sense, even for a single evidence node E, is NP-hard. This result holds for belief networks that are allowed to contain extreme conditional probabilities---that is, conditional probabilities arbitrarily close to 0. Nevertheless, all previous approximation algorithms have failed to approximate efficiently many inferences, even for belief networks without extreme conditional probabilities. We prove that we can approximate efficiently probabilistic inference in belief networks without extreme conditional probabilities. We construct a randomized approximation algorithm---the bounded-variance algorithm---that is a variant of the known likelihood-weighting algorithm. The bounded-variance algorithm is the first algorithm with provably fast inference approximation on all belief networks without extreme conditional probabilities. From the bounded-variance algorithm, we construct a deterministic approximation algorithm u...
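Since the bounded-variance algorithm is a variant of likelihood weighting, a minimal sketch of the underlying likelihood-weighting estimator may help. The two-node network below (nodes A and B, with all probabilities chosen away from 0 and 1, matching the paper's "no extreme conditional probabilities" setting) is a hypothetical example, not from the paper: evidence nodes are clamped, non-evidence nodes are sampled from their conditionals, and each sample is weighted by the likelihood of the evidence.

```python
import random

# Hypothetical two-node belief network A -> B:
#   P(A=1) = 0.3,  P(B=1|A=1) = 0.8,  P(B=1|A=0) = 0.1.
# No conditional probability is extreme (close to 0 or 1).
P_A = 0.3
P_B_GIVEN_A = {1: 0.8, 0: 0.1}

def likelihood_weighting(n_samples, evidence_b=1, seed=0):
    """Estimate Pr[A=1 | B=evidence_b] by likelihood weighting:
    sample the non-evidence node A from its prior, then weight each
    sample by the likelihood of the clamped evidence node B."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        a = 1 if rng.random() < P_A else 0
        # B is fixed to the evidence value; its conditional
        # probability given the sampled A becomes the weight.
        p_b1 = P_B_GIVEN_A[a]
        w = p_b1 if evidence_b == 1 else 1.0 - p_b1
        num += w * a
        den += w
    return num / den

# Exact posterior by Bayes' rule, for comparison:
# P(A=1|B=1) = 0.3*0.8 / (0.3*0.8 + 0.7*0.1)
exact = (P_A * P_B_GIVEN_A[1]) / (P_A * P_B_GIVEN_A[1] + (1 - P_A) * P_B_GIVEN_A[0])
estimate = likelihood_weighting(100_000)
print(exact, estimate)
```

The paper's contribution is not this estimator itself but a variant whose sample weights are controlled so that, absent extreme conditional probabilities, the number of samples needed for a guaranteed relative-error approximation is provably polynomial.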