Results 1–8 of 8
Tutorial on Variational Approximation Methods
In Advanced Mean Field Methods: Theory and Practice, 2000
"... We provide an introduction to the theory and use of variational methods for inference and estimation in the context of graphical models. Variational methods become useful as ecient approximate methods when the structure of the graph model no longer admits feasible exact probabilistic calculations. T ..."
Abstract

Cited by 88 (1 self)
We provide an introduction to the theory and use of variational methods for inference and estimation in the context of graphical models. Variational methods become useful as efficient approximate methods when the structure of the graphical model no longer admits feasible exact probabilistic calculations. The emphasis of this tutorial is on illustrating how inference and estimation problems can be transformed into variational form, along with describing the resulting approximation algorithms and their properties insofar as these are currently known.
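The core variational idea the tutorial describes can be illustrated on a toy model: for a factorized ("mean-field") approximating distribution, the evidence lower bound (ELBO) never exceeds the true log partition function. The following is a minimal sketch on a made-up two-variable pairwise model (the parameters `theta1`, `theta2`, `w` are illustrative assumptions, not from the paper):

```python
import itertools
import math

# Toy unnormalized joint over two binary variables:
#   p~(x1, x2) = exp(theta1*x1 + theta2*x2 + w*x1*x2)
# (hypothetical parameters for illustration only)
theta1, theta2, w = 0.5, -0.3, 1.2

def log_ptilde(x1, x2):
    return theta1 * x1 + theta2 * x2 + w * x1 * x2

# Exact log partition function: feasible here, infeasible for large graphs,
# which is exactly when variational approximations become useful.
log_Z = math.log(sum(math.exp(log_ptilde(a, b))
                     for a, b in itertools.product([0, 1], repeat=2)))

def elbo(q1, q2):
    """ELBO for a factorized q(x1)q(x2); always a lower bound on log Z."""
    val = 0.0
    for a, b in itertools.product([0, 1], repeat=2):
        p = (q1 if a else 1 - q1) * (q2 if b else 1 - q2)
        if p > 0:
            val += p * (log_ptilde(a, b) - math.log(p))
    return val

# Crude grid search over the factorized family (a stand-in for the
# coordinate-ascent mean-field updates described in the tutorial).
best = max(elbo(q1, q2)
           for q1 in (i / 100 for i in range(1, 100))
           for q2 in (i / 100 for i in range(1, 100)))
print(f"log Z = {log_Z:.4f}, best mean-field ELBO = {best:.4f}")
```

Because the true joint has an interaction term (`w != 0`), the factorized family cannot match it exactly, so the best ELBO sits strictly below `log Z`; the gap is the KL divergence from `q` to `p`.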
On the Concentration of Expectation and Approximate Inference in Layered Networks
HW/SW Codesign MEDEA/ESPRIT conference, 2003
"... We present an analysis of concentrationofexpectation phenomena in layered Bayesian networks that use generalized linear models as the local conditional probabilities. This framework encompasses a wide variety of probability distributions, including both discrete and continuous random variables ..."
Abstract

Cited by 4 (1 self)
We present an analysis of concentration-of-expectation phenomena in layered Bayesian networks that use generalized linear models as the local conditional probabilities. This framework encompasses a wide variety of probability distributions, including both discrete and continuous random variables. We utilize ideas from large deviation analysis and the delta method to devise and evaluate a class of approximate inference algorithms for layered Bayesian networks that have superior asymptotic error bounds and very fast computation time.
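The delta method mentioned in the abstract approximates an expectation of a nonlinear function by a Taylor expansion around the mean, which works well precisely when the input concentrates. A minimal sketch under illustrative assumptions (a sigmoid link applied to the mean of `n` i.i.d. Bernoulli inputs; not the paper's actual networks):

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# X = mean of n i.i.d. Bernoulli(p) inputs; as n grows, X concentrates at p,
# so E[sigmoid(X)] is well approximated by a second-order delta-method expansion.
n, p = 200, 0.3
mu, var = p, p * (1 - p) / n

# Second-order delta method: E[f(X)] ~= f(mu) + 0.5 * f''(mu) * Var(X)
s = sigmoid(mu)
f2 = s * (1 - s) * (1 - 2 * s)  # second derivative of the sigmoid at mu
approx = s + 0.5 * f2 * var

# Monte Carlo estimate of E[sigmoid(X)] for comparison
random.seed(0)
mc = sum(sigmoid(sum(random.random() < p for _ in range(n)) / n)
         for _ in range(20000)) / 20000

print(f"delta-method: {approx:.5f}, Monte Carlo: {mc:.5f}")
```

The closed-form approximation costs one function evaluation instead of a full expectation, which is the source of the "very fast computation time" the abstract claims; the error shrinks as the concentration tightens (here, as `n` grows).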
Active tuples-based scheme for bounding posterior beliefs
Journal of Artificial Intelligence Research, 2010
"... The paper presents a scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power lies in its ability to use any available scheme that bounds the probability of evidence or posterior marginals and enhance its performance in an anytime ..."
Abstract

Cited by 1 (1 self)
The paper presents a scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power lies in its ability to use any available scheme that bounds the probability of evidence or posterior marginals and enhance its performance in an anytime manner. The scheme uses the cutset conditioning principle to tighten existing bounding schemes and to facilitate anytime behavior, utilizing a fixed number of cutset tuples. The accuracy of the bounds improves as the number of cutset tuples used increases, and so does the computation time. We demonstrate empirically the value of our scheme for bounding posterior marginals and probability of evidence, using a variant of the bound propagation algorithm as a plug-in scheme.
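The cutset-conditioning idea behind this scheme can be sketched abstractly: the probability of evidence decomposes as a sum over cutset instantiations, P(e) = Σ_c P(c, e); computing the largest tuples exactly and bounding the remaining mass with any plug-in bound yields anytime lower and upper bounds. A minimal sketch, assuming a hypothetical per-tuple joint table and a hypothetical plug-in upper bound (both made up for illustration):

```python
# Toy stand-in: P(c, e) for each instantiation c of a 2-variable binary cutset.
# In practice each entry comes from exact inference on the conditioned
# (singly-connected) network; these numbers are purely illustrative.
joint = {(0, 0): 0.20, (0, 1): 0.05, (1, 0): 0.10, (1, 1): 0.02}

def bounds_on_evidence(joint, num_tuples, upper_per_tuple):
    """Bound P(e) = sum_c P(c, e) using only the `num_tuples` largest tuples.

    The unexplored tuples contribute at least 0 (lower bound) and at most
    `upper_per_tuple` each -- a hypothetical plug-in bound standing in for
    any available bounding scheme."""
    ordered = sorted(joint.values(), reverse=True)
    exact_part = sum(ordered[:num_tuples])
    remaining = len(ordered) - num_tuples
    return exact_part, exact_part + remaining * upper_per_tuple

lo, hi = bounds_on_evidence(joint, num_tuples=2, upper_per_tuple=0.10)
print(f"P(e) in [{lo:.2f}, {hi:.2f}]")
```

Increasing `num_tuples` moves mass from the bounded remainder into the exact sum, so the interval tightens monotonically at the cost of more computation, which is the anytime behavior the abstract describes.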
Improving Bound Propagation
"... Abstract. This paper extends previously proposed bound propagation algorithm [11] for computing lower and upper bounds on posterior marginals in Bayesian networks. We improve the bound propagation scheme by taking advantage of the directionality in Bayesian networks and applying the notion of releva ..."
Abstract
This paper extends the previously proposed bound propagation algorithm [11] for computing lower and upper bounds on posterior marginals in Bayesian networks. We improve the bound propagation scheme by taking advantage of the directionality in Bayesian networks and applying the notion of the relevant subnetwork. We also propose an approximation scheme for the linear optimization subproblems. We demonstrate empirically that while the resulting bounds lose some precision, we achieve a 10–100 times speedup compared to the original bound propagation using a simplex solver.
An Anytime Boosting Scheme for Bounding Posterior Beliefs
2009
"... This paper presents an anytime scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power is in that it can use any available scheme that bounds the probability of evidence, enhance its performance in an anytime manner, and transfo ..."
Abstract
This paper presents an anytime scheme for computing lower and upper bounds on the posterior marginals in Bayesian networks with discrete variables. Its power is in that it can use any available scheme that bounds the probability of evidence, enhance its performance in an anytime manner, and transform it effectively into bounds for posterior marginals. The scheme is novel in that, using the cutset conditioning principle (Pearl, 1988), it converts a bound on joint probabilities into a bound on the posterior marginals that is tighter than earlier schemes, while at the same time facilitating anytime improved performance. At the heart of the scheme is a new data structure which facilitates the efficient computation of such a bound without enumerating all the cutset tuples. Using a variant of the bound propagation algorithm (Leisink & Kappen, 2003) as the plugged-in scheme, we demonstrate empirically the value of our scheme for bounding posterior marginals and probability of evidence.
Efficient Inference for Complex Queries on Complex Distributions
"... We consider problems of approximate inference in which the query of interest is given by a complex formula (such as a formula in disjunctive formal form (DNF)) over a joint distribution given by a graphical model. We give a general reduction showing that (approximate) marginal inference for a class ..."
Abstract
We consider problems of approximate inference in which the query of interest is given by a complex formula (such as a formula in disjunctive normal form (DNF)) over a joint distribution given by a graphical model. We give a general reduction showing that (approximate) marginal inference for a class of distributions yields approximate inference for DNF queries, and extend our techniques to accommodate even more complex queries, and dense graphical models with variational inference, under certain conditions. Our results unify and generalize classical inference techniques (which are generally restricted to simple marginal queries) and approximate counting methods such as those introduced by Karp, Luby and Madras (which are generally restricted to product distributions).
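The Karp-Luby-Madras method the abstract refers to estimates the number of satisfying assignments of a DNF formula by importance sampling over (clause, assignment) pairs, avoiding the rare-event problem of naive sampling. A minimal sketch over the uniform distribution, with a made-up three-clause formula for illustration:

```python
import random

# Karp-Luby estimator for the number of satisfying assignments of a DNF formula.
# Each clause is a dict {var_index: required_value} over n binary variables.
# The formula below is purely illustrative.
n = 4
clauses = [{0: 1, 1: 1}, {1: 0, 2: 1}, {0: 1, 3: 0}]

def satisfies(assignment, clause):
    return all(assignment[v] == val for v, val in clause.items())

# Each clause i is satisfied by exactly 2^(n - |clause_i|) assignments.
weights = [2 ** (n - len(c)) for c in clauses]
U = sum(weights)  # size of the (clause, assignment) pair space

random.seed(1)
samples, hits = 50000, 0
for _ in range(samples):
    # Sample a clause proportional to its weight, then a uniform
    # assignment among those satisfying that clause.
    i = random.choices(range(len(clauses)), weights=weights)[0]
    a = [clauses[i].get(v, random.randint(0, 1)) for v in range(n)]
    # Count the pair only if i is the FIRST clause a satisfies,
    # so each satisfying assignment is counted exactly once.
    if next(j for j in range(len(clauses)) if satisfies(a, clauses[j])) == i:
        hits += 1

estimate = U * hits / samples
exact = sum(any(satisfies([(b >> v) & 1 for v in range(n)], c) for c in clauses)
            for b in range(2 ** n))
print(f"estimate ~ {estimate:.2f}, exact = {exact}")
```

The key property is that the hit fraction is at least 1/m for m clauses, so polynomially many samples suffice for a relative-error guarantee; the paper's reduction replaces the product/uniform sampling step with marginal inference on a graphical model.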