Results 1–10 of 67
A general algorithm for approximate inference and its application to hybrid Bayes nets
 In Uncertainty in Artificial Intelligence (UAI'98)
, 1998
Abstract

Cited by 74 (2 self)
The clique tree algorithm is the standard method for doing inference in Bayesian networks. It works by manipulating clique potentials — distributions over the variables in a clique. While this approach works well for many networks, it is limited by the need to maintain an exact representation of the clique potentials. This paper presents a new unified approach that combines approximate inference and the clique tree algorithm, thereby circumventing this limitation. Many known approximate inference algorithms can be viewed as instances of this approach. The algorithm essentially does clique tree propagation, using approximate inference to estimate the densities in each clique. In many settings, the computation of the approximate clique potential can be done easily using statistical importance sampling. Iterations are used to gradually improve the quality of the estimation.
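The importance-sampling step described above can be sketched as follows. This is an illustrative sketch only: the estimator, target, and proposal are generic stand-ins, not the paper's implementation.

```python
import random, math

# Hedged sketch: estimate an expectation E_p[f(X)] by sampling from a tractable
# proposal q and reweighting by p/q, as in the statistical importance sampling
# used to approximate clique densities. All names here are illustrative.

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(f, p_pdf, q_pdf, q_sample, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = q_sample(rng)
        total += f(x) * p_pdf(x) / q_pdf(x)   # weight corrects for sampling from q
    return total / n

# Example: target p = N(0,1), wider proposal q = N(0,2); estimate E_p[x^2] = 1.
est = importance_estimate(
    f=lambda x: x * x,
    p_pdf=lambda x: normal_pdf(x, 0.0, 1.0),
    q_pdf=lambda x: normal_pdf(x, 0.0, 2.0),
    q_sample=lambda rng: rng.gauss(0.0, 2.0),
)
```

In the paper's setting the target would be an (intractable) clique potential and the proposal a tractable approximation to it, iteratively refined.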
Hybrid Bayesian Networks for Reasoning about Complex Systems
, 2002
Abstract

Cited by 48 (0 self)
Many real-world systems are naturally modeled as hybrid stochastic processes, i.e., stochastic processes that contain both discrete and continuous variables. Examples include speech recognition, target tracking, and monitoring of physical systems. The task is usually to perform probabilistic inference, i.e., infer the hidden state of the system given some noisy observations. For example, we can ask what is the probability that a certain word was pronounced given the readings of our microphone, what is the probability that a submarine is trying to surface given our sonar data, and what is the probability of a valve being open given our pressure and flow readings. Bayesian networks are ...
A variational approximation for Bayesian networks with discrete and continuous latent variables
 In UAI
, 1999
Abstract

Cited by 43 (6 self)
We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust the variational parameters to improve the quality of the approximation. We demonstrate experimentally that this approximation is much faster than sampling, but comparable in accuracy. We also introduce a simple new technique for handling evidence, which allows us to handle arbitrary distributions on observed nodes, as well as achieving a significant speedup in networks with discrete variables of large cardinality.
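The logistic-to-Gaussian conversion described above is commonly realized with the Jaakkola–Jordan variational bound; a minimal sketch assuming that bound, which may differ in detail from the paper's exact construction:

```python
import math

# Hedged sketch: lower-bound the logistic sigmoid by a function whose log is
# quadratic in x (i.e., Gaussian-shaped), controlled by a variational
# parameter xi. After substitution, exact Gaussian inference applies; xi is
# then adjusted iteratively to tighten the bound.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4*xi), with the limit 1/8 at xi = 0.
    return 0.125 if xi == 0 else math.tanh(xi / 2.0) / (4.0 * xi)

def sigmoid_lower_bound(x, xi):
    # sigma(x) >= sigma(xi) * exp((x - xi)/2 - lambda(xi) * (x^2 - xi^2)),
    # with equality at x = +/- xi.
    return sigmoid(xi) * math.exp((x - xi) / 2.0 - lam(xi) * (x * x - xi * xi))
```

Because the exponent is quadratic in x, the bound multiplies cleanly into Gaussian potentials, which is what makes the subsequent inference exact for fixed xi.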
Efficient belief propagation for vision using linear constraint nodes
 in CVPR
, 2007
Abstract

Cited by 25 (4 self)
Belief propagation over pairwise connected Markov Random Fields has become a widely used approach, and has been successfully applied to several important computer vision problems. However, pairwise interactions are often insufficient to capture the full statistics of the problem. Higher-order interactions are sometimes required. Unfortunately, the complexity of belief propagation is exponential in the size of the largest clique. In this paper, we introduce a new technique to compute belief propagation messages in time linear with respect to clique size for a large class of potential functions over real-valued variables. We demonstrate this technique in two applications. First, we perform efficient inference in graphical models where the spatial prior of natural images is captured by 2 × 2 cliques. This approach shows significant improvement over the commonly used pairwise-connected models, and may benefit a variety of applications using belief propagation to infer images or range images. Finally, we apply these techniques to shape-from-shading and demonstrate significant improvement over previous methods, both in quality and in flexibility.
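For a hard linear constraint, the message computation the abstract describes reduces to convolution; a minimal sketch in which the discretization and the constraint x3 = x1 + x2 are illustrative assumptions:

```python
# Hedged sketch: for a factor enforcing x3 = x1 + x2 over variables discretized
# on a common uniform grid, the BP message to x3 is the convolution of the
# incoming messages over x1 and x2: O(n^2) per message (O(n log n) with FFTs)
# instead of the O(n^3) a generic 3-variable clique table would need. Larger
# linear constraints chain such 1-D convolutions, giving cost linear in clique
# size rather than exponential.

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def message_to_sum_node(m1, m2):
    # Outgoing message over x3 = x1 + x2, normalized to sum to 1.
    m = convolve(m1, m2)
    z = sum(m)
    return [v / z for v in m]
```

For example, if x1 and x2 each place all mass on grid index 1, the outgoing message places all mass on index 2, as the constraint requires.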
Exact inference in networks with discrete children of continuous parents
 in: J. Breese, D. Koller (Eds.), Uncertainty in Artificial Intelligence
, 2001
Abstract

Cited by 23 (2 self)
Many real-life domains contain a mixture of discrete and continuous variables and can be modeled as hybrid Bayesian Networks (BNs). An important subclass of hybrid BNs are conditional linear Gaussian (CLG) networks, where the conditional distribution of the continuous variables given an assignment to the discrete variables is a multivariate Gaussian. Lauritzen’s extension to the clique tree algorithm can be used for exact inference in CLG networks. However, many domains include discrete variables that depend on continuous ones, and CLG networks do not allow such dependencies to be represented. In this paper, we propose the first “exact” inference algorithm for augmented CLG networks — CLG networks augmented by allowing discrete children of continuous parents. Our algorithm is based on Lauritzen’s algorithm, and is exact in a similar sense: it computes the exact distributions over the discrete nodes, and the exact first and second moments of the continuous ones, up to inaccuracies resulting from numerical integration used within the algorithm. In the special case of softmax CPDs, we show that integration can often be done efficiently, and that using the first two moments leads to a particularly accurate approximation. We show empirically that our algorithm achieves substantially higher accuracy at lower cost than previous algorithms for this task.
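The moment computation the abstract describes can be illustrated on a one-node example; the prior, CPD, and quadrature below are illustrative assumptions, not the paper's algorithm:

```python
import math

# Hedged illustration: with prior X ~ N(0,1) and a logistic CPD (a two-class
# softmax) P(D=1 | x) = sigma(x), observing D = 1 yields a non-Gaussian
# posterior over X. The quantities of interest are its normalizer and its
# exact first and second moments, here obtained by simple numerical
# integration on a fine grid.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def std_normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def posterior_moments(n=20_001, lo=-10.0, hi=10.0):
    h = (hi - lo) / (n - 1)
    z = m1 = m2 = 0.0
    for i in range(n):
        x = lo + i * h
        w = std_normal_pdf(x) * sigmoid(x) * h   # unnormalized posterior mass
        z += w
        m1 += x * w
        m2 += x * x * w
    mean = m1 / z
    var = m2 / z - mean * mean
    return z, mean, var
```

By symmetry the normalizer is exactly 1/2; the evidence D = 1 shifts the mean to a positive value and shrinks the variance below the prior's, which is what a moment-matched Gaussian would then carry through the clique tree.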
Using probability trees to compute marginals with imprecise probabilities
 International Journal of Approximate Reasoning
, 2002
Abstract

Cited by 23 (2 self)
This paper presents an approximate algorithm to obtain a posteriori intervals of probability, when available information is also given with intervals. The algorithm uses probability trees as a means of representing and computing with the convex sets of ...
Approximating probability density functions with mixtures of truncated exponentials
 Proceedings of the Tenth Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU-04), 2004
, 2006
Abstract

Cited by 17 (9 self)
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization and Monte Carlo methods for solving hybrid Bayesian networks. Any probability density function (PDF) can be approximated by an MTE potential, which can always be marginalized in closed form. This allows propagation to be done exactly using the Shenoy-Shafer architecture for computing marginals, with no restrictions on the construction of a join tree. This paper presents MTE potentials that approximate standard PDFs and applications of these potentials for solving inference problems in hybrid Bayesian networks. These approximations will extend the types of inference problems that can be modeled with Bayesian networks, as demonstrated using three examples.
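The closed-form marginalization property can be sketched directly; the coefficients below are illustrative, not a fitted MTE approximation of any standard PDF:

```python
import math

# Hedged sketch of why MTE potentials marginalize in closed form: each piece
# has the form a + b * exp(c * x) on an interval [l, u], so its integral is
# elementary; an MTE potential is a sum of such pieces, and sums integrate
# term by term. No numerical quadrature is needed during propagation.

def mte_piece_integral(a, b, c, l, u):
    # Integral of a + b * exp(c*x) over [l, u], in closed form.
    return a * (u - l) + (b / c) * (math.exp(c * u) - math.exp(c * l))

def mte_integral(pieces):
    # pieces: iterable of (a, b, c, l, u) tuples over (possibly different)
    # intervals; the marginal is just the sum of the per-piece integrals.
    return sum(mte_piece_integral(*p) for p in pieces)
```

The same elementary antiderivative exists after multiplying two MTE pieces (the product is again of MTE form), which is what lets Shenoy-Shafer propagation stay exact.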
Inference in Hybrid Bayesian Networks with Mixtures of Truncated Exponentials
, 2003
Abstract

Cited by 16 (2 self)
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for solving hybrid Bayesian networks. Any probability density function can be approximated with an MTE potential, which can always be marginalized in closed form. This allows propagation to be done exactly using the Shenoy-Shafer architecture for computing marginals, with no restrictions on the construction of a join tree.