Results 1–10 of 82
A general algorithm for approximate inference and its application to hybrid Bayes nets
 In Uncertainty in Artificial Intelligence (UAI'98)
, 1998
Abstract

Cited by 92 (2 self)
The clique tree algorithm is the standard method for doing inference in Bayesian networks. It works by manipulating clique potentials — distributions over the variables in a clique. While this approach works well for many networks, it is limited by the need to maintain an exact representation of the clique potentials. This paper presents a new unified approach that combines approximate inference and the clique tree algorithm, thereby circumventing this limitation. Many known approximate inference algorithms can be viewed as instances of this approach. The algorithm essentially does clique tree propagation, using approximate inference to estimate the densities in each clique. In many settings, the computation of the approximate clique potential can be done easily using statistical importance sampling. Iterations are used to gradually improve the quality of the estimation.
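The importance-sampling step this abstract alludes to can be sketched in a few lines. Below is an illustrative self-normalized importance sampler for a one-dimensional expectation; the target, proposal, and sample size are assumptions for the example, not the paper's actual construction:

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    # Density of N(mu, sigma^2), standing in for a clique potential.
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def importance_estimate(f, target_pdf, proposal_sampler, proposal_pdf, n=100_000):
    # Self-normalized importance sampling: draw from an easy proposal,
    # reweight each draw by target/proposal, and average f under the weights.
    total_w, total_fw = 0.0, 0.0
    for _ in range(n):
        x = proposal_sampler()
        w = target_pdf(x) / proposal_pdf(x)
        total_w += w
        total_fw += w * f(x)
    return total_fw / total_w

random.seed(0)
# Proposal: Uniform(-5, 5) (density 0.1); target: N(0, 1). Estimate E[X], which is 0.
mean_est = importance_estimate(
    f=lambda x: x,
    target_pdf=normal_pdf,
    proposal_sampler=lambda: random.uniform(-5.0, 5.0),
    proposal_pdf=lambda x: 0.1,
)
print(mean_est)  # close to 0
```

Iterating, as the abstract describes, would amount to refitting the proposal to the current estimate and re-sampling.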
Hybrid Bayesian Networks for Reasoning about Complex Systems
, 2002
Abstract

Cited by 71 (0 self)
Many real-world systems are naturally modeled as hybrid stochastic processes, i.e., stochastic processes that contain both discrete and continuous variables. Examples include speech recognition, target tracking, and monitoring of physical systems. The task is usually to perform probabilistic inference, i.e., to infer the hidden state of the system given some noisy observations. For example, we can ask what is the probability that a certain word was pronounced given the readings of our microphone, what is the probability that a submarine is trying to surface given our sonar data, and what is the probability of a valve being open given our pressure and flow readings. Bayesian networks are
A variational approximation for Bayesian networks with discrete and continuous latent variables
 In UAI
, 1999
Abstract

Cited by 55 (5 self)
We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust the variational parameters to improve the quality of the approximation. We demonstrate experimentally that this approximation is much faster than sampling, but comparable in accuracy. We also introduce a simple new technique for handling evidence, which allows us to handle arbitrary distributions on observed nodes, as well as achieving a significant speedup in networks with discrete variables of large cardinality.
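The Gaussian-form lower bound on the logistic function that this line of work builds on (the Jaakkola–Jordan variational bound) can be checked numerically; the variational parameter xi = 2.0 and the grid below are arbitrary illustration values:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4*xi): coefficient of the quadratic term.
    return math.tanh(xi / 2.0) / (4.0 * xi)

def logistic_lower_bound(x, xi):
    # Jaakkola-Jordan bound:
    #   sigma(x) >= sigma(xi) * exp((x - xi)/2 - lambda(xi) * (x^2 - xi^2))
    # The right-hand side is an exponentiated quadratic in x, i.e. Gaussian-shaped,
    # which is what makes exact Gaussian inference applicable after the substitution.
    return sigmoid(xi) * math.exp((x - xi) / 2.0 - lam(xi) * (x * x - xi * xi))

xi = 2.0
grid = [i / 10.0 for i in range(-60, 61)]
# The bound never exceeds the true logistic, and is tight at x = +/- xi.
assert all(logistic_lower_bound(x, xi) <= sigmoid(x) + 1e-12 for x in grid)
print(abs(logistic_lower_bound(xi, xi) - sigmoid(xi)))  # 0.0
```

The iterative adjustment the abstract mentions corresponds to re-optimizing xi for each node given the current posterior estimate.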
Efficient belief propagation for vision using linear constraint nodes
 in CVPR
, 2007
Abstract

Cited by 39 (7 self)
Belief propagation over pairwise-connected Markov Random Fields has become a widely used approach, and has been successfully applied to several important computer vision problems. However, pairwise interactions are often insufficient to capture the full statistics of the problem. Higher-order interactions are sometimes required. Unfortunately, the complexity of belief propagation is exponential in the size of the largest clique. In this paper, we introduce a new technique to compute belief propagation messages in time linear with respect to clique size for a large class of potential functions over real-valued variables. We demonstrate this technique in two applications. First, we perform efficient inference in graphical models where the spatial prior of natural images is captured by 2 × 2 cliques. This approach shows significant improvement over the commonly used pairwise-connected models, and may benefit a variety of applications using belief propagation to infer images or range images. Second, we apply these techniques to shape-from-shading and demonstrate significant improvement over previous methods, both in quality and in flexibility.
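The core observation behind this kind of speedup can be shown on a discrete toy case: for a hard linear-constraint factor such as x3 = x1 + x2, the outgoing sum-product message collapses to a convolution of the incoming messages, avoiding enumeration of the full clique table (the message values below are made up for illustration):

```python
import numpy as np

# Incoming messages over discretized variables x1 and x2 (hypothetical values).
m1 = np.array([0.2, 0.5, 0.3])
m2 = np.array([0.1, 0.6, 0.3])

# Brute force: enumerate every (x1, x2) pair allowed by the constraint x3 = x1 + x2.
n = len(m1)
brute = np.zeros(2 * n - 1)
for i in range(n):
    for j in range(n):
        brute[i + j] += m1[i] * m2[j]  # indicator[x3 == x1 + x2] * m1(x1) * m2(x2)

# Constraint-node shortcut: the same message is just the convolution m1 * m2,
# which an FFT would evaluate in O(n log n) instead of O(n^2).
fast = np.convolve(m1, m2)

print(np.allclose(brute, fast))  # True
```

This is a sketch of the general idea, not the paper's exact message-update equations.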
Inference in Hybrid Bayesian Networks using Mixtures of Truncated Exponentials
 International Journal of Approximate Reasoning
, 2006
Abstract

Cited by 35 (14 self)
The main goal of this paper is to describe a method for exact inference in general hybrid Bayesian networks (BNs) (with a mixture of discrete and continuous chance variables). Our method consists of approximating general hybrid Bayesian networks by mixture of Gaussians (MoG) BNs. There exists a fast algorithm by Lauritzen–Jensen (LJ) for making exact inferences in MoG Bayesian networks, and there exists a commercial implementation of this algorithm. However, this algorithm can only be used for MoG BNs. Some limitations of such networks are as follows. All continuous chance variables must have conditional linear Gaussian distributions, and discrete chance nodes cannot have continuous parents. The methods described in this paper will enable us to use the LJ algorithm for a larger class of hybrid Bayesian networks. This includes networks with continuous chance nodes with non-Gaussian distributions, networks with no restrictions on the topology of discrete and continuous variables, networks with conditionally deterministic variables that are a nonlinear function of their continuous parents, and networks with continuous chance variables whose variances are functions of their parents.
Exact inference in networks with discrete children of continuous parents
 in: J. Breese, D. Koller (Eds.), Uncertainty in Artificial Intelligence
, 2001
Abstract

Cited by 34 (2 self)
Many real-life domains contain a mixture of discrete and continuous variables and can be modeled as hybrid Bayesian Networks (BNs). An important subclass of hybrid BNs are conditional linear Gaussian (CLG) networks, where the conditional distribution of the continuous variables given an assignment to the discrete variables is a multivariate Gaussian. Lauritzen’s extension to the clique tree algorithm can be used for exact inference in CLG networks. However, many domains include discrete variables that depend on continuous ones, and CLG networks do not allow such dependencies to be represented. In this paper, we propose the first “exact” inference algorithm for augmented CLG networks — CLG networks augmented by allowing discrete children of continuous parents. Our algorithm is based on Lauritzen’s algorithm, and is exact in a similar sense: it computes the exact distributions over the discrete nodes, and the exact first and second moments of the continuous ones, up to inaccuracies resulting from numerical integration used within the algorithm. In the special case of softmax CPDs, we show that integration can often be done efficiently, and that using the first two moments leads to a particularly accurate approximation. We show empirically that our algorithm achieves substantially higher accuracy at lower cost than previous algorithms for this task.
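The kind of one-dimensional numerical integration such an algorithm relies on, for a logistic (two-class softmax) child of a Gaussian parent, can be sketched as follows; the integration bounds and grid size are arbitrary choices for the example, not the paper's:

```python
import math

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def prob_discrete_child(mu, sigma, lo=-10.0, hi=10.0, n=4001):
    # P(D = 1) = integral of sigmoid(x) * N(x; mu, sigma^2) dx, via the trapezoid rule.
    h = (hi - lo) / (n - 1)
    total = 0.0
    for k in range(n):
        x = lo + k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * sigmoid(x) * normal_pdf(x, mu, sigma)
    return total * h

# Symmetric case: sigmoid(x) + sigmoid(-x) = 1, so the integral is exactly 1/2.
p = prob_discrete_child(0.0, 1.0)
print(p)  # ~0.5
```

Higher moments of the continuous parent's posterior would be computed the same way, with x or x**2 inserted into the integrand.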
Approximating probability density functions with mixtures of truncated exponentials
 Proceedings of the Tenth Conference on Information Processing and Management of Uncertainty in KnowledgeBased Systems (IPMU04), 2004
Abstract

Cited by 32 (21 self)
Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization for approximating probability density functions (PDFs). This paper presents MTE potentials that approximate standard PDFs and applications of these potentials for solving inference problems in hybrid Bayesian networks.
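What makes MTEs usable inside standard propagation algorithms is that they are closed under marginalization: integrating an exponential term yields another exponential term. A toy two-variable check, with made-up coefficients for illustration:

```python
import math

# Toy MTE potential on [0, 1]^2 (hypothetical coefficients):
#   f(x, y) = exp(x + 2y)
# Marginalizing out y analytically gives another MTE in x:
#   g(x) = exp(x) * (e^2 - 1) / 2
def f(x, y):
    return math.exp(x + 2.0 * y)

def marginal_analytic(x):
    return math.exp(x) * (math.e ** 2 - 1.0) / 2.0

def marginal_numeric(x, n=10_001):
    # Trapezoid-rule integration of f(x, y) over y in [0, 1], for comparison.
    h = 1.0 / (n - 1)
    total = 0.0
    for k in range(n):
        y = k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * f(x, y)
    return total * h

err = abs(marginal_numeric(0.3) - marginal_analytic(0.3))
print(err)  # tiny
```

A full MTE potential is a sum of such terms (plus a constant) on each region of a partitioned domain; the closure property carries over term by term.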
Inference in hybrid Bayesian networks with mixtures of truncated exponentials
 Proceedings of the 6th Workshop on Uncertainty Processing (WUPES 2003)
Abstract

Cited by 31 (4 self)
An important class of hybrid Bayesian networks are those that have conditionally deterministic variables (a variable that is a deterministic function of its parents). In this case, if some of the parents are continuous, then the joint density function does not exist. Conditional linear Gaussian (CLG) distributions can handle such cases when the deterministic function is linear and the continuous variables are normally distributed. In this paper, we develop the operations required for performing inference with conditionally deterministic variables using relationships derived from joint cumulative distribution functions (CDFs). These methods allow inference in networks with deterministic variables where the continuous variables are non-Gaussian.
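The CDF-based idea can be illustrated with a monotone deterministic child: even though the joint density of (X, Y) does not exist, the CDF transfers directly via F_Y(y) = F_X(g⁻¹(y)). The uniform parent and Y = X² below are assumptions for the example, not the paper's operations:

```python
import math

# Deterministic child Y = g(X) with g monotone increasing on the parent's support:
#   F_Y(y) = P(g(X) <= y) = F_X(g^{-1}(y))
# Toy case: X ~ Uniform(0, 1), Y = X^2.

def F_X(x):
    # CDF of Uniform(0, 1), clipped outside the support.
    return min(max(x, 0.0), 1.0)

def F_Y(y):
    if y <= 0.0:
        return 0.0
    return F_X(math.sqrt(y))  # g^{-1}(y) = sqrt(y) on (0, 1]

print(F_Y(0.25))  # 0.5, since P(X^2 <= 0.25) = P(X <= 0.5)
```

Differentiating F_Y where needed recovers a density for the non-degenerate parts of the model, which is how CDF-level operations sidestep the missing joint density.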
Using probability trees to compute marginals with imprecise probabilities
 International Journal of Approximate Reasoning
, 2002
Abstract

Cited by 28 (2 self)
This paper presents an approximate algorithm to obtain a posteriori intervals of probability, when available information is also given with intervals. The algorithm uses probability trees as a means of representing and computing with the convex sets of