Results 1–10 of 15
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have bee ..."
Abstract

Cited by 759 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs
and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
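The factored-state idea in this abstract can be illustrated with a minimal sketch (not from the thesis; the factor sizes are invented): an HMM must flatten the state into one discrete variable whose cardinality is the product of the factor sizes, while a DBN keeps each factor as its own variable.

```python
# Minimal sketch: flat HMM state space vs. factored DBN state.
# The factor sizes below are hypothetical, for illustration only.
from itertools import product

factor_sizes = [3, 4, 5]  # three assumed discrete state factors

# A flat HMM needs one state variable enumerating every joint value.
flat_states = list(product(*(range(k) for k in factor_sizes)))
flat_size = len(flat_states)       # 3 * 4 * 5 = 60 joint states

# A DBN stores the factors separately.
factored_size = sum(factor_sizes)  # only 3 + 4 + 5 = 12 values

print(flat_size, factored_size)    # 60 vs. 12
```

The gap grows multiplicatively with the number of factors, which is the sense in which DBNs are more expressive per parameter than a flat HMM.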
A variational approximation for Bayesian networks with discrete and continuous latent variables
 In UAI
, 1999
"... We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust ..."
Abstract

Cited by 55 (5 self)
We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust the variational parameters to improve the quality of the approximation. We demonstrate experimentally that this approximation is much faster than sampling, but comparable in accuracy. We also introduce a simple new technique for handling evidence, which allows us to handle arbitrary distributions on observed nodes, as well as achieving a significant speedup in networks with discrete variables of large cardinality.
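The logistic-to-Gaussian conversion described here is, in the standard variational treatment, a lower bound on the logistic function that is quadratic in its argument (the Jaakkola–Jordan bound). The sketch below checks that bound numerically; it is an illustration of the general technique, not code from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lam(xi):
    # lambda(xi) = tanh(xi/2) / (4*xi), the standard variational coefficient
    return math.tanh(xi / 2.0) / (4.0 * xi)

def logistic_lower_bound(x, xi):
    # Jaakkola-Jordan bound: sigmoid(xi) * exp((x-xi)/2 - lam(xi)*(x^2 - xi^2));
    # the exponent is quadratic in x, i.e. Gaussian-shaped, enabling exact
    # Gaussian inference once the bound is substituted for the logistic.
    return sigmoid(xi) * math.exp((x - xi) / 2.0 - lam(xi) * (x * x - xi * xi))

xi = 1.5  # variational parameter, would be adjusted iteratively in practice
for x in [-3.0, -1.0, 0.0, 1.0, 1.5, 3.0]:
    assert logistic_lower_bound(x, xi) <= sigmoid(x) + 1e-12

# the bound is tight at x = xi (and at x = -xi)
assert abs(logistic_lower_bound(xi, xi) - sigmoid(xi)) < 1e-12
```

Iteratively re-setting xi to the current estimate of x tightens the bound, which is the "iteratively adjust the variational parameters" step in the abstract.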
Sensitivity analysis in discrete Bayesian networks
 IEEE Transactions on Systems, Man, and Cybernetics
, 1997
"... The paper presents an efficient computational method for performing sensitivity analysis in discrete Bayesian networks. The method exploits the structure of conditional probabilities of a target node given the evidence. First, the set of parameters which are relevant to the calculation of the condit ..."
Abstract

Cited by 44 (4 self)
The paper presents an efficient computational method for performing sensitivity analysis in discrete Bayesian networks. The method exploits the structure of conditional probabilities of a target node given the evidence. First, the set of parameters which are relevant to the calculation of the conditional probabilities of the target node is identified. Next, this set is reduced by removing those combinations of the parameters which either contradict the available evidence or are incompatible. Finally, using the canonical components associated with the resulting subset of parameters, the desired conditional probabilities are obtained. In this way, an important saving in the calculations is achieved. The proposed method can also be used to compute exact upper and lower bounds for the conditional probabilities, hence a sensitivity analysis can be easily performed. Examples are used to illustrate the proposed methodology.
Inference and Learning in Hybrid Bayesian Networks
, 1998
"... We survey the literature on methods for inference and learning in Bayesian Networks composed of discrete and continuous nodes, in which the continuous nodes have a multivariate Gaussian distribution, whose mean and variance depends on the values of the discrete nodes. We also briefly consider hybrid ..."
Abstract

Cited by 39 (3 self)
We survey the literature on methods for inference and learning in Bayesian Networks composed of discrete and continuous nodes, in which the continuous nodes have a multivariate Gaussian distribution, whose mean and variance depend on the values of the discrete nodes. We also briefly consider hybrid Dynamic Bayesian Networks, an extension of switching Kalman filters. This report is meant to summarize what is known at a sufficient level of detail to enable someone to implement the algorithms, but without dwelling on formalities.
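The conditional-Gaussian setup this survey covers can be sketched minimally (all numbers hypothetical): a continuous node whose mean and variance depend on a discrete parent, with the marginal moments recovered by the laws of total expectation and total variance.

```python
# Hypothetical CG node: X | D=d ~ N(mu[d], var[d]) with discrete parent D.
p   = {0: 0.3, 1: 0.7}   # P(D=d), made-up prior
mu  = {0: -1.0, 1: 2.0}  # component means, made up
var = {0: 0.5, 1: 1.0}   # component variances, made up

# E[X] = sum_d P(d) * E[X|d]
mean_x = sum(p[d] * mu[d] for d in p)

# Var(X) = E[Var(X|D)] + Var(E[X|D])  (law of total variance)
var_x = (sum(p[d] * var[d] for d in p)
         + sum(p[d] * (mu[d] - mean_x) ** 2 for d in p))

print(mean_x, var_x)  # 1.1 and 2.74 for these numbers
```

Marginally X is a Gaussian mixture, not a Gaussian, which is exactly why exact inference in such hybrid networks is delicate.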
Properties of Sensitivity Analysis of Bayesian Belief Networks
 Proceedings of the Joint Session of the 6th Prague Symposium of Asymptotic Statistics and the 13th Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, Union of Czech Mathematicians and Physicists
, 1999
"... The assessments obtained for the various conditional probabilities of a Bayesian belief network inevitably are inaccurate. The inaccuracies involved influence the reliability of the network's output. By subjecting the belief network to a sensitivity analysis with respect to its conditional prob ..."
Abstract

Cited by 14 (3 self)
The assessments obtained for the various conditional probabilities of a Bayesian belief network inevitably are inaccurate. The inaccuracies involved influence the reliability of the network's output. By subjecting the belief network to a sensitivity analysis with respect to its conditional probabilities, the reliability of the output can be investigated. Unfortunately, straightforward sensitivity analysis of a Bayesian belief network is highly time-consuming. In this paper, we show that, by qualitative considerations, several analyses can be identified as being uninformative as the conditional probabilities under study cannot affect the network's output. In addition, we show that the analyses that are informative comply with simple mathematical functions; more specifically, we show that the network's output can be expressed as a quotient of two functions that are linear in a conditional probability under study. These properties allow for considerably reducing the computational burden of se...
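The quotient-of-linear-functions property can be checked on a toy two-node network (the network, its numbers, and the studied parameter are assumptions for illustration, not taken from the paper):

```python
# Toy network D -> E; the parameter under study is t = P(E=1 | D=1).
pD1 = 0.4           # assumed P(D=1)
pE1_given_D0 = 0.2  # assumed P(E=1 | D=0)

def posterior(t):
    # P(D=1 | E=1) by exact enumeration of the joint distribution
    joint = {}
    for d in (0, 1):
        pd = pD1 if d == 1 else 1 - pD1
        pe1 = t if d == 1 else pE1_given_D0
        for e in (0, 1):
            joint[(d, e)] = pd * (pe1 if e == 1 else 1 - pe1)
    pE1 = joint[(0, 1)] + joint[(1, 1)]
    return joint[(1, 1)] / pE1

# The paper's claim: the output is a quotient of two functions linear in t.
# Here that form is a*t / (a*t + b) with constants a, b independent of t.
a, b = pD1, (1 - pD1) * pE1_given_D0
for t in [0.1, 0.5, 0.9]:
    assert abs(posterior(t) - a * t / (a * t + b)) < 1e-12
```

Because the posterior is linear-fractional in the parameter, a handful of evaluations fixes the whole sensitivity curve, which is the source of the computational savings the abstract describes.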
Belief update in CLG Bayesian networks with lazy propagation
 International Journal of Approximate Reasoning
, 2008
"... In recent years Bayesian networks (BNs) with a mixture of continuous and discrete variables have received an increasing level of attention. We present an architecture for exact belief update in Conditional Linear Gaussian BNs (CLG BNs). The architecture is an extension of lazy propagation using oper ..."
Abstract

Cited by 7 (0 self)
In recent years Bayesian networks (BNs) with a mixture of continuous and discrete variables have received an increasing level of attention. We present an architecture for exact belief update in Conditional Linear Gaussian BNs (CLG BNs). The architecture is an extension of lazy propagation using operations of Lauritzen & Jensen [6] and Cowell [2]. By decomposing clique and separator potentials into sets of factors, the proposed architecture takes advantage of independence and irrelevance properties induced by the structure of the graph and the evidence. The resulting benefits are illustrated by examples. Results of a preliminary empirical performance evaluation indicate a significant potential of the proposed architecture.
Probabilistic Inference from Arbitrary Uncertainty using Mixtures of Factorized Generalized Gaussians
 ControlShell: A Software Architecture for Complex Electromechanical Systems; International Journal for Robotics Research (IJRR)
, 1998
"... This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the like ..."
Abstract

Cited by 6 (2 self)
This paper presents a general and efficient framework for probabilistic inference and learning from arbitrary uncertain information. It exploits the calculation properties of finite mixture models, conjugate families and factorization. Both the joint probability density of the variables and the likelihood function of the (objective or subjective) observation are approximated by a special mixture model, in such a way that any desired conditional distribution can be directly obtained without numerical integration. We have developed an extended version of the expectation maximization (EM) algorithm to estimate the parameters of mixture models from uncertain training examples (indirect observations). As a consequence, any piece of exact or uncertain information about both input and output values is consistently handled in the inference and learning stages. This ability, extremely useful in certain situations, is not found in most alternative methods. The proposed framework is formally just...
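The "conditional distributions without numerical integration" property holds for mixtures with factorized components: conditioning on one variable simply reweights the mixture over the remaining ones. A hedged sketch with invented component parameters:

```python
import math

def normal_pdf(x, m, s2):
    # univariate Gaussian density with mean m and variance s2
    return math.exp(-(x - m) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)

# Two-component factorized mixture over (X, Y); all parameters made up.
w   = [0.5, 0.5]                  # component weights
mx  = [-1.0, 2.0]; sx2 = [1.0, 1.0]  # X means / variances per component
my  = [0.0, 3.0];  sy2 = [0.5, 0.5]  # Y means / variances per component

def conditional_mean_y(x):
    # p(y | x) is again a mixture: each component's weight is rescaled by
    # its likelihood of x -- no numerical integration is ever needed.
    r = [w[k] * normal_pdf(x, mx[k], sx2[k]) for k in range(len(w))]
    z = sum(r)
    return sum((r[k] / z) * my[k] for k in range(len(w)))

# Near component 0's X-mean, E[Y | x] approaches my[0] = 0.0 ...
assert abs(conditional_mean_y(-1.0)) < 0.1
# ... and near component 1's X-mean it approaches my[1] = 3.0.
assert conditional_mean_y(2.0) > 2.9
```

The same reweighting works for subjective (soft) evidence by multiplying in a likelihood term per component, which is the mechanism behind the paper's handling of uncertain observations.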
Symbolic propagation and sensitivity analysis in Gaussian Bayesian networks with application to damage assessment
 Artificial Intelligence in Engineering
, 1997
"... In this paper we show how Bayesian network models can be used to perform a sensitivity analysis using symbolic, as opposed to numeric, computations. An example of damage assessment of concrete structures of buildings is used for illustrative purposes. Initially, normal or Gaussian Bayesian network m ..."
Abstract

Cited by 5 (2 self)
In this paper we show how Bayesian network models can be used to perform a sensitivity analysis using symbolic, as opposed to numeric, computations. An example of damage assessment of concrete structures of buildings is used for illustrative purposes. Initially, normal or Gaussian Bayesian network models are described together with an algorithm for numerical propagation of uncertainty in an incremental form. Next, the algorithm is implemented symbolically, in Mathematica code, and applied to answer some queries related to the damage assessment of concrete structures of buildings. Finally, the conditional means and variances of the nodes given the evidence are shown to be rational functions of the parameters, thus discovering their parametric structure, which can be efficiently used in sensitivity analysis.
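The numeric core of propagation in Gaussian networks is the standard Gaussian conditioning formula; a minimal bivariate example follows (the paper's symbolic Mathematica analysis is not reproduced, and the numbers are invented):

```python
# Bivariate Gaussian over (Y, E): condition the query node Y on evidence E = e.
# Standard conditioning formulas:
#   E[Y | e]   = muY + cov/varE * (e - muE)
#   Var[Y | e] = varY - cov**2 / varE
muY, muE = 1.0, 0.0          # prior means (hypothetical)
varY, varE, cov = 2.0, 1.0, 0.8  # prior (co)variances (hypothetical)

e = 1.5
mean_post = muY + cov / varE * (e - muE)  # 1.0 + 0.8*1.5 = 2.2
var_post = varY - cov ** 2 / varE         # 2.0 - 0.64  = 1.36

assert abs(mean_post - 2.2) < 1e-12
assert abs(var_post - 1.36) < 1e-12
```

Read symbolically, `mean_post` is linear in the evidence and rational in the (co)variance parameters, and `var_post` is rational in them too, matching the parametric structure the abstract reports.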
Efficient Inference for Mixed Bayesian Networks
 Proceedings of the 5th ISIF/IEEE International Conference on Information Fusion, 2002
, 2002
"... Bayesian network is a compact representation for probabilistic models and inference. They have been used successfully for multisensor fusion and situation assessment. It is well known that, in general, the inference algorithms to compute the exact posterior probability of the target state are either ..."
Abstract

Cited by 5 (0 self)
Bayesian networks are a compact representation for probabilistic models and inference. They have been used successfully for multisensor fusion and situation assessment. It is well known that, in general, the inference algorithms to compute the exact posterior probability of the target state are either computationally infeasible for dense networks or impossible for mixed discrete-continuous networks. In those cases, one approach is to compute approximate results using simulation methods. This paper proposes efficient inference methods for those cases. The goal is not to compute the exact or approximate posterior probability of the target state, but to identify the top (most likely) states in an efficient manner. The approach is to use intelligent simulation techniques where previous samples are used to guide the future sampling strategy. By focusing the sampling on the "important" space, we are able to sort out the top candidates quickly. Simulation results are included to demonstrate the performance of the algorithms.
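The "identify the top candidates by simulation" goal can be sketched as a plain frequency ranking over samples (the paper's adaptive reuse of previous samples is not reproduced here, and the target distribution is invented):

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical unnormalized posterior over four discrete configurations.
weights = {0: 0.05, 1: 0.6, 2: 0.3, 3: 0.05}
states = list(weights)

def sample_state():
    # categorical sampling by inverse CDF over the unnormalized weights
    r = random.random() * sum(weights.values())
    acc = 0.0
    for s in states:
        acc += weights[s]
        if r <= acc:
            return s
    return states[-1]

# Rank configurations by how often they appear among the samples;
# the most likely states surface without computing the exact posterior.
counts = Counter(sample_state() for _ in range(5000))
top2 = [s for s, _ in counts.most_common(2)]
assert set(top2) == {1, 2}
```

A smarter scheme would reshape the proposal toward frequently-hit regions as sampling proceeds, which is the spirit of the guided sampling the abstract describes.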
Dynamic Probabilistic Networks for Modelling and Identifying Dynamic Systems: a MCMC Approach
 International Journal
, 1997
"... In this paper we deal with the problem of interpreting data coming from a dynamic system by using Causal Probabilistic Networks, a probabilistic graphical model particularly appealing in Intelligent Data Analysis. We discuss the different approaches presented in the literature, outlining their pr ..."
Abstract

Cited by 3 (3 self)
In this paper we deal with the problem of interpreting data coming from a dynamic system by using Causal Probabilistic Networks, a probabilistic graphical model particularly appealing in Intelligent Data Analysis. We discuss the different approaches presented in the literature, outlining their pros and cons through a simple training example. Then, we present a new method for reconstructing the state of the dynamic system, based on Markov Chain Monte Carlo algorithms, called Dynamic Probabilistic Network smoothing (DPN-smoothing). Finally, we present an example of the application of DPN-smoothing in the field of signal deconvolution.