Results 1–10 of 308
Dynamic Bayesian Networks: Representation, Inference and Learning
2002
Cited by 726 (3 self)

Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their "expressive power". Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.

In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
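To illustrate the factored state space the abstract describes, here is a minimal Python sketch (all names and numbers are invented, not taken from the thesis): a two-variable DBN slice whose joint transition is a product of small per-variable CPDs, and which can be flattened into the equivalent single-variable HMM transition matrix.

    # Toy 2-slice DBN: the hidden state is the pair (A, B) of binary variables.
    # The transition factorizes as P(A'|A) * P(B'|A, B), so we store two small
    # tables instead of one 4x4 matrix.
    import itertools
    import numpy as np

    cpd_a = {0: [0.9, 0.1], 1: [0.2, 0.8]}                 # P(A'=j | A=i)
    cpd_b = {(0, 0): [0.7, 0.3], (0, 1): [0.4, 0.6],
             (1, 0): [0.5, 0.5], (1, 1): [0.1, 0.9]}       # P(B'=j | A=i, B=k)

    def transition(prev, nxt):
        (a0, b0), (a1, b1) = prev, nxt
        return cpd_a[a0][a1] * cpd_b[(a0, b0)][b1]

    # Flattening the factored model recovers the equivalent HMM over the
    # 4-state joint space -- the sense in which DBNs generalize HMMs.
    states = list(itertools.product((0, 1), repeat=2))
    T = np.array([[transition(s, t) for t in states] for s in states])
    assert np.allclose(T.sum(axis=1), 1.0)                 # rows are distributions
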
Probabilistic Horn abduction and Bayesian networks
Artificial Intelligence, 1993
"... This paper presents a simple framework for Hornclause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesia ..."
Abstract

Cited by 332 (39 self)
 Add to MetaCart
(Show Context)
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.
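A toy Python sketch of the idea, assuming an invented rule base (illustrative only, not Poole's actual system): an explanation is a set of independent hypotheses that, together with the Horn rules, entails the observation, and its probability is simply the product of the hypotheses' priors.

    # Assumable hypotheses with (unconditionally independent) priors.
    priors = {"flu": 0.05, "cold": 0.20, "allergy": 0.10}

    # Horn rules: head <- body. "sneeze" has two alternative explanations.
    rules = {
        "fever":  [["flu"]],
        "sneeze": [["cold"], ["allergy"]],
    }

    def explanations(goal):
        """Enumerate hypothesis sets entailing goal (naive backward chaining)."""
        if goal in priors:
            return [frozenset([goal])]
        result = []
        for body in rules.get(goal, []):
            partial = [frozenset()]
            for atom in body:
                partial = [e | e2 for e in partial for e2 in explanations(atom)]
            result.extend(partial)
        return result

    def prob(expl):
        """Independence makes an explanation's probability a simple product."""
        p = 1.0
        for h in expl:
            p *= priors[h]
        return p

    for e in explanations("sneeze"):
        print(sorted(e), prob(e))      # ['cold'] 0.2, then ['allergy'] 0.1
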
Bucket Elimination: A Unifying Framework for Reasoning
"... Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problemsolving and reasoning tasks. Algorithms such as directionalresolution for propositional satisfiability, adaptiveconsistency for constraint satisfaction, Fourier and Gaussian elimination ..."
Abstract

Cited by 314 (63 self)
 Add to MetaCart
(Show Context)
Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional resolution for propositional satisfiability, adaptive consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees: all are time and space exponential in the induced width of the problem's interaction graph. While elimination strategies place extensive demands on memory, a contrasting class of algorithms called "conditioning search" requires only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set or cutset. Typical examples of conditioning search algorithms are backtracking (in constraint satisfaction) and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time.
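As a concrete, deliberately tiny illustration, here is a Python sketch of bucket elimination for belief updating on binary variables; the factor representation and the example network are invented for this sketch, not taken from the paper.

    # A factor is (vars, {assignment-tuple: value}) over binary variables.
    from functools import reduce
    from itertools import product

    def multiply(f, g):
        """Pointwise product of two factors."""
        fv, ft = f
        gv, gt = g
        vs = fv + tuple(v for v in gv if v not in fv)
        table = {}
        for asg in product((0, 1), repeat=len(vs)):
            a = dict(zip(vs, asg))
            table[asg] = ft[tuple(a[v] for v in fv)] * gt[tuple(a[v] for v in gv)]
        return (vs, table)

    def sum_out(f, var):
        """Marginalize var out of a factor -- the core bucket operation."""
        fv, ft = f
        keep = tuple(v for v in fv if v != var)
        out = {}
        for asg, p in ft.items():
            key = tuple(x for v, x in zip(fv, asg) if v != var)
            out[key] = out.get(key, 0.0) + p
        return (keep, out)

    def belief(factors, elim_order):
        """Sweep the buckets in elimination order; the surviving factor is
        the normalized marginal over the un-eliminated query variables."""
        for var in elim_order:
            bucket = [f for f in factors if var in f[0]]
            factors = [f for f in factors if var not in f[0]]
            if bucket:
                factors.append(sum_out(reduce(multiply, bucket), var))
        vs, table = reduce(multiply, factors)
        z = sum(table.values())
        return vs, {a: p / z for a, p in table.items()}

    # Chain A -> B -> C with CPTs P(A), P(B|A), P(C|B); query P(C).
    pa = (("A",), {(0,): 0.6, (1,): 0.4})
    pb = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.3, (1, 1): 0.7})
    pc = (("B", "C"), {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 0.5})
    print(belief([pa, pb, pc], ["A", "B"]))   # P(C): {(0,): 0.698, (1,): 0.302}
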
Learning Bayesian belief networks: An approach based on the MDL principle
Computational Intelligence, 1994
"... A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's Minimal Description Length (MDL) principle, which is particularly well suited for this task. Our approach does not require any prior assumptions about the distribution being lear ..."
Abstract

Cited by 234 (8 self)
 Add to MetaCart
A new approach for learning Bayesian belief networks from raw data is presented. The approach is based on Rissanen's Minimal Description Length (MDL) principle, which is particularly well suited for this task. Our approach does not require any prior assumptions about the distribution being learned. In particular, our method can learn unrestricted multiply-connected belief networks. Furthermore, unlike other approaches, our method allows us to trade off accuracy and complexity in the learned model. This is important since if the learned model is very complex (highly connected) it can be conceptually and computationally intractable. In such a case it would be preferable to use a simpler model even if it is less accurate. The MDL principle offers a reasoned method for making this tradeoff. We also show that our method generalizes previous approaches based on Kullback cross-entropy. Experiments have been conducted to demonstrate the feasibility of the approach. Keywords: Knowledge Acquisition; Bayes Nets; Uncertainty Reasoning.
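The accuracy/complexity trade-off can be made concrete with a small Python sketch. The numbers and the BIC-style penalty below are assumptions for illustration; the paper defines its own encoding, which this does not reproduce.

    import math

    def mdl_score(loglik, n_params, n_samples):
        """Total description length in bits: data code length plus model
        code length. Lower is better; extra parameters cost bits."""
        data_bits = -loglik / math.log(2)                  # nats -> bits
        model_bits = 0.5 * n_params * math.log2(n_samples)
        return data_bits + model_bits

    # Two candidate structures scored on the same 1000 samples: the denser
    # network fits better (higher log-likelihood) but pays for its extra
    # parameters, so the sparser one wins here.
    sparse = mdl_score(loglik=-2100.0, n_params=7, n_samples=1000)
    dense = mdl_score(loglik=-2080.0, n_params=25, n_samples=1000)
    print("prefer sparse" if sparse < dense else "prefer dense")
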
The Bayes Net Toolbox for MATLAB
Computing Science and Statistics, 2001
"... The Bayes Net Toolbox (BNT) is an opensource Matlab package for directed graphical models. BNT supports many kinds of nodes (probability distributions), exact and approximate inference, parameter and structure learning, and static and dynamic models. BNT is widely used in teaching and research: the ..."
Abstract

Cited by 226 (1 self)
 Add to MetaCart
(Show Context)
The Bayes Net Toolbox (BNT) is an open-source Matlab package for directed graphical models. BNT supports many kinds of nodes (probability distributions), exact and approximate inference, parameter and structure learning, and static and dynamic models. BNT is widely used in teaching and research: the web page has received over 28,000 hits since May 2000. In this paper, we discuss a broad spectrum of issues related to graphical models (directed and undirected), and describe, at a high level, how BNT was designed to cope with them all. We also compare BNT to other software packages for graphical models, and to the nascent OpenBayes effort.
Exploiting Causal Independence in Bayesian Network Inference
Journal of Artificial Intelligence Research, 1996
"... A new method is proposed for exploiting causal independencies in exact Bayesian network inference. ..."
Abstract

Cited by 177 (10 self)
 Add to MetaCart
(Show Context)
A new method is proposed for exploiting causal independencies in exact Bayesian network inference.
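The abstract is terse, so for orientation here is the textbook instance of the structure such methods exploit, the noisy-OR gate, as a hedged Python sketch (this is the canonical causal-independence model, not necessarily the paper's specific algorithm).

    def noisy_or(active_parent_probs, leak=0.0):
        """P(effect | active parents): each active parent i independently
        fails to cause the effect with probability 1 - p_i, so n numbers
        replace a 2^n-entry conditional probability table."""
        q = 1.0 - leak
        for p_i in active_parent_probs:
            q *= 1.0 - p_i
        return 1.0 - q

    # Fever caused by flu (0.8) or malaria (0.9), with a 0.05 leak term.
    print(noisy_or([0.8, 0.9], leak=0.05))     # 0.981
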
Propagation of Probabilities, Means and Variances in Mixed Graphical Association Models
Journal of the American Statistical Association, 1992
"... A scheme is presented for modelling and local computation of exact probabilities, means and variances for mixed qualitative and quantitative variables. The models assume that the conditional distribution of the quantitative variables, given the qualitative, is multivariate Gaussian. The computationa ..."
Abstract

Cited by 174 (2 self)
 Add to MetaCart
A scheme is presented for modelling and local computation of exact probabilities, means and variances for mixed qualitative and quantitative variables. The models assume that the conditional distribution of the quantitative variables, given the qualitative, is multivariate Gaussian. The computational architecture is set up by forming a tree of belief universes, and the calculations are then performed by local message passing between universes. The asymmetry between the quantitative and qualitative variables sets some additional limitations for the specification and propagation structure. Approximate methods for cases where these conditions are not fulfilled are sketched. Lauritzen and Spiegelhalter (1988) showed how to exploit the local structure in the specification of a discrete probability model for fast and efficient computation, thereby paving the way for exploiting probability-based models as parts of realistic systems for planning and decision support. The technique was subsequently imp...
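A toy numeric sketch of the mixed model in Python (illustrative values, not from the paper): a discrete variable selects the Gaussian regime of a continuous one, and the marginal mean and variance that such propagation schemes deliver follow from the mixture.

    # P(discrete) and the (mean, variance) of the conditional Gaussian per state.
    weights = {"low": 0.3, "high": 0.7}
    params = {"low": (1.0, 0.5), "high": (4.0, 1.0)}

    mean = sum(w * params[s][0] for s, w in weights.items())
    # Law of total variance: E[Var(X|S)] + Var(E[X|S]).
    var = (sum(w * params[s][1] for s, w in weights.items())
           + sum(w * (params[s][0] - mean) ** 2 for s, w in weights.items()))
    print(mean, var)    # 3.1 and 0.85 + 1.89 = 2.74
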
Inference in belief networks: A procedural guide
International Journal of Approximate Reasoning, 1996
"... Belief networks are popular tools for encoding uncertainty in expert systems. These networks rely on inference algorithms to compute beliefs in the context of observed evidence. One established method for exact inference onbelief networks is the Probability Propagation in Trees of Clusters (PPTC) al ..."
Abstract

Cited by 167 (6 self)
 Add to MetaCart
(Show Context)
Belief networks are popular tools for encoding uncertainty in expert systems. These networks rely on inference algorithms to compute beliefs in the context of observed evidence. One established method for exact inference on belief networks is the Probability Propagation in Trees of Clusters (PPTC) algorithm, as developed by Lauritzen and Spiegelhalter and refined by Jensen et al. [1, 2, 3]. PPTC converts the belief network into a secondary structure, then computes probabilities by manipulating the secondary structure. In this document, we provide a self-contained, procedural guide to understanding and implementing PPTC. We synthesize various optimizations to PPTC that are scattered throughout the literature. We articulate undocumented "open secrets" that are vital to producing a robust and efficient implementation of PPTC. We hope that this document makes probabilistic inference more accessible and affordable to those without extensive prior exposure.
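To make the "secondary structure" concrete, here is a miniature Python sketch of the absorption step at the heart of PPTC-style propagation, on two cliques {A,B} and {B,C} joined by sepset {B} (toy numbers; this illustrates a single message pass, not the full procedure).

    # Clique potentials initialized from CPTs: phi_AB = P(A)P(B|A), phi_BC = P(C|B).
    phi_AB = {(0, 0): 0.54, (0, 1): 0.06, (1, 0): 0.12, (1, 1): 0.28}
    phi_BC = {(0, 0): 0.8, (0, 1): 0.2, (1, 0): 0.5, (1, 1): 0.5}
    psi_B = {0: 1.0, 1: 1.0}                   # sepset potential starts at 1

    # Absorption {A,B} -> {B,C}: marginalize the sender onto the sepset and
    # multiply the ratio into the receiver.
    new_psi = {b: sum(phi_AB[(a, b)] for a in (0, 1)) for b in (0, 1)}
    phi_BC = {(b, c): p * new_psi[b] / psi_B[b] for (b, c), p in phi_BC.items()}
    psi_B = new_psi

    # After the pass, phi_BC holds the joint P(B, C); marginalize for P(C).
    p_c = {c: sum(phi_BC[(b, c)] for b in (0, 1)) for c in (0, 1)}
    print(p_c)    # {0: 0.698, 1: 0.302} for these numbers
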
From Influence Diagrams to Junction Trees
Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, 1994
"... We present an approach to the solution of decision problems formulated as influence diagrams. This approach involves a special triangulation of the underlying graph, the construction of a junction tree with special properties, and a message passing algorithm operating on the junction tree for comput ..."
Abstract

Cited by 120 (17 self)
 Add to MetaCart
(Show Context)
We present an approach to the solution of decision problems formulated as influence diagrams. This approach involves a special triangulation of the underlying graph, the construction of a junction tree with special properties, and a message passing algorithm operating on the junction tree for computation of expected utilities and optimal decision policies.
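What the message passing ultimately computes can be shown with a one-decision toy in Python (invented numbers, far simpler than a real influence diagram): an expected utility for each decision alternative, maximized to obtain the optimal policy.

    # One chance node W and one decision d, with a utility table over both.
    p_w = {"rain": 0.3, "sun": 0.7}
    utility = {("umbrella", "rain"): 70, ("umbrella", "sun"): 60,
               ("none", "rain"): 0, ("none", "sun"): 100}

    def expected_utility(d):
        return sum(p_w[w] * utility[(d, w)] for w in p_w)

    best = max(["umbrella", "none"], key=expected_utility)
    print(best, expected_utility(best))    # none 70.0
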
Optimal Junction Trees
In UAI, 1994
"... The paper deals with optimality issues in connection with updating beliefs in networks. We address two processes: triangulation and construction of junction trees. In the first part, we give a simple algorithm for constructing an optimal junction tree from a triangulated network. In the second part, ..."
Abstract

Cited by 91 (0 self)
 Add to MetaCart
(Show Context)
The paper deals with optimality issues in connection with updating beliefs in networks. We address two processes: triangulation and construction of junction trees. In the first part, we give a simple algorithm for constructing an optimal junction tree from a triangulated network. In the second part, we argue that any exact method based on local calculations must either be less efficient than the junction tree method or have an optimality problem equivalent to that of triangulation.
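For orientation, the standard construction in this setting can be sketched in a few lines of Python: build a junction tree as a maximum-weight spanning tree of the clique graph, weighting each candidate edge by the size of the two cliques' intersection (toy cliques assumed; this sketches the well-known construction, not necessarily the paper's exact algorithm).

    def junction_tree(cliques):
        """Kruskal-style maximum spanning tree over clique intersections."""
        edges = sorted(
            ((len(cliques[i] & cliques[j]), i, j)
             for i in range(len(cliques)) for j in range(i + 1, len(cliques))),
            reverse=True)
        parent = list(range(len(cliques)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        tree = []
        for w, i, j in edges:
            ri, rj = find(i), find(j)
            if ri != rj and w > 0:       # join components via heaviest separators
                parent[ri] = rj
                tree.append((cliques[i], cliques[j], w))
        return tree

    # Maximal cliques of a small triangulated graph.
    cliques = [frozenset("AB"), frozenset("BCD"), frozenset("CDE")]
    for c1, c2, w in junction_tree(cliques):
        print(sorted(c1), sorted(c2), "separator size", w)
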