Results 1–10 of 28
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
Abstract
Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
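The HMM that the DBN literature generalizes is worth seeing concretely. Below is a minimal, illustrative sketch (not from the thesis) of forward filtering in an HMM, the simplest DBN with one discrete state variable per time slice; all model numbers are made up for the example:

```python
# Forward filtering in an HMM: the simplest DBN, with a single discrete
# state variable per time slice. Each step costs O(K^2) for K states,
# so a length-T sequence is filtered in O(T K^2) time.
def forward(pi, A, B, obs):
    """pi[i]: prior over states; A[i][j]: transition prob i -> j;
    B[i][o]: emission prob of symbol o in state i; obs: observations."""
    alpha = [pi[i] * B[i][obs[0]] for i in range(len(pi))]
    z = sum(alpha)
    alpha = [a / z for a in alpha]  # normalize: P(x_0 | y_0)
    for o in obs[1:]:
        # predict with the transition model, then weight by the evidence
        alpha = [sum(alpha[i] * A[i][j] for i in range(len(A))) * B[j][o]
                 for j in range(len(A))]
        z = sum(alpha)
        alpha = [a / z for a in alpha]  # P(x_t | y_0..t)
    return alpha

# Two-state toy chain with illustrative parameters.
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.8, 0.2], [0.3, 0.7]]
belief = forward(pi, A, B, [0, 0, 1])
```

A DBN replaces the single state variable here with a factored set of variables per slice, which is exactly where exact filtering stops being cheap and the approximate schemes surveyed below come in.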
Mini-Buckets: A General Scheme for Approximating Inference
 Journal of the ACM
, 1998
Abstract
Cited by 46 (16 self)
The paper presents a class of approximation algorithms that extend the idea of bounded inference, inspired by successful constraint propagation algorithms, to probabilistic inference and combinatorial optimization. The idea is to bound the dimensionality of dependencies created by inference algorithms. This yields a parameterized scheme, called mini-buckets, that offers adjustable levels of accuracy and efficiency. The mini-bucket approach generates both an approximate solution and a bound on the solution quality. We present empirical results demonstrating successful performance of the proposed approximation scheme for probabilistic tasks, both on randomly generated problems and on realistic domains such as medical diagnosis and probabilistic decoding.
1 Introduction
Automated reasoning tasks such as constraint satisfaction and optimization, probabilistic inference, decision-making, and planning are generally hard (NP-hard). One way to cope ...
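The core move in mini-buckets is the partitioning step: the functions placed in a variable's bucket are split into mini-buckets whose combined scope stays within a bound i, so intermediate functions never exceed i variables. A hedged sketch of one greedy way to do that partitioning (function names and the packing heuristic are illustrative, not the paper's exact procedure):

```python
# Greedy mini-bucket partitioning: pack function scopes (sets of
# variables) into mini-buckets whose union scope has at most i_bound
# variables, trading accuracy for bounded intermediate-function size.
def partition_bucket(scopes, i_bound):
    minibuckets = []  # each entry: [union_scope (set), member scopes]
    for s in sorted(scopes, key=len, reverse=True):
        placed = False
        for mb in minibuckets:
            if len(mb[0] | s) <= i_bound:  # fits under the i-bound
                mb[0] |= s
                mb[1].append(s)
                placed = True
                break
        if not placed:
            minibuckets.append([set(s), [s]])
    return minibuckets

# With i = 3, the bucket {f(A,B), f(B,C), f(C,D)} cannot stay whole
# (its union scope {A,B,C,D} has 4 variables), so it splits in two.
bucket = [frozenset("AB"), frozenset("BC"), frozenset("CD")]
mbs = partition_bucket(bucket, 3)
```

Each mini-bucket is then eliminated separately; processing fewer functions jointly is what yields the bound on solution quality, with larger i_bound recovering exact bucket elimination.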
A Survey of Algorithms for Real-Time Bayesian Network Inference
 In the joint AAAI-02/KDD-02/UAI-02 workshop on Real-Time Decision Support and Diagnosis Systems
, 2002
Abstract
Cited by 32 (2 self)
As Bayesian networks are applied to more complex and realistic real-world applications, the development of more efficient inference algorithms working under real-time constraints is becoming more and more important. This paper presents a survey of various exact and approximate Bayesian network inference algorithms. In particular, previous research on real-time inference is reviewed. The survey provides a framework for understanding these algorithms and the relationships between them. Some important issues in real-time Bayesian network inference are also discussed.
Approximating Bayesian Belief Networks by Arc Removal
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Abstract
Cited by 21 (0 self)
Bayesian belief networks or causal probabilistic networks may reach a certain size and complexity where the computations involved in exact probabilistic inference on the network tend to become rather time consuming. Methods for approximating a network by a simpler one allow the computational complexity of probabilistic inference on the network to be reduced, at least to some extent. We propose a general framework for approximating Bayesian belief networks based on model simplification by arc removal. The approximation method aims at reducing the computational complexity of probabilistic inference on a network at the cost of introducing a bounded error in the prior and posterior probabilities inferred. We present a practical approximation scheme and give some preliminary results.
1 Introduction
Today, more and more applications based on the Bayesian belief network formalism are emerging for reasoning and decision making in problem domains with inherent uncertainty. Current applicati...
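The basic arc-removal idea can be sketched in a few lines. Removing an arc B → X means X's conditional table no longer depends on B, so P(X | A, B) is replaced by an averaged P(X | A). The sketch below averages under the marginal of B, a simplifying assumption for illustration (the paper's scheme bounds the resulting error; this is not its exact construction, and all numbers are made up):

```python
# Hedged sketch of arc removal: deleting arc B -> X replaces the
# conditional P(X=1 | A, B) with an average over B, shrinking X's
# table at the cost of a bounded approximation error.
def remove_arc(cpt, p_b):
    """cpt: dict (a, b) -> P(X=1 | A=a, B=b); p_b: dict b -> P(B=b).
    Returns the reduced conditional: dict a -> P(X=1 | A=a)."""
    reduced = {}
    for (a, b), p in cpt.items():
        reduced[a] = reduced.get(a, 0.0) + p_b[b] * p
    return reduced

# Illustrative binary example.
cpt = {(0, 0): 0.9, (0, 1): 0.7, (1, 0): 0.3, (1, 1): 0.1}
p_b = {0: 0.6, 1: 0.4}
reduced = remove_arc(cpt, p_b)  # smaller table, no B dependence
```

The further the rows of the original table differ across values of B, the larger the error this removal introduces, which is why such schemes choose arcs encoding weak dependences.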
Tree Approximation for Belief Updating
 In AAAI-02
, 2002
Abstract
Cited by 16 (9 self)
The paper presents a parameterized approximation scheme for probabilistic inference. The scheme, called Mini-Clustering (MC), extends the partition-based approximation offered by mini-bucket elimination to tree decompositions.
Penniless propagation in join trees
 International Journal of Intelligent Systems
, 2000
Abstract
Cited by 16 (7 self)
This paper presents non-random algorithms for approximate computation in Bayesian networks. They are based on the use of probability trees to represent probability potentials, using the Kullback-Leibler cross entropy as a measure of the error of the approximation. Different alternatives are presented and tested in several experiments with difficult propagation problems. The results show how it is possible to find good approximations in a short time compared with the Hugin algorithm.
Exploiting Case-Based Independence for Approximating Marginal Probabilities
 International Journal of Approximate Reasoning
, 1994
Abstract
Cited by 14 (7 self)
Computing marginal probabilities (whether prior or posterior) in Bayesian belief networks is a hard problem. This paper discusses deterministic approximation schemes that work by adding up the probability mass in a small number of value assignments to the network variables. Under certain assumptions, the probability mass in the union of these assignments is sufficient to obtain a good approximation. Such methods are especially useful for highly connected networks, where the maximum clique size or the cutset size makes the standard algorithms intractable. In considering assignments, it is not necessary to assign values to variables that are independent of (d-separated from) the evidence and query nodes. In many cases, however, there is a finer independence structure not evident from the topology, but dependent on the conditional distributions of the nodes. We note that independence-based (IB) assignments, which were originally proposed as a theory of abductive explanations, take advantage ...
The Posterior Probability of Bayes Nets with Strong Dependences
 Soft Computing
, 1999
Abstract
Cited by 14 (1 self)
Stochastic independence is an idealized relationship located at one end of a continuum of values measuring degrees of dependence. Modeling real-world systems, we are often not interested in the distinction between exact independence and any degree of dependence, but between weak, ignorable dependence and strong, substantial dependence. Good models map significant deviance from independence and neglect approximate independence or dependence weaker than a noise threshold. This intuition is applied to learning the structure of Bayes nets from data. We determine the conditional posterior probabilities of structures given that the degree of dependence at each of their nodes exceeds a critical noise level. Deviance from independence is measured by mutual information. Arc probabilities are determined by whether the amount of mutual information the neighbors contribute to a node is greater than a critical minimum deviance from independence. A χ² approximation for the probability density function of mutual info...
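The thresholding intuition above is easy to illustrate: compute the empirical mutual information of a candidate arc's endpoints and keep the arc only when it exceeds a noise level. A minimal sketch under that reading (the threshold value and distributions are illustrative, and this omits the paper's χ²-based posterior treatment):

```python
# Hedged sketch: keep an arc only when empirical mutual information
# between its endpoints exceeds a critical noise threshold.
import math

def mutual_information(joint):
    """joint: dict (x, y) -> empirical probability P(X=x, Y=y).
    Returns I(X; Y) in nats."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Strongly dependent pair vs. a near-independent (noise-level) pair.
strong = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
weak = {(0, 0): 0.26, (0, 1): 0.24, (1, 0): 0.24, (1, 1): 0.26}
threshold = 0.01  # illustrative critical noise level

keep_strong = mutual_information(strong) > threshold  # substantial
keep_weak = mutual_information(weak) > threshold      # ignorable
```

On finite samples, estimated mutual information is never exactly zero even for independent variables, which is precisely why a noise threshold (and a distribution for the estimator, such as the χ² approximation mentioned above) is needed.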
Learning Markov structure by maximum entropy relaxation
 in: 11th International Conference on Artificial Intelligence and Statistics (AISTATS’07)
, 2007
Abstract
Cited by 10 (8 self)
We propose a new approach for learning a sparse graphical model approximation to a specified multivariate probability distribution (such as the empirical distribution of sample data). The selection of sparse graph structure arises naturally in our approach through solution of a convex optimization problem, which differentiates our method from standard combinatorial approaches. We seek the maximum entropy relaxation (MER) within an exponential family, which maximizes entropy subject to constraints that marginal distributions on small subsets of variables are close to the prescribed marginals in relative entropy. To solve MER, we present a modified primal-dual interior point method that exploits sparsity of the Fisher information matrix in models defined on chordal graphs. This leads to a tractable, scalable approach provided the level of relaxation in MER is sufficient to obtain a thin graph. The merits of our approach are investigated by recovering the structure of some simple graphical models from sample data.
Belief Network Algorithms: a Study of Performance Using Domain Characterisation
 In PRICAI Workshops
, 1996
Abstract
Cited by 9 (1 self)
In this abstract we give an overview of the work described in [15]. Belief networks provide a graphical representation of causal relationships together with a mechanism for probabilistic inference, allowing belief updating based on incomplete and dynamic information. We present a survey of Belief Network belief updating algorithms and propose a domain characterisation system which is used as a basis for algorithm comparison. We give experimental comparative results of algorithm performance using the proposed framework. We show how domain characterisation may be used to predict algorithm performance.
Introduction
Belief networks are directed acyclic graphs, where nodes correspond to random variables, which are usually assumed to take discrete values. The relationship between any set of state variables can be specified by a joint probability distribution. The nodes in the network are connected by directed arcs, which may be thought of as causal or influence links. The connections also ...