Results 1–10 of 16
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have bee ..."
Abstract

Cited by 563 (3 self)
 Add to MetaCart
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
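To make the filtering task that DBNs generalize concrete, the forward (filtering) pass for an HMM, the simplest DBN with a single discrete state variable, can be sketched as follows. The transition matrix, emission matrix, prior, and observations are illustrative values, not taken from the thesis.

```python
# Minimal sketch: exact filtering P(X_t | y_1..t) in an HMM, the simplest DBN.
# All model numbers below are made up for illustration.

def hmm_forward(A, B, pi, obs):
    """Return the filtered state distribution for each time step."""
    n = len(pi)
    # Initialise with the prior weighted by the first observation's likelihood.
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    z = sum(alpha)
    alpha = [a / z for a in alpha]
    beliefs = [alpha]
    for y in obs[1:]:
        # Predict through the transition model, then condition on y.
        pred = [sum(alpha[j] * A[j][i] for j in range(n)) for i in range(n)]
        alpha = [pred[i] * B[i][y] for i in range(n)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]
        beliefs.append(alpha)
    return beliefs

A = [[0.9, 0.1], [0.2, 0.8]]   # P(X_t = i | X_{t-1} = j), rows = previous state
B = [[0.8, 0.2], [0.3, 0.7]]   # P(Y_t = k | X_t = i)
pi = [0.5, 0.5]                # prior over the initial state
beliefs = hmm_forward(A, B, pi, [0, 0, 1])
```

A DBN replaces the single state variable above with several interacting variables per time slice; the filtering recursion is the same in spirit, but the prediction step factorizes over the network structure.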
Nonuniform Dynamic Discretization in Hybrid Networks
 In Proc. UAI
, 1997
"... We consider probabilistic inference in general hybrid networks, which include continuous and discrete variables in an arbitrary topology. We reexamine the question of variable discretization in a hybrid network aiming at minimizing the information loss induced by the discretization. We show that a n ..."
Abstract

Cited by 64 (3 self)
 Add to MetaCart
We consider probabilistic inference in general hybrid networks, which include continuous and discrete variables in an arbitrary topology. We reexamine the question of variable discretization in a hybrid network, aiming to minimize the information loss induced by the discretization. We show that a nonuniform partition across all variables, as opposed to a uniform partition of each variable separately, reduces the size of the data structures needed to represent a continuous function. We also provide a simple but efficient procedure for nonuniform partition. To represent a nonuniform discretization in computer memory, we introduce a new data structure, which we call a Binary Split Partition (BSP) tree. We show that BSP trees can be an exponential factor smaller than the data structures in the standard uniform discretization in multiple dimensions, and show how BSP trees can be used in the standard join tree algorithm. We show that the accuracy of the inference process can be significa...
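A minimal one-dimensional sketch of the BSP-tree idea: recursively split an interval only where the function varies enough. The split criterion (endpoint difference against a tolerance) and the example function are illustrative stand-ins, not the paper's actual partitioning procedure.

```python
# Minimal sketch of a Binary Split Partition (BSP) tree over [lo, hi].
# A cell is kept as one leaf where f is nearly constant; only regions
# where f varies get refined, giving a nonuniform partition.

def build_bsp(f, lo, hi, tol, depth=0, max_depth=8):
    mid = (lo + hi) / 2.0
    # Illustrative criterion: stop when f barely changes across the cell.
    if depth >= max_depth or abs(f(lo) - f(hi)) <= tol:
        return ("leaf", lo, hi, f(mid))
    return ("split",
            build_bsp(f, lo, mid, tol, depth + 1, max_depth),
            build_bsp(f, mid, hi, tol, depth + 1, max_depth))

def leaves(node):
    if node[0] == "leaf":
        return [node]
    return leaves(node[1]) + leaves(node[2])

# Flat on [0, 0.5], steep on [0.5, 1]: the tree spends all of its
# resolution on the right half, where the function actually varies.
f = lambda x: 0.0 if x < 0.5 else (x - 0.5) * 10
tree = build_bsp(f, 0.0, 1.0, tol=0.5)
```

The flat half of the domain becomes a single leaf while the steep half is subdivided finely, which is exactly the memory saving the nonuniform partition buys over a uniform grid.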
A variational approximation for Bayesian networks with discrete and continuous latent variables
 In UAI
, 1999
"... We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust ..."
Abstract

Cited by 42 (6 self)
 Add to MetaCart
We show how to use a variational approximation to the logistic function to perform approximate inference in Bayesian networks containing discrete nodes with continuous parents. Essentially, we convert the logistic function to a Gaussian, which facilitates exact inference, and then iteratively adjust the variational parameters to improve the quality of the approximation. We demonstrate experimentally that this approximation is much faster than sampling, but comparable in accuracy. We also introduce a simple new technique for handling evidence, which allows us to handle arbitrary distributions on observed nodes, as well as achieving a significant speedup in networks with discrete variables of large cardinality.
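One standard way to realize the logistic-to-Gaussian conversion described above is the Jaakkola–Jordan variational bound, which lower-bounds the sigmoid by the exponential of a quadratic in x (hence Gaussian in form), exact at the variational parameter. A minimal numeric sketch, assuming this particular bound:

```python
import math

# Minimal sketch of the Jaakkola-Jordan lower bound on the logistic
# function: sigmoid(x) >= sigmoid(xi) * exp((x - xi)/2 - lam(xi)*(x^2 - xi^2)).
# The exponent is quadratic in x, so the bound has Gaussian form, which is
# what makes exact inference tractable once the substitution is made.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lam(xi):
    # Standard variational coefficient; requires xi != 0.
    return math.tanh(xi / 2.0) / (4.0 * xi)

def logistic_bound(x, xi):
    """Lower bound on sigmoid(x); tight (exact) at x = +/- xi."""
    return sigmoid(xi) * math.exp((x - xi) / 2.0 - lam(xi) * (x * x - xi * xi))
```

Iteratively resetting xi (for instance to the current posterior's |x|) tightens the bound, which is the "iteratively adjust the variational parameters" step of the abstract.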
Inference and Learning in Hybrid Bayesian Networks
, 1998
"... We survey the literature on methods for inference and learning in Bayesian Networks composed of discrete and continuous nodes, in which the continuous nodes have a multivariate Gaussian distribution, whose mean and variance depends on the values of the discrete nodes. We also briefly consider hybrid ..."
Abstract

Cited by 25 (2 self)
 Add to MetaCart
We survey the literature on methods for inference and learning in Bayesian Networks composed of discrete and continuous nodes, in which the continuous nodes have a multivariate Gaussian distribution whose mean and variance depend on the values of the discrete nodes. We also briefly consider hybrid Dynamic Bayesian Networks, an extension of switching Kalman filters. This report is meant to summarize what is known at a sufficient level of detail to enable someone to implement the algorithms, but without dwelling on formalities.
Inference Using Message Propagation and Topology Transformation in Vector Gaussian Continuous Networks
 Proceedings of the Twelfth UAI Conference
, 1996
"... We extend continuous Gaussian networks − directed acyclic graphs that encode probabilistic relationships between variables − to its vector form. Vector Gaussian continuous networks consist of composite nodes representing multivariables, that take continuous values. These vector or composite nodes ca ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
We extend continuous Gaussian networks – directed acyclic graphs that encode probabilistic relationships between variables – to their vector form. Vector Gaussian continuous networks consist of composite nodes representing multivariables that take continuous values. These vector or composite nodes can represent correlations between parents, as opposed to conventional univariate nodes. We derive rules for inference in these networks based on two methods: message propagation and topology transformation. These two approaches lead to the development of algorithms that can be implemented in either a centralized or a decentralized manner. The domain of application of these networks is monitoring and estimation problems. This new representation, along with the rules for inference developed here, can be used to derive current Bayesian algorithms such as the Kalman filter, and provides a rich foundation from which to develop new algorithms. We illustrate this process by deriving the decentralized form of the Kalman filter. This work unifies concepts from artificial intelligence and modern control theory.
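The Kalman filter that such Gaussian networks recover as a special case is, in the scalar case, just two lines of Gaussian algebra per time step. A minimal sketch (all model constants illustrative, not from the paper):

```python
# Minimal sketch of the scalar Kalman filter: a linear-Gaussian state x
# observed through noise. One predict/update cycle per observation.
# Model constants a, q, h, r are illustrative defaults.

def kalman_step(mean, var, y, a=1.0, q=0.1, h=1.0, r=0.5):
    """Return the posterior (mean, variance) after observing y."""
    # Predict through the dynamics x_t = a * x_{t-1} + noise with variance q.
    m_pred = a * mean
    v_pred = a * a * var + q
    # Update with the observation y = h * x_t + noise with variance r.
    k = v_pred * h / (h * h * v_pred + r)   # Kalman gain
    mean = m_pred + k * (y - h * m_pred)
    var = (1.0 - k * h) * v_pred
    return mean, var

mean, var = 0.0, 1.0          # broad prior over the initial state
for y in [0.9, 1.1, 1.0]:
    mean, var = kalman_step(mean, var, y)
```

In the vector network formulation, the same predict/update steps fall out of message propagation between composite nodes, with the gain computation becoming a matrix expression.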
A Generic Model for Estimating User Intentions in Human-Robot Cooperation
 in Proceedings of the 2nd International Conference on Informatics in Control, Automation and Robotics, ICINCO 05
, 2005
"... The recognition of user intentions is an important feature for humanoid robots to make implicit and humanlike interactions possible. In this paper, we introduce a formal view on userintentions in humanmachine interaction and how they can be estimated by observing user actions. We use Hybrid Dynam ..."
Abstract

Cited by 8 (4 self)
 Add to MetaCart
The recognition of user intentions is an important feature for humanoid robots to make implicit and human-like interactions possible. In this paper, we introduce a formal view of user intentions in human-machine interaction and show how they can be estimated by observing user actions. We use Hybrid Dynamic Bayesian Networks to develop a generic model that includes connections between intentions, actions, and sensor measurements. This model can be used to extend arbitrary human-machine applications with intention recognition.
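The intention-to-action chain described above can be sketched, in heavily simplified form, as a discrete Bayes filter over a hidden intention variable updated by observed actions. The intentions, actions, and every probability below are hypothetical placeholders, not the paper's model.

```python
# Minimal sketch: tracking a hidden user intention from observed actions
# with a discrete Bayes filter. All names and numbers are invented for
# illustration; the paper's model is a richer hybrid DBN.

intentions = ["fetch", "handover"]
p_trans = {"fetch":    {"fetch": 0.9, "handover": 0.1},   # P(I_t | I_{t-1})
           "handover": {"fetch": 0.1, "handover": 0.9}}
p_action = {"fetch":    {"reach": 0.7, "open_hand": 0.3}, # P(action | intention)
            "handover": {"reach": 0.2, "open_hand": 0.8}}

def update(belief, action):
    """One predict/update cycle of the intention filter."""
    pred = {i: sum(belief[j] * p_trans[j][i] for j in intentions)
            for i in intentions}
    post = {i: pred[i] * p_action[i][action] for i in intentions}
    z = sum(post.values())
    return {i: post[i] / z for i in intentions}

belief = {"fetch": 0.5, "handover": 0.5}
for a in ["open_hand", "open_hand"]:
    belief = update(belief, a)
```

Repeated "open_hand" observations shift the belief toward the intention that explains them best, which is the basic mechanism the hybrid DBN extends with continuous sensor measurements.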
Fuzzy Bayesian Networks: A General Formalism for Representation, Inference and Learning with Hybrid Bayesian Networks
"... This paper proposes a general formalism for representation, inference and learning with general hybrid Bayesian networks in which continuous and discrete variables may appear anywhere in a directed acyclic graph. The formalism fuzzies a hybrid Bayesian network into two alternative forms: The rst ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
This paper proposes a general formalism for representation, inference and learning with general hybrid Bayesian networks in which continuous and discrete variables may appear anywhere in a directed acyclic graph. The formalism fuzzifies a hybrid Bayesian network into two alternative forms: The first form replaces each continuous variable in the given directed acyclic graph (DAG) by a partner discrete variable and adds a directed link from the partner discrete variable to the continuous one. The mapping between the two variables is not a crisp quantization but is approximated (fuzzified) by a conditional Gaussian (CG) distribution. The CG model is equivalent to a fuzzy set, but no fuzzy logic formalism is employed. The conditional distribution of a discrete variable given its discrete parents is still assumed to be multinomial, as in discrete Bayesian networks. The second form only replaces each continuous variable whose descendants include discrete variables by a partner discrete variable a...
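The contrast between crisp quantization and the fuzzified CG mapping can be sketched as follows: instead of assigning a continuous value to exactly one bin, each discrete partner state gets a soft membership proportional to a Gaussian centred on that state. The bin centres and shared variance are illustrative assumptions, not the paper's construction.

```python
import math

# Minimal sketch of a soft ("fuzzified") discretization: the partner
# discrete variable D gets a posterior membership for a continuous value x,
# computed from Gaussian likelihoods rather than a hard bin assignment.
# Centres and variance are illustrative.

centres = [-2.0, 0.0, 2.0]   # one partner-discrete state per centre
var = 1.0                    # shared variance of the CG mapping

def soft_membership(x):
    """P(D = d | X = x) under equal priors and Gaussian likelihoods."""
    w = [math.exp(-(x - c) ** 2 / (2 * var)) for c in centres]
    z = sum(w)
    return [wi / z for wi in w]

m = soft_membership(0.5)   # mostly the middle state, with soft spillover
```

A crisp quantizer would return [0, 1, 0] here; the soft version retains mass on neighbouring states, which is what lets the discretized network approximate the original continuous distribution.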
Optimal mixture approximation of the product of mixtures
 In Proceedings 2005 International Conference on Information Fusion
, 2005
"... Abstract — Gaussian mixture densities are a very common tool for describing arbitrarily structured uncertainties in various applications. Many of these applications have to deal with the fusion of uncertainties, an operation that is usually performed by multiplication of these densities. The product ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Gaussian mixture densities are a very common tool for describing arbitrarily structured uncertainties in various applications. Many of these applications have to deal with the fusion of uncertainties, an operation that is usually performed by multiplication of these densities. The product of Gaussian mixtures can be calculated exactly, but the number of mixture components in the resulting mixture increases exponentially. Hence, it is essential to approximate the resulting mixture with fewer components to keep it tractable for further processing steps. This paper introduces an approach for approximating the exact product with a mixture that uses fewer components. The maximum approximation error can be chosen by the user; this choice allows the user to trade accuracy of the approximation for the number of mixture components used. This is possible because of a progressive processing scheme that calculates the product operation by means of a system of ordinary differential equations. The solution of this system yields the parameters of the desired Gaussian mixture.
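The component blow-up that motivates the paper is easy to exhibit: multiplying an m-component mixture by an n-component mixture yields m*n components, since every pair of Gaussians multiplies into one scaled Gaussian. A minimal univariate sketch with illustrative mixtures:

```python
import math

# Minimal sketch: the exact product of two univariate Gaussian mixtures.
# Each pair of components multiplies into one scaled Gaussian, so the
# result has m*n components; repeated fusion therefore explodes without
# an approximation step. All mixture parameters are illustrative.

def gauss_product(m1, v1, m2, v2):
    """N(x;m1,v1)*N(x;m2,v2) = s * N(x;m,v); return (s, m, v)."""
    v = 1.0 / (1.0 / v1 + 1.0 / v2)
    m = v * (m1 / v1 + m2 / v2)
    # Scale factor equals the value of N(m1; m2, v1 + v2).
    s = (math.exp(-(m1 - m2) ** 2 / (2 * (v1 + v2)))
         / math.sqrt(2 * math.pi * (v1 + v2)))
    return s, m, v

def mixture_product(mix1, mix2):
    """Exact (normalised) product of two mixtures given as (w, mean, var) lists."""
    out = []
    for w1, m1, v1 in mix1:
        for w2, m2, v2 in mix2:
            s, m, v = gauss_product(m1, v1, m2, v2)
            out.append((w1 * w2 * s, m, v))
    z = sum(w for w, _, _ in out)
    return [(w / z, m, v) for w, m, v in out]

mix_a = [(0.5, -1.0, 1.0), (0.5, 1.0, 1.0)]                    # 2 components
mix_b = [(0.3, 0.0, 2.0), (0.4, 2.0, 0.5), (0.3, -2.0, 0.5)]   # 3 components
prod = mixture_product(mix_a, mix_b)                            # 6 components
```

After k fusion steps with n-component inputs the exact result has n^k components, which is the exponential growth the paper's ODE-based approximation is designed to curb.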
Variational inference for continuous sigmoidal Bayesian networks
 In Sixth International Workshop on Artificial Intelligence and Statistics
, 1996
"... Latent random variables can be useful for modelling covariance relationships between observed variables. The choice of whether specific latent variables ought to be continuous or discrete is often an arbitrary one. In a previous paper, I presented a "unit" that could adapt to be continuous or binary ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
Latent random variables can be useful for modelling covariance relationships between observed variables. The choice of whether specific latent variables ought to be continuous or discrete is often an arbitrary one. In a previous paper, I presented a "unit" that could adapt to be continuous or binary, as appropriate for the current problem, and showed how a Markov chain Monte Carlo method could be used for inference and parameter estimation in Bayesian networks of these units. In this paper, I develop a variational inference technique in the hope that it will prove to be more computationally efficient than Monte Carlo methods. After presenting promising inference results on a toy problem, I discuss why the variational technique does not work well for parameter estimation as compared to Monte Carlo.
Information Fusion, Causal Probabilistic Network and Probanet II: Inference Algorithms and the Probanet System
 Proc. 1st Intl. Workshop on Image Analysis and Information Fusion
, 1997
"... As an extension of an overview paper [Pan and McMichael, 1997] on information fusion and Causal Probabilistic Networks (CPN), this paper formalizes kernel algorithms for probabilistic inferences upon CPNs. Information fusion is realized through updating joint probabilities of the variables upon the ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
As an extension of an overview paper [Pan and McMichael, 1997] on information fusion and Causal Probabilistic Networks (CPNs), this paper formalizes kernel algorithms for probabilistic inference on CPNs. Information fusion is realized by updating the joint probabilities of the variables upon the arrival of new evidence or new hypotheses. Kernel algorithms for some dominant methods of inference are formalized from a scattered, mathematics-oriented literature, with gaps filled in with regard to computability and completeness. In particular, possible optimizations of the causal tree algorithm, graph triangulation and the junction tree algorithm are discussed. Probanet has been designed and developed as a generic shell, or mother system, for CPN construction and application. The design aspects and current status of Probanet are described. A few directions for research and system development are pointed out, including hierarchical structuring of networks, structure decomposition and adaptive inference algorithms. This paper thus integrates literature review, algorithm formalization and future perspectives.
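The core fusion operation the paper formalizes, updating joint probabilities when new evidence arrives, reduces in the smallest case to conditioning a joint table. A minimal sketch over two hypothetical binary variables:

```python
# Minimal sketch: fusing new evidence into a joint distribution by
# conditioning a two-variable probability table. Variable names and
# probabilities are illustrative, not from the paper.

# Joint P(Cause, Symptom) over Cause in {c0, c1}, Symptom in {s0, s1}.
joint = {("c0", "s0"): 0.4, ("c0", "s1"): 0.1,
         ("c1", "s0"): 0.2, ("c1", "s1"): 0.3}

def condition(joint, var_index, value):
    """Keep entries consistent with the evidence, then renormalise."""
    kept = {k: p for k, p in joint.items() if k[var_index] == value}
    z = sum(kept.values())
    return {k: p / z for k, p in kept.items()}

posterior = condition(joint, 1, "s1")   # observe Symptom = s1
```

The causal tree and junction tree algorithms discussed in the paper exist precisely because this brute-force table conditioning is exponential in the number of variables; they perform the same update via local message passing.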