Results 1–10 of 75
Dynamic Bayesian Networks: Representation, Inference and Learning
PhD thesis, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have bee ..."
Abstract

Cited by 758 (3 self)
 Add to MetaCart
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their "expressive power". Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
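To make the baseline concrete, here is a minimal HMM forward-filtering pass — the simplest instance of the sequential inference that DBNs generalize (a DBN would factor the single state variable into several). This sketch is not from the thesis; all matrices are invented for illustration.

```python
import numpy as np

# A minimal HMM forward (filtering) pass: compute P(z_t | x_{1:t}) online.
# Transition, emission, and prior are made-up toy numbers.
A = np.array([[0.9, 0.1],   # transition matrix P(z_t | z_{t-1})
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],   # emission matrix P(x_t | z_t)
              [0.1, 0.9]])
pi = np.array([0.5, 0.5])   # initial state distribution

def forward_filter(obs):
    """Return filtered posteriors P(z_t | x_{1:t}) for each time step t."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()            # normalize to a distribution
    posts = [alpha]
    for x in obs[1:]:
        alpha = (A.T @ alpha) * B[:, x]   # predict, then condition on x
        alpha /= alpha.sum()
        posts.append(alpha)
    return np.array(posts)

posts = forward_filter([0, 0, 1, 1])
print(posts[-1])  # posterior over the hidden state after 4 observations
```

A DBN replaces the single discrete state with a product of variables, which is where the factored representations and approximate inference schemes discussed in the thesis come in.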
Inference in Hybrid Networks: Theoretical Limits and Practical Algorithms
In UAI, 2001
"... An important subclass of hybrid Bayesian networks ..."
Hybrid Bayesian Networks for Reasoning about Complex Systems
2002
"... Many realworld systems are naturally modeled as hybrid stochastic processes, i.e., stochastic processes that contain both discrete and continuous variables. Examples include speech recognition, target tracking, and monitoring of physical systems. The task is usually to perform probabilistic inferen ..."
Abstract

Cited by 71 (0 self)
 Add to MetaCart
Many real-world systems are naturally modeled as hybrid stochastic processes, i.e., stochastic processes that contain both discrete and continuous variables. Examples include speech recognition, target tracking, and monitoring of physical systems. The task is usually to perform probabilistic inference, i.e., infer the hidden state of the system given some noisy observations. For example, we can ask what is the probability that a certain word was pronounced given the readings of our microphone, what is the probability that a submarine is trying to surface given our sonar data, and what is the probability of a valve being open given our pressure and flow readings. Bayesian networks are …
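The valve example above can be reduced to a one-line Bayes computation: a discrete hypothesis (open/closed) observed through a continuous Gaussian sensor. The sketch below is only an illustration of that hybrid discrete–continuous setup; the means, variance, and prior are invented.

```python
import math

# Toy hybrid inference: discrete hypothesis (valve open/closed) with a
# continuous Gaussian pressure reading. All parameters are invented.
def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

prior_open = 0.7  # assumed prior P(valve open)

def posterior_open(pressure):
    """P(open | pressure): open -> low-pressure regime, closed -> high."""
    l_open = gauss_pdf(pressure, mu=2.0, sigma=0.5) * prior_open
    l_closed = gauss_pdf(pressure, mu=5.0, sigma=0.5) * (1 - prior_open)
    return l_open / (l_open + l_closed)

print(posterior_open(2.1))  # a reading near the "open" mean
```

In a real hybrid Bayesian network there are many such variables coupled together, which is what makes exact inference hard and motivates the algorithms these papers study.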
Tree Consistency and Bounds on the Performance of the Max-Product Algorithm and Its Generalizations
2002
"... Finding the maximum a posteriori (MAP) assignment of a discretestate distribution specified by a graphical model requires solving an integer program. The maxproduct algorithm, also known as the maxplus or minsum algorithm, is an iterative method for (approximately) solving such a problem on gr ..."
Abstract

Cited by 67 (5 self)
 Add to MetaCart
Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles.
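On a tree (here, a 3-node chain) max-product is exact, which makes it easy to sketch; the paper's subject is what happens when the same message-passing scheme is run on graphs with cycles. All potentials below are invented.

```python
import numpy as np

# Max-product on a 3-node binary chain MRF: forward max-messages with
# backpointers, then backtrack the MAP assignment. Exact on a chain.
psi = np.array([[2.0, 1.0],
                [1.0, 3.0]])          # shared pairwise potential psi(x_i, x_j)
phi = [np.array([1.0, 0.5]),          # unary potentials for x0, x1, x2
       np.array([0.5, 1.0]),
       np.array([1.0, 1.0])]

m = phi[0]                            # message into x1, initialized at x0
back = []
for i in (1, 2):
    scores = m[:, None] * psi * phi[i][None, :]
    back.append(scores.argmax(axis=0))     # best predecessor per state
    m = scores.max(axis=0)                 # max-product message
x = [int(m.argmax())]
for b in reversed(back):
    x.insert(0, int(b[x[0]]))
print(x)  # MAP assignment of (x0, x1, x2) -> [1, 1, 1]
```

With cycles, the same update is simply iterated until (hopefully) convergence, and the paper's bounds characterize how good the resulting fixed points are.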
Measuring uncertainty in graph cut solutions – efficiently computing min-marginal energies using dynamic graph cuts
In ECCV, 2006
"... Abstract. In recent years the use of graphcuts has become quite popular in computer vision. However, researchers have repeatedly asked the question whether it might be possible to compute a measure of uncertainty associated with the graphcut solutions. In this paper we answer this particular questi ..."
Abstract

Cited by 66 (10 self)
 Add to MetaCart
(Show Context)
In recent years the use of graph cuts has become quite popular in computer vision. However, researchers have repeatedly asked the question whether it might be possible to compute a measure of uncertainty associated with the graph-cut solutions. In this paper we answer this particular question by showing how the min-marginals associated with the label assignments in an MRF can be efficiently computed using a new algorithm based on dynamic graph cuts. We start by reporting the discovery of a novel relationship between the min-marginal energy corresponding to a latent variable label assignment, and the flow potentials of the node representing that variable in the graph used in the energy minimization procedure. We then proceed to show how the min-marginal energy can be computed by minimizing a projection of the energy function defined by the MRF. We propose a fast and novel algorithm based on dynamic graph cuts to efficiently minimize these energy projections. The min-marginal energies obtained by our proposed algorithm are exact, as opposed to the ones obtained from other inference algorithms like loopy belief propagation and generalized belief propagation. We conclude by showing how min-marginals can be used to compute a confidence measure for label assignments in labelling problems such as image segmentation.
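The quantity at stake is easy to pin down by brute force: the min-marginal of variable i at label l is the lowest energy over all labelings with x_i fixed to l. The toy energies below are invented, and enumeration stands in for the paper's dynamic-graph-cut algorithm, which computes the same numbers without enumerating.

```python
import itertools

# Brute-force min-marginals on a tiny 3-variable binary MRF.
unary = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]   # unary energies per variable

def pair(a, b):
    return 0.0 if a == b else 0.8              # Potts-style smoothness term

def energy(x):
    e = sum(unary[i][x[i]] for i in range(3))
    return e + pair(x[0], x[1]) + pair(x[1], x[2])

def min_marginal(i, l):
    """min energy over all labelings with x_i fixed to label l."""
    return min(energy(x) for x in itertools.product((0, 1), repeat=3)
               if x[i] == l)

for i in range(3):
    print(i, min_marginal(i, 0), min_marginal(i, 1))
```

The gap between a variable's two min-marginals is exactly the kind of per-label confidence measure the paper proposes for segmentation.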
Correspondence analysis of genes and tissue types and finding genetic links from microarray data
Genome Informatics, 2000
"... In this paper, we propose and use two novel procedures for the analysis of microarray gene expression data. The first is correspondence analysis which visualizes the relationship between genes and tissues as two 2 dimensional graphs, oriented so that distances between genes are preserved, distances ..."
Abstract

Cited by 57 (1 self)
 Add to MetaCart
(Show Context)
In this paper, we propose and use two novel procedures for the analysis of microarray gene expression data. The first is correspondence analysis, which visualizes the relationship between genes and tissues as two 2-dimensional graphs, oriented so that distances between genes are preserved, distances between tissues are preserved, and so that genes which primarily distinguish certain types of tissue are spatially close to those tissues. For the inference of genetic links, partial correlations rather than correlations are the key issue. A partial correlation between i and j is the relationship between i and j after the effect of surrounding genes has been subtracted out of their pairwise correlation. This leads to the area of graphical modeling. A limitation of the graphical modeling approach is that the correlation matrix of expression profiles between genes is degenerate whenever the number of genes to be analyzed exceeds the number of distinct expression measurements. This can cause considerable problems, as calculation of partial correlations typically uses the inverse of the correlation matrix. To avoid this limitation, we propose two practical multiple regression procedures with variable selection to measure the net, screened, relationship between pairs of genes. Possible biases arising from the analysis of a subset of genes from the genome are examined in the worked examples. It seems that both these approaches are more natural ways of analyzing gene expression data than the currently popular approach of two-way clustering.
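The standard computation the abstract refers to — partial correlations via the inverse of the correlation matrix — can be sketched in a few lines of NumPy on synthetic data. The data and the planted link below are invented; note that `np.linalg.inv` fails exactly in the degenerate case the paper highlights (more genes than measurements).

```python
import numpy as np

# Partial correlations from the inverse (precision) matrix on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))     # 200 measurements, 4 "genes" (n > p)
X[:, 1] += 0.8 * X[:, 0]              # plant a direct link between genes 0 and 1

R = np.corrcoef(X, rowvar=False)      # 4x4 correlation matrix
P = np.linalg.inv(R)                  # singular (and inversion fails) if p > n
d = np.sqrt(np.diag(P))
partial = -P / np.outer(d, d)         # partial corr of i, j given the rest
np.fill_diagonal(partial, 1.0)
print(partial[0, 1])                  # strong net link between genes 0 and 1
```

The regression-with-variable-selection procedures the paper proposes are precisely a way to estimate these "net, screened" relationships without ever inverting a degenerate matrix.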
Finding the m most probable configurations using loopy belief propagation
In NIPS 16, 2004
"... Loopy belief propagation (BP) has been successfully used in a number of difficult graphical models to find the most probable configuration of the hidden variables. In applications ranging from protein folding to image analysis one would like to find not just the best configuration but rather the top ..."
Abstract

Cited by 45 (2 self)
 Add to MetaCart
(Show Context)
Loopy belief propagation (BP) has been successfully used in a number of difficult graphical models to find the most probable configuration of the hidden variables. In applications ranging from protein folding to image analysis one would like to find not just the best configuration but rather the top M. While this problem has been solved using the junction tree formalism, in many real world problems the clique size in the junction tree is prohibitively large. In this work we address the problem of finding the M best configurations when exact inference is impossible. We start by developing a new exact inference algorithm for calculating the best configurations that uses only max-marginals. For approximate inference, we replace the max-marginals with the beliefs calculated using max-product BP and generalized BP. We show empirically that the algorithm can accurately and rapidly approximate the M best configurations in graphs with hundreds of variables.
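To fix what "the M best configurations" means, here is the task solved by brute force on a toy 4-variable chain (all potentials invented). The paper's contribution is recovering these same configurations without enumeration, using only max-marginals from (loopy) max-product BP.

```python
import itertools
import heapq

# Top-M configurations of a tiny binary chain model, by exhaustive search.
unary = [[1.0, 2.0], [2.0, 1.0], [1.0, 1.5], [1.2, 1.0]]

def pairwise(a, b):
    return 2.0 if a == b else 1.0     # chain coupling favouring agreement

def score(x):
    s = 1.0
    for i, l in enumerate(x):
        s *= unary[i][l]
    for i in range(3):
        s *= pairwise(x[i], x[i + 1])
    return s

top3 = heapq.nlargest(3, itertools.product((0, 1), repeat=4), key=score)
for x in top3:
    print(x, score(x))
```

Exhaustive search is exponential in the number of variables, which is why a max-marginal-based method that scales to hundreds of variables is useful.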
Partial abductive inference in Bayesian belief networks using a genetic algorithm
Pattern Recognit. Lett., 1999
"... Abstract—Abductive inference in Bayesian belief networks (BBNs) is intended as the process of generating the most probable configurations given observed evidence. When we are interested only in a subset of the network’s variables, this problem is called partial abductive inference. Both problems are ..."
Abstract

Cited by 26 (2 self)
 Add to MetaCart
(Show Context)
Abductive inference in Bayesian belief networks (BBNs) is intended as the process of generating the most probable configurations given observed evidence. When we are interested only in a subset of the network's variables, this problem is called partial abductive inference. Both problems are NP-hard and so exact computation is not always possible. In this paper, a genetic algorithm is used to perform partial abductive inference in BBNs. The main contribution is the introduction of new genetic operators designed specifically for this problem. By using these genetic operators, we try to take advantage of the calculations previously carried out, when a new individual is evaluated. The algorithm is tested using a widely used Bayesian network and a randomly generated one and then compared with a previous genetic algorithm based on classical genetic operators. From the experimental results, we conclude that the new genetic operators preserve the accuracy of the previous algorithm, and also reduce the number of operations performed during the evaluation of individuals. The performance of the genetic algorithm is, thus, improved.
Index Terms: abductive inference, Bayesian belief networks, evolutionary computation, genetic operators, most probable explanation, probabilistic reasoning.
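The search loop underlying this kind of method is a plain genetic algorithm over candidate explanation sets. The sketch below shows only that loop: in the paper, evaluating an individual means computing the (expensive) probability of that partial assignment in the network, and the proposed operators reuse previous computations between evaluations; here a made-up fitness function stands in for it.

```python
import random

# Bare-bones GA over binary explanation sets (selection, crossover, mutation).
random.seed(0)
N = 8                                          # number of explanation variables

def fitness(ind):                              # stand-in for P(x_E, evidence)
    target = (1, 0, 1, 1, 0, 0, 1, 0)          # pretend best explanation
    return sum(a == b for a, b in zip(ind, target))

def crossover(p, q):
    cut = random.randrange(1, N)               # classical one-point crossover
    return p[:cut] + q[cut:]

def mutate(ind, rate=0.1):
    return tuple(1 - g if random.random() < rate else g for g in ind)

pop = [tuple(random.randint(0, 1) for _ in range(N)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                           # keep the best half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(10)]
best = max(pop, key=fitness)
print(best, fitness(best))
```

The paper's point is not this generic loop but the problem-specific operators that make each fitness evaluation cheaper.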
An LP View of the M-best MAP problem
"... We consider the problem of finding the M assignments with maximum probability in a probabilistic graphical model. We show how this problem can be formulated as a linear program (LP) on a particular polytope. We prove that, for tree graphs (and junction trees in general), this polytope has a particul ..."
Abstract

Cited by 22 (1 self)
 Add to MetaCart
(Show Context)
We consider the problem of finding the M assignments with maximum probability in a probabilistic graphical model. We show how this problem can be formulated as a linear program (LP) on a particular polytope. We prove that, for tree graphs (and junction trees in general), this polytope has a particularly simple form and differs from the marginal polytope in a single inequality constraint. We use this characterization to provide an approximation scheme for non-tree graphs, by using the set of spanning trees over such graphs. The method we present puts the M-best inference problem in the context of LP relaxations, which have recently received considerable attention and have proven useful in solving difficult inference problems. We show empirically that our method often finds the provably exact M best configurations for problems of high treewidth. A common task in probabilistic modeling is finding the assignment with maximum probability given a model. This is often referred to as the MAP (maximum a posteriori) problem.
Discriminative reranking of diverse segmentations
In CVPR, 2013
"... This paper introduces a twostage approach to semantic image segmentation. In the first stage a probabilistic model generates a set of diverse plausible segmentations. In the second stage, a discriminatively trained reranking model selects the best segmentation from this set. The reranking stage c ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
(Show Context)
This paper introduces a two-stage approach to semantic image segmentation. In the first stage a probabilistic model generates a set of diverse plausible segmentations. In the second stage, a discriminatively trained reranking model selects the best segmentation from this set. The reranking stage can use much more complex features than what could be tractably used in the probabilistic model, allowing a better exploration of the solution space than possible by simply producing the most probable solution from the probabilistic model. While our proposed approach already achieves state-of-the-art results (48.1%) on the challenging VOC 2012 dataset, our machine and human analyses suggest that even larger gains are possible with such an approach.
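The two-stage pattern can be sketched in miniature: a base model proposes diverse candidates, and a linear reranker, free to use global features the base model could not tractably score, picks the final one. Everything below — candidates, features, and weights — is invented; in the paper the weights come from discriminative training.

```python
import numpy as np

# Stage 1 output: diverse candidate "segmentations" with base-model scores.
candidates = [np.array([0, 0, 1, 1, 1]),
              np.array([0, 1, 1, 1, 1]),
              np.array([0, 0, 0, 1, 1])]
base_scores = [0.9, 0.7, 0.6]

def features(seg, base):
    """Reranker features, including a global one the base model can't use."""
    n_segments = 1 + int(np.sum(seg[1:] != seg[:-1]))  # number of runs
    return np.array([base, n_segments, seg.mean()])

# Stage 2: score each candidate with pretend learned reranker weights.
w = np.array([1.0, -0.2, 0.5])
scores = [w @ features(s, b) for s, b in zip(candidates, base_scores)]
best = candidates[int(np.argmax(scores))]
print(best)
```

Because the reranker only ever evaluates a handful of complete candidates, it can afford arbitrarily expensive features — the core design choice of the paper.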