Results 1–10 of 252
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
Abstract

Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
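The abstract above treats an HMM as the simplest DBN: one discrete hidden variable per time slice. As an illustrative sketch (not code from the thesis; the matrix names and shapes are assumptions), the forward filtering recursion for such a model looks like:

```python
import numpy as np

def hmm_forward_filter(pi, A, B, obs):
    """Filtering for a discrete HMM, viewed as the simplest DBN.

    pi : (S,)   initial state distribution
    A  : (S, S) transitions, A[i, j] = P(z_t = j | z_{t-1} = i)
    B  : (S, O) emissions,   B[i, k] = P(x_t = k | z_t = i)
    obs: sequence of observation indices
    Returns an array of filtered posteriors P(z_t | x_{1:t}).
    """
    alpha = pi * B[:, obs[0]]
    alpha = alpha / alpha.sum()
    posts = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then condition on o
        alpha = alpha / alpha.sum()     # normalise (avoids underflow)
        posts.append(alpha)
    return np.stack(posts)
```

A factored DBN replaces the single state index with a product of variables; the same predict-condition cycle then runs over a junction tree rather than a single vector.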
Multiply Sectioned Bayesian Networks and Junction Forests for Large Knowledge-Based Systems
 Computational Intelligence
, 1993
Abstract

Cited by 79 (28 self)
Abstract — We extend lazy propagation for inference in single-agent Bayesian networks to multi-agent lazy inference in multiply sectioned Bayesian networks (MSBNs). Two methods are proposed using distinct runtime structures. We prove that the new methods are exact and efficient when the domain structure is sparse. Both improve on the space and time complexity of the existing method, which allows multi-agent probabilistic reasoning to be performed in much larger domains given the same computational resources. The relative performance of the three methods is compared analytically and experimentally.
LexBFS and partition refinement, with applications to transitive orientation, interval graph recognition and consecutive ones testing
, 2000
Exploiting Sparsity in Semidefinite Programming via Matrix Completion I: General Framework
 SIAM JOURNAL ON OPTIMIZATION
, 1999
Abstract

Cited by 62 (27 self)
A critical disadvantage of primal-dual interior-point methods against dual interior-point methods for large scale SDPs (semidefinite programs) has been that the primal positive semidefinite variable matrix becomes fully dense in general even when all data matrices are sparse. Based on some fundamental results about positive semidefinite matrix completion, this article proposes a general method of exploiting the aggregate sparsity pattern over all data matrices to overcome this disadvantage. Our method is used in two ways. One is a conversion of a sparse SDP having a large scale positive semidefinite variable matrix into an SDP having multiple but smaller size positive semidefinite variable matrices, to which we can effectively apply any interior-point method for SDPs employing a standard block-diagonal matrix data structure. The other way is an incorporation of our method into primal-dual interior-point methods which we can apply directly to a given SDP. In Part II of this article, we wi...
Efficient parallel graph algorithms for coarse grained multicomputers and BSP (Extended Abstract)
 in Proc. 24th International Colloquium on Automata, Languages and Programming (ICALP'97)
, 1997
Abstract

Cited by 59 (23 self)
In this paper, we present deterministic parallel algorithms for the coarse grained multicomputer (CGM) and bulk-synchronous parallel computer (BSP) models which solve the following well known graph problems: (1) list ranking, (2) Euler tour construction, (3) computing the connected components and spanning forest, (4) lowest common ancestor preprocessing, (5) tree contraction and expression tree evaluation, (6) computing an ear decomposition or open ear decomposition, (7) 2-edge connectivity and biconnectivity (testing and component computation), and (8) chordal graph recognition (finding a perfect elimination ordering). The algorithms for Problems 1–7 require O(log p) communication rounds and linear sequential work per round. Our results for Problems 1 and 2 hold for arbitrary ratios n/p, i.e. they are fully scalable, and for Problems 3–8 it is assumed that n/p &gt; 0, which is true for all commercially available machines.
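List ranking (Problem 1) is the canonical primitive in this family. A sequential simulation of the pointer-jumping idea, where each iteration of the while loop stands in for one parallel communication round (a toy sketch, not the CGM/BSP algorithm itself), can look like:

```python
def list_rank(succ):
    """Rank linked-list nodes by pointer jumping.

    succ[i] is the successor of node i; the tail points to itself.
    Returns, for each node, its distance to the tail. Each while-loop
    iteration corresponds to one parallel round; O(log n) suffice.
    """
    n = len(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    nxt = list(succ)
    while any(nxt[i] != nxt[nxt[i]] for i in range(n)):
        # both updates read the *old* arrays, as all processors would
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank
```

The CGM/BSP versions achieve the O(log p) round bound by grouping many such pointer updates into each supersteps' bulk communication.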
private communication
Abstract

Cited by 56 (4 self)
A rigid interval graph is an interval graph which has only one clique tree. In 2009, Panda and Das showed that all connected unit interval graphs are rigid interval graphs. Generalizing the two classic graph search algorithms, Lexicographic Breadth-First Search (LBFS) and Maximum Cardinality Search (MCS), Corneil and Krueger proposed in 2008 the so-called Maximal Neighborhood Search (MNS) and showed that one sweep of MNS is enough to recognize chordal graphs. We develop the MNS properties of rigid interval graphs and characterize this graph class in several different ways. This allows us to obtain several linear time multi-sweep MNS algorithms for recognizing rigid interval graphs and unit interval graphs, generalizing a corresponding 3-sweep LBFS algorithm for unit interval graph recognition designed by Corneil in 2004. For unit interval graphs, we even present a new linear time 2-sweep MNS certifying recognition algorithm.
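For reference, the partition-refinement view of LBFS that MNS generalizes can be sketched as follows (a didactic version with arbitrary tie-breaking; linear-time implementations use doubly linked class lists rather than Python sets):

```python
def lbfs(adj):
    """Lexicographic Breadth-First Search via partition refinement.

    adj: dict mapping each vertex to its set of neighbours.
    Maintains an ordered list of classes of unvisited vertices;
    visiting v splits every class into (neighbours of v, the rest),
    with the neighbours kept in front. Returns an LBFS ordering.
    """
    order = []
    classes = [set(adj)]              # ordered partition of unvisited
    while classes:
        v = classes[0].pop()          # any vertex of the first class
        if not classes[0]:
            classes.pop(0)
        order.append(v)
        refined = []
        for c in classes:
            hit, miss = c & adj[v], c - adj[v]
            if hit:
                refined.append(hit)   # neighbours of v move ahead
            if miss:
                refined.append(miss)
        classes = refined
    return order
```

Multi-sweep algorithms of the kind described above rerun such a search, breaking ties using the ordering produced by the previous sweep.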
Graphical Templates For Model Registration
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1996
Abstract

Cited by 53 (2 self)
A new method of model registration is proposed using graphical templates. A graph of landmarks is chosen in the template image. All possible candidates for these landmarks are found in the data image using local operators. A dynamic programming algorithm on decomposable subgraphs of the template graph finds the optimal match to a subset of the candidate points in polynomial time. This combination of local operators to describe points of interest/landmarks and a graph to describe their geometric arrangement in the plane yields fast and precise matches of the model to the data, with no initialization required. Key words: Graphical templates, decomposable graphs, model registration, dynamic programming, image matching.
Approximating Treewidth, Pathwidth, Frontsize, and Shortest Elimination Tree
, 1995
Abstract

Cited by 52 (4 self)
Various parameters of graphs connected to sparse matrix factorization and other applications can be approximated using an algorithm of Leighton et al. that finds vertex separators of graphs. The approximate values of the parameters, which include minimum front size, treewidth, pathwidth, and minimum elimination tree height, are no more than O(log n) (minimum front size and treewidth) and O(log^2 n) (pathwidth and minimum elimination tree height) times the optimal values. In addition, we show that unless P = NP there are no absolute approximation algorithms for any of the parameters.
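To make "front size" and elimination orderings concrete, here is a small sketch using the minimum-degree heuristic; this is a different, purely greedy upper bound, not the separator-based approximation the paper analyzes:

```python
def min_degree_width(graph):
    """Treewidth upper bound from a minimum-degree elimination ordering.

    graph: dict vertex -> set of neighbours (copied, not modified).
    Eliminating v turns its remaining neighbours (its 'front') into a
    clique; the largest front met along the way bounds the treewidth
    from above, and also equals max front size minus the pivot itself.
    """
    adj = {v: set(ns) for v, ns in graph.items()}
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # greedy pivot choice
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for u in nbrs:                           # make the front a clique
            adj[u] |= nbrs - {u}
            adj[u].discard(v)
    return width
```

For example, a path yields width 1 and a 4-cycle yields width 2, matching their true treewidths; in general the greedy bound can be far from optimal, which is what motivates approximation guarantees like those above.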
Hybrid backtracking bounded by tree-decomposition of constraint networks
 Artificial Intelligence
, 2003
Abstract

Cited by 51 (16 self)
We propose a framework for solving CSPs based both on backtracking techniques and on the notion of tree-decomposition of the constraint network. This mixed approach defines a new framework for enumeration, which we expect will benefit from the advantages of both approaches: the practical efficiency of enumerative algorithms, and a guaranteed time complexity bounded in terms of an approximation of the treewidth of the constraint network. Finally, experimental results show the advantages of this approach.
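The enumerative baseline that such tree-decomposition-bounded methods build on is plain chronological backtracking; a minimal sketch for binary CSPs (the variable and constraint encodings here are illustrative, not the paper's):

```python
def backtrack(domains, constraints, assignment=None):
    """Chronological backtracking for a binary CSP.

    domains:     dict var -> list of candidate values
    constraints: dict (u, v) -> predicate on (value_u, value_v)
    Returns one consistent total assignment, or None if none exists.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    var = next(v for v in domains if v not in assignment)
    for val in domains[var]:
        # check val against every already-assigned, constrained variable
        ok = all(
            constraints[(u, var)](assignment[u], val)
            for u in assignment if (u, var) in constraints
        ) and all(
            constraints[(var, u)](val, assignment[u])
            for u in assignment if (var, u) in constraints
        )
        if ok:
            assignment[var] = val
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]       # undo and try the next value
    return None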
Highly Parallel Sparse Cholesky Factorization
 SIAM Journal on Scientific and Statistical Computing
, 1992
Abstract

Cited by 45 (1 self)
We develop and compare several fine-grained parallel algorithms to compute the Cholesky factorization of a sparse matrix. Our experimental implementations are on the Connection Machine, a distributed-memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special-purpose algorithms in which the matrix structure conforms to the connection structure of the machine, our focus is on matrices with arbitrary sparsity structure.
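As a point of reference, the dense column-by-column factorization that sparse codes reorganize can be sketched as follows; a sparse, fine-grained version would restrict each column update to the column's nonzero pattern (this dense sketch is our illustration, not the paper's algorithm):

```python
import numpy as np

def cholesky_columns(A):
    """Column-by-column Cholesky factorization: A = L @ L.T.

    A must be symmetric positive definite. Column j first subtracts
    the contributions of all earlier columns, then scales; a sparse
    code performs only the updates with nonzero multipliers.
    """
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    for j in range(n):
        # diagonal entry after removing earlier-column contributions
        L[j, j] = np.sqrt(A[j, j] - L[j, :j] @ L[j, :j])
        # remainder of column j in one vectorized update
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ L[j, :j]) / L[j, j]
    return L
```

The fine-grained parallelism described above assigns these per-element updates to (virtual) processors, so all independent updates within a column proceed at once.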