Results 1–10 of 16
Learning in graphical models
, 2004
Abstract
Cited by 612 (11 self)
Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for approaching these problems, and indeed many of the models developed by researchers in these applied fields are instances of the general graphical model formalism. We review some of the basic ideas underlying graphical models, including the algorithmic ideas that allow graphical models to be deployed in large-scale data analysis problems. We also present examples of graphical models in bioinformatics, error-control coding and language processing. Key words and phrases: Probabilistic graphical models, junction tree algorithm, sum-product algorithm, Markov chain Monte Carlo, variational inference, bioinformatics, error-control coding.
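The sum-product algorithm named in the key words can be illustrated on a tiny model. The sketch below is not from the paper: the chain, its potentials, and all numbers are hypothetical, and it only checks that local message passing reproduces the brute-force marginal of the middle variable.

```python
import itertools

# A binary chain x1 - x2 - x3 with unary potentials phi and a shared
# pairwise potential psi. The joint is proportional to
#   phi1(x1) phi2(x2) phi3(x3) psi(x1, x2) psi(x2, x3).
phi = [[1.0, 2.0], [1.0, 1.0], [3.0, 1.0]]   # phi[i][x_i]
psi = [[2.0, 1.0], [1.0, 2.0]]               # psi[x_i][x_{i+1}]

def brute_force_marginal_x2():
    """Normalized marginal of x2 by summing the joint over all 8 states."""
    p = [0.0, 0.0]
    for x1, x2, x3 in itertools.product((0, 1), repeat=3):
        p[x2] += phi[0][x1] * phi[1][x2] * phi[2][x3] * psi[x1][x2] * psi[x2][x3]
    z = sum(p)
    return [v / z for v in p]

def sum_product_marginal_x2():
    """Same marginal via the two messages flowing into x2."""
    m12 = [sum(phi[0][a] * psi[a][b] for a in (0, 1)) for b in (0, 1)]  # x1 -> x2
    m32 = [sum(phi[2][c] * psi[b][c] for c in (0, 1)) for b in (0, 1)]  # x3 -> x2
    p = [phi[1][b] * m12[b] * m32[b] for b in (0, 1)]
    z = sum(p)
    return [v / z for v in p]

print(brute_force_marginal_x2())
print(sum_product_marginal_x2())
```

On a tree, the same local computation scales linearly in the number of variables, which is the point of deploying such algorithms in large-scale problems.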
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
Abstract
Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian ones. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
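The O(T) online inference that the abstract contrasts with HMMs and KFMs is easiest to see in the simplest DBN of all, a binary HMM. The sketch below is illustrative only; the transition, emission, and prior numbers are made up, not taken from the thesis.

```python
# Forward filtering in a binary HMM: recursively compute p(x_t | y_1:t)
# in O(T) time with a predict step (push the belief through the transition
# model) followed by an update step (weight by the observation likelihood).
T = [[0.9, 0.1], [0.2, 0.8]]      # hypothetical transition p(x_t | x_{t-1})
E = [[0.8, 0.2], [0.3, 0.7]]      # hypothetical emission p(y_t | x_t)
prior = [0.5, 0.5]

def forward_filter(observations):
    """Return p(x_T | y_1:T) after filtering the whole observation sequence."""
    belief = prior[:]
    for y in observations:
        # predict: one-step-ahead distribution over the hidden state
        pred = [sum(belief[i] * T[i][j] for i in (0, 1)) for j in (0, 1)]
        # update: reweight by the likelihood of the observation, renormalise
        unnorm = [pred[j] * E[j][y] for j in (0, 1)]
        z = sum(unnorm)
        belief = [u / z for u in unnorm]
    return belief

print(forward_filter([0, 0, 1, 1]))
```

A DBN generalizes this by factoring the single state variable into several, so the same predict-update recursion runs over a structured state space.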
An Introduction to MCMC for Machine Learning
, 2003
Abstract
Cited by 222 (2 self)
The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses new and interesting research horizons.
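The most basic building block such an introduction covers is the Metropolis sampler. The sketch below is a generic random-walk Metropolis chain targeting a standard normal (known only up to a constant); the target, step size, and sample counts are hypothetical choices, not taken from the paper.

```python
import math
import random

def metropolis(n_samples, step=1.0, seed=0):
    """Random-walk Metropolis targeting N(0, 1), using only the unnormalized
    log-density. Returns the full chain (including repeated states)."""
    rng = random.Random(seed)
    log_target = lambda x: -0.5 * x * x      # log N(0, 1) up to a constant
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)  # symmetric proposal
        # accept with probability min(1, pi(proposal) / pi(x))
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

draws = metropolis(20000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
print(mean, var)   # both approach the target's moments, 0 and 1
```

Because only density ratios are used, the normalizing constant never needs to be computed, which is why the method scales to models where that constant is intractable.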
Fast Sampling Of Gaussian Markov Random Fields With Applications
 Journal of the Royal Statistical Society, Series B
, 2000
Abstract
Cited by 74 (6 self)
This report has URL http://www.math.ntnu.no/preprint/statistics/2000/S12000.ps
Variational Probabilistic Inference and the QMR-DT Network
 JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH
, 1999
Abstract
Cited by 57 (3 self)
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the "Quick Medical Reference" (QMR) network. The QMR network is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
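The paper's actual transformation for the QMR noisy-OR model is more involved, but the generic idea behind variational approximation — replace an intractable joint by a factored surrogate and iterate a deterministic fixed point — can be sketched on a hypothetical two-variable model. All numbers below are made up for illustration.

```python
import itertools
import math

# Naive mean field for a two-spin model p(x) proportional to
# exp(J*x1*x2 + h1*x1 + h2*x2), x_i in {-1, +1}: approximate the joint by a
# product q1(x1)*q2(x2) and iterate the coordinate-ascent fixed point
#   m_i = tanh(h_i + J * m_j)
J, h = 0.5, (0.3, -0.2)   # hypothetical coupling and fields

def mean_field(iters=200):
    """Deterministic fixed-point iteration for the mean-field magnetisations."""
    m = [0.0, 0.0]
    for _ in range(iters):
        m[0] = math.tanh(h[0] + J * m[1])
        m[1] = math.tanh(h[1] + J * m[0])
    return m

def exact_means():
    """Exact magnetisations by enumerating all four states."""
    states = list(itertools.product((-1, 1), repeat=2))
    w = [math.exp(J * a * b + h[0] * a + h[1] * b) for a, b in states]
    z = sum(w)
    return [sum(a * wi for (a, _), wi in zip(states, w)) / z,
            sum(b * wi for (_, b), wi in zip(states, w)) / z]

print(mean_field(), exact_means())
```

The update is cheap and deterministic, which is the contrast the abstract draws with stochastic sampling; the price is a controlled approximation error rather than Monte Carlo noise.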
Blocking Gibbs Sampling for Linkage Analysis in Large Pedigrees with Many Loops
 AMERICAN JOURNAL OF HUMAN GENETICS
, 1996
Abstract
Cited by 24 (2 self)
We will apply the method of blocking Gibbs sampling to a problem of great importance and complexity: linkage analysis. Blocking Gibbs combines exact local computations with Gibbs sampling in a way that complements the strengths of both. The method is able to handle problems with very high complexity, such as linkage analysis in large pedigrees with many loops, a task that no other known method is able to handle. New developments of the method are outlined, and it is applied to a highly complex linkage problem.
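Why blocking helps can be illustrated on a toy problem far simpler than a pedigree. The sketch below is not from the paper: it contrasts single-site Gibbs with a blocked update on a hypothetical, strongly correlated Gaussian pair, where the blocked sampler draws the whole block exactly and mixes in one step.

```python
import math
import random

rho = 0.95                              # hypothetical, strong correlation
cond_sd = math.sqrt(1.0 - rho * rho)

def single_site_gibbs(n, seed=0):
    """Componentwise Gibbs on a standard bivariate normal; returns the x-chain."""
    rng = random.Random(seed)
    x = y = 0.0
    xs = []
    for _ in range(n):
        x = rng.gauss(rho * y, cond_sd)   # x | y
        y = rng.gauss(rho * x, cond_sd)   # y | x
        xs.append(x)
    return xs

def blocked_gibbs(n, seed=0):
    """Draw the whole block (x, y) exactly each iteration; returns the x-chain."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)           # x ~ N(0, 1)
        y = rng.gauss(rho * x, cond_sd)   # y | x, drawn but unused here
        xs.append(x)
    return xs

def lag1(xs):
    """Lag-1 sample autocorrelation: high values mean slow mixing."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

print(lag1(single_site_gibbs(5000)), lag1(blocked_gibbs(5000)))
```

Blocking Gibbs applies the same principle at scale: exact local computation inside each block, Gibbs sampling across blocks.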
Variational probabilistic inference and the QMR-DT database
 Journal of Artificial Intelligence Research
, 1999
Abstract
Cited by 16 (3 self)
We describe a variational approximation method for efficient inference in large-scale probabilistic models. Variational methods are deterministic procedures that provide approximations to marginal and conditional probabilities of interest. They provide alternatives to approximate inference methods based on stochastic sampling or search. We describe a variational approach to the problem of diagnostic inference in the "Quick Medical Reference" (QMR) database. The QMR database is a large-scale probabilistic graphical model built on statistical and expert knowledge. Exact probabilistic inference is infeasible in this model for all but a small set of cases. We evaluate our variational inference algorithm on a large set of diagnostic test cases, comparing the algorithm to a state-of-the-art stochastic sampling method.
HUGS: Combining Exact Inference and Gibbs Sampling in Junction Trees
 PROC. 11TH CONF. UNCERTAINTY IN ARTIFICIAL INTELLIGENCE
, 1995
Abstract
Cited by 10 (0 self)
Dawid, Kjærulff & Lauritzen (1994) provided a preliminary description of a hybrid between Monte Carlo sampling methods and exact local computations in junction trees. Utilizing the strengths of both methods, such hybrid inference methods have the potential of expanding the class of problems which can be solved under bounded resources, as well as solving problems which otherwise resist exact solutions. The paper provides a detailed description of a particular instance of such a hybrid scheme; namely, the combination of exact inference and Gibbs sampling in discrete Bayesian networks. We argue that this combination calls for an extension of the usual message-passing scheme of ordinary junction trees.
Conditional simulation from highly structured Gaussian systems, with application to blocking-MCMC for the Bayesian analysis of very large linear models
, 2000
Abstract
Cited by 9 (1 self)
This paper examines strategies for simulating exactly from large Gaussian linear models conditional on some Gaussian observations. Local computation strategies based on the conditional independence structure of the model are developed in order to reduce the costs associated with storage and computation. Application of these algorithms to simulation from nested hierarchical linear models is considered, and the construction of efficient MCMC schemes for Bayesian inference in high-dimensional linear models is outlined. Keywords: Block sampling; linear Bayes models; local computation; nested hierarchical random effects; DAG propagation. This is a University of Newcastle Statistics Preprint, STA00,9. Last updated: June 19, 2000.
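The smallest instance of exact conditional simulation from a Gaussian linear model is a single latent variable with one noisy observation. The sketch below is illustrative only; the prior, noise variance, and observed value are hypothetical, and the paper's local-computation schemes are what extend this one-dimensional idea to large structured models.

```python
import math
import random

# Model: x ~ N(0, 1), y | x ~ N(x, tau2). Conditioning on an observed y
# gives another Gaussian,
#   x | y ~ N(y / (1 + tau2), tau2 / (1 + tau2)),
# which can therefore be sampled exactly rather than approximately.
tau2 = 0.5     # hypothetical observation noise variance
y_obs = 1.2    # hypothetical observed value

def sample_conditional(n, seed=0):
    """Draw n exact samples from p(x | y = y_obs)."""
    rng = random.Random(seed)
    mean = y_obs / (1.0 + tau2)
    sd = math.sqrt(tau2 / (1.0 + tau2))
    return [rng.gauss(mean, sd) for _ in range(n)]

draws = sample_conditional(50000)
print(sum(draws) / len(draws))   # close to 1.2 / 1.5 = 0.8
```

In the large models the paper targets, the same conditional-Gaussian algebra is organized along the conditional independence graph so that only small local blocks are ever factorized.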
Multilocus linkage analysis by blocked Gibbs sampling
 Statistics and Computing
, 2000
Abstract
Cited by 9 (0 self)
The problem of multilocus linkage analysis is expressed as a graphical model, making explicit a previously implicit connection, and recent developments in the field are described in this context. A novel application of blocked Gibbs sampling for Bayesian networks is developed to generate inheritance matrices from an irreducible Markov chain. This is used as the basis for reconstruction of historical meiotic states and approximate calculation of the likelihood function for the location of an unmapped genetic trait. We believe this to be the only approach that currently makes fully informative multilocus linkage analysis possible on large extended pedigrees.