Results 1–10 of 20
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
Cited by 564 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
Marginal likelihood from the Gibbs output
 J. Am. Stat. Assoc
, 1995
Cited by 324 (19 self)
Using simulation methods for Bayesian econometric models: Inference, development and communication
 Econometric Review
, 1999
Cited by 199 (15 self)
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and provides illustrations using two simple econometric models.
Simulating Normalizing Constants: From Importance Sampling to Bridge Sampling to Path Sampling
 Statistical Science, 13, 163–185
, 1998
Cited by 146 (4 self)
A Reference Bayesian Test for Nested Hypotheses And its Relationship to the Schwarz Criterion
 Journal of the American Statistical Association
, 1994
Cited by 125 (4 self)
To compute a Bayes factor for testing H_0: ψ = ψ_0 in the presence of a nuisance parameter β, priors under the null and alternative hypotheses must be chosen. As in Bayesian estimation, an important problem has been to define automatic or "reference" methods for determining priors based only on the structure of the model. In this paper we apply the heuristic device of taking the amount of information in the prior on ψ equal to the amount of information in a single observation. Then, after transforming β to be "null orthogonal" to ψ, we take the marginal priors on β to be equal under the null and alternative hypotheses. Doing so, and taking the prior on ψ to be Normal, we find that the log of the Bayes factor may be approximated by the Schwarz criterion with an error of order O(n^{-1/2}), rather than the usual error of order O(1). This result suggests the Schwarz criterion should provide sensible approximate solutions to Bayesian testing problems, at least when the hypotheses ...
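The Schwarz-criterion approximation this abstract discusses can be written down in a few lines. The sketch below is our own toy illustration (a normal-mean test with known variance and synthetic data), not an example from the paper:

```python
# Hedged sketch: the Schwarz (BIC-style) approximation to the log Bayes
# factor for nested models, log B10 ~ (l1 - l0) - ((d1 - d0)/2) * log n,
# where l_i is model i's maximized log-likelihood and d_i its dimension.
# Toy test of H0: mu = 0 vs H1: mu free, with sigma = 1 known. Data are
# synthetic; all names here are our own.
import numpy as np

def schwarz_log_bf(loglik1, loglik0, d1, d0, n):
    return (loglik1 - loglik0) - 0.5 * (d1 - d0) * np.log(n)

rng = np.random.default_rng(0)
x = rng.normal(0.3, 1.0, size=50)        # synthetic sample, true mu = 0.3
n = len(x)
ll0 = -0.5 * np.sum(x ** 2) - 0.5 * n * np.log(2 * np.pi)        # mu fixed at 0
mu_hat = x.mean()                                                 # MLE under H1
ll1 = -0.5 * np.sum((x - mu_hat) ** 2) - 0.5 * n * np.log(2 * np.pi)
print(schwarz_log_bf(ll1, ll0, d1=1, d0=0, n=n))
```

In this conjugate toy case the log-likelihood gain has the closed form (n/2) * mu_hat^2, so the approximation is easy to sanity-check by hand.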
Practical Bayesian Density Estimation Using Mixtures Of Normals
 Journal of the American Statistical Association
, 1995
Cited by 116 (2 self)
this paper, we propose some solutions to these problems. Our goal is to come up with a simple, practical method for estimating the density. This is an interesting problem in its own right, as well as a first step towards solving other inference problems, such as providing more flexible distributions in hierarchical models. To see why the posterior is improper under the usual reference prior, we write the model in the following way. Let Z = (Z_1, ..., Z_n) and X = (X_1, ..., X_n). The Z ...
A Bayesian Approach to Causal Discovery
, 1997
Cited by 79 (1 self)
We examine the Bayesian approach to the discovery of directed acyclic causal models and compare it to the constraint-based approach. Both approaches rely on the Causal Markov assumption, but the two differ significantly in theory and practice. An important difference between the approaches is that the constraint-based approach uses categorical information about conditional-independence constraints in the domain, whereas the Bayesian approach weighs the degree to which such constraints hold. As a result, the Bayesian approach has three distinct advantages over its constraint-based counterpart. One, conclusions derived from the Bayesian approach are not susceptible to incorrect categorical decisions about independence facts that can occur with data sets of finite size. Two, using the Bayesian approach, finer distinctions among model structures, both quantitative and qualitative, can be made. Three, information from several models can be combined to make better inferences and to better ...
Estimating Bayes Factors via Posterior Simulation with the Laplace-Metropolis Estimator
 Journal of the American Statistical Association
, 1994
Cited by 33 (11 self)
The key quantity needed for Bayesian hypothesis testing and model selection is the marginal likelihood for a model, also known as the integrated likelihood, or the marginal probability of the data. In this paper we describe a way to use posterior simulation output to estimate marginal likelihoods. We describe the basic Laplace-Metropolis estimator for models without random effects. For models with random effects the compound Laplace-Metropolis estimator is introduced. This estimator is applied to data from the World Fertility Survey and shown to give accurate results. Batching of simulation output is used to assess the uncertainty involved in using the compound Laplace-Metropolis estimator. The method allows us to test for the effects of independent variables in a random effects model, and also to test for the presence of the random effects. KEY WORDS: Laplace-Metropolis estimator; Random effects models; Marginal likelihoods; Posterior simulation; World Fertility Survey.
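The basic (non-compound) estimator described above admits a short sketch: plug the posterior-simulation draw with the highest unnormalized log posterior and the sample covariance of the draws into the Laplace formula. The code below is our own illustration under those assumptions, with a conjugate normal model (whose exact marginal likelihood is known) as a check; the function names and test model are invented, not taken from the paper:

```python
# Sketch of a basic Laplace-Metropolis marginal-likelihood estimate:
# log p(y) ~ (d/2) log(2*pi) + (1/2) log|S| + log[p(theta*) p(y|theta*)],
# where theta* is the simulation draw maximizing the unnormalized log
# posterior and S is the sample covariance of the draws. Illustrative only.
import numpy as np

def laplace_metropolis(draws, log_joint):
    """draws: (S, d) posterior simulation output;
    log_joint(theta): log prior + log likelihood at theta."""
    lp = np.array([log_joint(th) for th in draws])
    d = draws.shape[1]
    cov = np.cov(draws, rowvar=False).reshape(d, d)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * d * np.log(2 * np.pi) + 0.5 * logdet + lp.max()

# Conjugate check: y_i ~ N(theta, 1), theta ~ N(0, 1), so the exact
# marginal is N(y; 0, I + 11') and the posterior is Gaussian, which the
# Laplace approximation handles essentially exactly.
rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=10)
n = len(y)
post_mean, post_var = y.sum() / (n + 1), 1.0 / (n + 1)
draws = rng.normal(post_mean, np.sqrt(post_var), size=(4000, 1))

def log_joint(th):
    t = th[0]
    return (-0.5 * t ** 2 - 0.5 * np.log(2 * np.pi)
            - 0.5 * np.sum((y - t) ** 2) - 0.5 * n * np.log(2 * np.pi))

est = laplace_metropolis(draws, log_joint)
exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
         - 0.5 * (y @ y - y.sum() ** 2 / (n + 1)))
print(est, exact)
```

With a Gaussian posterior the only error comes from estimating the mode and covariance from finitely many draws, so the two numbers agree to roughly two decimal places here.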
Foundations of Assisted Cognition Systems
, 2003
Cited by 22 (3 self)
this report. Kautz [79] modeled plan recognition logically in a manner that allowed goals and plans to be described at various levels of abstraction. Etzioni et al. [94, 95, 92, 93] developed a version space algorithm for plan recognition that is provably sound and polynomial time [94, 93]. Weld et al. developed goal recognition algorithms using inductive logic programming [90] and version-space algebra [89, 168, 88] in the context of programming by demonstration ...
Learning Bayes net structure from sparse data sets
, 2001
Cited by 12 (2 self)
There are essentially two kinds of approaches for learning the structure of Bayesian Networks (BNs) from data. The first approach tries to find a graph which satisfies all the constraints implied by the empirical conditional independencies measured in the data [PV91, SGS00a, Shi00]. The second approach searches through the space of models (either DAGs or PDAGs), and uses some scoring metric (typically Bayesian or some approximation, such as BIC/MDL) to evaluate the models [CH92, Hec95, Hec98, Kra98], typically returning the highest scoring model found. Our main interest is in learning BN structure from gene expression data [FLNP00, HGJY01, MM99, SGS00b]. In domains such as this, where the ratio of the number of observations to the number of variables is low (i.e., when we have sparse data), selecting a threshold for the conditional independence (CI) tests can be tricky, and repeated use of such tests can lead to inconsistencies [DD99]. Bayesian s...
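The score-based approach mentioned in this abstract can be illustrated in miniature: BIC decomposes over families (a variable and its parents), so competing DAGs over discrete data can be compared by summing per-family scores. This is a generic toy sketch with invented synthetic data, not the paper's own method:

```python
# Toy score-based structure comparison for Bayesian networks over binary
# variables. BIC decomposes into per-family terms: maximized family
# log-likelihood minus a (params/2) * log N penalty. Illustrative only.
import numpy as np
from collections import Counter

def family_bic(data, child, parents):
    """BIC contribution of one binary variable given its parent set."""
    N = data.shape[0]
    joint, parent_tot = Counter(), Counter()
    for row in data:
        key = tuple(row[parents])
        joint[(key, row[child])] += 1
        parent_tot[key] += 1
    ll = sum(c * np.log(c / parent_tot[key]) for (key, _), c in joint.items())
    n_params = (2 - 1) * (2 ** len(parents))   # binary child, binary parents
    return ll - 0.5 * n_params * np.log(N)

def bic_score(data, structure):
    """structure: dict mapping each variable index to its list of parents."""
    return sum(family_bic(data, v, pa) for v, pa in structure.items())

# Synthetic data in which X1 strongly depends on X0 (flipped 10% of the time),
# so the DAG with the edge X0 -> X1 should outscore the empty graph.
rng = np.random.default_rng(2)
x0 = rng.integers(0, 2, size=500)
x1 = np.where(rng.random(500) < 0.1, 1 - x0, x0)
data = np.column_stack([x0, x1])
print(bic_score(data, {0: [], 1: [0]}), bic_score(data, {0: [], 1: []}))
```

The penalty term is what a search procedure relies on in the sparse-data regime the abstract describes: an edge is kept only when the likelihood gain exceeds the (params/2) * log N cost of its extra parameters.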