Results 1–10 of 23
An Introduction to Conditional Random Fields
Foundations and Trends in Machine Learning, 2012
Bayesian inference in hidden Markov random fields for binary data defined on large lattices, 2005
Abstract

Cited by 18 (4 self)
The aim of this paper is to introduce approximate methods to compute the likelihood for large lattices based on exact likelihood calculations for smaller lattices. We introduce approximate likelihood methods by relaxing some of the dependencies in the latent model, and also by approximating the likelihood by a partially ordered Markov model defined on a collection of sublattices. Results are presented based on simulated data as well as inference for the temporal-spatial structure of the interaction between up- and down-regulated states within the mitochondrial chromosome of the Plasmodium falciparum organism.
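The exact small-lattice computations the authors build on can be illustrated with a brute-force sketch (my own stdlib illustration, not the paper's code): for a tiny binary Ising-type lattice the partition function, and hence the exact likelihood, can be enumerated directly, which quickly becomes infeasible as the lattice grows.

```python
import itertools
import math

def lattice_energy(config, rows, cols, coupling):
    """Sum coupling * s_i * s_j over horizontally and vertically
    adjacent sites of a rows x cols lattice; spins are in {-1, +1}."""
    e = 0.0
    for r in range(rows):
        for c in range(cols):
            s = config[r * cols + c]
            if c + 1 < cols:
                e += coupling * s * config[r * cols + c + 1]
            if r + 1 < rows:
                e += coupling * s * config[(r + 1) * cols + c]
    return e

def exact_log_partition(rows, cols, coupling):
    """Exact log partition function by enumerating all 2^(rows*cols)
    spin configurations -- feasible only for small lattices, which is
    why larger lattices call for approximations."""
    z = sum(math.exp(lattice_energy(cfg, rows, cols, coupling))
            for cfg in itertools.product((-1, 1), repeat=rows * cols))
    return math.log(z)
```

The exact log-likelihood of an observed configuration `y` is then `lattice_energy(y, rows, cols, coupling) - exact_log_partition(rows, cols, coupling)`; the enumeration cost doubles with every added site, which is the bottleneck the paper's sublattice approximations address.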
The Gaussian Process Density Sampler
Abstract

Cited by 15 (4 self)
We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull-reconstruction task.
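The core exactness trick can be sketched with plain rejection sampling (a stdlib-only illustration, not the authors' implementation, which treats the GP machinery far more carefully): if the target density is proportional to a base density times a sigmoid-squashed function, the base density is itself a valid envelope, so accepted draws are exact samples. Here a fixed function `g` stands in for a single hypothetical draw from the GP prior.

```python
import math
import random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def transformed_density_sample(g, base_sampler, rng):
    """Exact rejection sampling from p(x) proportional to
    base(x) * sigmoid(g(x)). Since sigmoid(g(x)) <= 1, the base
    density itself is an envelope and acceptance gives exact draws."""
    while True:
        x = base_sampler(rng)
        if rng.random() < sigmoid(g(x)):
            return x

# A fixed function standing in for one GP-prior draw (hypothetical).
g = lambda x: 3.0 * x                 # tilts mass toward positive x
base = lambda rng: rng.gauss(0.0, 1.0)  # standard normal base density

rng = random.Random(0)
samples = [transformed_density_sample(g, base, rng) for _ in range(2000)]
mean = sum(samples) / len(samples)    # positive: the tilt shifts the mass
```

In the GPDS itself the function is a latent GP draw inferred jointly with the data, rather than fixed in advance as here.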
The Hierarchical Dirichlet Process Hidden Semi-Markov Model
Abstract

Cited by 5 (1 self)
There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the traditional HMM. However, in many settings the HDP-HMM’s strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markovianity, which has been developed in the parametric setting to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration HDP-HSMM and develop posterior sampling algorithms for efficient inference in both the direct-assignment and weak-limit approximation settings. We demonstrate the utility of the model and our inference methods on synthetic data as well as experiments on a speaker diarization problem and an example of learning the patterns in Morse code.
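The generative mechanism that distinguishes an explicit-duration semi-Markov model from an HMM can be sketched in a few lines (a hypothetical toy with fixed parameters, not the paper's nonparametric model, which places HDP priors over these quantities): a state is held for a duration drawn from an arbitrary per-state distribution, then a transition without self-jumps is taken.

```python
import random

def sample_hsmm(init_probs, trans, duration_sampler, num_segments, rng):
    """Generate a state sequence from an explicit-duration semi-Markov
    model: pick a state, hold it for a sampled duration, then jump."""
    def categorical(probs):
        u, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if u < acc:
                return i
        return len(probs) - 1

    states = []
    s = categorical(init_probs)
    for _ in range(num_segments):
        d = duration_sampler(s, rng)  # duration need not be geometric
        states.extend([s] * d)
        s = categorical(trans[s])     # trans[s][s] == 0: no self-jumps
    return states

rng = random.Random(1)
# Two states with deterministic, non-geometric durations -- a pattern a
# plain HMM cannot represent exactly: state 0 lasts 3 steps, state 1 lasts 5.
seq = sample_hsmm([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]],
                  lambda s, r: 3 if s == 0 else 5, 4, rng)
```

Swapping the duration sampler for, say, a negative-binomial draw recovers the kind of interpretable duration priors the abstract refers to; the HDP-HSMM additionally learns the number of states.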
Improving the Asymptotic Performance of Markov Chain Monte Carlo by Inserting Vortices
Abstract

Cited by 5 (0 self)
We present a new way of converting a reversible finite Markov chain into a non-reversible one, with a theoretical guarantee that the asymptotic variance of the MCMC estimator based on the non-reversible chain is reduced. The method is applicable to any reversible chain whose states are not connected through a tree, and can be interpreted graphically as inserting vortices into the state transition graph. Our result confirms that non-reversible chains are fundamentally better than reversible ones in terms of asymptotic performance, and suggests interesting directions for further improving MCMC.
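The vortex idea can be illustrated on a three-state chain with uniform stationary distribution (a minimal sketch of the graphical construction, with hypothetical transition probabilities, not the paper's general algorithm): adding a fixed probability flow around a directed cycle leaves the stationary distribution intact while breaking detailed balance.

```python
def add_vortex(P, cycle, eps):
    """Add a flow of size eps around a directed cycle of states.
    Each row still sums to one, and for a doubly stochastic chain
    (as below) the uniform stationary distribution is preserved."""
    Q = [row[:] for row in P]
    n = len(cycle)
    for k in range(n):
        i, j = cycle[k], cycle[(k + 1) % n]
        Q[i][j] += eps  # push probability forward around the cycle
        Q[j][i] -= eps  # remove the same amount going backward
    return Q

def stationary_uniform(P):
    """Uniform distribution is stationary iff every column sums to one."""
    n = len(P)
    return all(abs(sum(P[i][j] for i in range(n)) - 1.0) < 1e-9
               for j in range(n))

def reversible_uniform(P):
    """Detailed balance under the uniform distribution means P is symmetric."""
    n = len(P)
    return all(abs(P[i][j] - P[j][i]) < 1e-9
               for i in range(n) for j in range(n))

P = [[1/3, 1/3, 1/3] for _ in range(3)]  # reversible, uniform stationary
Q = add_vortex(P, [0, 1, 2], 0.2)        # non-reversible, same stationary
```

The paper's guarantee concerns the asymptotic variance of ergodic averages under such non-reversible chains; this fragment only verifies the structural claim that stationarity survives the vortex while reversibility does not.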
Bayesian Model Comparison and Parameter Inference in Systems Biology Using Nested Sampling, 2013
Abstract

Cited by 4 (1 self)
Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling’s nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multidimensional integral into a one-dimensional integration over likelihood space. This approach focusses on the computation of the marginal likelihood, or evidence. The ratio of evidences of different models gives the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system’s behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
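The one-dimensional transformation can be sketched on a toy problem with a known answer (my own stdlib illustration, not the paper's implementation): a uniform prior on [-5, 5] with a standard normal likelihood gives evidence Z ≈ 0.1, and a basic nested sampler that replaces the worst live point by rejection from the prior recovers log Z.

```python
import math
import random

def logaddexp(a, b):
    if a == -math.inf:
        return b
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def log_likelihood(theta):
    # standard normal likelihood N(theta; 0, 1)
    return -0.5 * theta * theta - 0.5 * math.log(2 * math.pi)

def nested_sampling(n_live, n_iter, rng, lo=-5.0, hi=5.0):
    """Toy nested sampler: uniform prior on [lo, hi]; replacement by
    rejection from the prior (fine here, hopeless in high dimensions)."""
    live = [rng.uniform(lo, hi) for _ in range(n_live)]
    logls = [log_likelihood(t) for t in live]
    log_z, log_x_prev = -math.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: logls[k])
        log_lmin = logls[worst]
        log_x = -i / n_live            # prior volume shrinks geometrically
        # weight of this shell: X_{i-1} - X_i, computed in log space
        log_w = log_x_prev + math.log1p(-math.exp(log_x - log_x_prev))
        log_z = logaddexp(log_z, log_lmin + log_w)
        log_x_prev = log_x
        while True:                    # new prior draw above the floor
            t = rng.uniform(lo, hi)
            if log_likelihood(t) > log_lmin:
                break
        live[worst], logls[worst] = t, log_likelihood(t)
    for ll in logls:                   # remaining live points share X_final
        log_z = logaddexp(log_z, ll + log_x_prev - math.log(n_live))
    return log_z

rng = random.Random(0)
log_z_estimate = nested_sampling(100, 600, rng)
```

Running two such samplers on competing models and differencing their log-evidences gives the log Bayes factor the abstract describes.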
Sparse Linear Identifiable Multivariate Modeling
Abstract

Cited by 4 (1 self)
In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully Bayesian hierarchy for sparse models using slab and spike priors (two-component δ-function and continuous mixtures), non-Gaussian latent factors and a stochastic search over the ordering of the variables. The framework, which we call SLIM (Sparse Linear Identifiable Multivariate modeling), is validated and benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable computational complexity. We attribute this mainly to the stochastic search strategy used, and to parsimony (sparsity and identifiability), which is an explicit part of the model. We propose two extensions to the basic i.i.d. linear framework: nonlinear dependence on observed variables, called SNIM (Sparse Nonlinear Identifiable Multivariate modeling), and allowing for correlations between latent variables, called CSLIM (Correlated SLIM), for temporal and/or spatial data. The source code and scripts are available from
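The slab-and-spike prior at the heart of the sparsity mechanism can be sketched as a two-component mixture (an illustrative fragment with hypothetical parameters, not SLIM's full hierarchy, which also involves non-Gaussian factors and the order search): each coefficient is exactly zero with the spike probability, and otherwise drawn from the continuous slab.

```python
import random

def spike_and_slab_sample(n, slab_prob, slab_sd, rng):
    """Draw n coefficients from a two-component prior: an exact zero
    (the delta-function 'spike') with probability 1 - slab_prob,
    otherwise a Gaussian 'slab' with standard deviation slab_sd."""
    return [rng.gauss(0.0, slab_sd) if rng.random() < slab_prob else 0.0
            for _ in range(n)]

rng = random.Random(0)
coeffs = spike_and_slab_sample(10000, 0.3, 1.0, rng)
sparsity = sum(c == 0.0 for c in coeffs) / len(coeffs)  # about 0.7
```

Because the spike places mass on exactly zero, posterior inference under such a prior yields genuinely sparse loadings rather than merely shrunken ones, which is what makes the resulting models identifiable and parsimonious.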
Nonparametric Bayesian Density Modeling with Gaussian Processes. ICML/UAI Nonparametric Bayes Workshop, 2008
Abstract

Cited by 3 (1 self)
The Gaussian process is a useful prior on functions for Bayesian kernel regression and classification. Density estimation with a Gaussian process prior is difficult, however, as densities must be nonnegative and integrate to one.
Kernel-Based Adaptive Estimation: Multidimensional and State-Space Approaches, 2014