Results 1–10 of 12
The Gaussian Process Density Sampler
Abstract

Cited by 6 (3 self)
We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull-reconstruction task.
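As a rough illustration of the generative idea (not the authors' exact construction, which samples the Gaussian process lazily so that the draws remain exact), the sketch below draws a function g on a grid from a GP prior and then rejection-samples points whose density is proportional to a base density times a sigmoid squashing of g. All function names and parameter values here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def gp_sample(xs, lengthscale=1.0, var=4.0):
    """Draw one function from a zero-mean GP with a squared-exponential
    kernel, evaluated on the grid xs."""
    d = xs[:, None] - xs[None, :]
    K = var * np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-8 * np.eye(len(xs))
    return rng.multivariate_normal(np.zeros(len(xs)), K)

def gpds_samples(n, xs, g, base_sampler):
    """Rejection-sample from the density proportional to
    base(x) * sigmoid(g(x)), interpolating g on the grid xs."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    out = []
    while len(out) < n:
        x = base_sampler()                 # propose from the base density
        gx = np.interp(x, xs, g)           # approximate g at x
        if rng.uniform() < sigmoid(gx):    # accept with prob sigmoid(g(x))
            out.append(x)
    return np.array(out)

xs = np.linspace(-4, 4, 200)
g = gp_sample(xs)
samples = gpds_samples(500, xs, g, base_sampler=lambda: rng.normal())
print(samples.mean(), samples.std())
```

Because the acceptance probability is bounded by one, this is a valid rejection sampler for the modulated density; the grid interpolation is the only approximation relative to sampling g exactly where needed.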
Nonparametric Bayesian Density Modeling with Gaussian Processes. ICML/UAI Nonparametric Bayes Workshop, 2008
Abstract

Cited by 2 (1 self)
The Gaussian process is a useful prior on functions for Bayesian kernel regression and classification. Density estimation with a Gaussian process prior is difficult, however, as densities must be nonnegative and integrate ...
The Hierarchical Dirichlet Process Hidden Semi-Markov Model
Abstract

Cited by 2 (0 self)
There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the traditional HMM. However, in many settings the HDP-HMM's strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markovianity, which has been developed in the parametric setting to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration HDP-HSMM and develop posterior sampling algorithms for efficient inference in both the direct-assignment and weak-limit approximation settings. We demonstrate the utility of the model and our inference methods on synthetic data as well as experiments on a speaker diarization problem and an example of learning the patterns in Morse code.
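The distinction the abstract draws can be illustrated with a tiny generative sketch: an explicit-duration semi-Markov chain forbids self-transitions and draws each visit's length from an arbitrary duration distribution (here a shifted Poisson, i.e. non-geometric), whereas an HMM's dwell times are forced to be geometric. The number of states, the transition matrix, and the rates below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 3-state explicit-duration semi-Markov chain: the transition matrix
# has a zero diagonal, and each visit's length comes from a
# state-specific duration distribution instead of being geometric.
n_states = 3
trans = np.array([[0.0, 0.5, 0.5],
                  [0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0]])   # rows sum to 1, no self-transitions
dur_rates = [2.0, 5.0, 10.0]          # duration = 1 + Poisson(rate)

def sample_hsmm(T):
    """Generate a length-T state sequence from the explicit-duration model."""
    seq = []
    s = int(rng.integers(n_states))
    while len(seq) < T:
        d = 1 + rng.poisson(dur_rates[s])   # explicit, non-geometric duration
        seq.extend([s] * d)
        s = int(rng.choice(n_states, p=trans[s]))
    return np.array(seq[:T])

seq = sample_hsmm(1000)
print(np.bincount(seq, minlength=n_states))   # time spent in each state
```

Replacing the Poisson duration draw with a geometric one (equivalently, allowing self-transitions) recovers the ordinary HMM dwell-time behaviour.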
TPA and Nested Sampling
Abstract
In isolation, Algorithm 2.1 can be viewed as a special case of Nested Sampling. To recover TPA one could run Nested Sampling with the target distribution as its prior and with the likelihood set to

L(θ) = 1 if θ ∈ B, and L(θ) = ε/(1 + e^{β(θ)}) if θ ∉ B, where β(θ) = inf{β′ : θ ∈ A(β′)}.   (1)

Skilling (2007) previously identified that the number of steps required to reach a given set is Poisson distributed. Huber and Schott suggest making this special case central, recasting all computations as finding the mass of a distribution on a set. Additional contributions are a theoretical analysis, two general ways of reducing problems to the required form, and a link to annealing. The resulting TPA methods are different from a straight application of Nested Sampling. For example, in both variants the initial sampling distribution is set to the posterior of an inference problem rather than the prior.
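Skilling's observation that the step count is Poisson distributed can be checked empirically in the simplest setting, where the prior mass above the current point shrinks by an independent Uniform(0, 1) factor at each step: the number of points landing at or above mass p is then Poisson with mean ln(1/p). This toy check is our own illustration of the claim, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def shrink_count(p):
    """Multiplicatively shrink x from 1 via x ~ U(0, x) until x < p;
    return the number of intermediate points that landed at or above p."""
    x, k = 1.0, 0
    while True:
        x = rng.uniform(0.0, x)
        if x < p:
            return k
        k += 1

p = 0.01
counts = np.array([shrink_count(p) for _ in range(20000)])
# For a Poisson count, mean and variance should both be close to
# ln(1/p) = ln(100) ≈ 4.605.
print(counts.mean(), counts.var())
```

The identity behind this is that the points -ln(x_k) form a unit-rate Poisson process on the log-mass axis, which is exactly what TPA exploits when it counts constraint crossings between two nested sets.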
Improving the Asymptotic Performance of Markov Chain Monte Carlo by Inserting Vortices
Abstract
We present a new way of converting a reversible finite Markov chain into a nonreversible one, with a theoretical guarantee that the asymptotic variance of the MCMC estimator based on the nonreversible chain is reduced. The method is applicable to any reversible chain whose states are not connected through a tree, and can be interpreted graphically as inserting vortices into the state transition graph. Our result confirms that nonreversible chains are fundamentally better than reversible ones in terms of asymptotic performance, and suggests interesting directions for further improving MCMC.
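A minimal sketch of the idea, not the paper's exact construction: starting from a reversible (lazy random-walk) chain on a ring with uniform stationary distribution, shifting probability onto one directed cycle, a "vortex", keeps the stationary distribution intact while breaking detailed balance. The ring size and the shift amount eps are chosen arbitrarily for the example.

```python
import numpy as np

n = 4  # states on a ring; uniform stationary distribution pi_i = 1/n

# Reversible base chain: lazy random walk on the ring.
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i + 1) % n] = 0.25
    P[i, (i - 1) % n] = 0.25

# Insert a "vortex": move probability from each self-loop onto the
# clockwise edge of a directed cycle. Row and column sums are unchanged,
# so the uniform stationary distribution is preserved, but the chain is
# no longer reversible.
eps = 0.25
P_vortex = P.copy()
for i in range(n):
    P_vortex[i, i] -= eps
    P_vortex[i, (i + 1) % n] += eps

pi = np.full(n, 1.0 / n)
flow = P_vortex * pi[:, None]                 # probability flow pi_i * P_ij
print(np.allclose(pi @ P_vortex, pi))         # stationarity kept: True
print(np.allclose(flow, flow.T))              # detailed balance broken: False
```

The paper's guarantee concerns the asymptotic variance of ergodic averages; this sketch only verifies the structural point that vortices can be added without disturbing the target distribution.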
Applications of nested sampling in systems biology
Abstract
Stochastic models are commonly used in systems biology to represent the interaction of small numbers of molecules, and the discrete states that a molecule might adopt. The optimisation of complex stochastic models is challenging as, typically, they cannot be solved analytically. Nested sampling is an effective method for sampling the posterior distributions of model parameters. The samples are obtained as a by-product of calculating the Bayesian evidence. Nested sampling requires a likelihood function, and, in the context of systems biology, the extent to which the data is explained by a given set of model parameters can be computed by an approximate log-likelihood function derived from a number of Gillespie simulations. This optimisation strategy is therefore generic, and applicable to kinetic data and steady-state distributions. We have demonstrated that this approach performs well as an optimiser for a number of systems biology models, including models of circadian rhythms. The method can also be used for model comparison, which will be the topic of future work.

Nested sampling. Nested sampling explores the Bayesian evidence, transforming the multidimensional integral for the evidence into a one-dimensional integral over the prior mass (Skilling, 2006). The sorted likelihood function L(x) is used as an evolving constraint in the generation of a set of objects {x_0 .. x_i}; each object is an array of values randomly sampled from the prior range of a model parameter. [Figure: likelihood constraint L(x) over parameters θ1, θ2.]

A simple model of transcription and translation. A stochastic model of the transcription of mRNA (M) and the translation of protein (P) from mRNA was fitted to a synthetic data set of samples from the distributions of M and P. Nested sampling reveals the obvious trade-off between transc and d1 in establishing the mean <M>. [Figure: reaction scheme with transcription/decay rates transc, d1 for M and translation/decay rates transl, d2 for P.]
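The Gillespie simulations mentioned above can be sketched for exactly this two-species model. The rate constants below are invented for illustration, and the code is a standard implementation of Gillespie's direct method rather than the authors' own.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical rate constants for the two-species model: transcription
# (-> M, rate transc), mRNA decay (M ->, rate d1*M), translation
# (M -> M + P, rate transl*M), protein decay (P ->, rate d2*P).
transc, d1, transl, d2 = 1.0, 0.1, 0.5, 0.05

def gillespie(t_end, M=0, P=0):
    """Exact stochastic simulation via Gillespie's direct method;
    returns the (M, P) counts at time t_end."""
    t = 0.0
    while True:
        rates = np.array([transc, d1 * M, transl * M, d2 * P])
        total = rates.sum()
        t += rng.exponential(1.0 / total)   # time to next reaction
        if t > t_end:
            return M, P
        r = rng.choice(4, p=rates / total)  # which reaction fires
        if r == 0:
            M += 1
        elif r == 1:
            M -= 1
        elif r == 2:
            P += 1
        else:
            P -= 1

# Sample the (M, P) distribution; at steady state the means are
# E[M] = transc/d1 = 10 and E[P] = transl*E[M]/d2 = 100.
samples = np.array([gillespie(150.0) for _ in range(150)])
print(samples.mean(axis=0))
```

In the nested-sampling setting described in the abstract, many such runs per parameter set would be summarised into an approximate log-likelihood against the observed M and P distributions.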
Bayesian Inference for models with intractable likelihoods (author manuscript, published in "41èmes Journées de Statistique, SFdS, Bordeaux (2009)")
"... imaging ..."
A Study of Population MCMC for estimating Bayes Factors over Nonlinear ODE Models
"... Thesis submitted in accordance with the requirements of ..."