The Gaussian Process Density Sampler
Abstract

Cited by 10 (3 self)
We present the Gaussian Process Density Sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We can also infer the hyperparameters of the Gaussian process. We compare this density modeling technique to several existing techniques on a toy problem and a skull reconstruction task.
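The core idea of the abstract above, a density obtained by squashing a Gaussian process draw and modulating a base density, can be sketched in a few lines. This is a minimal illustration only: the grid discretization, sigmoid squashing function, standard-normal base density, and all function names are assumptions for the sketch, not the paper's exact (retrospective) sampling algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance between two 1-D point sets.
    d2 = (X[:, None] - Y[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gpds_rejection_sample(n, rng, lengthscale=1.0):
    """Draw n points whose density is proportional to
    sigmoid(f(x)) * N(x; 0, 1), where f is a single GP draw.
    The GP is evaluated on a fixed grid as a finite-dimensional
    stand-in for a full function draw (an assumption of this sketch)."""
    grid = np.linspace(-4, 4, 200)
    K = rbf_kernel(grid, grid, lengthscale) + 1e-8 * np.eye(grid.size)
    f = rng.multivariate_normal(np.zeros(grid.size), K)  # one GP draw
    phi = 1.0 / (1.0 + np.exp(-f))                       # squash into (0, 1)
    samples = []
    while len(samples) < n:
        x = rng.standard_normal()        # propose from the base density
        p = np.interp(x, grid, phi)      # sigmoid(f(x)) at the proposal
        if rng.uniform() < p:            # accept with prob sigmoid(f(x))
            samples.append(x)
    return np.array(samples)
```

Rejection against the base density is what makes the draws exact, independent samples from the (random) density defined by the GP draw, mirroring the "exact, independent samples" claim in the abstract.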
The Hierarchical Dirichlet Process Hidden Semi-Markov Model
Abstract

Cited by 3 (0 self)
There is much interest in the Hierarchical Dirichlet Process Hidden Markov Model (HDP-HMM) as a natural Bayesian nonparametric extension of the traditional HMM. However, in many settings the HDP-HMM’s strict Markovian constraints are undesirable, particularly if we wish to learn or encode non-geometric state durations. We can extend the HDP-HMM to capture such structure by drawing upon explicit-duration semi-Markovianity, which has been developed in the parametric setting to allow construction of highly interpretable models that admit natural prior information on state durations. In this paper we introduce the explicit-duration HDP-HSMM and develop posterior sampling algorithms for efficient inference in both the direct-assignment and weak-limit approximation settings. We demonstrate the utility of the model and our inference methods on synthetic data as well as experiments on a speaker diarization problem and an example of learning the patterns in Morse code.
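The explicit-duration, non-geometric dwell times described above can be illustrated with a toy generative sketch. The Poisson duration distribution, Gaussian emissions, and two-state transition matrix here are illustrative assumptions; the HDP-HSMM places nonparametric priors over all of these components.

```python
import numpy as np

def sample_hsmm(T, trans, dur_rates, emit_means, rng):
    """Generate a length-T observation sequence from an explicit-duration
    semi-Markov chain: each visited state emits for a random duration drawn
    from its own (here Poisson) duration distribution, so dwell times need
    not be geometric as in a plain HMM. `trans` must have zero diagonal,
    since self-transitions are replaced by explicit durations."""
    states, obs = [], []
    s = rng.integers(len(dur_rates))
    while len(obs) < T:
        d = 1 + rng.poisson(dur_rates[s])      # explicit duration (>= 1)
        for _ in range(d):
            states.append(s)
            obs.append(rng.normal(emit_means[s], 1.0))
        s = rng.choice(len(dur_rates), p=trans[s])  # jump to a new state
    return np.array(states[:T]), np.array(obs[:T])
```

For example, `sample_hsmm(60, np.array([[0.0, 1.0], [1.0, 0.0]]), [4.0, 4.0], [0.0, 8.0], rng)` alternates between two states with average dwell time 5, a duration profile a geometric HMM cannot encode directly.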
Nonparametric Bayesian Density Modeling with Gaussian Processes. ICML/UAI Nonparametric Bayes Workshop
, 2008
Abstract

Cited by 2 (1 self)
The Gaussian process is a useful prior on functions for Bayesian kernel regression and classification. Density estimation with a Gaussian process prior is difficult, however, as densities must be non-negative and integrate to one.
Improving the Asymptotic Performance of Markov Chain Monte Carlo by Inserting Vortices
Abstract

Cited by 2 (0 self)
We present a new way of converting a reversible finite Markov chain into a non-reversible one, with a theoretical guarantee that the asymptotic variance of the MCMC estimator based on the non-reversible chain is reduced. The method is applicable to any reversible chain whose states are not connected through a tree, and can be interpreted graphically as inserting vortices into the state transition graph. Our result confirms that non-reversible chains are fundamentally better than reversible ones in terms of asymptotic performance, and suggests interesting directions for further improving MCMC.
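The "vortex" idea described above can be made concrete on a toy 3-state cycle: adding a skew-symmetric perturbation with zero row sums to a reversible transition matrix preserves the stationary distribution while breaking detailed balance. The matrices below are illustrative assumptions; the paper's construction applies to general non-tree chains and comes with the variance-reduction guarantee.

```python
import numpy as np

# Reversible random walk on a 3-cycle; stationary distribution is uniform.
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# "Vortex": a skew-symmetric perturbation with zero row sums. Adding it
# keeps the uniform distribution stationary but pushes a net probability
# flow around the cycle, making the chain non-reversible.
eps = 0.3
G = eps * np.array([[ 0.0,  1.0, -1.0],
                    [-1.0,  0.0,  1.0],
                    [ 1.0, -1.0,  0.0]])
Q = P + G  # non-reversible chain

pi = np.ones(3) / 3
# Q is still a valid transition matrix with the same stationary distribution:
assert np.all(Q >= 0) and np.allclose(Q.sum(axis=1), 1.0)
assert np.allclose(pi @ P, pi) and np.allclose(pi @ Q, pi)
# Detailed balance pi_i P_ij = pi_j P_ji holds for P but fails for Q:
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)
assert not np.allclose(pi[:, None] * Q, (pi[:, None] * Q).T)
```

Note `eps` must be small enough to keep all entries of `Q` non-negative; the flow it induces around the cycle is exactly the graphical "vortex" in the state transition graph.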
Kernel-Based Adaptive Estimation: Multidimensional and State-Space Approaches
, 2014
"... Dedicated to my parents ..."
Application of Bayesian Probability Theory in Astrophysics. arXiv:astro-ph/0809.0939v1
, 2008
Abstract

Cited by 1 (0 self)
Bayesian Inference is a powerful approach to data analysis that is based almost entirely on probability theory. In this approach, probabilities model uncertainty rather than randomness or variability. This thesis is composed of a series of papers that have been published in various astronomical journals during the years 2005–2008. The unifying thread running through the papers is the use of Bayesian Inference to solve underdetermined inverse problems in astrophysics. Firstly, a methodology is developed to solve a question in gravitational lens inversion, using the observed images of gravitational lens systems to reconstruct the undistorted source profile and the mass profile of the lensing galaxy. A similar technique is also applied to the task of inferring the number and frequency of modes of oscillation of a star from the time series observations that are used in the field of asteroseismology. For these complex problems, many of the required calculations cannot be done analytically, and so Markov Chain Monte Carlo algorithms have been used. Finally, probabilistic reasoning is applied to a controversial question in astrobiology: does the fact that life formed quite soon after the Earth itself constitute evidence that the formation of life is quite probable, given the right macroscopic conditions?
Sparse Linear Identifiable Multivariate Modeling
Abstract

Cited by 1 (0 self)
In this paper we consider sparse and identifiable linear latent variable (factor) and linear Bayesian network models for parsimonious analysis of multivariate data. We propose a computationally efficient method for joint parameter and model inference, and model comparison. It consists of a fully Bayesian hierarchy for sparse models using slab and spike priors (two-component δ-function and continuous mixtures), non-Gaussian latent factors and a stochastic search over the ordering of the variables. The framework, which we call SLIM (Sparse Linear Identifiable Multivariate modeling), is validated and benchmarked on artificial and real biological data sets. SLIM is closest in spirit to LiNGAM (Shimizu et al., 2006), but differs substantially in inference, Bayesian network structure learning and model comparison. Experimentally, SLIM performs equally well or better than LiNGAM with comparable computational complexity. We attribute this mainly to the stochastic search strategy used, and to parsimony (sparsity and identifiability), which is an explicit part of the model. We propose two extensions to the basic i.i.d. linear framework: nonlinear dependence on observed variables, called SNIM (Sparse Nonlinear Identifiable Multivariate modeling), and allowing for correlations between latent variables, called CSLIM (Correlated SLIM), for temporal and/or spatial data. The source code and scripts are available from
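The two-component spike-and-slab prior mentioned above (a δ-function spike at zero mixed with a continuous slab) can be sketched generatively in a few lines. The mixture weight, slab scale, and function name are illustrative assumptions; SLIM's full hierarchy places further priors on these quantities.

```python
import numpy as np

def spike_slab_sample(p, rng, pi=0.2, slab_sd=2.0):
    """Draw a sparse coefficient vector from a two-component
    spike-and-slab prior: with probability `pi` a coefficient comes
    from the Gaussian 'slab', otherwise from the point-mass 'spike'
    at zero, which induces exact sparsity."""
    z = rng.uniform(size=p) < pi                       # inclusion indicators
    beta = np.where(z, rng.normal(0.0, slab_sd, size=p), 0.0)
    return beta, z
```

Because the spike is an exact point mass rather than a narrow Gaussian, the posterior over the indicators `z` directly encodes which variables enter the model, which is what makes such priors natural for the joint structure and parameter inference described in the abstract.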
Bayesian Model Comparison and Parameter Inference in Systems Biology Using Nested Sampling
, 2013
Abstract

Cited by 1 (0 self)
Inferring parameters for models of biological processes is a current challenge in systems biology, as is the related problem of comparing competing models that explain the data. In this work we apply Skilling’s nested sampling to address both of these problems. Nested sampling is a Bayesian method for exploring parameter space that transforms a multidimensional integral to a 1D integration over likelihood space. This approach focusses on the computation of the marginal likelihood or evidence. The ratio of evidences of different models leads to the Bayes factor, which can be used for model comparison. We demonstrate how nested sampling can be used to reverse-engineer a system’s behaviour whilst accounting for the uncertainty in the results. The effect of missing initial conditions of the variables as well as unknown parameters is investigated. We show how the evidence and the model ranking can change as a function of the available data. Furthermore, the addition of data from extra variables of the system can deliver more information for model comparison than increasing the data from one variable, thus providing a basis for experimental design.
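The nested sampling loop described above, shrinking prior mass while accumulating evidence contributions, can be sketched minimally as follows. This is a toy sketch under stated assumptions: rejection sampling from the constrained prior stands in for the MCMC exploration used in practice, and the deterministic shrinkage `log X_i = -i/n_live` replaces the stochastic prior-mass estimates; the names and toy problem are illustrative.

```python
import numpy as np

def nested_sampling(loglike, prior_sample, n_live=100, n_iter=600, rng=None):
    """Minimal nested sampling: keep n_live points from the prior,
    repeatedly replace the worst (lowest-likelihood) point with a fresh
    prior draw above that likelihood threshold, and accumulate the
    evidence Z = sum_i L_i * dX_i over the shrinking prior mass X."""
    rng = np.random.default_rng() if rng is None else rng
    live = np.array([prior_sample(rng) for _ in range(n_live)])
    ll = np.array([loglike(x) for x in live])
    logZ, logX = -np.inf, 0.0
    for i in range(n_iter):
        worst = int(np.argmin(ll))
        logX_new = -(i + 1) / n_live          # expected log prior mass
        logw = np.log(np.exp(logX) - np.exp(logX_new))
        logZ = np.logaddexp(logZ, logw + ll[worst])
        while True:                           # constrained-prior rejection step
            x = prior_sample(rng)
            if loglike(x) > ll[worst]:
                live[worst], ll[worst] = x, loglike(x)
                break
        logX = logX_new
    # Credit the remaining prior mass to the surviving live points.
    logZ = np.logaddexp(logZ, logX + np.log(np.mean(np.exp(ll))))
    return logZ

# Toy check: N(0, 1) likelihood under a Uniform(-5, 5) prior; the true
# evidence is (1/10) * (Gaussian mass on [-5, 5]), i.e. very close to 0.1.
loglike = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
prior = lambda rng: rng.uniform(-5, 5)
```

Running the Bayes-factor comparison in the abstract then amounts to calling this routine once per competing model and taking the ratio of the resulting evidences.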
TPA and Nested Sampling
Abstract
In isolation, Algorithm 2.1 can be viewed as a special case of Nested Sampling. To recover TPA one could run Nested Sampling with the target distribution as its prior and with the likelihood set to

L(θ) = 1 for θ ∈ B, and L(θ) = ε / (1 + e^{β(θ)}) for θ ∉ B, where β(θ) = inf{β′ : θ ∈ A(β′)}. (1)

Skilling (2007) previously identified that the number of steps required to reach a given set is Poisson distributed. Huber and Schott suggest making this special case central, recasting all computations as finding the mass of a distribution on a set. Additional contributions are a theoretical analysis, two general ways of reducing problems to the required form and a link to annealing. The resulting TPA methods are different from a straight application of Nested Sampling. For example, in both variants the initial sampling distribution is set to the posterior of an inference problem rather than the prior.