Results 1–10 of 123
Central limit theorem for sequential Monte Carlo methods and its application to Bayesian inference
 Ann. Statist.
Abstract

Cited by 58 (2 self)
The term “particle filters” refers to a general class of iterative algorithms that perform Monte Carlo approximations of a given sequence of distributions of interest (πt). We establish in this paper a central limit theorem for the Monte Carlo estimates produced by these computational methods. This result holds under minimal assumptions on the distributions πt, and applies in a general framework which encompasses most of the sequential Monte Carlo methods that have been considered in the literature, including the resample-move algorithm of Gilks and Berzuini [J. R. Stat. Soc. Ser. B Stat. Methodol. 63 (2001) 127–146] and the residual resampling scheme. The corresponding asymptotic variances provide a convenient measurement of the precision of a given particle filter. We study, in particular, in some typical examples of Bayesian applications, whether and at which rate these asymptotic variances diverge in time, in order to assess the long-term reliability of the considered algorithm.
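The particle filters analyzed in this paper can be sketched in a few lines. Below is a minimal bootstrap (sequential importance resampling) filter for a toy random-walk state-space model; the model, noise scales, and particle count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bootstrap_filter(ys, n_particles=1000, sigma_x=1.0, sigma_y=1.0, seed=0):
    """SIR particle filter for the toy model
    x_t = x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2)."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, sigma_x, n_particles)  # draw from the prior
    means = []
    for y in ys:
        # importance weights from the Gaussian observation density
        logw = -0.5 * ((y - particles) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(np.sum(w * particles)))  # filtering-mean estimate
        # multinomial resampling, then propagate through the transition
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx] + rng.normal(0.0, sigma_x, n_particles)
    return means
```

The Monte Carlo error of `means` is exactly what the paper's central limit theorem characterizes: the normalized estimates are asymptotically Gaussian, with a time-dependent asymptotic variance.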
Fast automatic heart chamber segmentation from 3D CT data using marginal space learning and steerable features
 In Proc. ICCV
Abstract

Cited by 42 (21 self)
Multi-chamber heart segmentation is a prerequisite for global quantification of the cardiac function. The complexity of cardiac anatomy, poor contrast, noise, and motion artifacts make this segmentation problem a challenging task. In this paper, we present an efficient, robust, and fully automatic segmentation method for 3D cardiac computed tomography (CT) volumes. Our approach is based on recent advances in learning discriminative object models, and we exploit a large database of annotated CT volumes. We formulate the segmentation as a two-step learning problem: anatomical structure localization and boundary delineation. A novel algorithm, Marginal Space Learning (MSL), is introduced to solve the 9-dimensional similarity search problem for localizing the heart chambers. MSL reduces the number of testing hypotheses by about six orders of magnitude. We also propose to use steerable image features, which incorporate the orientation and scale information into the distribution of sampling points, thus avoiding the time-consuming volume data rotation operations. After determining the similarity transformation of the heart chambers, we estimate the 3D shape through learning-based boundary delineation. Extensive experiments on multi-chamber heart segmentation demonstrate the efficiency and robustness of the proposed approach, comparing favorably to the state of the art. This is the first study reporting stable results on a large cardiac CT dataset with 323 volumes. In addition, we achieve a speed of less than eight seconds for automatic segmentation of all four chambers.
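The hypothesis-count reduction claimed for MSL comes from searching marginal spaces sequentially (position, then position plus orientation, then the full similarity transform) instead of exhausting the full 9-dimensional grid. A back-of-the-envelope sketch, where the grid resolution and the number K of retained candidates are chosen purely for illustration and are not the paper's numbers:

```python
def exhaustive_hypotheses(n_per_dim, dims=9):
    # brute-force grid over all 9 similarity-transform parameters
    return n_per_dim ** dims

def msl_hypotheses(n_per_dim, top_k):
    position = n_per_dim ** 3             # stage 1: 3-D position marginal
    orientation = top_k * n_per_dim ** 3  # stage 2: augment best K with 3 orientation dims
    scale = top_k * n_per_dim ** 3        # stage 3: augment best K with 3 scale dims
    return position + orientation + scale
```

Even with these modest illustrative numbers (10 values per dimension, K = 100), the staged search evaluates thousands of times fewer hypotheses than the exhaustive grid.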
Evaluation methods for topic models
 In ICML
, 2009
Abstract

Cited by 42 (5 self)
A natural evaluation metric for statistical topic models is the probability of held-out documents given a trained model. While exact computation of this probability is intractable, several estimators for this probability have been used in the topic modeling literature, including the harmonic mean method and the empirical likelihood method. In this paper, we demonstrate experimentally that commonly used methods are unlikely to accurately estimate the probability of held-out documents, and propose two alternative methods that are both accurate and efficient.
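The harmonic mean estimator criticized here is easy to probe on a toy conjugate model where the held-out probability has a closed form. The sketch below (a Gaussian model chosen for illustration, not from the paper) uses the identity 1/p(y) = E_post[1/p(y|θ)], which holds exactly; the catch, and one reason such estimates are unreliable, is that the estimator's variance can be enormous or even infinite.

```python
import numpy as np

def gauss(x, mean, var):
    # univariate normal density N(x; mean, var)
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def marginal_estimates(y, n=200_000, seed=0):
    """Estimate p(y) for the model theta ~ N(0,1), y | theta ~ N(theta, 1),
    whose exact marginal is N(y; 0, 2)."""
    rng = np.random.default_rng(seed)
    # simple Monte Carlo: average the likelihood over prior draws
    prior_draws = rng.normal(0.0, 1.0, n)
    prior_mc = gauss(y, prior_draws, 1.0).mean()
    # harmonic mean: posterior is N(y/2, 1/2); invert the averaged 1/likelihood.
    # Here 1/likelihood has infinite variance, so this estimate converges slowly.
    post_draws = rng.normal(y / 2.0, np.sqrt(0.5), n)
    harmonic = 1.0 / (1.0 / gauss(y, post_draws, 1.0)).mean()
    exact = gauss(y, 0.0, 2.0)
    return prior_mc, harmonic, exact
```

Running this shows the prior-based estimate landing close to the exact value while the harmonic mean estimate fluctuates far more from seed to seed, mirroring the paper's experimental point.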
Practical Filtering with Sequential Parameter Learning
, 2003
Abstract

Cited by 25 (7 self)
In this paper we develop a general simulation-based approach to filtering and sequential parameter learning. We begin with an algorithm for filtering in a general dynamic state space model and then extend this to incorporate sequential parameter learning. The key idea is to express the filtering distribution as a mixture of lag-smoothing distributions and to implement this sequentially. Our approach has a number of advantages over current methodologies. First, it allows for sequential parameter learning where sequential importance sampling approaches have difficulties. Second ...
Efficient block sampling strategies for sequential Monte Carlo
 Journal of Computational and Graphical Statistics
, 2006
Abstract

Cited by 23 (5 self)
Sequential Monte Carlo (SMC) methods are a powerful set of simulation-based techniques for sampling sequentially from a sequence of complex probability distributions. These methods rely on a combination of importance sampling and resampling techniques. In a Markov chain Monte Carlo (MCMC) framework, block sampling strategies often perform much better than algorithms based on one-at-a-time sampling strategies if “good” proposal distributions to update blocks of variables can be designed. In an SMC framework, standard algorithms sequentially sample the variables one at a time whereas, as in MCMC, the efficiency of algorithms could be improved significantly by using block sampling strategies. Unfortunately, a direct implementation of such strategies is impossible as it requires the knowledge of integrals which do not admit closed-form expressions. This article introduces a new methodology which bypasses this problem and is a natural extension of standard SMC methods. Applications to several sequential Bayesian inference problems demonstrate the effectiveness of these methods.
Reinforcement learning with limited reinforcement: Using Bayes risk for active learning in POMDPs
 ISAIM (online proceedings)
, 2008
Abstract

Cited by 21 (7 self)
Partially Observable Markov Decision Processes (POMDPs) have succeeded in planning domains that require balancing actions that increase an agent’s knowledge and actions that increase an agent’s reward. Unfortunately, most POMDPs are defined with a large number of parameters which are difficult to specify from domain knowledge alone. In this paper, we present an approximation approach that allows us to treat the POMDP model parameters as additional hidden state in a “model-uncertainty” POMDP. Coupled with model-directed queries, our planner actively learns good policies. We demonstrate our approach on several POMDP problems.
Nonlinear Markov chain Monte Carlo via self interacting approximations
, 2006
Abstract

Cited by 16 (7 self)
Abstract. Let P(E) be the space of probability measures on a measurable space (E, E). In this paper we introduce a class of nonlinear Markov chain Monte Carlo (MCMC) methods for simulating from a probability measure π ∈ P(E). Nonlinear Markov kernels (e.g. Del Moral (2004); Del Moral & Doucet (2003)) K: P(E) × E → P(E) can be constructed to admit π as an invariant distribution and have superior mixing properties to ordinary (linear) MCMC kernels. However, such nonlinear kernels cannot be simulated exactly, so, in the spirit of particle approximations of Feynman-Kac formulae (Del Moral 2004), we construct approximations of the nonlinear kernels via Self-Interacting Markov Chains (SIMC) (Del Moral & Miclo 2004). We present several nonlinear kernels and demonstrate that, under some conditions, the associated self-interacting approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster-Lyapunov conditions. We investigate the performance of our approximations with some simulations, combining the methodology with population-based Markov chain Monte Carlo (e.g. Jasra et al. (2007)). We also provide ...
Minimum variance importance sampling via population Monte Carlo
 ESAIM: Probability and Statistics
, 2007
Abstract

Cited by 15 (5 self)
Variance reduction has always been a central issue in Monte Carlo experiments. Population Monte Carlo can be used to this effect, in that a mixture of importance functions, called a D-kernel, can be iteratively optimised to achieve the minimum asymptotic variance for a function of interest among all possible mixtures. The implementation of this iterative scheme is illustrated for the computation of the price of a European option in the Cox-Ingersoll-Ross model. A central limit theorem as well as moderate deviations are established for the D-kernel population Monte Carlo methodology.
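A minimal sketch of the D-kernel idea: importance sampling from a mixture whose weights are updated iteratively by a Rao-Blackwellised rule, so mass shifts toward the components best matched to the target. The proposal families, the toy target, and all tuning constants below are assumptions for illustration, not the paper's setting.

```python
import numpy as np

def pmc_mixture(target_logpdf, proposals, n=5000, iters=10, seed=0):
    """proposals: list of (sample_fn(rng, size), logpdf_fn(x)) pairs.
    Returns the adapted mixture weights alpha after `iters` iterations."""
    rng = np.random.default_rng(seed)
    D = len(proposals)
    alpha = np.full(D, 1.0 / D)  # mixture weights, updated each iteration
    for _ in range(iters):
        # draw component indices, then samples from the chosen components
        comp = rng.choice(D, size=n, p=alpha)
        x = np.empty(n)
        for d, (sample, _) in enumerate(proposals):
            mask = comp == d
            x[mask] = sample(rng, mask.sum())
        # importance weights against the full mixture density
        mix_logpdf = np.logaddexp.reduce(
            [np.log(alpha[d]) + proposals[d][1](x) for d in range(D)], axis=0)
        logw = target_logpdf(x) - mix_logpdf
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Rao-Blackwellised update: credit each component by the posterior
        # probability that it produced each sample, weighted by importance
        resp = np.array([np.exp(np.log(alpha[d]) + proposals[d][1](x) - mix_logpdf)
                         for d in range(D)])
        alpha = np.clip((resp * w).sum(axis=1), 1e-12, None)
        alpha /= alpha.sum()
    return alpha
```

Run against a target matching one of the components, the weight of that component should grow toward one, which is the variance-reduction mechanism the abstract describes.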
Sequential Monte Carlo for Bayesian Computation
Abstract

Cited by 15 (2 self)
Sequential Monte Carlo (SMC) methods are a class of importance sampling and resampling techniques designed to simulate from a sequence of probability distributions. These approaches have become very popular over the last few years for solving sequential Bayesian inference problems (e.g. Doucet et al. 2001). However, in comparison to Markov chain Monte Carlo (MCMC), the application of SMC remains limited even though such methods are, in fact, also appropriate in static inference contexts (e.g. Chopin (2002); Del Moral et al. (2006)). In this paper, we present a simple unifying framework which allows us to extend both the SMC methodology and its range of applications. Additionally, by reinterpreting SMC algorithms as approximations of nonlinear MCMC kernels, we present alternative SMC and iterative self-interacting approximation (Del Moral & Miclo 2004; 2006) schemes. We demonstrate the performance of the SMC methodology on static and sequential Bayesian inference problems.
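The use of SMC for static Bayesian inference mentioned above is usually realized by tempering: bridging from an easy starting distribution to the target through intermediate distributions, with reweighting, resampling, and MCMC moves at each step. The following sketch assumes a simple geometric bridge from a wide Gaussian to a user-supplied target, with all tuning constants chosen for illustration only.

```python
import numpy as np

def smc_sampler(logtarget, n=2000, steps=20, seed=0):
    """Tempered SMC: pi_g(x) proportional to prior(x)^(1-g) * target(x)^g
    for g moving from 0 to 1, with resampling and one MH move per step."""
    rng = np.random.default_rng(seed)
    logprior = lambda x: -0.5 * (x / 10.0) ** 2  # wide N(0, 10^2) start
    x = rng.normal(0.0, 10.0, n)
    gammas = np.linspace(0.0, 1.0, steps + 1)
    for g0, g1 in zip(gammas[:-1], gammas[1:]):
        # reweight particles from pi_{g0} to pi_{g1}
        logw = (g1 - g0) * (logtarget(x) - logprior(x))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = x[rng.choice(n, size=n, p=w)]  # multinomial resampling
        # one random-walk Metropolis move leaving pi_{g1} invariant
        logpi = lambda z: (1 - g1) * logprior(z) + g1 * logtarget(z)
        prop = x + rng.normal(0.0, 1.0, n)
        accept = np.log(rng.uniform(size=n)) < logpi(prop) - logpi(x)
        x = np.where(accept, prop, x)
    return x
```

For a well-behaved target the final particles approximate the target distribution; the framework in the paper generalizes this pattern well beyond the geometric bridge used here.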