Results 1–5 of 5
Analysis of the Gibbs sampler for hierarchical inverse problems
Abstract

Cited by 1 (1 self)
Abstract Many inverse problems arising in applications come from continuum models where the unknown parameter is a field. In practice the unknown field is discretized, resulting in a problem in R^N, with an understanding that refining the discretization, that is, increasing N, will often be desirable. In the context of Bayesian inversion this situation suggests the importance of two issues: (i) defining hyperparameters in such a way that they are interpretable in the continuum limit N → ∞ and so that their values may be compared between different discretization levels; (ii) understanding the efficiency of algorithms for probing the posterior distribution, as a function of large N. Here we address these two issues in the context of linear inverse problems subject to additive Gaussian noise within a hierarchical modelling framework based on a Gaussian prior for the unknown field and an inverse-gamma prior for a hyperparameter, namely the amplitude of the prior variance. The structure of the model is such that the Gibbs sampler can be easily implemented for probing the posterior distribution. Subscribing to the dogma that one should think infinite-dimensionally before implementing in finite dimensions, we present function space intuition and provide rigorous theory showing that as N increases, the component of the Gibbs sampler for sampling the amplitude of the prior variance becomes increasingly slower. We discuss a reparametrization of the prior variance that …
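The two-component Gibbs sampler the abstract describes alternates between a Gaussian draw of the field and a conjugate inverse-gamma draw of the prior-variance amplitude. A minimal sketch, assuming a linear model y = A u + noise with u | delta ~ N(0, delta · C) and delta ~ InvGamma(a0, b0); the operator A, covariance C, and all hyperparameter values below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Illustrative two-component Gibbs sampler: (i) field u given amplitude delta,
# (ii) amplitude delta given u. All model choices here are assumptions.

rng = np.random.default_rng(0)
N = 20                         # discretization level
A = np.eye(N)                  # forward operator (identity, for illustration)
C = np.eye(N)                  # prior covariance (identity, for illustration)
noise_var = 0.1
a0, b0 = 2.0, 1.0              # inverse-gamma hyperparameters (assumed values)

u_true = rng.standard_normal(N)
y = A @ u_true + np.sqrt(noise_var) * rng.standard_normal(N)

C_inv = np.linalg.inv(C)
delta = 1.0
deltas = []
for it in range(500):
    # (i) u | delta, y is Gaussian: precision = A'A / noise_var + C^{-1} / delta
    prec = A.T @ A / noise_var + C_inv / delta
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ y / noise_var)
    u = rng.multivariate_normal(mean, cov)
    # (ii) delta | u is inverse-gamma by conjugacy
    shape = a0 + N / 2.0
    rate = b0 + 0.5 * u @ C_inv @ u
    delta = 1.0 / rng.gamma(shape, 1.0 / rate)   # InvGamma(shape, rate) draw
    deltas.append(delta)
```

The abstract's point is that the delta-component of this chain mixes increasingly slowly as N grows, which is what motivates the reparametrization it goes on to discuss.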
Parameter Expansion and Efficient Inference
, 2010
Abstract
This EM review article focuses on parameter expansion, a simple technique introduced in the PX-EM algorithm to make EM converge faster while maintaining its simplicity and stability. The primary objective concerns the connection between parameter expansion and efficient inference. It reviews the statistical interpretation of the PX-EM algorithm, in terms of efficient inference via bias reduction, and further unfolds the PX-EM mystery by looking at PX-EM from different perspectives. In addition, it briefly discusses potential applications of parameter expansion to statistical inference and the broader impact of statistical thinking on understanding and developing other iterative optimization algorithms.
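A classic toy illustration of parameter expansion, not taken from this article, is fitting a Student-t location and scale by EM: the expanded model adds a scale parameter to the latent precisions, and collapsing it back changes the usual variance update's divisor from n to the sum of the E-step weights. A hedged sketch under those assumptions:

```python
import numpy as np

# EM vs. a PX-EM-style variant for the location/scale of a t-distribution
# with known degrees of freedom nu. Illustrative example, assumed setup:
# the sole difference is the divisor in the sigma^2 update.

def fit_t(y, nu, px=False, iters=200):
    mu, s2 = np.mean(y), np.var(y)
    n = len(y)
    for _ in range(iters):
        # E-step: expected latent precisions given current (mu, s2)
        w = (nu + 1.0) / (nu + (y - mu) ** 2 / s2)
        # M-step
        mu = np.sum(w * y) / np.sum(w)
        resid2 = np.sum(w * (y - mu) ** 2)
        s2 = resid2 / (np.sum(w) if px else n)   # PX-EM: divide by sum(w)
    return mu, s2

rng = np.random.default_rng(1)
y = rng.standard_t(df=4, size=500) + 3.0
mu_em, _ = fit_t(y, nu=4.0)
mu_px, _ = fit_t(y, nu=4.0, px=True)
```

Both variants share the same fixed point (at the t MLE the weights sum to n), but the expanded update typically reaches it in fewer iterations, which is the speedup the article attributes to parameter expansion.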
Key words and phrases: EM algorithm, PX-EM algorithm, robit regression, non-identifiability.
Bayesian Computation in Color–Magnitude Diagrams
Improving the Data Augmentation algorithm in the two-block setup
Abstract
The Data Augmentation (DA) approach to approximate sampling from an intractable probability density f_X is based on the construction of a joint density, f_{X,Y}, whose conditional densities, f_{X|Y} and f_{Y|X}, can be straightforwardly sampled. However, many applications of the DA algorithm do not fall in this “single-block” setup. In these applications, X is partitioned into two components, X = (U, V), in such a way that it is easy to sample from f_{Y|X}, f_{U|V,Y} and f_{V|U,Y}. We refer to this alternative version of DA, which is effectively a three-variable Gibbs sampler, as “two-block” DA. We develop two methods to improve the performance of the DA algorithm in the two-block setup. These methods are motivated by the Haar PX-DA algorithm, which was developed in previous literature to improve the performance of the single-block DA algorithm. The Haar PX-DA algorithm, which adds a computationally inexpensive extra step in each iteration of the DA algorithm while preserving the stationary density, has been shown to be optimal among similar techniques. However, as we illustrate, the Haar PX-DA algorithm does not lead to the required stationary density f_X in the two-block setup. Our methods incorporate suitable generalizations and modifications to this approach, and work in the two-block setup. We provide a theoretical comparison of our methods to the two-block DA algorithm, a much harder task than in the single-block setup due to non-reversibility and structural complexities. We successfully apply our methods to applications of the two-block DA algorithm in Bayesian robit regression and Bayesian quantile regression.
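The two-block DA scan described above is a systematic three-variable Gibbs loop: draw Y | U, V, then U | V, Y, then V | U, Y, keeping only (U, V). A minimal sketch on an assumed toy target, where (U, V, Y) is jointly Gaussian with unit-diagonal precision and off-diagonal entries -0.4, so every full conditional is N(0.4 · (sum of the other two), 1); the paper's extra Haar-PX-DA-style correction step is model-specific and is not reproduced here:

```python
import numpy as np

# Two-block DA as a three-variable Gibbs sampler on a toy Gaussian joint
# (illustrative assumption, not the paper's models): precision matrix with
# unit diagonal and off-diagonals -0.4, so each full conditional is
# N(0.4 * (sum of the other two variables), 1).

rng = np.random.default_rng(2)
u, v, y = 0.0, 0.0, 0.0
us, vs = [], []
for it in range(20000):
    y = 0.4 * (u + v) + rng.standard_normal()   # draw Y | U, V
    u = 0.4 * (v + y) + rng.standard_normal()   # draw U | V, Y
    v = 0.4 * (u + y) + rng.standard_normal()   # draw V | U, Y
    us.append(u)
    vs.append(v)

# (us, vs) approximate the marginal f_X of X = (U, V); Y is discarded.
```

Note the non-reversibility the abstract mentions: unlike single-block DA, this systematic scan over three conditionals is not a reversible Markov chain, which is what complicates the theoretical comparison.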