Results 1–8 of 8
To Center or Not To Center: That Is Not The Question
(in progress) Paul Baines, Bayesian Computation in Color-Magnitude Diagrams
, 2009
Abstract

Cited by 3 (0 self)
For a broad class of multilevel models, there exist two well-known competing parameterizations, the centered parameterization (CP) and the non-centered parameterization (NCP), for effective MCMC implementation. Much literature has been devoted to the questions of when to use which and how to compromise between them via partial CP/NCP. This paper introduces an alternative strategy for boosting MCMC efficiency by simply interweaving (but not alternating) the two parameterizations. This strategy has the surprising property that failure of both the CP and NCP chains to converge geometrically does not prevent the interweaving algorithm from doing so. It achieves this seemingly magical property by taking advantage of the discordance of the two parameterizations, namely the sufficiency of CP and the ancillarity of NCP, to substantially reduce the Markovian dependence, especially when the original CP and NCP form a "beauty and beast" pair (i.e., when one chain mixes far more rapidly than the other). The ancillarity-sufficiency reformulation of the CP-NCP dichotomy allows us to borrow insight from the well-known Basu's theorem on the independence of (complete) sufficient and ancillary statistics, albeit a Bayesian version of Basu's …
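The interweaving idea can be illustrated on a toy hierarchy. The sketch below uses a simple normal-normal model with known variances and a flat prior on mu (assumptions of this illustration, not of the abstract): a CP update is followed by a move to the ancillary (NCP) scale, a redraw of mu there, and a map back.

```python
import numpy as np

def asis_sampler(y, V=1.0, n_iter=2000, seed=0):
    """Interweaving (ASIS-style) sampler for the toy model
    y_i ~ N(theta_i, 1), theta_i ~ N(mu, V), flat prior on mu.
    V is treated as known purely for simplicity of this sketch."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mu = y.mean()
    mu_draws = np.empty(n_iter)
    for t in range(n_iter):
        # CP step: theta_i | y, mu, then mu | theta
        theta = rng.normal((V * y + mu) / (V + 1.0), np.sqrt(V / (V + 1.0)))
        mu = rng.normal(theta.mean(), np.sqrt(V / n))
        # Interweave: move to the NCP (ancillary) scale and redraw mu there
        theta_tilde = theta - mu                       # ancillary given mu
        mu = rng.normal((y - theta_tilde).mean(), np.sqrt(1.0 / n))
        theta = mu + theta_tilde                       # back to the CP scale
        mu_draws[t] = mu
    return mu_draws
```

Because theta - mu is ancillary while theta itself is sufficient for mu, the two mu-updates are driven by largely complementary pieces of information, which is what undercuts the slow mixing of either chain alone.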
Marginal Markov Chain Monte Carlo Methods
, 2008
Abstract

Cited by 1 (1 self)
Marginal Data Augmentation and Parameter-Expanded Data Augmentation are related methods for improving the convergence properties of the two-step Gibbs sampler known as the Data Augmentation sampler. These methods expand the parameter space with a so-called working parameter that is unidentifiable given the observed data but is identifiable given the so-called augmented data. Although these methods can result in enormous computational gains, their use has been somewhat limited due to the constrained framework they are constructed under and the necessary identification of a working parameter. This article proposes a new prescriptive framework that greatly expands the class of problems that can benefit from the key idea underlying these methods. In particular, we show how working parameters can automatically be introduced into any Gibbs sampler. Since these samplers are typically used in a Bayesian framework, the working parameter requires a prior distribution, and the convergence properties of the Markov chain depend on the choice of this distribution. Under certain conditions the optimal choice is improper and results in a joint Markov chain on the expanded parameter space that is not positive recurrent. This leads to unexplored technical difficulties when one attempts to exploit the computational advantage in multi-step MCMC samplers, the very chains that might benefit most from this technology. In this article we develop strategies and theory that allow optimal marginal methods to be used in multi-step samplers. We illustrate the potential to dramatically improve the convergence properties of MCMC samplers by applying the marginal Gibbs sampler to a logistic mixed model.

1 Expanding State Spaces in MCMC

Constructing a Markov chain on an expanded state space in the context of Monte Carlo sampling can greatly simplify the required component draws or lead to chains with better mixing properties.
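A deliberately simple illustration of a working parameter with an improper working prior: take y_i | z_i ~ N(z_i, 1), z_i | theta ~ N(theta, 1), flat prior on theta, and translate the augmented data by a working parameter t with Haar (Lebesgue) working measure. Both the model and the translation-group construction are assumptions of this sketch, not the article's framework.

```python
import numpy as np

def px_da_sampler(y, n_iter=2000, seed=1):
    """Marginal (parameter-expanded) DA sampler for the toy model
    y_i | z_i ~ N(z_i, 1), z_i | theta ~ N(theta, 1), flat prior on theta.
    A translation working parameter t shifts the augmented data; its
    working 'prior' is the improper Haar measure on the real line."""
    rng = np.random.default_rng(seed)
    n = len(y)
    theta = y.mean()
    draws = np.empty(n_iter)
    for it in range(n_iter):
        # Standard DA step: impute the augmented data
        z = rng.normal((y + theta) / 2.0, np.sqrt(0.5))
        # Working-parameter step: translate z by t ~ p(t | z);
        # for this model that conditional is N(ybar - zbar, 1/n)
        t = rng.normal(y.mean() - z.mean(), np.sqrt(1.0 / n))
        z = z + t
        # Draw theta given the shifted augmented data
        theta = rng.normal(z.mean(), np.sqrt(1.0 / n))
        draws[it] = theta
    return draws
```

In this toy model the translation step makes each theta draw an exact independent draw from the posterior N(ybar, 2/n), a small-scale illustration of the gains the abstract describes.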
Parameter Expansion and Efficient Inference
, 2010
Abstract
This EM review article focuses on parameter expansion, a simple technique introduced with the PX-EM algorithm to make EM converge faster while maintaining its simplicity and stability. The primary objective concerns the connection between parameter expansion and efficient inference. It reviews the statistical interpretation of the PX-EM algorithm, in terms of efficient inference via bias reduction, and further unfolds the PX-EM mystery by looking at PX-EM from different perspectives. In addition, it briefly discusses potential applications of parameter expansion to statistical inference and the broader impact of statistical thinking on understanding and developing other iterative optimization algorithms.
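A standard instance of parameter expansion is PX-EM for probit regression, where a working residual variance sigma^2 is attached to the latent-variable model and rescaled away after each M-step. The sketch below assumes that particular example (it is not drawn from the review itself):

```python
import math
import numpy as np

_erf = np.vectorize(math.erf)

def _Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + _erf(x / math.sqrt(2.0)))

def _phi(x):  # standard normal density
    return np.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def px_em_probit(X, y, n_iter=30):
    """PX-EM for probit regression.  Expanded model:
    z_i ~ N(x_i' beta_star, sigma2), y_i = 1{z_i > 0}.  The working
    parameter sigma2 is unidentified in the observed data; the reduction
    beta = beta_star / sigma maps back to the original model."""
    n, p = X.shape
    beta = np.zeros(p)
    XtX_inv = np.linalg.inv(X.T @ X)
    s = 2.0 * y - 1.0  # +1 / -1 coding of the binary response
    ll_trace = []
    for _ in range(n_iter):
        eta = X @ beta
        # E-step: truncated-normal moments of z_i given y_i and current beta
        m = eta + s * _phi(eta) / _Phi(s * eta)   # E[z_i]
        m2 = 1.0 + eta * m                        # E[z_i^2]
        # M-step in the expanded model: OLS for beta_star, residual
        # variance for sigma2
        beta_star = XtX_inv @ (X.T @ m)
        sigma2 = (m2.sum() - beta_star @ (X.T @ m)) / n
        # Reduction step: return to the identified parameterization
        beta = beta_star / math.sqrt(sigma2)
        ll_trace.append(np.log(_Phi(s * (X @ beta))).sum())
    return beta, np.array(ll_trace)
```

The reduction step costs nothing extra per iteration yet accelerates convergence, while the observed-data log-likelihood still increases monotonically, exactly the "faster while maintaining its simplicity and stability" trade-off the abstract highlights.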
Rejoinder: Be All Our Insomnia Remembered...
, 2011
Abstract
The evolutionary history from DA to GIS and, more generally, to CIS may well be cited by a future Stephen Stigler to advance a new Stigler's Law: "No scientific idea is originated from a single team." Putting aside the well-known connection between EM and DA (see Tanner and Wong, 2010 and van Dyk and Meng, 2010), we have witnessed how the idea of introducing a non-identifiable parameter into DA schemes, for the purpose of better algorithmic efficiency, was independently and simultaneously developed by two research teams (Liu and Wu, 1999 and Meng and van Dyk, 1999). Subsequently, the idea of utilizing or combining multiple DA schemes has been pursued by (at least) three teams, from seemingly different angles. Roberts and Papaspiliopoulos's team has been investigating the partially non-centered parameterization (see, e.g., Papaspiliopoulos, Roberts and Sköld, 2007), whose power and versatility are nicely illustrated by the discussion by Papaspiliopoulos, Roberts, and Sermaidis (PRS). The idea of partial non-centering is to introduce a tuning parameter w into the non-centering scheme (i.e., a DA scheme) and then seek its optimal value for the fastest convergence. It is therefore equivalent to the conditional augmentation approach (Meng and van Dyk, 1999 and van Dyk and Meng, 2001), where w is known as a working parameter and is determined by the same optimality criterion; often the …
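The partially non-centered scheme with tuning parameter w can be sketched on a toy normal hierarchy (known variances and a flat prior are assumptions of this sketch, not PRS's general construction): with y_i ~ N(theta_i, 1) and theta_i ~ N(mu, V), reparameterize to gamma_i = theta_i - w*mu, so that w = 0 recovers the centered scheme and w = 1 the non-centered one.

```python
import numpy as np

def partially_noncentered_gibbs(y, V=1.0, w=0.5, n_iter=2000, seed=0):
    """Gibbs sampler on (gamma, mu) where gamma_i = theta_i - w*mu.
    The tuning (working) parameter w interpolates between the centered
    (w = 0) and non-centered (w = 1) parameterizations."""
    rng = np.random.default_rng(seed)
    n = len(y)
    mu = y.mean()
    draws = np.empty(n_iter)
    for it in range(n_iter):
        # gamma_i | y, mu: prior N((1-w)mu, V), data y_i - w*mu ~ N(gamma_i, 1)
        gamma = rng.normal(
            (V * (y - w * mu) + (1.0 - w) * mu) / (V + 1.0),
            np.sqrt(V / (V + 1.0)))
        # mu | y, gamma: with a flat prior, both the shifted data and the
        # gammas carry information about mu unless w is exactly 0 or 1
        prec = n * (w ** 2 + (1.0 - w) ** 2 / V)
        mean = (w * np.sum(y - gamma) + (1.0 - w) / V * np.sum(gamma)) / prec
        mu = rng.normal(mean, np.sqrt(1.0 / prec))
        draws[it] = mu
    return draws
```

In this toy model the optimal tuning value is w = V/(V + 1), illustrating how a single scalar working parameter can be tuned for the fastest convergence.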
Confidence Sets for Network Structure
, 2011
Abstract
Latent variable models are frequently used to identify structure in dichotomous network data, in part because they give rise to a Bernoulli product likelihood that is both well understood and consistent with the notion of exchangeable random graphs. In this article, we propose conservative confidence sets that hold with respect to these underlying Bernoulli parameters as a function of any given partition of network nodes, enabling us to assess estimates of residual network structure, that is, structure that cannot be explained by known covariates and thus cannot be easily verified by manual inspection. We demonstrate the proposed methodology by analyzing student friendship networks from the National Longitudinal Survey of Adolescent Health that include race, gender, and school year as covariates. We employ a stochastic expectation-maximization algorithm to fit a logistic regression model that includes these explanatory variables as well as a latent stochastic blockmodel component and additional node-specific effects. Although maximum-likelihood estimates do not appear consistent in this context, we are able to evaluate confidence sets as a function of different blockmodel partitions, which enables us to qualitatively assess the significance of estimated residual network structure relative to a baseline, which models covariates but lacks block structure.
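As a rough numerical illustration of simultaneous block-wise Bernoulli bounds for a given node partition, the sketch below uses a Hoeffding-plus-Bonferroni construction chosen for simplicity; it is a conservative stand-in, not necessarily the article's tighter confidence sets.

```python
import numpy as np

def block_confidence_sets(A, labels, alpha=0.05):
    """Simultaneous conservative intervals for the Bernoulli edge
    probability of every block that a node partition `labels` induces on
    an undirected adjacency matrix A.  Uses Hoeffding's inequality with a
    Bonferroni correction across all blocks."""
    ks = np.unique(labels)
    n_blocks = len(ks) * (len(ks) + 1) // 2
    out = {}
    for i, a in enumerate(ks):
        for b in ks[i:]:
            rows = np.where(labels == a)[0]
            cols = np.where(labels == b)[0]
            if a == b:
                # within-block: unordered node pairs, no self-loops
                pairs = [(u, v) for u in rows for v in cols if u < v]
            else:
                pairs = [(u, v) for u in rows for v in cols]
            m = len(pairs)
            phat = sum(A[u, v] for u, v in pairs) / m
            half = np.sqrt(np.log(2.0 * n_blocks / alpha) / (2.0 * m))
            out[(a, b)] = (max(0.0, phat - half), min(1.0, phat + half))
    return out
```

Comparing such intervals across candidate partitions gives a simple, if loose, way to judge whether apparent block structure exceeds what a covariate-only baseline would produce.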
unknown title
, 2012
Abstract
Data augmentation for non-Gaussian regression models using variance-mean mixtures