Results 1–10 of 1,856
Gibbs Sampling
 Journal of the American Statistical Association
, 1995
"... 8> R f(`)d`. To marginalize, say for ` i ; requires h(` i ) = R f(`)d` (i) = R f(`)d` where ` (i) denotes all components of ` save ` i : To obtain Eg(` i ) requires similar integration; to obtain the marginal distribution of say g(`) or its expectation requires similar integration. When p i ..."
Abstract

Cited by 28 (0 self)
is large (as it will be in the applications we envision), such integration is analytically infeasible (the so-called curse of dimensionality). Gibbs sampling provides a Monte Carlo approach for carrying out such integrations. In what sorts of settings would we have need to mar ...
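The abstract's point — that marginal quantities like h(θ_i) or E g(θ_i) become analytically infeasible in high dimensions, but can be estimated by alternating draws from full conditionals — can be illustrated with a toy example. The sketch below is not the paper's model; it assumes a bivariate standard normal with correlation ρ, chosen only because both full conditionals are themselves normal and known in closed form:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=2000, seed=0):
    """Gibbs sampler for a bivariate standard normal with correlation rho.

    The full conditionals are normal:
        x | y ~ N(rho * y, 1 - rho^2)
        y | x ~ N(rho * x, 1 - rho^2)
    so the marginal of x is estimated by alternating draws rather than
    by integrating the joint density analytically.
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x, y = 0.0, 0.0
    xs = []
    for t in range(n_iter):
        x = rng.gauss(rho * y, sd)   # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)   # draw y from p(y | x)
        if t >= burn_in:             # discard burn-in draws
            xs.append(x)
    return xs

samples = gibbs_bivariate_normal(rho=0.8)
mean_x = sum(samples) / len(samples)
var_x = sum((s - mean_x) ** 2 for s in samples) / len(samples)
```

The retained draws approximate the marginal of x (here a standard normal), so `mean_x` and `var_x` should settle near 0 and 1; the same mechanism scales to the high-dimensional posteriors the abstract has in mind, where only the conditionals are tractable.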
Incorporating non-local information into information extraction systems by Gibbs sampling
 In ACL
, 2005
"... Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling, ..."
Abstract

Cited by 730 (25 self)
Most current statistical natural language processing models use only local features so as to permit dynamic programming in inference, but this makes them unable to fully account for the long distance structure that is prevalent in language use. We show how to solve this dilemma with Gibbs sampling
Gibbs Sampling Methods for Stick-Breaking Priors
"... ... In this paper we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stickbreaking priors. The first type of Gibbs sampler, referred to as a Polya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling meth ..."
Abstract

Cited by 388 (19 self)
... In this paper we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stick-breaking priors. The first type of Gibbs sampler, referred to as a Polya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling
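The stick-breaking construction behind these priors is simple to state: draw β_k ~ Beta(1, α) and set the k-th weight to w_k = β_k ∏_{j<k} (1 − β_j), i.e. each draw breaks off a fraction of the stick that remains. The sketch below illustrates only this weight construction (truncated at a fixed number of atoms, a common approximation), not either of the paper's two samplers:

```python
import random

def stick_breaking_weights(alpha, truncation=50, seed=0):
    """Truncated stick-breaking weights, as in a Dirichlet process prior.

    beta_k ~ Beta(1, alpha);  w_k = beta_k * prod_{j<k} (1 - beta_j).
    Smaller alpha concentrates mass on the first few atoms.
    """
    rng = random.Random(seed)
    remaining = 1.0            # length of stick not yet broken off
    weights = []
    for _ in range(truncation - 1):
        beta = rng.betavariate(1.0, alpha)
        weights.append(beta * remaining)
        remaining *= 1.0 - beta
    weights.append(remaining)  # assign leftover mass to the last atom
    return weights

w = stick_breaking_weights(alpha=2.0)
```

By construction the weights are nonnegative and sum to one, which is what lets a finite truncation stand in for the infinite-dimensional prior inside a blocked Gibbs sweep.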
Gibbs Sampling
"... It is a pleasure to congratulate the authors for this excellent, original and pedagogical paper. I read a preliminary draft at the end of 2006 and I then mentioned to the authors that their work should be set within the framework of Lancaster probabilities, a remoted corner of the theory of probab ..."
Abstract
It is a pleasure to congratulate the authors for this excellent, original and pedagogical paper. I read a preliminary draft at the end of 2006 and I then mentioned to the authors that their work should be set within the framework of Lancaster probabilities, a remote corner of the theory of probability, now described in their Section 6.1. The reader is referred to Lancaster (1958, 1963, 1975) and the synthesis by Koudou (1995, 1996) for more details. Given probabilities μ(dx) and ν(dy) on spaces X and Y, and given orthonormal bases p = (p_n(x)) and q = (q_n(y)) of L²(μ) and L²(ν), a probability σ on X × Y is said to be of the Lancaster type if either there exists a sequence ρ = (ρ_n) in ℓ² such that σ(dx, dy) = ∑_n ρ_n p_n(x) q_n(y) μ(dx)ν(dy), or σ is a weak limit of such probabilities. Alternatively, one can say that the sequence of signed measures [∑_{n=0}^{N} ρ_n p_n(x) q_n(y)] μ(dx)ν(dy) converges weakly toward the probability σ when N → ∞ (here ρ does not need to be in ℓ²). An acceptable sequence ρ = (ρ_n) is called a Lancaster sequence for the quadruple (μ, ν, p, q). If p_0 = q_0 = 1, the margins of σ are (μ, ν). Writing σ(dx, dy) = μ(dx)K(x, dy) = ν(dy)L(y, dx), the probability kernel of the "x-chain" considered in the paper is k(x, dx′) = ∫_Y
On Lifting the Gibbs Sampling Algorithm
"... Firstorder probabilistic models combine the power of firstorder logic, the de facto tool for handling relational structure, with probabilistic graphical models, the de facto tool for handling uncertainty. Lifted probabilistic inference algorithms for them have been the subject of much recent resea ..."
Abstract

Cited by 15 (10 self)
research. The main idea in these algorithms is to improve the accuracy and scalability of existing graphical models’ inference algorithms by exploiting symmetry in the first-order representation. In this paper, we consider blocked Gibbs sampling, an advanced MCMC scheme, and lift it to the first
Gibbs Sampling For Signal Reconstruction
 University of Warwick
, 1997
"... This paper describes the use of stochastic simulation techniques to reconstruct biomedical signals not directly measurable. In particular, a deconvolution problem with an uncertain clearance parameter is considered. The problem is addressed using a Monte Carlo Markov Chain method, called the Gibb ..."
Abstract

Cited by 1 (1 self)
Gibbs sampling, in which the joint posterior probability distribution of the stochastic parameters is estimated through sampling from conditional distributions. This method provides a fully Bayesian solution to signal smoothing and deconvolution.
Bayesian inference in the semiparametric log normal frailty model using Gibbs sampling
 Genetics, Selection, Evolution
, 1998
"... Gibbs sampling ..."
Alternatives to the Gibbs Sampling Scheme
, 1992
"... A variation of the Gibbs sampling scheme is defined by driving the simulated Markov chain by the conditional distributions of an approximation to the posterior rather than the posterior distribution itself. Choosing a multivariate normal mixture form for the approximation enables reparametrization w ..."
Abstract

Cited by 9 (1 self)
A variation of the Gibbs sampling scheme is defined by driving the simulated Markov chain by the conditional distributions of an approximation to the posterior rather than the posterior distribution itself. Choosing a multivariate normal mixture form for the approximation enables reparametrization
Gibbs Sampling for the Uninitiated
, 2009
"... VERSION 0.3 This document is intended for computer scientists who would like to try out a Markov Chain Monte Carlo (MCMC) technique, particularly in order to do inference with Bayesian models on problems related to text processing. We try to keep theory to the absolute minimum needed, and we work th ..."
Abstract

Cited by 16 (4 self)
through the details much more explicitly than you usually see even in “introductory” explanations. That means we’ve attempted to be ridiculously explicit in our exposition and notation. After providing the reasons and reasoning behind Gibbs sampling (and at least nodding our heads in the direction