Results 1 - 10 of 16
Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics
, 1996
"... For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has ..."
Abstract

Cited by 411 (13 self)
For many applications it is useful to sample from a finite set of objects in accordance with some particular distribution. One approach is to run an ergodic (i.e., irreducible aperiodic) Markov chain whose stationary distribution is the desired distribution on this set; after the Markov chain has run for M steps, with M sufficiently large, the distribution governing the state of the chain approximates the desired distribution. Unfortunately it can be difficult to determine how large M needs to be. We describe a simple variant of this method that determines on its own when to stop, and that outputs samples in exact accordance with the desired distribution. The method uses couplings, which have also played a role in other sampling schemes; however, rather than running the coupled chains from the present into the future, one runs from a distant point in the past up until the present, where the distance into the past that one needs to go is determined during the running of the al...
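The coupling-from-the-past construction the abstract describes can be illustrated on a toy chain. The sketch below is not from the paper: it assumes a lazy reflecting random walk on {0, ..., n-1} (whose stationary distribution is uniform), and the function names are illustrative. Coupled chains are started from the two extremal states ever further in the past, reusing the same random numbers for each time step, until they coalesce by time 0:

```python
import random

def step(x, v, n):
    # Monotone update for a lazy reflecting walk on {0, ..., n-1}:
    # move down if v < 1/2, up otherwise, holding at the boundaries.
    if v < 0.5:
        return max(x - 1, 0)
    return min(x + 1, n - 1)

def cftp(n=5, seed=None):
    # Coupling from the past: run coupled chains from the extremal
    # states 0 and n-1, starting T steps back and doubling T until
    # the two trajectories coalesce by time 0.  Crucially, u[t] is
    # drawn once per time t and reused on every restart.
    rng = random.Random(seed)
    u = {}
    T = 1
    while True:
        for t in range(-T, -(T // 2)):
            u[t] = rng.random()
        lo, hi = 0, n - 1
        for t in range(-T, 0):
            lo = step(lo, u[t], n)
            hi = step(hi, u[t], n)
        if lo == hi:
            return lo  # an exact draw from the uniform distribution
        T *= 2
```

Because the update is monotone, coalescence of the two extremal trajectories implies coalescence of all starting states, which is what makes the returned state an exact sample rather than an approximate one.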
Exact Sampling From Anti-Monotone Systems
 Statistica Neerlandica
, 1998
"... A new approach to Markov chain Monte Carlo simulation was recently proposed by Propp and Wilson. This approach, unlike traditional ones, yields samples which have exactly the desired distribution. The ProppWilson algorithm requires this distribution to have a certain structure called monotonicity. ..."
Abstract

Cited by 39 (1 self)
A new approach to Markov chain Monte Carlo simulation was recently proposed by Propp and Wilson. This approach, unlike traditional ones, yields samples which have exactly the desired distribution. The Propp-Wilson algorithm requires this distribution to have a certain structure called monotonicity. In this paper an idea of Kendall is applied to show how the algorithm can be extended to the case where monotonicity is replaced by anti-monotonicity. As illustrative examples, simulations of the hard-core model and the random-cluster model are presented.
Markov Chain Monte Carlo for Statistical Inference
 University of Washington, Center for
, 2000
"... These notes provide an introduction to Markov chain Monte Carlo methods that are useful in both Bayesian and frequent... ..."
Abstract

Cited by 19 (0 self)
These notes provide an introduction to Markov chain Monte Carlo methods that are useful in both Bayesian and frequentist ...
Markov Connected Component Fields
"... A new class of Gibbsian models with potentials associated to the connected components or homogeneous parts of images is introduced. For these models the neighbourhood of a pixel is not fixed as for Markov random fields, but given by the components which are adjacent to the pixel. The relationship to ..."
Abstract

Cited by 7 (2 self)
A new class of Gibbsian models with potentials associated to the connected components or homogeneous parts of images is introduced. For these models the neighbourhood of a pixel is not fixed as for Markov random fields, but given by the components which are adjacent to the pixel. The relationship to Markov random fields and marked point processes is explored and spatial Markov properties are established. Also extensions to infinite lattices are studied, and statistical inference problems including geostatistical applications and statistical image analysis are discussed. Finally, simulation studies are presented which show that the models may be appropriate for a variety of interesting patterns including images exhibiting intermediate degrees of spatial continuity and images of objects against background.
Markov chain Monte Carlo methods for statistical inference
, 2004
"... These notes provide an introduction to Markov chain Monte Carlo methods and their applications to both Bayesian and frequentist statistical inference. Such methods have revolutionized what can be achieved computationally, especially in the Bayesian paradigm. The account begins by discussing ordinary ..."
Abstract

Cited by 7 (0 self)
These notes provide an introduction to Markov chain Monte Carlo methods and their applications to both Bayesian and frequentist statistical inference. Such methods have revolutionized what can be achieved computationally, especially in the Bayesian paradigm. The account begins by discussing ordinary Monte Carlo methods: these have the same goals as the Markov chain versions but can only rarely be implemented. Subsequent sections describe basic Markov chain Monte Carlo, based on the Hastings algorithm and including both the Metropolis method and the Gibbs sampler as special cases, and go on to discuss some more specialized developments, including adaptive slice sampling, exact goodness-of-fit tests, maximum likelihood estimation, the Langevin-Hastings algorithm, auxiliary variable techniques, perfect sampling via coupling from the past, reversible jump methods for target spaces of varying dimensions, and simulated annealing. Specimen applications are described throughout the notes.
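As a concrete instance of the basic machinery these notes cover, here is a minimal random-walk Metropolis sampler, the Metropolis special case of the Hastings algorithm with a symmetric Gaussian proposal. This is a generic sketch, not code from the notes; the target and parameter names are illustrative:

```python
import math
import random

def metropolis(logp, x0, n_steps, scale=1.0, seed=0):
    # Random-walk Metropolis: propose y = x + N(0, scale^2) and
    # accept with probability min(1, p(y) / p(x)).  Since the
    # proposal is symmetric, the Hastings ratio reduces to the
    # ratio of target densities.
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, scale)
        if math.log(rng.random()) < logp(y) - logp(x):
            x = y
        samples.append(x)
    return samples

# Target: standard normal, given by its log-density up to a constant.
chain = metropolis(lambda z: -0.5 * z * z, x0=0.0, n_steps=20000)
```

Note that only the log-density up to an additive constant is needed, which is exactly why such methods are convenient for Bayesian posteriors with intractable normalizing constants.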
A bounding chain for Swendsen-Wang
 Random Structures & Algorithms
"... ABSTRACT: The greatst drawback of Monte Carlo Markov chain methods is lack of knowledge of the mixing time of the chain. The use of bounding chains solves this difficulty for some chains by giving theoretical and experimental upper bounds on the mixing time. Moreover, when used with methodologies su ..."
Abstract

Cited by 5 (0 self)
The greatest drawback of Monte Carlo Markov chain methods is lack of knowledge of the mixing time of the chain. The use of bounding chains solves this difficulty for some chains by giving theoretical and experimental upper bounds on the mixing time. Moreover, when used with methodologies such as coupling from the past, bounding chains allow the user to take samples drawn exactly from the stationary distribution without knowledge of the mixing time. Here we present a bounding chain for the Swendsen-Wang process. The Swendsen-Wang bounding chain allows us to efficiently obtain exact samples from the ferromagnetic Q-state Potts model for certain classes of graphs. Also, by analyzing this bounding chain, we will show that Swendsen-Wang is rapidly mixing over a slightly larger range of parameters than was known previously. © 2002 Wiley
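The bounding-chain construction itself is too involved for a short sketch, but the underlying Swendsen-Wang update the abstract refers to is compact. The following is an illustrative sketch (not from the paper), assuming the ferromagnetic q-state Potts model with unit coupling, so an edge joining equal spins is opened with probability 1 - exp(-beta):

```python
import math
import random

def swendsen_wang_step(spins, edges, q, beta, rng):
    # One Swendsen-Wang update: open same-spin bonds at random,
    # then recolour each cluster of open bonds uniformly.
    p = 1.0 - math.exp(-beta)
    parent = list(range(len(spins)))

    def find(i):
        # Union-find root lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # 1. Open each edge joining equal spins independently with prob p.
    for i, j in edges:
        if spins[i] == spins[j] and rng.random() < p:
            parent[find(i)] = find(j)

    # 2. Assign every bond cluster a fresh uniform spin in {0, ..., q-1}.
    colour = {}
    for v in range(len(spins)):
        r = find(v)
        if r not in colour:
            colour[r] = rng.randrange(q)
        spins[v] = colour[r]
    return spins
```

Because whole clusters flip at once, this chain can mix far faster than single-site Glauber dynamics at low temperature, which is what makes bounds on its mixing time valuable.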
The Monte Carlo Method in Science and Engineering
, 2006
"... Since 1953, researchers have applied the Monte Carlo method to a wide range of areas. Specialized algorithms have also been developed to extend the method’s applicability and efficiency. The author describes some of the algorithms that have been developed to ..."
Abstract

Cited by 4 (0 self)
Since 1953, researchers have applied the Monte Carlo method to a wide range of areas. Specialized algorithms have also been developed to extend the method’s applicability and efficiency. The author describes some of the algorithms that have been developed to
Aspects Of Spatial Statistics, Stochastic Geometry And Markov Chain Monte Carlo Methods
, 1999
"... ..."
Exact Sampling with Markov Chains
, 1996
"... Random sampling has found numerous applications in computer science, statistics, and physics. The most widely applicable method of random sampling is to use a Markov chain whose steady state distribution is the probability distribution ß from which we wish to sample. After the Markov chain has been ..."
Abstract

Cited by 3 (0 self)
Random sampling has found numerous applications in computer science, statistics, and physics. The most widely applicable method of random sampling is to use a Markov chain whose steady state distribution is the probability distribution π from which we wish to sample. After the Markov chain has been run for long enough, its state is approximately distributed according to π. The principal problem with this approach is that it is often difficult to determine how long to run the Markov chain. In this thesis we present several algorithms that use Markov chains to return samples distributed exactly according to π. The algorithms determine on their own how long to run the Markov chain. Two of the algorithms may be used with any Markov chain, but are useful only if the state space is not too large. Nonetheless, a spinoff of these two algorithms is a procedure for sampling random spanning trees of a directed graph that runs more quickly than the Aldous/Broder algorithm. Another of the exact sa...
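For reference, the classical Aldous/Broder procedure that the thesis's spanning-tree sampler improves upon can be sketched for undirected graphs: run a simple random walk and keep each edge along which the walk first enters a new vertex. This is a generic sketch of the well-known algorithm, with illustrative names, not code from the thesis:

```python
import random

def aldous_broder(adj, seed=None):
    # Sample a uniformly random spanning tree of a connected
    # undirected graph, given as an adjacency-list dict.
    rng = random.Random(seed)
    nodes = list(adj)
    current = rng.choice(nodes)
    visited = {current}
    tree = []
    while len(visited) < len(nodes):
        nxt = rng.choice(adj[current])
        if nxt not in visited:
            tree.append((current, nxt))  # first-entrance edge
            visited.add(nxt)
        current = nxt
    return tree
```

The walk's expected running time is governed by the graph's cover time, which is precisely the cost the thesis's faster procedure reduces.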