Results 1–10 of 83
Non-Uniform Random Variate Generation
, 1986
Cited by 646 (21 self)
Abstract. This is a survey of the main methods in nonuniform random variate generation, and highlights recent research on the subject. Classical paradigms such as inversion, rejection, guide tables, and transformations are reviewed. We provide information on the expected time complexity of various algorithms, before addressing modern topics such as indirectly specified distributions, random processes, and Markov chain methods.
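The inversion and rejection paradigms this survey reviews can be sketched in a few lines. The following is an illustrative example of our own (not code from the survey): inversion draws an exponential by applying the inverse CDF to a uniform, and rejection samples the density f(x) = 6x(1−x) on [0,1] using uniform proposals under the envelope constant M = 1.5 = max f.

```python
import math
import random

def exp_inversion(rate, u):
    """Inversion: apply the inverse CDF F^{-1}(u) = -log(1-u)/rate to a uniform."""
    return -math.log(1.0 - u) / rate

def beta22_rejection(rng):
    """Rejection: sample f(x) = 6x(1-x) using Uniform(0,1) proposals and
    envelope constant M = 1.5 = max_x f(x); accept with prob f(x)/(M*g(x))."""
    while True:
        x = rng.random()                        # proposal from g(x) = 1
        if rng.random() <= 6 * x * (1 - x) / 1.5:
            return x

rng = random.Random(0)
exp_samples = [exp_inversion(2.0, rng.random()) for _ in range(20000)]
beta_samples = [beta22_rejection(rng) for _ in range(20000)]
```

Both routines need only uniform random bits, which is the sense in which non-uniform generation reduces to transforming a uniform source.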
An Introduction to MCMC for Machine Learning
, 2003
Cited by 235 (2 self)
The purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing an introduction to the remaining papers of this special issue. Lastly, it discusses interesting new research horizons.
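The central building block such introductions review is the Metropolis–Hastings sampler. A minimal random-walk version, sketched here for illustration (not code from the paper), needs only an unnormalised log-density of the target:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept with
    probability min(1, pi(x')/pi(x)). Only an unnormalised density is needed,
    since the normalising constant cancels in the ratio."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_target(prop) - log_target(x):
            x = prop                       # accept; otherwise keep current state
        samples.append(x)
    return samples

# Target: standard normal, via its unnormalised log density -x^2/2.
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_steps=50000)
```

Averages over `chain` approximate expectations under the target; the chain is correlated, so effective sample size is smaller than its length.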
Iterated random functions
 SIAM Review
, 1999
Cited by 135 (1 self)
Abstract. Iterated random functions are used to draw pictures or simulate large Ising models, among other applications. They offer a method for studying the steady-state distribution of a Markov chain, and give useful bounds on rates of convergence in a variety of examples. The present paper surveys the field and presents some new examples. There is a simple unifying idea: the iterates of random Lipschitz functions converge if the functions are contracting on the average.
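The contraction-on-average idea can be seen numerically with random affine maps. In this toy sketch of ours (not from the paper), two chains started far apart are driven by the *same* sequence of random functions f(x) = ax + b with |a| < 1; their distance is multiplied by |a| at each step, so they couple exponentially fast:

```python
import random

def random_affine(rng):
    """Draw f(x) = a*x + b with |a| < 1, so iterates contract on average."""
    a = rng.choice([0.5, -0.8])
    b = rng.uniform(-1.0, 1.0)
    return lambda x, a=a, b=b: a * x + b

rng = random.Random(42)
x, y = -100.0, 100.0                 # two far-apart starting points
for _ in range(200):
    f = random_affine(rng)           # the SAME random function drives both chains
    x, y = f(x), f(y)
# |x - y| <= 0.8**200 * 200, so the two trajectories have effectively merged.
```

Because the coupled trajectories forget their starting points, the common limit law is the unique stationary distribution of the iterated random function system.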
Mixing times of lozenge tiling and card shuffling Markov chains
, 1997
Cited by 69 (1 self)
Abstract. We show how to combine Fourier analysis with coupling arguments to bound the mixing times of a variety of Markov chains. The mixing time is the number of steps a Markov chain takes to approach its equilibrium distribution. One application is to a class of Markov chains introduced by Luby, Randall, and Sinclair to generate random tilings of regions by lozenges. For an ℓ×ℓ region we bound the mixing time by O(ℓ⁴ log ℓ), which improves on the previous bound of O(ℓ⁷), and we show the new bound to be essentially tight. In another application we resolve a few questions raised by Diaconis and Saloff-Coste by lower bounding the mixing time of various card-shuffling Markov chains. Our lower bounds are within a constant factor of their upper bounds. When we use our methods to modify a path-coupling analysis of Bubley and Dyer, we obtain an O(n³ log n) upper bound on the mixing time of the Karzanov-Khachiyan Markov chain for linear extensions.
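The coupling inequality behind such bounds says that the mixing time is at most the coalescence time of two coupled copies of the chain. A toy sketch of ours (not from the paper): a lazy walk on {0, ..., n}, with both copies driven by the same random moves, so clamping at the boundaries shrinks the gap until the copies meet.

```python
import random

def clamp_walk_step(x, n, move):
    """One step of a lazy walk on {0, ..., n}: apply the shared move, clamp at ends."""
    return max(0, min(n, x + move))

def coalescence_time(n, seed=0):
    """Run top (start n) and bottom (start 0) copies with the SAME random moves.
    Boundary clamping shrinks the gap; the time until the copies meet
    upper-bounds the mixing time of the walk (coupling inequality)."""
    rng = random.Random(seed)
    top, bot, t = n, 0, 0
    while top != bot:
        move = rng.choice([-1, 0, 0, 1])     # lazy: hold with probability 1/2
        top = clamp_walk_step(top, n, move)
        bot = clamp_walk_step(bot, n, move)
        t += 1
    return t
```

Averaging `coalescence_time` over seeds estimates a coupling bound on the mixing time, of order n² for this walk.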
On Markov chains for independent sets
 Journal of Algorithms
, 1997
Cited by 66 (16 self)
Random independent sets in graphs arise, for example, in statistical physics, in the hard-core model of a gas. A new rapidly mixing Markov chain for independent sets is defined in this paper. We show that it is rapidly mixing for a wider range of values of the parameter than the Luby-Vigoda chain, the best previously known. Moreover the new chain is apparently more rapidly mixing than the Luby-Vigoda chain for larger values of the parameter (unless the maximum degree of the graph is 4). An extension of the chain to independent sets in hypergraphs is described. This chain gives an efficient method for approximately counting the number of independent sets of hypergraphs with maximum degree two, or with maximum degree three and maximum edge size three. Finally, we describe a method which allows one, under certain circumstances, to deduce the rapid mixing of one Markov chain from the rapid mixing of another, with the same state space and stationary distribution. This method is applied to two Markov ch...
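The simplest Markov chain of this kind is single-site Glauber dynamics for the hard-core model, sketched below for illustration (this is the generic dynamics, not the specific chain the paper introduces): pick a vertex uniformly and resample its occupancy, respecting independence, so that a set I receives stationary weight proportional to λ^|I|.

```python
import random

def hardcore_glauber(adj, lam, n_steps, seed=0):
    """Glauber dynamics for the hard-core model: pick a vertex uniformly; try
    to occupy it with probability lam/(1+lam), but only if no neighbour is
    occupied; otherwise vacate it. Stationary weight of a set I is lam^|I|."""
    rng = random.Random(seed)
    occupied = set()
    verts = list(adj)
    for _ in range(n_steps):
        v = rng.choice(verts)
        if rng.random() < lam / (1.0 + lam):
            if not any(u in occupied for u in adj[v]):
                occupied.add(v)
        else:
            occupied.discard(v)
    return occupied

# 4-cycle: the maximum independent sets are {0, 2} and {1, 3}.
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
sample = hardcore_glauber(cycle4, lam=1.0, n_steps=1000)
```

Every state visited is an independent set by construction; how fast the chain mixes as λ grows is exactly the question such papers address.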
An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants
 Biometrika
, 2006
Cited by 51 (2 self)
Maximum likelihood parameter estimation and sampling from Bayesian posterior distributions are problematic when the probability density for the parameter of interest involves an intractable normalising constant which is also a function of that parameter. In this paper, an auxiliary variable method is presented which requires only that independent samples can be drawn from the unnormalised density at any particular parameter value. The proposal distribution is constructed so that the normalising constant cancels from the Metropolis–Hastings ratio. The method is illustrated by producing posterior samples for parameters of the Ising model given a particular lattice realisation.
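The cancellation trick can be sketched on a toy problem of our own devising (not the Ising example from the paper): take f(x|t) ∝ exp(−tx) on x ≥ 0, pretend its normaliser 1/t is intractable, and put an Exp(1) prior on t. Drawing one exact auxiliary sample y from f(·|t′) at each proposal makes the normalisers cancel from the acceptance ratio, which then involves only the unnormalised density.

```python
import math
import random

def exchange_sampler(x, n_steps, seed=0):
    """Auxiliary-variable (exchange-style) move for f(x|t) ∝ exp(-t*x), x >= 0,
    with an Exp(1) prior on t, treating the normaliser 1/t as if intractable.
    An exact auxiliary draw y ~ f(.|t') cancels the normalising constants."""
    rng = random.Random(seed)
    t, chain = 1.0, []
    for _ in range(n_steps):
        tp = t + rng.gauss(0.0, 0.5)        # symmetric random-walk proposal
        if tp > 0:
            y = rng.expovariate(tp)         # exact auxiliary draw from f(.|tp)
            # log ratio: prior terms plus UNNORMALISED likelihood terms only
            log_a = (t - tp) + (tp - t) * (y - x)
            if math.log(rng.random()) < log_a:
                t = tp
        chain.append(t)
    return chain

# With x = 1 the true posterior is Gamma(shape=2, rate=2), with mean 1.
chain = exchange_sampler(x=1.0, n_steps=40000)
```

Because exact sampling from f(·|t′) is possible here by inversion, the toy satisfies the method's key requirement: independent samples from the unnormalised density at any parameter value.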
Extension of Fill’s perfect rejection sampling algorithm to general chains (extended abstract)
 Pages 37–52 in Monte Carlo Methods
, 2000
Cited by 42 (13 self)
By developing and applying a broad framework for rejection sampling using auxiliary randomness, we provide an extension of the perfect sampling algorithm of Fill (1998) to general chains on quite general state spaces, and describe how use of bounding processes can ease the computational burden. Along the way, we unearth a simple connection between the Coupling From The Past (CFTP) algorithm originated by Propp and Wilson (1996) and our extension of Fill’s algorithm. Key words and phrases. Fill’s algorithm, Markov chain Monte Carlo, perfect sampling, exact sampling, rejection sampling, interruptibility, coupling from the past, read-once coupling from the past, monotone transition rule, realizable monotonicity, stochastic monotonicity, partially ordered set, coalescence, imputation,
Exact Sampling From Anti-Monotone Systems
 Statistica Neerlandica
, 1998
Cited by 39 (1 self)
A new approach to Markov chain Monte Carlo simulation was recently proposed by Propp and Wilson. This approach, unlike traditional ones, yields samples which have exactly the desired distribution. The Propp-Wilson algorithm requires this distribution to have a certain structure called monotonicity. In this paper an idea of Kendall is applied to show how the algorithm can be extended to the case where monotonicity is replaced by anti-monotonicity. As illustrating examples, simulations of the hard-core model and the random-cluster model are presented.
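For reference, the monotone structure the Propp-Wilson algorithm requires (and which this paper relaxes) can be sketched on a toy chain of ours, a lazy clamped walk on {0, ..., n} whose update rule preserves order. Running extremal chains from an ever-deeper past with fixed, reused randomness until they coalesce at time 0 yields an exact stationary draw:

```python
import random

def update(x, n, u):
    """Monotone update: the shared uniform u drives every copy identically."""
    if u < 0.5:
        return x                         # lazy hold
    move = 1 if u < 0.75 else -1
    return max(0, min(n, x + move))      # clamp at the boundaries

def monotone_cftp(n, seed=0):
    """Propp-Wilson CFTP: run top (n) and bottom (0) chains from time -T to 0
    with FIXED randomness, doubling T until they coalesce. Monotonicity means
    coalescence of the extremes traps every trajectory, so the common value at
    time 0 is an exact draw from the stationary distribution (uniform here)."""
    rng = random.Random(seed)
    us = []                              # randomness for times -1, -2, ... (reused!)
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        top, bot = n, 0
        for t in range(T - 1, -1, -1):   # apply u_{-T}, ..., u_{-1} in order
            top = update(top, n, us[t])
            bot = update(bot, n, us[t])
        if top == bot:
            return top
        T *= 2
```

Anti-monotone systems invert this sandwiching: the bounding chains swap roles at each step, which is the extension the paper develops.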
Perfect simulation for interacting point processes, loss networks and Ising models
, 1999
Cited by 38 (11 self)
We present a perfect simulation algorithm for measures that are absolutely continuous with respect to some Poisson process and can be obtained as invariant measures of birth-and-death processes. Examples include area- and perimeter-interacting point processes (with stochastic grains), invariant measures of loss networks, and the Ising contour and random cluster models. The algorithm does not involve any coupling, hence it is not tied to monotonicity requirements, and it directly provides samples of the infinite-volume measure. The algorithm is based on a two-step procedure: (i) a perfect-simulation scheme for (space-time) marked Poisson processes (free birth-and-death process, free loss networks), and (ii) a "cleaning" algorithm that trims out this process according to the interaction rules of the target process. The first step involves the perfect generation of "ancestors" of a given object, that is, of predecessors that may have an influence on the birth rate under the targe...
How to Couple from the Past Using a Read-Once Source of Randomness
, 1999
Cited by 33 (1 self)
We give a new method for generating perfectly random samples from the stationary distribution of a Markov chain. The method is related to coupling from the past (CFTP), but only runs the Markov chain forwards in time, and never restarts it at previous times in the past. The method is also related to an idea known as PASTA (Poisson arrivals see time averages) in the operations research literature. Because the new algorithm can be run using a read-once stream of randomness, we call it read-once CFTP. The memory and time requirements of read-once CFTP are on par with the requirements of the usual form of CFTP, and for a variety of applications the requirements may be noticeably less. Some perfect sampling algorithms for point processes are based on an extension of CFTP known as coupling into and from the past; for completeness, we give a read-once version of coupling into and from the past, but it remains impractical. For these point process applications, we give an alternative...