Results 1–10 of 18
Degree Fluctuations and the Convergence Time of Consensus Algorithms
, 2012
Abstract
Cited by 6 (0 self)
We consider a consensus algorithm in which every node in a sequence of undirected, B-connected graphs assigns equal weight to each of its neighbors. Under the assumption that the degree of each node is fixed (except for times when the node has no connections to other nodes), we show that consensus is achieved within a given accuracy ɛ on n nodes in time B + 4n³B ln(2n/ɛ). Because there is a direct relation between consensus algorithms in time-varying environments and inhomogeneous random walks, our result also translates into a general statement on such random walks. Moreover, we give a simple proof of a result of Cao, Spielman, and Morse that the worst-case convergence time becomes exponentially large in the number of nodes n under a slight relaxation of the degree constancy assumption.
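The equal-neighbor update described in this abstract can be sketched in a few lines (a minimal illustration, assuming each node averages over itself together with its current neighbors; the path graph, node count, and iteration count are arbitrary choices for the example, not from the paper):

```python
def consensus_step(values, adjacency):
    """One equal-neighbor update: node i replaces its value with the
    average of its own value and those of its current neighbors."""
    new = []
    for i, nbrs in enumerate(adjacency):
        group = [i] + nbrs
        new.append(sum(values[j] for j in group) / len(group))
    return new

# Fixed path graph on 4 nodes (0-1-2-3); node 3 starts at 1, the rest at 0.
adjacency = [[1], [0, 2], [1, 3], [2]]
values = [0.0, 0.0, 0.0, 1.0]
for _ in range(200):
    values = consensus_step(values, adjacency)

# All nodes agree to within a tiny tolerance; the limit is the
# (degree + 1)-weighted average of the initial values: 2*1 / (2+3+3+2) = 0.2.
assert max(values) - min(values) < 1e-9
assert abs(values[0] - 0.2) < 1e-9
```

With a fixed graph the update is a time-homogeneous random walk; the paper's point is that the same degree-constancy assumption tames the time-varying case as well.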
Time inhomogeneous Markov chains with wave-like behavior
 Ann. Appl. Probab.
Merging and stability for time inhomogeneous finite Markov chains
Abstract
Cited by 4 (0 self)
As is apparent from most textbooks, the definition of a Markov process includes, in the most natural way, processes that are time inhomogeneous. Nevertheless, most modern references quickly restrict themselves to the time-homogeneous case by assuming the existence of a time-homogeneous transition function, a case for which there is a vast ...
Merging for time inhomogeneous finite Markov chains
 II. Nash and log-Sobolev
Mixing time of the card-cyclic-to-random shuffle
 The Annals of Applied Probability
, 2014
Abstract
Cited by 3 (0 self)
The card-cyclic-to-random shuffle on n cards is defined as follows: at time t, remove the card with label t mod n and randomly reinsert it back into the deck. Pinsky [9] introduced this shuffle and asked how many steps are needed to mix the deck. He showed that n steps do not suffice. Here we show that the mixing time is Θ(n log n).
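The shuffle defined in this abstract is simple to simulate (a minimal sketch; the deck size, step count, and the `card_cyclic_to_random` helper name are illustrative choices, not from the paper):

```python
import random

def card_cyclic_to_random(deck, steps, rng=random):
    """Run the card-cyclic-to-random shuffle: at time t, remove the card
    with label t mod n and reinsert it at a uniformly random position."""
    n = len(deck)
    deck = list(deck)
    for t in range(steps):
        label = t % n
        deck.remove(label)                    # take out the card with this label
        deck.insert(rng.randrange(n), label)  # reinsert uniformly at random
    return deck

random.seed(0)
shuffled = card_cyclic_to_random(list(range(6)), steps=60)
assert sorted(shuffled) == list(range(6))  # still a permutation of the deck
```

Note that the card removed at each step is chosen deterministically by its label, not its position, which is what makes the chain time inhomogeneous and its analysis delicate.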
COMPARISON INEQUALITIES AND FASTEST-MIXING MARKOV CHAINS
, 2011
Abstract
Cited by 2 (0 self)
We introduce a new partial order on the class of stochastically monotone Markov kernels having a given stationary distribution π on a given finite partially ordered state space X. When K ≼ L in this partial order we say that K and L satisfy a comparison inequality. We establish that if K1,...,Kt and L1,...,Lt are reversible and Ks ≼ Ls for s = 1,...,t, then K1···Kt ≼ L1···Lt. In particular, in the time-homogeneous case we have K^t ≼ L^t for every t if K and L are reversible and K ≼ L, and using this we show that (for suitable common initial distributions) the Markov chain Y with kernel K mixes faster than the chain Z with kernel L, in the strong sense that at every time t the discrepancy, measured by total variation distance or separation or L²-distance, between the law of Yt and π is smaller than that between the law of Zt and π. Using comparison inequalities together with specialized arguments to remove the stochastic monotonicity restriction, we answer a question of Persi Diaconis by showing that, among all symmetric birth-and-death kernels on the path X = {0,...,n}, the one (we call it the uniform chain) that produces fastest convergence from initial state 0 to the uniform distribution has transition probability 1/2 in each direction along each edge of the path, with holding probability 1/2 at each endpoint. We also use comparison inequalities (i) to identify, when π is a given log-concave distribution on the path, the fastest-mixing stochastically monotone birth-and-death chain started at 0, and (ii) to recover and extend a result of Peres and Winkler that extra updates do not delay mixing for monotone spin systems. Among the fastest-mixing chains in (i), we show that the chain for uniform π is slowest in the sense of maximizing separation at every time.
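The "uniform chain" singled out in this abstract is easy to write down explicitly from its description: probability 1/2 in each direction along each edge of the path, holding probability 1/2 at the endpoints (a sketch; the `uniform_chain_kernel` helper name and the choice n = 4 are illustrative):

```python
def uniform_chain_kernel(n):
    """Transition matrix (list of rows) of the uniform chain on the path
    {0, ..., n}: probability 1/2 toward each neighbor, with holding
    probability 1/2 at each endpoint."""
    K = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        # step left, or hold if already at the left endpoint
        K[i][i - 1 if i > 0 else i] += 0.5
        # step right, or hold if already at the right endpoint
        K[i][i + 1 if i < n else i] += 0.5
    return K

K = uniform_chain_kernel(4)
# every row is a probability distribution
assert all(abs(sum(row) - 1.0) < 1e-12 for row in K)
# the kernel is symmetric, hence doubly stochastic, so the uniform
# distribution on {0,...,4} is stationary, as the abstract requires
assert all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(5) for j in range(5))
```

The symmetry check confirms this is a symmetric birth-and-death kernel with uniform stationary distribution; the paper's contribution is proving it mixes fastest from state 0 among all such kernels.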
Parallel Gibbs Sampling for Hierarchical Dirichlet Processes via Gamma Processes Equivalence
Abstract
Cited by 1 (1 self)
The hierarchical Dirichlet process (HDP) is an intuitive and elegant technique to model data with latent groups. However, it has not been widely used for practical applications due to the high computational costs associated with inference. In this paper, we propose an effective parallel Gibbs sampling algorithm for HDP by exploring its connections with the gamma-gamma-Poisson process. Specifically, we develop a novel framework that combines bootstrap and reversible-jump MCMC algorithms to enable parallel variable updates. We also provide a theoretical convergence analysis based on Gibbs sampling with asynchronous variable updates. Experimental results on both synthetic datasets and two large-scale text collections show that our algorithm achieves considerable speedup as well as better inference accuracy for HDP compared with existing parallel sampling algorithms.