Results 1–10 of 1,281
A block-GTH algorithm for finding the stationary vector of a Markov chain
 Inst. for Advanced Computer Studies, 1993
Cited by 1 (1 self)
Abstract: Grassmann, Taksar, and Heyman have proposed an algorithm for computing the stationary vector of a Markov chain. Analysis by O’Cinneide confirmed the results of numerical experiments, proving that the GTH algorithm computes an approximation to the stationary vector with low relative error.
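The GTH state-reduction procedure this entry builds on can be sketched in a few lines; its low relative error comes from computing every pivot additively, so no subtractive cancellation occurs. A minimal illustrative version (the plain algorithm, not the block variant the paper develops):

```python
def gth(P):
    """Stationary vector of an irreducible Markov chain by GTH state reduction."""
    n = len(P)
    A = [row[:] for row in P]
    # Reduction phase: eliminate states n-1, ..., 1 (no subtractions anywhere).
    for k in range(n - 1, 0, -1):
        s = sum(A[k][j] for j in range(k))   # 1 - A[k][k], computed additively
        for i in range(k):
            A[i][k] /= s
        for i in range(k):
            for j in range(k):
                A[i][j] += A[i][k] * A[k][j]
    # Back-substitution, then normalize to a probability vector.
    pi = [1.0] + [0.0] * (n - 1)
    for k in range(1, n):
        pi[k] = sum(pi[i] * A[i][k] for i in range(k))
    total = sum(pi)
    return [x / total for x in pi]
```

For a two-state chain with transition matrix [[0.9, 0.1], [0.5, 0.5]] this returns the exact stationary vector (5/6, 1/6) up to rounding.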
A Parallel Implementation of the Block-GTH Algorithm
, 1994
Abstract: The GTH algorithm is a very accurate direct method for finding the stationary distribution of a finite-state, discrete-time, irreducible Markov chain. O'Leary and Wu developed the block-GTH algorithm and successfully demonstrated the efficiency of the algorithm on vector pipeline machines and ...
Finite state Markov-chain approximations to univariate and vector autoregressions
 Economics Letters, 1986
Cited by 493 (0 self)
Abstract: The paper develops a procedure for finding a discrete-valued Markov chain whose sample paths approximate well those of a vector autoregression. The procedure has applications in those areas of economics, finance, and econometrics where approximate solutions to integral equations are required.
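In the scalar case the procedure in this entry is the familiar Tauchen discretization: place a grid over the state space and assign transition probabilities by integrating the conditional normal density between grid midpoints. A sketch under standard assumptions (an AR(1) process y_t = ρ·y_{t-1} + ε_t with ε_t ~ N(0, σ²); the grid width of m unconditional standard deviations is a conventional default, not a value taken from the paper):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tauchen(n, rho, sigma, m=3.0):
    """Discretize y_t = rho*y_{t-1} + eps, eps ~ N(0, sigma^2), onto n states."""
    std_y = sigma / math.sqrt(1.0 - rho ** 2)        # unconditional std of y
    grid = [-m * std_y + i * (2 * m * std_y) / (n - 1) for i in range(n)]
    step = grid[1] - grid[0]
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            lo = (grid[j] - rho * grid[i] - step / 2) / sigma
            hi = (grid[j] - rho * grid[i] + step / 2) / sigma
            if j == 0:                  # leftmost cell absorbs the lower tail
                P[i][j] = norm_cdf(hi)
            elif j == n - 1:            # rightmost cell absorbs the upper tail
                P[i][j] = 1.0 - norm_cdf(lo)
            else:
                P[i][j] = norm_cdf(hi) - norm_cdf(lo)
    return grid, P
```

Each row of P sums to one by construction, so the result is a valid transition matrix whose simulated paths mimic the AR(1).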
A gentle tutorial on the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models
, 1997
Cited by 693 (4 self)
Abstract: We describe the maximum-likelihood parameter estimation problem and how the Expectation-Maximization (EM) algorithm can be used for its solution, first in the abstract form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical rigor.
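Application 1 from this tutorial, EM for a mixture of Gaussian densities, can be sketched in one dimension: the E-step computes per-point responsibilities, the M-step re-estimates weights, means, and variances from them. The quantile initialization and the small variance floor are my own choices for the sketch, not details from the paper:

```python
import math

def em_gmm_1d(data, k, iters=200):
    """EM for a 1-D mixture of k Gaussians; returns (weights, means, variances)."""
    xs = sorted(data)
    n = len(xs)
    w = [1.0 / k] * k
    mu = [xs[int((j + 0.5) * n / k)] for j in range(k)]          # quantile init
    v = [sum((x - sum(xs) / n) ** 2 for x in xs) / n] * k        # overall variance
    for _ in range(iters):
        # E-step: responsibilities resp[i][j] = P(component j | x_i).
        resp = []
        for x in data:
            p = [w[j] / math.sqrt(2 * math.pi * v[j])
                 * math.exp(-(x - mu[j]) ** 2 / (2 * v[j])) for j in range(k)]
            s = sum(p)
            resp.append([pj / s for pj in p])
        # M-step: weighted re-estimation of each component's parameters.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            w[j] = nj / n
            mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            v[j] = sum(r[j] * (x - mu[j]) ** 2
                       for r, x in zip(resp, data)) / nj + 1e-9  # variance floor
    return w, mu, v
```

On data drawn from two well-separated Gaussians, the estimated means land close to the true component means.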
Loopy belief propagation for approximate inference: An empirical study
 Proceedings of Uncertainty in AI, 1999
Cited by 676 (15 self)
Abstract: Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance of "Turbo Codes", codes whose decoding algorithm is equivalent to loopy belief propagation in a chain-structured Bayesian network. In this paper we ask: is there something special about the error-correcting code context, or does loopy propagation work as an approximate inference scheme ...
Markov chain Monte Carlo convergence diagnostics
 JASA, 1996
Cited by 371 (6 self)
Abstract: A critical issue for users of Markov Chain Monte Carlo (MCMC) methods in applications is how to determine when it is safe to stop sampling and use the samples to estimate characteristics of the distribution of interest. Research into methods of computing theoretical convergence bounds holds promise ...
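One widely used diagnostic of the kind this review surveys is the Gelman-Rubin potential scale reduction factor, which compares between-chain and within-chain variance across several independently started chains; values near 1 suggest the chains have mixed. A sketch of the basic (unsplit) statistic:

```python
import math

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of equal length n."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)   # between-chain variance
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)            # mean within-chain variance
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n                          # pooled variance estimate
    return math.sqrt(var_hat / W)
```

Chains sampled from the same distribution give R-hat close to 1, while a chain stuck in a different region of the state space pushes it well above 1, signaling that it is not yet safe to stop sampling.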
Policy gradient methods for reinforcement learning with function approximation
 In NIPS, 1999
Cited by 439 (20 self)
Abstract: Function approximation is essential to reinforcement learning, but the standard approach of approximating a value function and determining a policy from it has so far proven theoretically intractable. In this paper we explore an alternative approach in which the policy is explicitly represented ... and in the state-visitation distribution. In this paper we prove that an unbiased estimate of the gradient (1) can be obtained from experience using an approximate value function satisfying certain properties. Our result also suggests a way of proving the convergence of a wide variety of algorithms based on ...
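The underlying policy-gradient idea, adjusting policy parameters along the gradient of expected reward estimated from sampled experience, can be illustrated with REINFORCE on a toy two-armed bandit under a softmax policy. This is an illustrative special case of my own construction, not the paper's function-approximation setting:

```python
import math, random

def reinforce_bandit(arm_rewards, episodes=2000, lr=0.1, seed=0):
    """REINFORCE with a softmax policy over two action preferences theta."""
    rng = random.Random(seed)
    theta = [0.0, 0.0]
    for _ in range(episodes):
        exps = [math.exp(t) for t in theta]
        z = sum(exps)
        probs = [e / z for e in exps]
        a = 0 if rng.random() < probs[0] else 1   # sample an action from the policy
        r = arm_rewards[a]                        # observe its (deterministic) reward
        # Stochastic update along r * grad log pi(a); for a softmax policy,
        # d/d theta_j log pi(a) = 1[j == a] - probs[j].
        for j in range(2):
            theta[j] += lr * r * ((1.0 if j == a else 0.0) - probs[j])
    return theta
```

With rewards (0, 1) the preference for the paying arm grows steadily, so the learned policy concentrates its probability mass on the better action.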
Time varying structural vector autoregressions and monetary policy
 Review of Economic Studies, 2005
Cited by 306 (8 self)
Abstract: Monetary policy and the private sector behavior of the US economy are modeled as a time varying structural vector autoregression, where the sources of time variation are both the coefficients and the variance covariance matrix of the innovations. The paper develops a new, simple modeling strategy for the law of motion of the variance covariance matrix and proposes an efficient Markov chain Monte Carlo algorithm for the model likelihood/posterior numerical evaluation. The main empirical conclusions are: 1) both systematic and non-systematic monetary policy have changed during the last forty years ...
Algebraic Algorithms for Sampling from Conditional Distributions
 Annals of Statistics, 1995
Cited by 268 (20 self)
Abstract: We construct Markov chain algorithms for sampling from discrete exponential families conditional on a sufficient statistic. Examples include generating tables with fixed row and column sums and higher dimensional analogs. The algorithms involve finding bases for associated polynomial ideals and so ...
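For two-way tables, the basic moves of such a chain add ±1 to the four cells of a randomly chosen 2×2 subtable, which leaves every row and column sum unchanged; proposals that would make an entry negative are rejected (the chain stays put). A sketch of that random walk:

```python
import random

def sample_tables(table, steps=5000, seed=0):
    """Random walk on non-negative integer tables with the margins of `table`,
    using +/- moves on random 2x2 subtables."""
    rng = random.Random(seed)
    T = [row[:] for row in table]
    rows, cols = len(T), len(T[0])
    for _ in range(steps):
        i, j = rng.sample(range(rows), 2)     # two distinct row indices
        a, b = rng.sample(range(cols), 2)     # two distinct column indices
        sign = rng.choice((1, -1))
        # The move preserves all margins; reject if an entry would go negative.
        if (T[i][a] + sign >= 0 and T[j][b] + sign >= 0
                and T[i][b] - sign >= 0 and T[j][a] - sign >= 0):
            T[i][a] += sign
            T[j][b] += sign
            T[i][b] -= sign
            T[j][a] -= sign
    return T
```

Because the proposal is symmetric and rejected moves leave the state unchanged, this walk has the uniform distribution over tables with the given margins as its stationary distribution; the paper's contribution is constructing such move sets (via polynomial ideal bases) for far more general conditional distributions.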
Variable Length Markov Chains
 Annals of Statistics, 1999
Cited by 134 (5 self)
Abstract: We study estimation in the class of stationary variable length Markov chains (VLMC) on a finite space. The processes in this class are still Markovian of higher order, but with memory of variable length, yielding a much bigger and structurally richer class of models than ordinary higher order Markov chains.