Results 1–10 of 127
Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies
, 1998
How many clusters? Which clustering method? Answers via model-based cluster analysis
 The Computer Journal
, 1998
Markov Chain Monte Carlo Simulation Methods in Econometrics
, 1993
Abstract

Cited by 138 (8 self)
We present several Markov chain Monte Carlo simulation methods that have been widely used in recent years in econometrics and statistics. Among these is the Gibbs sampler, which has been of particular interest to econometricians. Although the paper summarizes some of the relevant theoretical literature, its emphasis is on the presentation and explanation of applications to important models that are studied in econometrics. We include a discussion of some implementation issues, the use of the methods in connection with the EM algorithm, and how the methods can be helpful in model specification questions. Many of the applications of these methods are of particular interest to Bayesians, but we also point out ways in which frequentist statisticians may find the techniques useful.
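The Gibbs sampler the abstract emphasizes is easy to illustrate. The sketch below (our own toy example, not from the paper) samples a standard bivariate normal with correlation ρ = 0.8 by alternating draws from the two univariate full conditionals x | y ~ N(ρy, 1−ρ²) and y | x ~ N(ρx, 1−ρ²):

```python
import random

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=2000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Each full conditional is univariate normal:
        x | y ~ N(rho * y, 1 - rho^2),   y | x ~ N(rho * x, 1 - rho^2).
    """
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x, y = 0.0, 0.0
    draws = []
    for t in range(n_iter):
        x = rng.gauss(rho * y, sd)   # draw x from its full conditional
        y = rng.gauss(rho * x, sd)   # draw y from its full conditional
        if t >= burn_in:
            draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(rho=0.8)
n = len(draws)
mx = sum(x for x, _ in draws) / n
my = sum(y for _, y in draws) / n
cov = sum((x - mx) * (y - my) for x, y in draws) / n
vx = sum((x - mx) ** 2 for x, _ in draws) / n
vy = sum((y - my) ** 2 for _, y in draws) / n
corr = cov / (vx * vy) ** 0.5
```

The empirical correlation of the retained draws recovers ρ, the basic check that alternating conditional draws target the intended joint distribution.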
EM procedures using mean field-like approximations for Markov model-based image segmentation
, 2001
Abstract

Cited by 67 (13 self)
This paper deals with Markov random field model-based image segmentation. This involves parameter estimation in hidden Markov models, for which one of the most widely used procedures is the EM algorithm. In practice, difficulties arise due to the dependence structure in the models, and approximations are required to make the algorithm tractable. We propose a class of algorithms in which the idea is to deal with systems of independent variables. This corresponds to approximations of the pixels' interactions similar to the mean field approximation. The resulting algorithms have the advantage of taking the Markovian structure into account while preserving the good features of EM. In addition, this class, which includes new and already known procedures, is presented in a unified framework, showing that apparently distant algorithms come from similar approximation principles. We illustrate the algorithms' performance on synthetic and real images. These experiments point out the ability of o...
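A minimal sketch of the mean-field idea described above, on a 1-D "image" with two Gaussian classes (our own illustration; the model, the β value, and the initialisation are assumptions, not the authors' algorithms): each pixel keeps an independent class distribution updated against the mean field of its neighbours, which keeps EM's closed-form M step while still accounting for the Markovian interactions.

```python
import math
import random

def mean_field_em(y, K=2, beta=1.0, n_iter=30):
    """Toy mean-field EM for a hidden Potts/Gaussian model on a 1-D 'image'.

    The intractable E step is replaced by a mean-field approximation: each
    pixel i keeps an independent class distribution q[i], updated against the
    *mean* field of its neighbours rather than their dependent joint
    configuration.  Initialisation assumes K = 2.
    """
    n = len(y)
    mu = [min(y), max(y)]            # crude, deterministic initialisation
    sigma2 = 1.0
    q = [[1.0 / K] * K for _ in range(n)]
    for _ in range(n_iter):
        # E-like step: mean-field fixed-point update of each q[i]
        # (the constant -0.5*log(2*pi*sigma2) cancels in the normalisation)
        for i in range(n):
            logp = []
            for k in range(K):
                field = sum(q[j][k] for j in (i - 1, i + 1) if 0 <= j < n)
                logp.append(beta * field - 0.5 * (y[i] - mu[k]) ** 2 / sigma2)
            m = max(logp)
            w = [math.exp(v - m) for v in logp]
            s = sum(w)
            q[i] = [wk / s for wk in w]
        # M step: closed-form Gaussian updates given the approximate posteriors
        for k in range(K):
            wk = sum(q[i][k] for i in range(n))
            mu[k] = sum(q[i][k] * y[i] for i in range(n)) / wk
        sigma2 = sum(q[i][k] * (y[i] - mu[k]) ** 2
                     for i in range(n) for k in range(K)) / n
    labels = [max(range(K), key=lambda k: q[i][k]) for i in range(n)]
    return mu, labels

rng = random.Random(1)
y = ([rng.gauss(0.0, 0.5) for _ in range(20)]
     + [rng.gauss(3.0, 0.5) for _ in range(20)])
mu, labels = mean_field_em(y)
```

On this well-separated example the estimated class means land near the true values 0 and 3, and the recovered labels split the two halves of the signal.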
Frailty Correlated Default
, 2008
Abstract

Cited by 54 (4 self)
This paper shows that the probability of extreme default losses on portfolios of U.S. corporate debt is much greater than would be estimated under the standard assumption that default correlation arises only from exposure to observable risk factors. At the high confidence levels at which bank loan portfolio and CDO default losses are typically measured for economic-capital and rating purposes, our empirical results indicate that conventionally based estimates are downward biased by a full order of magnitude on test portfolios. Our estimates are based on U.S. public nonfinancial firms existing between 1979 and 2004. We find strong evidence for the presence of common latent factors, even when controlling for observable factors that provide the most accurate available model of firm-by-firm default probabilities.

∗ We are grateful for financial support from Moody's Corporation and Morgan Stanley, and for research assistance from Sabri Oncu and Vineet Bhagwat. We are also grateful for remarks from Torben Andersen, André Lucas, Richard Cantor, Stav Gaon, Tyler Shumway, and especially Michael Johannes. This revision is much improved because of suggestions by a referee, an associate editor, and Campbell Harvey. We are thankful to Moody's and to Ed Altman for generous assistance with data. Duffie is at The Graduate School of Business, Stanford University. Eckner and Horel are at Merrill Lynch. Saita is at Lehman
Language Evolution by Iterated Learning With Bayesian Agents
, 2007
Abstract

Cited by 38 (8 self)
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
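The claim that sampling learners turn iterated learning into a Markov chain whose stationary distribution is the prior can be checked numerically. The sketch below (our construction, not the authors' code) uses two "languages" that emit binary tokens with different probabilities, builds the generation-to-generation transition matrix by marginalising over datasets, and power-iterates it:

```python
from math import comb

def iterated_learning_stationary(prior, theta, n_data, n_gens=200):
    """Markov chain over 'languages' induced by iterated Bayesian learning.

    Language h generates a token '1' with probability theta[h].  A learner
    sees n_data tokens from the previous generation's language, forms the
    posterior by Bayes' rule, and *samples* its own language from it.
    Marginalising over datasets gives the generation-to-generation transition
    matrix; iterating it gives the long-run distribution over languages.
    """
    H = len(prior)
    # T[h][h2] = P(next language = h2 | current language = h)
    T = [[0.0] * H for _ in range(H)]
    for h in range(H):
        for k in range(n_data + 1):   # k = number of '1' tokens observed
            p_data = (comb(n_data, k)
                      * theta[h] ** k * (1 - theta[h]) ** (n_data - k))
            # posterior over languages given k (binomial factor cancels)
            post = [prior[h2] * theta[h2] ** k * (1 - theta[h2]) ** (n_data - k)
                    for h2 in range(H)]
            z = sum(post)
            for h2 in range(H):
                T[h][h2] += p_data * post[h2] / z
    # power-iterate to the stationary distribution
    dist = [1.0 / H] * H
    for _ in range(n_gens):
        dist = [sum(dist[h] * T[h][h2] for h in range(H)) for h2 in range(H)]
    return dist

prior = [0.3, 0.7]
dist = iterated_learning_stationary(prior, theta=[0.8, 0.2], n_data=5)
```

The long-run distribution matches the prior, independent of `theta` and `n_data`, as the paper's Gibbs-sampling argument predicts.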
On Stochastic Versions of the EM Algorithm
, 1995
Abstract

Cited by 35 (1 self)
We compare three different stochastic versions of the EM algorithm: the SEM algorithm, the SAEM algorithm, and the MCEM algorithm. We suggest that the most relevant contribution of the MCEM methodology is what we call the simulated annealing MCEM algorithm, which turns out to be very close to SAEM. We focus particularly on the mixture-of-distributions problem. In this context, we review the available theoretical results on the convergence of these algorithms and on the behavior of SEM as the sample size tends to infinity. The second part is devoted to intensive Monte Carlo numerical simulations and a real-data study. We show that, for some particular mixture situations, the SEM algorithm is almost always preferable to the EM and simulated annealing versions SAEM and MCEM. For some very intricate mixtures, however, none of these algorithms can be confidently used; there, SEM can still serve as an efficient exploratory tool for locating significant maxima of the likelihood function. In the real-data case, we show that the SEM stationary distribution provides a contrasted view of the log-likelihood by emphasizing sensible maxima.
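The E/S/M structure of SEM is compact enough to sketch. The following toy implementation (ours, not the authors'; two Gaussian components with known unit variances, all settings assumed for illustration) samples hard assignments in the S step and, since SEM does not converge pointwise, averages the tail of the parameter trajectory:

```python
import math
import random

def sem_gaussian_mixture(y, n_iter=300, seed=0):
    """Stochastic EM (SEM) for a two-component Gaussian mixture with known
    unit variances.  The E step computes responsibilities; the S step
    *samples* a hard assignment for every observation from them; the M step
    re-estimates the parameters from the pseudo-completed data.
    """
    rng = random.Random(seed)
    pi, mu = 0.5, [min(y), max(y)]   # crude, deterministic initialisation
    history = []
    for _ in range(n_iter):
        z = []
        for yi in y:
            # E step: responsibility of component 1 (constants cancel)
            p0 = (1 - pi) * math.exp(-0.5 * (yi - mu[0]) ** 2)
            p1 = pi * math.exp(-0.5 * (yi - mu[1]) ** 2)
            r1 = p1 / (p0 + p1)
            # S step: draw a hard label instead of keeping the soft weight
            z.append(1 if rng.random() < r1 else 0)
        n1 = sum(z)
        if n1 == 0 or n1 == len(y):   # guard against empty components
            continue
        # M step on the pseudo-completed data
        pi = n1 / len(y)
        mu[0] = sum(yi for yi, zi in zip(y, z) if zi == 0) / (len(y) - n1)
        mu[1] = sum(yi for yi, zi in zip(y, z) if zi == 1) / n1
        history.append((pi, mu[0], mu[1]))
    # SEM generates a stationary trajectory, not a point: average its tail
    tail = history[len(history) // 2:]
    return tuple(sum(h[i] for h in tail) / len(tail) for i in range(3))

rng = random.Random(42)
y = ([rng.gauss(0.0, 1.0) for _ in range(150)]
     + [rng.gauss(4.0, 1.0) for _ in range(150)])
pi_hat, mu0_hat, mu1_hat = sem_gaussian_mixture(y)
```

Averaging the trajectory tail is one common way to read off an estimate from SEM's stationary distribution; examining that distribution directly is what the abstract's "contrasted view of the log-likelihood" refers to.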
Marginal maximum a posteriori estimation using Markov chain Monte Carlo
 Statistics and Computing
, 2002
Abstract

Cited by 28 (6 self)
In this article we propose a new Monte Carlo method for performing MMAP estimation in general Bayesian models. The method is related to SA in that we also simulate from a distribution proportional to the marginal posterior raised to a power γ, but the means of achieving this are quite different: we employ an augmented probability model constructed in such a way that the marginal density of θ₁ is proportional to p(θ₁ | y)^γ. The algorithm is conceptually very simple and straightforward to implement in most cases, requiring only small modifications to MCMC code written for sampling from p(θ₁, θ₂ | y).
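The role of the power γ can be seen on a discrete grid without any MCMC machinery. In the toy model below (entirely our assumption: a Gaussian observation with a discrete nuisance parameter marginalised out, and a flat prior on a grid), raising the marginal posterior to a power concentrates its mass on the marginal MAP point without moving it:

```python
import math

def mmap_power_demo(gamma):
    """Grid illustration of the power idea behind MMAP estimation: raising
    the *marginal* posterior p(theta1 | y) to a power gamma concentrates its
    mass around the marginal MAP point as gamma grows, without changing
    where that point is.

    Toy model (our assumption, for illustration only): y ~ N(theta1 + theta2, 1)
    with a discrete nuisance theta2 (0 w.p. 0.7, 2 w.p. 0.3) summed out,
    observation y = 1.2, and a flat prior for theta1 on the grid.
    """
    y = 1.2
    grid = [i / 50.0 - 2.0 for i in range(201)]   # theta1 in [-2, 2]
    def marginal(t1):                              # sum theta2 out
        return (0.7 * math.exp(-0.5 * (y - t1 - 0.0) ** 2)
                + 0.3 * math.exp(-0.5 * (y - t1 - 2.0) ** 2))
    w = [marginal(t) ** gamma for t in grid]
    z = sum(w)
    probs = [wi / z for wi in w]
    i_max = max(range(len(grid)), key=lambda i: probs[i])
    return grid[i_max], probs[i_max]

mode1, mass1 = mmap_power_demo(gamma=1)
mode50, mass50 = mmap_power_demo(gamma=50)
```

The mode is identical for both powers (a monotone transform preserves the argmax), while the probability mass at the mode grows sharply with γ; the paper's contribution is an augmented model that lets MCMC simulate from this powered marginal in general settings.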
Unsupervised Non-Stationary Image Segmentation Using Triplet Markov Chains
 In Advanced Concepts for Intelligent Vision Systems (ACVIS 04)
, 2004
Abstract

Cited by 19 (7 self)
This work deals with unsupervised Bayesian hidden Markov chain restoration, extended to the non-stationary case. Unsupervised restoration based on "Expectation-Maximization" (EM) or "Stochastic EM" (SEM) estimates under the "Hidden Markov Chain" (HMC) model is quite efficient when the hidden chain is stationary. However, when the latter is not stationary, the unsupervised restoration results can be poor, due to a bad match between the real and estimated models. In this paper we present a more appropriate model for non-stationary HMC, via the recent Triplet Markov Chains (TMC) model. Using TMC, we show that the classical restoration results can be significantly improved in the case of non-stationary data. The improvement is obtained in an unsupervised way, using an SEM parameter estimation method. Some application examples to unsupervised image segmentation are also provided.
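The key device, treating the pair (class, auxiliary regime) as a single Markov chain so that standard hidden-Markov-chain machinery applies, can be sketched directly. Everything below is our own toy construction (the state layout, transition values, and Gaussian emissions), not the authors' model:

```python
import math

def tmc_forward(y, A, pi0, means, sd=1.0):
    """Forward filtering for a toy triplet-style chain.

    The pair v_t = (u_t, x_t) -- class x plus an auxiliary 'regime' u that
    switches the transition behaviour -- is itself a Markov chain, so a
    non-stationary hidden chain in x becomes an ordinary hidden Markov chain
    in v, and the usual forward recursion applies unchanged.  We encode
    (u, x) as v = 2*u + x (our layout); emissions depend only on x.
    """
    V = len(pi0)
    def emit(v, yt):   # Gaussian emission for the class component of v
        x = v % 2
        return math.exp(-0.5 * ((yt - means[x]) / sd) ** 2)
    alpha = [pi0[v] * emit(v, y[0]) for v in range(V)]
    s = sum(alpha)
    alpha = [a / s for a in alpha]
    filtered = [[sum(alpha[v] for v in range(V) if v % 2 == x)
                 for x in (0, 1)]]
    for t in range(1, len(y)):
        alpha = [sum(alpha[v] * A[v][v2] for v in range(V)) * emit(v2, y[t])
                 for v2 in range(V)]
        s = sum(alpha)
        alpha = [a / s for a in alpha]          # normalise (avoids underflow)
        # marginal posterior of the class: sum the regime out of the pair
        filtered.append([sum(alpha[v] for v in range(V) if v % 2 == x)
                         for x in (0, 1)])
    return filtered

def pair_transition():
    """Regime 0: classes persist (p=0.9); regime 1: they switch often
    (p=0.6).  Regimes themselves are sticky (p=0.95).  Values illustrative."""
    A = [[0.0] * 4 for _ in range(4)]
    for u in (0, 1):
        for x in (0, 1):
            for u2 in (0, 1):
                for x2 in (0, 1):
                    pu = 0.95 if u2 == u else 0.05
                    stay = 0.9 if u == 0 else 0.4
                    px = stay if x2 == x else 1.0 - stay
                    A[2 * u + x][2 * u2 + x2] = pu * px
    return A

y = [0.1, -0.2, 0.0, 2.1, 1.9, 2.2]
filtered = tmc_forward(y, pair_transition(), [0.25] * 4, means=[0.0, 2.0])
```

The filter correctly follows the shift from class-0-like to class-1-like observations; unsupervised use would wrap a recursion like this inside an SEM loop that re-estimates `A` and `means`.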