Results 11–20 of 181
Distributed Inference for Latent Dirichlet Allocation
 NIPS
, 2008
Abstract

Cited by 54 (5 self)
Very large data sets, such as collections of images, text, and related data, are becoming increasingly common, with examples ranging from digitized collections of books by companies such as Google and Amazon, to large collections of images at Web sites such as Flickr, to the recent Netflix customer recommendation data set. These data sets present major opportunities for machine learning, such as the ability to explore much richer and more expressive models, as well as providing new and interesting domains for the application of learning algorithms.
EM procedures using mean field-like approximations for Markov model-based image segmentation
, 2001
Abstract

Cited by 46 (11 self)
This paper deals with Markov random field model-based image segmentation. This involves parameter estimation in hidden Markov models, for which one of the most widely used procedures is the EM algorithm. In practice, difficulties arise due to the dependence structure in the models, and approximations are required to make the algorithm tractable. We propose a class of algorithms in which the idea is to deal with systems of independent variables. This corresponds to approximations of the pixels' interactions similar to the mean field approximation. It follows that the algorithms have the advantage of taking the Markovian structure into account while preserving the good features of EM. In addition, this class, which includes new and already known procedures, is presented in a unified framework, showing that apparently distant algorithms come from similar approximation principles. We illustrate the algorithms' performance on synthetic and real images. These experiments point out the ability of o...
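The mean field idea the abstract describes can be sketched concretely: each pixel's label posterior is updated using its neighbors' current soft labels in place of the true (intractable) dependence structure, alternating with ordinary EM parameter updates. Below is a minimal, hypothetical two-class sketch; the function name `mean_field_em`, the Gaussian observation model, and the coupling parameter `beta` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mean_field_em(image, n_iter=10, beta=1.0):
    """Mean-field-like EM for a two-class hidden MRF segmentation (sketch).

    q[i, j] approximates P(label = 1 at pixel (i, j)); neighbors influence a
    pixel only through their current q values, as in a mean field approximation.
    """
    # Initialize soft labels by thresholding at the median intensity.
    q = (image > np.median(image)).astype(float)
    for _ in range(n_iter):
        # M-step: class means and a shared variance from soft assignments.
        w1 = q.sum()
        w0 = q.size - w1
        mu0 = (image * (1 - q)).sum() / max(w0, 1e-6)
        mu1 = (image * q).sum() / max(w1, 1e-6)
        var = ((image - np.where(q > 0.5, mu1, mu0)) ** 2).mean() + 1e-6
        # Mean-field "E-step": sum of neighbors' current q (4-neighborhood).
        nb = np.zeros_like(q)
        nb[1:, :] += q[:-1, :]
        nb[:-1, :] += q[1:, :]
        nb[:, 1:] += q[:, :-1]
        nb[:, :-1] += q[:, 1:]
        ll1 = -(image - mu1) ** 2 / (2 * var) + beta * nb
        # Border pixels are treated as if they had 4 neighbors, for simplicity.
        ll0 = -(image - mu0) ** 2 / (2 * var) + beta * (4 - nb)
        q = 1.0 / (1.0 + np.exp(np.clip(ll0 - ll1, -50, 50)))
    return q
```

Because each update treats pixels as independent given the neighbor field, the whole sweep is a few vectorized array operations, which is what makes this family tractable compared with exact inference in the coupled model.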
Unsupervised Deconvolution of Sparse Spike Trains Using Stochastic Approximation
, 1996
Abstract

Cited by 34 (8 self)
This paper presents an unsupervised method for restoration of sparse spike trains. These signals are modeled as random Bernoulli-Gaussian processes, and their unsupervised restoration requires (i) estimation of the hyperparameters that control the stochastic models of the input and noise signals, and (ii) deconvolution of the pulse process. Classically, the problem is solved iteratively using a maximum generalized likelihood approach, despite its questionable statistical properties.
Frailty Correlated Default
, 2008
Abstract

Cited by 34 (2 self)
This paper shows that the probability of extreme default losses on portfolios of U.S. corporate debt is much greater than would be estimated under the standard assumption that default correlation arises only from exposure to observable risk factors. At the high confidence levels at which bank loan portfolio and CDO default losses are typically measured for economic-capital and rating purposes, our empirical results indicate that conventionally based estimates are downward biased by a full order of magnitude on test portfolios. Our estimates are based on U.S. public non-financial firms existing between 1979 and 2004. We find strong evidence for the presence of common latent factors, even when controlling for observable factors that provide the most accurate available model of firm-by-firm default probabilities.
Bayesian Mixture Modeling by Monte Carlo Simulation
, 1991
Abstract

Cited by 29 (0 self)
It is shown that Bayesian inference from data modeled by a mixture distribution can feasibly be performed via Monte Carlo simulation. This method exhibits the true Bayesian predictive distribution, implicitly integrating over the entire underlying parameter space. An infinite number of mixture components can be accommodated without difficulty, using a prior distribution for mixing proportions that selects a reasonable subset of components to explain any finite training set. The need to decide on a "correct" number of components is thereby avoided. The feasibility of the method is shown empirically for a simple classification task. Mixture distributions [8, 20] are an appropriate tool for modeling processes whose output is thought to be generated by several different underlying mechanisms, or to come from several different populations. One aim of a mixture model analysis may be to identify and characterize these underlying "latent classes" [2, 7], either for some scient...
Bayesian estimation of a multilevel IRT model using Gibbs sampling
 Psychometrika
, 2001
Abstract

Cited by 28 (5 self)
In this article, a two-level regression model is imposed on the ability parameters in an item response theory (IRT) model. The advantage of using latent rather than observed scores as dependent variables of a multilevel model is that it offers the possibility of separating the influence of item difficulty and ability level and modeling response variation and measurement error. Another advantage is that, contrary to observed scores, latent scores are test-independent, which offers the possibility of using results from different tests in one analysis where the parameters of the IRT model and the multilevel model can be concurrently estimated. The two-parameter normal ogive model is used for the IRT measurement model. It will be shown that the parameters of the two-parameter normal ogive model and the multilevel model can be estimated in a Bayesian framework using Gibbs sampling. Examples using simulated and real data are given.
On Stochastic Versions of the EM Algorithm
, 1995
Abstract

Cited by 26 (0 self)
We compare three different stochastic versions of the EM algorithm: the SEM algorithm, the SAEM algorithm, and the MCEM algorithm. We suggest that the most relevant contribution of the MCEM methodology is what we call the simulated annealing MCEM algorithm, which turns out to be very close to SAEM. We focus particularly on the mixture of distributions problem. In this context, we review the available theoretical results on the convergence of these algorithms and on the behavior of SEM as the sample size tends to infinity. The second part is devoted to intensive Monte Carlo numerical simulations and a real data study. We show that, for some particular mixture situations, the SEM algorithm is almost always preferable to the EM and simulated annealing versions SAEM and MCEM. For some very intricate mixtures, however, none of these algorithms can be confidently used. Then, SEM can be used as an efficient data exploratory tool for locating significant maxima of the likelihood function. In the real data case, we show that the SEM stationary distribution provides a contrasted view of the log-likelihood by emphasizing sensible maxima.
Semiparametric Bayesian Analysis Of Survival Data
 Journal of the American Statistical Association
, 1996
Abstract

Cited by 24 (0 self)
this paper are motivated and aimed at analyzing some common types of survival data from different medical studies. We will center our attention on the following topics.
Marginal maximum a posteriori estimation using Markov chain Monte Carlo
 Statistics and Computing
, 2002
Abstract

Cited by 21 (5 self)
this article we propose a new Monte Carlo method for performing MMAP estimation in general Bayesian models. The method is related to SA in that we also simulate from a distribution proportional to the marginal posterior raised to a power γ, but the means of achieving this are quite different: we employ an augmented probability model constructed in such a way that the marginal density of θ1 is proportional to p(θ1 | y)^γ. The algorithm is conceptually very simple and straightforward to implement in most cases, requiring only small modifications to MCMC code written for sampling from p(θ1, θ2 | y).
Fast and robust parameter estimation for statistical partial volume models in brain MRI
 NEUROIMAGE
, 2004
Abstract

Cited by 17 (7 self)
Due to the finite spatial resolution of imaging devices, a single voxel in a medical image may be composed of a mixture of tissue types, an effect known as the partial volume effect (PVE). Partial volume estimation, that is, the estimation of the amount of each tissue type within each voxel, has received considerable interest in recent years. Much of this work has been focused on the mixel model, a statistical model of the PVE. We propose a novel trimmed minimum covariance determinant (TMCD) method for the estimation of the parameters of the mixel PVE model. In this method, each voxel is first labeled according to the most dominant tissue type. Voxels that are prone to PVE are removed from this labeled set, following which robust location estimators with high breakdown points are used to estimate the mean and the covariance of each tissue class. Comparisons between different methods for parameter estimation based on classified images as well as an expectation-maximization-like (EM-like) procedure for simultaneous parameter and