Results 11–20 of 49
Learning in Gibbsian Fields: How Accurate and How Fast Can It Be?
, 2002
"... Gibbsian elds or Markov random elds are widely used in Bayesian image analysis, but learning Gibbs models is computationally expensive. The computational complexity is pronounced by the recent minimax entropy (FRAME) models which use large neighborhoods and hundreds of parameters[22]. In this pape ..."
Abstract

Cited by 13 (2 self)
Gibbsian fields or Markov random fields are widely used in Bayesian image analysis, but learning Gibbs models is computationally expensive. The computational cost is especially pronounced in the recent minimax entropy (FRAME) models, which use large neighborhoods and hundreds of parameters [22]. In this paper, we present a common framework for learning Gibbs models. We identify two key factors that determine the accuracy and speed of learning Gibbs models: the efficiency of likelihood functions and the variance in approximating partition functions using Monte Carlo integration. We propose three new algorithms. In particular, we are interested in a maximum satellite likelihood estimator, which makes use of a set of precomputed Gibbs models called "satellites" to approximate likelihood functions. This algorithm can approximately estimate the minimax entropy model for textures in seconds on an HP workstation. The performance of the various learning algorithms is compared in our experiments.
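The "satellite" idea above reuses samples from a precomputed reference model to approximate likelihood functions. A minimal sketch of the underlying importance-sampling identity, on a toy exponential-family model where the exact partition function can be checked (the model and feature are assumptions for illustration, not the paper's code):

```python
import numpy as np

# Toy 1-D Gibbs model on a finite state space: p_theta(x) ∝ exp(theta * phi(x)).
# Samples from a reference ("satellite") model at theta0 estimate the ratio
# Z(theta)/Z(theta0) = E_{theta0}[exp((theta - theta0) * phi(X))] by Monte Carlo,
# so the likelihood can be evaluated at many theta without fresh MCMC runs.

rng = np.random.default_rng(0)
states = np.arange(10)
phi = np.cos(states)            # arbitrary illustrative feature

def Z(theta):
    return np.exp(theta * phi).sum()   # exact sum, feasible only in this toy example

theta0, theta = 0.5, 1.2
p0 = np.exp(theta0 * phi) / Z(theta0)          # reference model at theta0
x = rng.choice(states, size=100_000, p=p0)     # draws from the satellite

ratio_mc = np.exp((theta - theta0) * phi[x]).mean()
ratio_exact = Z(theta) / Z(theta0)
print(ratio_mc, ratio_exact)
```

The estimate is accurate when theta is near theta0; its variance grows as the two models separate, which is why a set of satellites spread over the parameter space is useful.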
A survey of Monte Carlo algorithms for maximizing the likelihood of a two-stage hierarchical model
, 2001
"... Likelihood inference with hierarchical models is often complicated by the fact that the likelihood function involves intractable integrals. Numerical integration (e.g. quadrature) is an option if the dimension of the integral is low but quickly becomes unreliable as the dimension grows. An alternati ..."
Abstract

Cited by 10 (4 self)
Likelihood inference with hierarchical models is often complicated by the fact that the likelihood function involves intractable integrals. Numerical integration (e.g. quadrature) is an option if the dimension of the integral is low, but quickly becomes unreliable as the dimension grows. An alternative approach is to approximate the intractable integrals using Monte Carlo averages. Several different algorithms based on this idea have been proposed. In this paper we discuss the relative merits of simulated maximum likelihood, Monte Carlo EM, Monte Carlo Newton-Raphson and stochastic approximation. Key words and phrases: Efficiency, Monte Carlo EM, Monte Carlo Newton-Raphson, Rate of convergence, Simulated maximum likelihood, Stochastic approximation. All three authors were partially supported by NSF Grant DMS-0072827.
Using a Markov chain to construct a tractable approximation of an intractable probability distribution
 Scandinavian Journal of Statistics
, 2005
"... Abbreviated title. Approximating an intractable distribution ..."
Abstract

Cited by 8 (7 self)
Abbreviated title: Approximating an intractable distribution.
Monte Carlo maximum likelihood estimation for discretely observed diffusion processes
 The Annals of Statistics
, 2009
"... This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models and its performance in numerical examples is com ..."
Abstract

Cited by 8 (1 self)
This paper introduces a Monte Carlo method for maximum likelihood inference in the context of discretely observed diffusion processes. The method gives unbiased and a.s. continuous estimators of the likelihood function for a family of diffusion models, and it is computationally efficient in numerical examples. It uses a recently developed technique for the exact simulation of diffusions, and involves no discretization error. We show that, under regularity conditions, the Monte Carlo MLE converges a.s. to the true MLE. For data size n → ∞, we show that the number of Monte Carlo iterations should be tuned as O(n^{1/2}), and we demonstrate the consistency properties of the Monte Carlo MLE as an estimator of the true parameter value.
Simulation-based Inference for Spatial Point Processes
, 2001
"... Introduction Spatial point processes play a fundamental role in spatial statistics. In the simplest case they model \small" objects that may be identied by a map of points showing stores, towns, plants, nests, galaxies or cases of a disease observed in a two or three dimensional region. The points ..."
Abstract

Cited by 8 (1 self)
Spatial point processes play a fundamental role in spatial statistics. In the simplest case they model "small" objects that may be identified by a map of points showing stores, towns, plants, nests, galaxies or cases of a disease observed in a two- or three-dimensional region. The points may be decorated with marks (such as sizes or types), whereby marked point processes are obtained. The areas of application are manifold: astronomy, geography, ecology, forestry, spatial epidemiology, image analysis, and many more. Spatial point processes are currently an active area of research, which will probably be of increasing importance for many new applications, as new technology such as geographical information systems makes huge amounts of spatial point process data available. Textbooks and review articles on different aspects of spatial point processes include Matheron (1975), Ripley (1977), Ripley (1981), Diggle (1983), Penttinen (1984), Daley & Vere-Jones (1988),
Markov Connected Component Fields
"... A new class of Gibbsian models with potentials associated to the connected components or homogeneous parts of images is introduced. For these models the neighbourhood of a pixel is not fixed as for Markov random fields, but given by the components which are adjacent to the pixel. The relationship to ..."
Abstract

Cited by 7 (2 self)
A new class of Gibbsian models with potentials associated to the connected components or homogeneous parts of images is introduced. For these models the neighbourhood of a pixel is not fixed as for Markov random fields, but given by the components which are adjacent to the pixel. The relationship to Markov random fields and marked point processes is explored and spatial Markov properties are established. Also extensions to infinite lattices are studied, and statistical inference problems including geostatistical applications and statistical image analysis are discussed. Finally, simulation studies are presented which show that the models may be appropriate for a variety of interesting patterns including images exhibiting intermediate degrees of spatial continuity and images of objects against background.
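The defining ingredient above, a potential per connected component rather than per fixed pixel neighbourhood, can be sketched as follows (the log-size potential is a hypothetical choice for illustration, not the paper's specification):

```python
import math

# Label the 4-connected foreground components of a binary image and sum a
# per-component potential, as a Markov connected component field does.

def components(img):
    """Return the sizes of 4-connected components of 1-pixels, in scan order."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for i in range(rows):
        for j in range(cols):
            if img[i][j] and not seen[i][j]:
                stack, size = [(i, j)], 0
                seen[i][j] = True
                while stack:                      # iterative flood fill
                    r, c = stack.pop()
                    size += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < rows and 0 <= nc < cols and img[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
                sizes.append(size)
    return sizes

def energy(img, beta=1.0):
    # one potential term per connected component; log-size is a hypothetical potential
    return beta * sum(math.log(s) for s in components(img))

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
print(components(img))   # two foreground components, of sizes 3 and 2
```

Flipping one pixel can merge or split components, so the effective neighbourhood of a pixel is data-dependent, which is exactly the departure from fixed-neighbourhood Markov random fields described in the abstract.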
Ascent-based Monte Carlo EM
, 2004
"... The EM algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and highdimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Ca ..."
Abstract

Cited by 6 (2 self)
The EM algorithm is a popular tool for maximizing likelihood functions in the presence of missing data. Unfortunately, EM often requires the evaluation of analytically intractable and high-dimensional integrals. The Monte Carlo EM (MCEM) algorithm is the natural extension of EM that employs Monte Carlo methods to estimate the relevant integrals. Typically, a very large Monte Carlo sample size is required to estimate these integrals within an acceptable tolerance when the algorithm is near convergence. Even if this sample size were known at the onset of implementation of MCEM, its use throughout all iterations is wasteful, especially when accurate starting values are not available. We propose a data-driven strategy for controlling Monte Carlo resources in MCEM. The proposed algorithm improves on similar existing methods by: (i) recovering EM's ascent (i.e., likelihood-increasing) property with high probability, (ii) being more robust to the impact of user-defined inputs, and (iii) handling classical Monte Carlo and Markov chain Monte Carlo within a common framework. Because of (i) we refer to the algorithm as "Ascent-based MCEM". We apply Ascent-based MCEM to a variety of examples, including one where it is used to dramatically accelerate the convergence of deterministic EM.
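The ascent-recovery idea in (i) can be sketched schematically (the helper name and the simulated log-likelihood differences below are assumptions, not the authors' code): certify a candidate EM step only when a lower confidence bound on the Monte Carlo estimate of the Q-function increase is positive, and otherwise grow the Monte Carlo sample size and retry.

```python
import numpy as np

def ascent_check(diffs, z=1.645):
    """diffs[k]: complete-data log-likelihood at theta_new minus at theta_old,
    evaluated at the k-th Monte Carlo draw of the latent data.
    True if the asymptotic lower 95% bound on the mean increase is positive."""
    m = len(diffs)
    mean, se = np.mean(diffs), np.std(diffs, ddof=1) / np.sqrt(m)
    return mean - z * se > 0

rng = np.random.default_rng(2)
m = 50
while True:
    # stand-in for per-draw log-likelihood differences: small true ascent, noisy
    diffs = rng.normal(0.02, 0.5, size=m)
    if ascent_check(diffs):
        break
    m *= 2   # too noisy to certify ascent: increase the Monte Carlo sample size
print(m)
```

Near convergence the true increase shrinks toward zero, so the certified sample size grows, matching the abstract's observation that large samples are needed late in the run but wasteful early on.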
Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics
"... We consider the task of estimating, from observed data, a probabilistic model that is parameterized by a finite number of parameters. In particular, we are considering the situation where the model probability density function is unnormalized. That is, the model is only specified up to the partition ..."
Abstract

Cited by 6 (3 self)
We consider the task of estimating, from observed data, a probabilistic model that is parameterized by a finite number of parameters. In particular, we are considering the situation where the model probability density function is unnormalized. That is, the model is only specified up to the partition function. The partition function normalizes a model so that it integrates to one for any choice of the parameters. However, it is often impossible to obtain it in closed form. Gibbs distributions, Markov and multilayer networks are examples of models where analytical normalization is often impossible. Maximum likelihood estimation then cannot be used without resorting to numerical approximations, which are often computationally expensive. We propose here a new objective function for the estimation of both normalized and unnormalized models. The basic idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise. With this approach, the normalizing partition function can be estimated like any other parameter. We prove that the new estimation method leads to a consistent (convergent) estimator of the parameters. For large noise sample sizes, the new estimator is furthermore shown to behave like the maximum likelihood estimator. In the estimation of unnormalized models, there is a trade-off between statistical and computational performance. We show that the new method strikes a competitive trade-off in comparison to other estimation methods for unnormalized models. As an application to real data, we estimate novel two-layer models of natural image statistics with spline nonlinearities. Keywords: unnormalized models, partition function, computation, estimation, natural image statistics
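A minimal sketch of the logistic-regression idea, on an assumed toy model (an unnormalized Gaussian whose negative log partition function c is estimated like any other parameter; the data, noise distribution and learning-rate choices are illustrative only):

```python
import numpy as np

# Noise-contrastive estimation: log f(x; mu, c) = -0.5*(x - mu)^2 + c is the
# unnormalized model; logistic regression discriminates data from known noise,
# and c converges toward the true negative log normalizer, -log sqrt(2*pi).

rng = np.random.default_rng(3)
x_data = rng.normal(1.0, 1.0, size=20_000)     # observations from N(1, 1)
x_noise = rng.normal(0.0, 3.0, size=20_000)    # artificial noise from N(0, 9)

def log_noise(x):                               # known log density of N(0, 3^2)
    return -0.5 * (x / 3.0) ** 2 - np.log(3.0) - 0.5 * np.log(2 * np.pi)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

mu, c = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    # G(x) = log f(x; mu, c) - log p_noise(x), the classifier's log-odds
    Gd = (-0.5 * (x_data - mu) ** 2 + c) - log_noise(x_data)
    Gn = (-0.5 * (x_noise - mu) ** 2 + c) - log_noise(x_noise)
    wd = 1.0 - sigmoid(Gd)    # per-point weights in the gradient, data term
    wn = sigmoid(Gn)          # per-point weights, noise term
    # gradient ascent on the NCE objective
    mu += lr * (np.mean(wd * (x_data - mu)) - np.mean(wn * (x_noise - mu)))
    c += lr * (np.mean(wd) - np.mean(wn))

print(mu, c)   # c should approach -0.5*log(2*pi) ≈ -0.919
```

No partition function is ever computed: the discrimination objective is well defined for the unnormalized model, which is the point of the method.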
Simulation-Based Optimal Design
, 1999
"... We review simulation based methods in optimal design. Expected utility maximization, i.e., optimal design, is concerned with maximizing an mtegra! expression representing expected utility with respect to some design parameter. Except in special cases neither the maximization nor the integration can ..."
Abstract

Cited by 5 (0 self)
We review simulation-based methods in optimal design. Expected utility maximization, i.e., optimal design, is concerned with maximizing an integral expression representing expected utility with respect to some design parameter. Except in special cases, neither the maximization nor the integration can be solved analytically, and approximations and/or simulation-based methods are needed. On one hand, the integration problem is easier to solve than the integration appearing in posterior inference problems. This is because the expectation is with respect to the joint distribution of parameters and data, which typically allows efficient random variate generation. On the other hand, the problem is difficult because the integration is embedded in the maximization and may have to be evaluated many times for different design parameters. We discuss four related strategies: prior simulation; smoothing of Monte Carlo simulations; Markov chain Monte Carlo (MCMC) simulation in an augmented probability model; and a simulated annealing type approach.
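The first strategy, prior simulation, can be sketched on a toy problem (the model, utility and cost below are assumptions for illustration): draw (theta, y) from the joint, score each candidate design by a Monte Carlo average of the utility, and pick the maximizer.

```python
import numpy as np

# Prior simulation for expected-utility maximization: choose the number of
# observations n, trading estimation accuracy against a per-observation cost.
# Utility: minus the squared error of the posterior mean, minus cost * n.

rng = np.random.default_rng(4)
N = 50_000          # prior simulations per design
cost = 0.04         # assumed cost per observation

def expected_utility(n):
    theta = rng.normal(size=N)                      # theta ~ N(0, 1)
    ybar = theta + rng.normal(size=N) / np.sqrt(n)  # mean of n draws of y_j ~ N(theta, 1)
    post_mean = n * ybar / (n + 1)                  # posterior mean under the N(0, 1) prior
    return np.mean(-(theta - post_mean) ** 2) - cost * n

designs = range(1, 11)
u = [expected_utility(n) for n in designs]
best_n = max(designs, key=lambda n: u[n - 1])
print(best_n)   # analytic optimum here: maximize -1/(n+1) - 0.04 n, i.e. n = 4
```

Because joint draws (theta, y) are cheap, the integration is easy, as the abstract notes; the difficulty is that the average must be recomputed for every candidate design, which motivates the smoothing and MCMC strategies also discussed.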
The Simulated Likelihood Ratio (SLR) Method
, 1998
"... . A simulation method based on importance sampling, Gibbs and MetropolisHastings techniques allows to approximate the ratio between the likelihood function computed for two different parameter values. Thus it is possible to approximate the maximum likelihood estimator in the general framework of dy ..."
Abstract

Cited by 4 (0 self)
A simulation method based on importance sampling, Gibbs and Metropolis-Hastings techniques allows one to approximate the ratio between the likelihood function computed at two different parameter values. Thus it is possible to approximate the maximum likelihood estimator in the general framework of dynamic latent variable models. Some examples of this class of models are factor models, switching regime models, dynamic limited dependent variable models, stochastic volatility models and discretised continuous time models.
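The likelihood-ratio identity behind the SLR method can be sketched on a toy latent-variable model where the marginal likelihood is known exactly (the model and reference point are assumptions for illustration):

```python
import numpy as np

# Toy model: z ~ N(theta, 1) latent, y | z ~ N(z, 1) observed, so L(theta) = N(y; theta, 2).
# SLR identity: L(theta)/L(theta0) = E[ p(y, z; theta) / p(y, z; theta0) ] with
# z drawn from p(z | y; theta0), so draws at one reference theta0 serve to
# evaluate the likelihood ratio at any nearby theta.

rng = np.random.default_rng(5)
y, theta0, theta = 0.7, 0.0, 0.5

# exact latent posterior at theta0: z | y, theta0 ~ N((y + theta0)/2, 1/2);
# in an intractable model these draws would come from Gibbs or Metropolis-Hastings
z = rng.normal((y + theta0) / 2.0, np.sqrt(0.5), size=200_000)

# the joint ratio depends on theta only through the prior factor N(z; theta, 1)
w = np.exp(-0.5 * (z - theta) ** 2 + 0.5 * (z - theta0) ** 2)
ratio_mc = w.mean()

def L(t):   # exact marginal likelihood N(y; t, 2), available only in this toy case
    return np.exp(-0.25 * (y - t) ** 2) / np.sqrt(4 * np.pi)

print(ratio_mc, L(theta) / L(theta0))
```

Maximizing the estimated ratio over theta approximates the MLE, which is how the method is used for the dynamic latent variable models listed above.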