Results 1–10 of 16
Computational and Inferential Difficulties With Mixture Posterior Distributions
 Journal of the American Statistical Association
, 1999
Abstract

Cited by 113 (12 self)
This paper deals with both exploration and interpretation problems related to posterior distributions for mixture models. The specification of mixture posterior distributions means that the presence of k! modes is known immediately. Standard Markov chain Monte Carlo techniques usually have difficulties with well-separated modes such as occur here; the Markov chain Monte Carlo sampler stays within a neighbourhood of a local mode and fails to visit other, equally important modes. We show that exploration of these modes can be imposed on the Markov chain Monte Carlo sampler using tempered transitions based on Langevin algorithms. However, as the prior distribution does not distinguish between the different components, the posterior mixture distribution is symmetric and thus standard estimators such as posterior means cannot be used. Since this is also true for most non-symmetric priors, we propose alternatives for Bayesian inference for permutation-invariant posteriors, including a cluster...
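The label-switching symmetry described in this abstract is easy to demonstrate numerically. The sketch below is a toy illustration only (it uses simulated sampler output and a simple ordering constraint, not the paper's tempered Langevin transitions or its loss-function alternatives): when the sampler visits both of the k! = 2 symmetric modes, naive posterior means of the component parameters collapse, while relabelling each draw restores identifiable estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate MCMC output for a 2-component mixture whose sampler freely
# label-switches: each draw of (mu_1, mu_2) is a random permutation of
# "true" component means (-2, 2) plus posterior noise (hypothetical data).
true_means = np.array([-2.0, 2.0])
draws = true_means + 0.1 * rng.standard_normal((5000, 2))
swap = rng.integers(0, 2, size=5000).astype(bool)
draws[swap] = draws[swap][:, ::-1]  # switch labels on roughly half the draws

# Naive posterior means are useless: the symmetry drives both towards 0.
naive = draws.mean(axis=0)

# One simple fix: impose the identifiability constraint mu_1 < mu_2
# by sorting each draw before averaging.
relabelled = np.sort(draws, axis=1).mean(axis=0)
print(naive, relabelled)
```

The ordering constraint is the crudest possible relabelling rule; it works here only because the two modes are well separated in the mean parameter.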
Estimating mixtures of regressions
Abstract

Cited by 32 (2 self)
In this paper, we show how Bayesian inference for switching regression models and their generalisations can be achieved by the specification of loss functions which overcome the label-switching problem common to all mixture models. We also derive an extension to models where the number of components in the mixture is unknown, based on the birth-and-death technique developed in Stephens (2000a). The methods are illustrated on various real datasets.
Double Markov Random Fields and Bayesian Image Segmentation
, 2002
Abstract

Cited by 23 (0 self)
Markov random fields are used extensively in model-based approaches to image segmentation and, under the Bayesian paradigm, are implemented through Markov chain Monte Carlo (MCMC) methods. In this paper, we describe a class of such models (the double Markov random field) for images composed of several textures, which we consider to be the natural hierarchical model for such a task. We show how several of the Bayesian approaches in the literature can be viewed as modifications of this model, made in order to make MCMC implementation possible. From a simulation study, we draw conclusions concerning the performance of these modified models.
Bayesian Object Recognition with Baddeley's Delta Loss
, 1995
Abstract

Cited by 16 (2 self)
A common problem in Bayesian object recognition using marked point process models is to produce a point estimate of the true underlying object configuration. In the Bayesian framework we could use decision theory and the concept of loss functions to design a more reasonable estimator for the true underlying object configuration than the one given by the common zero-one loss, which corresponds to the maximum a posteriori (MAP) estimator. We propose to use the Δ-metric of Baddeley (1992) as our loss function. It is demonstrated that the optimal Bayesian estimator corresponding to the Δ-metric can be well approximated by combining Markov chain Monte Carlo methods with simulated annealing into a two-step algorithm. The proposed loss function is tested using a marked point process model developed for locating cells in confocal microscopy images. In order to obtain reliable results, we must include moves that split and fuse objects within the Markov chain Monte Carlo framework. The exper...
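Baddeley's Δ-metric compares two binary images through their truncated distance transforms rather than pixel-by-pixel agreement. A minimal sketch of the metric itself (not the paper's two-step MCMC/annealing estimator; the cut-off c and order p values here are arbitrary choices for illustration) might look like this:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def baddeley_delta(a, b, c=5.0, p=2):
    """Baddeley's Delta_p metric between two binary images (sketch).

    For each pixel x, d(x, S) is the Euclidean distance to the nearest
    foreground pixel of S, truncated at the cut-off c; the metric is the
    L_p average of |d(x, a) - d(x, b)| over all pixels.
    """
    da = np.minimum(distance_transform_edt(~a), c)
    db = np.minimum(distance_transform_edt(~b), c)
    return (np.abs(da - db) ** p).mean() ** (1.0 / p)

# A translated square: a pixelwise zero-one count ignores geometry,
# while the Delta metric responds smoothly to the displacement.
img = np.zeros((32, 32), dtype=bool)
img[10:20, 10:20] = True
shifted = np.roll(img, 3, axis=1)
print(baddeley_delta(img, img), baddeley_delta(img, shifted))
```

Because the distance transforms change gradually around object boundaries, small spatial perturbations incur small loss, which is exactly the property that makes Δ more forgiving than the zero-one loss for object configurations.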
The Nishimori line and Bayesian Statistics
 J. Phys. A: Math. Gen
, 1999
Abstract

Cited by 11 (1 self)
The “Nishimori line” is a line or hypersurface in the parameter space of systems with quenched disorder, where simple expressions for the averages of physical quantities over the quenched random variables are obtained. It has played an important role in theoretical studies of frustrated random systems since its discovery around 1980. In this paper, a novel interpretation of the Nishimori line from the viewpoint of statistical information processing is presented. Our main aim is the reconstruction of the whole theory of the Nishimori line from the viewpoint of Bayesian statistics, or, almost equivalently, from the viewpoint of the theory of error-correcting codes. As a by-product of our interpretation, counterparts of the Nishimori line in models without gauge invariance are given. We also discuss issues concerning the “finite-temperature decoding” of error-correcting codes in connection with our theme and clarify the role of gauge invariance in this topic.
Bayesian Image Classification with Baddeley's Delta Loss
 Journal of Computer Graphics and Statistics
, 1997
Abstract

Cited by 7 (3 self)
In this paper we adopt Baddeley's delta metric (Baddeley 1992) as a loss function in Bayesian image restoration and classification. We develop a new algorithm that allows us to estimate the corresponding optimal Bayesian estimator, for which good practical estimates can be obtained at approximately the same cost as traditional estimators like the marginal posterior mode (MPM). A comparison of such classification with MPM shows significant advantages, especially with respect to fine structures. Keywords: Bayesian inference; Unsymmetric loss functions; Image restoration; Markov chain Monte Carlo methods; Metropolis algorithm; Distance between binary images.
Identification Of Partly Destroyed Objects Using Deformable Templates
 Statist. Comput
, 1997
Abstract

Cited by 3 (2 self)
This article addresses the problem of identification of partly destroyed human melanoma cancer cells in confocal microscopy imaging. Complete cancer cells are nearly circular and most of them have a nearly homogeneous boundary and interior region. A deformable template (Grenander, 1993) is well suited for these complete cells and models a cell as a natural deformed template or prototype. We will in this article focus on the remaining cells which have lost parts of the boundary region most probably due to a "capping" phenomenon. We can interpret these cells as being partly destroyed, where in our statistical model the lost part of the boundary region is generated by a destructive deformation field acting and living on the cell or template. By doing simultaneous inference for both the natural and destructive deformation field, we are able to obtain reliable estimates of the outline in addition to where on the boundary the cell is destroyed. We apply our model to identifying partly destro...
An Estimator For Images With Ordered Colors
, 1994
Abstract

Cited by 1 (1 self)
A common problem in Bayesian imaging is to find a point estimate of the true image based on the observed data and prior information through the posterior distribution. Common estimators for this purpose include the posterior mode and mean, which correspond to the zero-one and (pixel-by-pixel) quadratic losses. However, both estimators have a fundamental weakness: the loss does not depend on the spatial structure of the errors. This is important because systematic structure in the errors can lead to misinterpretation of the estimated image. Typically, false objects may be introduced or true objects hidden. We propose a loss function that checks local linear dependency in the error. Furthermore, we obtain an estimator which has a low expected squared sample covariance in the errors. We present simulation results for some artificial data. Keywords: Bayesian inference; Loss functions; Image reconstruction; Image restoration.
Loss Functions for Bayesian Image Analysis
, 1997
Abstract

Cited by 1 (0 self)
This paper discusses the role of loss functions in Bayesian image classification, object recognition and identification, and reviews the use of a particular loss function which produces visually attractive estimates.

1 INTRODUCTION

Bayesian image analysis often involves calculating point estimates of some appropriate quantity, for example the grey level at each pixel (in image reconstruction), the label type of each pixel (in image classification), or the number of objects together with their contours and types (in object recognition and identification). Within a decision-theoretic framework, the approach to finding an estimator is first to select an appropriate loss function L(x, z), which gives the loss when the true value is x but we estimate it by z. The optimal Bayes estimator (OBE) x* is then computed by minimising the posterior expectation of this loss (the Bayes risk),

x* = arg min_z E[L(x, z)].  (1)

Commonly used losses L(x, z) are L_MAP = 1[x ≠ z] and L_MPM = Σ_i 1[x_i ≠ z_i] ...
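Given posterior samples, the two losses named in this abstract lead to different point estimates: the zero-one loss L_MAP picks the single most probable whole image, while the pixel-by-pixel loss L_MPM picks each pixel's marginal posterior mode independently. A minimal sketch, assuming hypothetical MCMC-style samples of a tiny 1-D binary "image" (the flip probability and sizes are invented for illustration):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

# Toy posterior over binary images of length 6, represented by samples:
# each sample flips every pixel of a reference image with probability 0.15.
truth = np.array([1, 1, 1, 0, 0, 0])
samples = np.array([np.where(rng.random(6) < 0.15, 1 - truth, truth)
                    for _ in range(2000)])

# L_MAP = 1[x != z] (zero-one loss on the whole image):
# minimised by the most frequent complete sample.
counts = Counter(map(tuple, samples))
x_map = np.array(counts.most_common(1)[0][0])

# L_MPM = sum_i 1[x_i != z_i] (pixel-by-pixel zero-one loss):
# minimised by the marginal posterior mode, i.e. a per-pixel majority vote.
x_mpm = (samples.mean(axis=0) > 0.5).astype(int)
print(x_map, x_mpm)
```

In this toy posterior the two estimates coincide; the losses reviewed in the paper (such as Baddeley's Δ) are designed precisely for the cases where whole-image and pixelwise criteria pull in different directions.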