Results 1–10 of 19
Computational and Inferential Difficulties With Mixture Posterior Distributions
Journal of the American Statistical Association, 1999
Abstract

Cited by 167 (14 self)
This paper deals with both exploration and interpretation problems related to posterior distributions for mixture models. The specification of mixture posterior distributions means that the presence of k! modes is known immediately. Standard Markov chain Monte Carlo techniques usually have difficulties with well-separated modes such as occur here; the Markov chain Monte Carlo sampler stays within a neighbourhood of a local mode and fails to visit other equally important modes. We show that exploration of these modes can be imposed on the Markov chain Monte Carlo sampler using tempered transitions based on Langevin algorithms. However, as the prior distribution does not distinguish between the different components, the posterior mixture distribution is symmetric and thus standard estimators such as posterior means cannot be used. Since this is also true for most nonsymmetric priors, we propose alternatives for Bayesian inference for permutation-invariant posteriors, including a cluster...
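The label-switching symmetry this abstract describes can be seen in a short sketch. The code below is a generic pivotal-relabelling illustration with hypothetical numbers, not the cluster-based estimator the paper proposes: posterior draws of two component means swap labels at random, so the naive posterior mean is useless until each draw is relabelled by the permutation closest to a fixed reference.

```python
import itertools
import numpy as np

def relabel_draws(draws, reference):
    # For each posterior draw of the k component means, apply the label
    # permutation that brings it closest (in squared distance) to a fixed
    # reference point. Generic pivotal relabelling; the paper's own
    # estimators for permutation-invariant posteriors are different.
    k = draws.shape[1]
    perms = [list(p) for p in itertools.permutations(range(k))]
    out = np.empty_like(draws)
    for i, d in enumerate(draws):
        best = min(perms, key=lambda p: np.sum((d[p] - reference) ** 2))
        out[i] = d[best]
    return out

rng = np.random.default_rng(0)
# Simulated draws from a label-switching posterior: component means near
# (-2, 2), but the two labels swap at random from draw to draw.
draws = np.array([-2.0, 2.0]) + 0.1 * rng.standard_normal((1000, 2))
swap = rng.random(1000) < 0.5
draws[swap] = draws[swap][:, ::-1]

naive = draws.mean(axis=0)   # both entries near 0: useless under symmetry
fixed = relabel_draws(draws, np.sort(draws[0])).mean(axis=0)  # near (-2, 2)
```

With k components there are k! permutations to search, which is exactly the k!-mode structure the abstract refers to; for large k one would match labels with an assignment algorithm rather than brute force.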
Estimating mixtures of regressions
Abstract

Cited by 55 (2 self)
In this paper, we show how Bayesian inference for switching regression models and their generalisations can be achieved by the specification of loss functions which overcome the label switching problem common to all mixture models. We also derive an extension to models where the number of components in the mixture is unknown, based on the birth-and-death technique developed in Stephens (2000a). The methods are illustrated on various real datasets.
Double Markov Random Fields and Bayesian Image Segmentation
2002
Abstract

Cited by 28 (0 self)
Markov random fields are used extensively in model-based approaches to image segmentation and, under the Bayesian paradigm, are implemented through Markov chain Monte Carlo (MCMC) methods. In this paper, we describe a class of such models (the double Markov random field) for images composed of several textures, which we consider to be the natural hierarchical model for such a task. We show how several of the Bayesian approaches in the literature can be viewed as modifications of this model, made in order to make MCMC implementation possible. A simulation study yields conclusions concerning the performance of these modified models.
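The MCMC machinery behind such segmentation models can be sketched in a few lines. The following is a generic single-MRF Gibbs sampler for a two-label Potts (Ising-type) model with Gaussian noise, not the paper's double Markov random field; the emission parameters and all numbers are hypothetical.

```python
import numpy as np

def gibbs_sweep(labels, y, beta, mu, sigma, rng):
    # One Gibbs sweep over a 2-label Potts (Ising-type) MRF posterior, the
    # usual MCMC workhorse behind Bayesian image segmentation. Generic
    # single-MRF sketch, not the paper's double MRF; mu/sigma are
    # assumed-known per-label Gaussian noise parameters.
    n, m = labels.shape
    for i in range(n):
        for j in range(m):
            nb = [labels[x, z]
                  for x, z in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                  if 0 <= x < n and 0 <= z < m]
            logp = np.array([
                beta * sum(1 for v in nb if v == k)           # prior: agreeing neighbours
                - 0.5 * ((y[i, j] - mu[k]) / sigma[k]) ** 2   # Gaussian likelihood
                for k in (0, 1)
            ])
            prob = np.exp(logp - logp.max())
            prob /= prob.sum()
            labels[i, j] = rng.choice(2, p=prob)
    return labels

# Toy two-region image with Gaussian noise (all numbers hypothetical)
rng = np.random.default_rng(1)
truth = np.zeros((8, 8), dtype=int)
truth[:, 4:] = 1
y = np.where(truth == 1, 2.0, 0.0) + 0.5 * rng.standard_normal((8, 8))
labels = rng.integers(0, 2, size=(8, 8))
for _ in range(10):
    labels = gibbs_sweep(labels, y, beta=1.0, mu=(0.0, 2.0),
                         sigma=(0.5, 0.5), rng=rng)
```

A few sweeps are enough to recover the two regions here; the modifications the paper discusses concern how a second, texture-level random field can be layered on top of this while keeping such sweeps tractable.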
Bayesian model-based clustering procedures
 Journal of Computational and Graphical Statistics
Abstract

Cited by 24 (2 self)
This article establishes a general formulation for Bayesian model-based clustering, in which subset labels are exchangeable, and items are also exchangeable, possibly up to covariate effects. The notational framework is rich enough to encompass a variety of existing procedures, including some recently discussed methods involving stochastic search or hierarchical clustering, but more importantly allows the formulation of clustering procedures that are optimal with respect to a specified loss function. Our focus is on loss functions based on pairwise coincidences, that is, whether pairs of items are clustered into the same subset or not. Optimization of the posterior expected loss function can be formulated as a binary integer programming problem, which can be readily solved by standard software when clustering a modest number of items, but quickly becomes impractical as problem scale increases. To combat this, a new heuristic item-swapping algorithm is introduced. This performs well in our numerical experiments, on both simulated and real data examples. The article includes a comparison of the statistical performance of the (approximate) optimal clustering with earlier methods that are model-based but ad hoc in their detailed definition.
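The pairwise-coincidence loss and a greedy relabelling heuristic in the spirit of the item-swapping algorithm can be sketched as follows. The co-clustering probabilities and cost structure here are hypothetical (equal unit costs for both error types), and the heuristic's details are an illustration, not the paper's algorithm.

```python
import numpy as np

def binder_loss(labels, p):
    # Posterior expected pairwise-coincidence (Binder-type) loss of a hard
    # clustering, given a matrix p of posterior co-clustering probabilities.
    # Equal unit costs for the two kinds of pairwise error (an assumption).
    same = labels[:, None] == labels[None, :]
    iu = np.triu_indices(len(labels), k=1)
    return float(np.sum(np.where(same[iu], 1.0 - p[iu], p[iu])))

def item_swap(p, labels, sweeps=20):
    # Greedy one-item-at-a-time relabelling: a sketch in the spirit of the
    # item-swapping heuristic described in the abstract (details differ).
    labels = labels.copy()
    for _ in range(sweeps):
        changed = False
        k = labels.max() + 1          # existing clusters, plus one empty one
        for i in range(len(labels)):
            cur_label, cur_loss = labels[i], binder_loss(labels, p)
            for c in range(k + 1):
                labels[i] = c
                loss = binder_loss(labels, p)
                if loss < cur_loss:
                    cur_loss, cur_label, changed = loss, c, True
            labels[i] = cur_label
        if not changed:
            break
    return labels

# Toy co-clustering matrix with two clear blocks (hypothetical numbers)
p = np.full((6, 6), 0.1)
p[:3, :3] = 0.9
p[3:, 3:] = 0.9
np.fill_diagonal(p, 1.0)
est = item_swap(p, np.zeros(6, dtype=int))   # recovers the two blocks
```

Recomputing the full loss at every candidate move is O(n^2) per item and is only for clarity; an incremental update over the affected pairs is the practical choice at scale, which is the regime where the exact binary-programming formulation becomes impractical.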
Bayesian Object Recognition with Baddeley's Delta Loss
1995
Abstract

Cited by 24 (2 self)
A common problem in Bayesian object recognition using marked point process models is to produce a point estimate of the true underlying object configuration. In the Bayesian framework we could use decision theory and the concept of loss functions to design a more reasonable estimator for the true underlying object configuration than the common zero-one loss, which corresponds to the maximum a posteriori estimator. We propose to use the Δ-metric of Baddeley (1992) as our loss function. It is demonstrated that the optimal Bayesian estimator corresponding to the Δ-metric can be well approximated by combining Markov chain Monte Carlo methods with simulated annealing into a two-step algorithm. The proposed loss function is tested using a marked point process model developed for locating cells in confocal microscopy images. In order to obtain reliable results, we must include moves that split and fuse objects within the Markov chain Monte Carlo framework. The exper...
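The Δ-metric itself is easy to compute for binary images: it is an L^p average of the difference between (cutoff-truncated) distance transforms, so misplacing an object far away costs more than misplacing it slightly, unlike a pixelwise zero-one loss. The sketch below follows the common form of Baddeley (1992) with w(t) = min(t, c); the brute-force distance transform and all numbers are illustrative.

```python
import numpy as np

def dist_to_set(img):
    # Brute-force Euclidean distance from every pixel to the nearest
    # foreground (True) pixel; fine for small images like these.
    pts = np.argwhere(img)
    grid = np.argwhere(np.ones_like(img))
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(img.shape)

def baddeley_delta(a, b, p=2, cutoff=5.0):
    # Baddeley's Delta metric between two binary images: an L^p average of
    # the difference of cutoff-truncated distance transforms, following the
    # common form with concave transform w(t) = min(t, c).
    da = np.minimum(dist_to_set(a), cutoff)
    db = np.minimum(dist_to_set(b), cutoff)
    return float(np.mean(np.abs(da - db) ** p) ** (1.0 / p))

a = np.zeros((9, 9), dtype=bool)
a[4, 4] = True
near = np.zeros_like(a); near[4, 5] = True   # object off by one pixel
far = np.zeros_like(a);  far[4, 8] = True    # object off by four pixels
# Pixelwise (zero-one) error is identical for `near` and `far`,
# but the Delta metric penalises the larger displacement more.
```

This distance sensitivity is exactly what makes the metric a more reasonable loss than zero-one loss for object configurations, and what the MCMC-plus-annealing step then minimises in posterior expectation.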
The Nishimori line and Bayesian Statistics
J. Phys. A: Math. Gen., 1999
Abstract

Cited by 17 (1 self)
Abstract. The “Nishimori line” is a line or hypersurface in the parameter space of systems with quenched disorder, where simple expressions for the averages of physical quantities over the quenched random variables are obtained. It has been playing an important role in theoretical studies of frustrated random systems since its discovery around 1980. In this paper, a novel interpretation of the Nishimori line from the viewpoint of statistical information processing is presented. Our main aim is the reconstruction of the whole theory of the Nishimori line from the viewpoint of Bayesian statistics, or, almost equivalently, from the viewpoint of the theory of error-correcting codes. As a by-product of our interpretation, counterparts of the Nishimori line in models without gauge invariance are given. We also discuss issues concerning the “finite-temperature decoding” of error-correcting codes in connection with our theme and clarify the role of gauge invariance in this topic.
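For concreteness, in the canonical ±J Ising spin glass with bond distribution P(J_ij = +J) = p, a standard statement of the Nishimori line is the locus where the inverse temperature of the posterior matches the noise level of the disorder:

```latex
e^{2\beta J} = \frac{p}{1-p},
\qquad\text{equivalently}\qquad
\tanh(\beta J) = 2p - 1 .
```

In the Bayesian reading sketched by this paper, this is the condition that sampling is carried out at the true hyperparameter values, which is why finite-temperature (marginal-posterior) decoding of error-correcting codes is Bayes-optimal precisely on this line.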
Bayesian Image Classification with Baddeley's Delta Loss
Journal of Computational and Graphical Statistics, 1997
Abstract

Cited by 7 (3 self)
In this paper we adopt Baddeley's delta metric (Baddeley 1992) as a loss function in Bayesian image restoration and classification. We develop a new algorithm that allows us to estimate the corresponding optimal Bayesian estimator, for which good practical estimates can be obtained at approximately the same cost as traditional estimators like the marginal posterior mode (MPM). A comparison of this classification with MPM shows significant advantages, especially with respect to fine structures. Keywords: Bayesian inference; unsymmetric loss functions; image restoration; Markov chain Monte Carlo methods; Metropolis algorithm; distance between binary images. Address for correspondence: Department of Mathematical Sciences, The Norwegian Institute of Technology, N-7034 Trondheim, Norway. Email: havard.rue@imf.unit.no.
Identification Of Partly Destroyed Objects Using Deformable Templates
Statist. Comput., 1997
Abstract

Cited by 6 (2 self)
This article addresses the problem of identification of partly destroyed human melanoma cancer cells in confocal microscopy imaging. Complete cancer cells are nearly circular and most of them have a nearly homogeneous boundary and interior region. A deformable template (Grenander, 1993) is well suited for these complete cells and models a cell as a naturally deformed template or prototype. In this article we focus on the remaining cells, which have lost parts of the boundary region, most probably due to a "capping" phenomenon. We can interpret these cells as being partly destroyed, where in our statistical model the lost part of the boundary region is generated by a destructive deformation field acting and living on the cell or template. By doing simultaneous inference for both the natural and destructive deformation fields, we are able to obtain reliable estimates of the outline, in addition to where on the boundary the cell is destroyed. We apply our model to identifying partly destro...
Bridging Viterbi and Posterior Decoding: A Generalized Risk Approach to Hidden Path Inference Based on Hidden Markov Models
Abstract

Cited by 1 (0 self)
Motivated by the unceasing interest in hidden Markov models (HMMs), this paper reexamines hidden path inference in these models, using primarily a risk-based framework. While the most common maximum a posteriori (MAP), or Viterbi, path estimator and the minimum-error, or Posterior Decoder (PD), have long been around, other path estimators, or decoders, have been either only hinted at or applied more recently and in dedicated applications generally unfamiliar to the statistical learning community. Over a decade ago, however, a family of algorithmically defined decoders aiming to hybridize the two standard ones was proposed elsewhere. The present paper gives a careful analysis of this hybridization approach, identifies several problems and issues with it and other previously proposed approaches, and proposes practical resolutions of those. Furthermore, simple modifications of the classical criteria for hidden path recognition are shown to lead to a new class of decoders. Dynamic programming algorithms to compute these decoders in the usual forward-backward manner are presented. A particularly interesting subclass of such estimators can also be viewed as hybrids of the MAP and PD estimators. Similar to
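The two standard decoders the abstract contrasts can be sketched side by side: Viterbi maximises the probability of the whole state path, while posterior decoding maximises each per-position marginal via forward-backward. The toy 2-state HMM below uses purely illustrative numbers; on short sequences the two often agree, but in general they need not, which is the gap the hybrid decoders aim to bridge.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    # MAP (Viterbi) path: the single state path maximising the joint
    # probability of states and observations, by dynamic programming.
    T, K = len(obs), len(pi)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)   # scores[i, j]: best path i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

def posterior_decode(pi, A, B, obs):
    # Posterior decoding: argmax of each per-position state marginal,
    # computed with the usual scaled forward-backward recursions.
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.ones((T, K))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)
    return [int(g.argmax()) for g in gamma]

# Hypothetical 2-state toy HMM (all numbers illustrative only)
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 1, 0]
map_path = viterbi(pi, A, B, obs)          # whole-path optimum
pd_path = posterior_decode(pi, A, B, obs)  # positionwise optimum
```

Note the asymmetry in the risks: a PD path minimises expected pointwise error but may have zero probability as a path (e.g., it can use a forbidden transition), while the Viterbi path can be pointwise poor; the decoders analysed in the paper interpolate between these two criteria.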