Results 1–10 of 99
Model-Based Clustering, Discriminant Analysis, and Density Estimation
Journal of the American Statistical Association, 2000
"... Cluster analysis is the automated search for groups of related observations in a data set. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures and most clustering methods available in commercial software are also of this type. However, there is little ..."
Abstract

Cited by 265 (24 self)
Cluster analysis is the automated search for groups of related observations in a data set. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures, and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as "How many clusters are there?", "Which clustering method should be used?", and "How should outliers be handled?". We outline a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology, a...
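As a concrete illustration of the model-based answer to the "how many clusters?" question, the sketch below fits Gaussian mixtures of increasing size and picks the number of components by BIC. It uses scikit-learn's GaussianMixture on synthetic data rather than the authors' own software, so treat it as a minimal stand-in for the methodology, not a reproduction of it:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: two well-separated Gaussian clusters in the plane.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# Fit mixtures with k = 1..5 components; BIC trades fit against
# complexity, so its minimizer estimates the number of clusters.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
print("estimated number of clusters:", min(bic, key=bic.get))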
Sequential Monte Carlo Samplers
2002
"... In this paper, we propose a general algorithm to sample sequentially from a sequence of probability distributions known up to a normalizing constant and de ned on a common space. A sequence of increasingly large arti cial joint distributions is built; each of these distributions admits a marginal ..."
Abstract

Cited by 139 (24 self)
In this paper, we propose a general algorithm to sample sequentially from a sequence of probability distributions known up to a normalizing constant and defined on a common space. A sequence of increasingly large artificial joint distributions is built; each of these distributions admits a marginal which is a distribution of interest. To sample from these distributions, we use sequential Monte Carlo methods. We show that these methods can be interpreted as interacting particle approximations of a nonlinear Feynman-Kac flow in distribution space. One interpretation of the Feynman-Kac flow corresponds to a nonlinear Markov kernel admitting a specified invariant distribution and is a natural nonlinear extension of the standard Metropolis-Hastings algorithm. Many theoretical results have already been established for such flows and their particle approximations. We demonstrate the use of these algorithms through simulation.
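A toy numpy sketch of the basic mechanics (reweight, resample, move), under the common choice of a tempered geometric path between an easy initial distribution and the target; the specific target, schedule, and random-walk move below are illustrative assumptions, not the paper's general construction:

import numpy as np

rng = np.random.default_rng(1)

def log_target(x):  # unnormalized target: mixture of N(-3,1) and N(3,1)
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

def log_init(x):    # initial distribution: N(0, 4^2), unnormalized
    return -0.5 * (x / 4.0)**2

n = 1000
betas = np.linspace(0.0, 1.0, 21)        # tempering schedule
x = rng.normal(0.0, 4.0, n)              # particles drawn from the initial law

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Incremental importance weight between consecutive artificial
    # distributions pi_b proportional to init^(1-b) * target^b.
    logw = (b - b_prev) * (log_target(x) - log_init(x))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x = x[rng.choice(n, size=n, p=w)]    # multinomial resampling
    # One random-walk Metropolis move leaving pi_b invariant.
    log_pi = lambda z: (1.0 - b) * log_init(z) + b * log_target(z)
    prop = x + rng.normal(0.0, 1.0, n)
    accept = np.log(rng.random(n)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)

print("mean of final particles (close to 0 by symmetry):", x.mean())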
Geometric Ergodicity and Hybrid Markov Chains
1997
"... Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, the socalled hybrid ..."
Abstract

Cited by 75 (24 self)
Various notions of geometric ergodicity for Markov chains on general state spaces exist. In this paper, we review certain relations and implications among them. We then apply these results to a collection of chains commonly used in Markov chain Monte Carlo simulation algorithms, the so-called hybrid chains. We prove that under certain conditions, a hybrid chain will "inherit" the geometric ergodicity of its constituent parts.
1 Introduction
A question of increasing importance in the Markov chain Monte Carlo literature (Gelfand and Smith, 1990; Smith and Roberts, 1993) is the issue of geometric ergodicity of Markov chains (Tierney, 1994, Section 3.2; Meyn and Tweedie, 1993, Chapters 15 and 16; Roberts and Tweedie, 1996). However, there are a number of different notions of the phrase "geometrically ergodic", depending on perspective (total variation distance vs. in L^2; with reference to a particular V function; etc.). One goal of this paper is to review and clarify the relationship...
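The ergodicity theory itself is not something a code snippet can demonstrate, but the object under study is easy to exhibit. The hypothetical sketch below composes two coordinate-wise Metropolis kernels, each invariant for a bivariate Gaussian target, into a deterministic-scan hybrid chain of the kind the paper analyzes:

import numpy as np

rng = np.random.default_rng(2)
cov_inv = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))

def log_pi(x):  # target: bivariate Gaussian with correlation 0.5
    return -0.5 * x @ cov_inv @ x

def metropolis_coord(x, i):
    """Component kernel: random-walk Metropolis update of coordinate i only."""
    prop = x.copy()
    prop[i] += rng.normal(0.0, 1.0)
    if np.log(rng.random()) < log_pi(prop) - log_pi(x):
        return prop
    return x

x, chain = np.zeros(2), []
for _ in range(5000):
    # The hybrid chain applies its component kernels in a fixed order;
    # each leaves the target invariant, so the composition does too.
    x = metropolis_coord(x, 0)
    x = metropolis_coord(x, 1)
    chain.append(x)

print("sample covariance:\n", np.cov(np.array(chain).T))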
Bayesian Curve Fitting Using MCMC With Applications to Signal Segmentation
IEEE Transactions on Signal Processing, 2002
"... We propose some Bayesian methods to address the problem of fitting a signal modeled by a sequence of piecewise constant linear (in the parameters) regression models, for example, autoregressive or Volterra models. A joint prior distribution is set up over the number of the changepoints/knots, their ..."
Abstract

Cited by 54 (0 self)
We propose some Bayesian methods to address the problem of fitting a signal modeled by a sequence of piecewise constant linear (in the parameters) regression models, for example, autoregressive or Volterra models. A joint prior distribution is set up over the number of changepoints/knots, their positions, and the orders of the linear regression models within each segment if these are unknown. Hierarchical priors are developed and, as the resulting posterior probability distributions and Bayesian estimators do not admit closed-form analytical expressions, reversible jump Markov chain Monte Carlo (MCMC) methods are derived to estimate these quantities. Results are obtained for standard problems of denoising and segmenting speech data that have already been examined in the literature. These results demonstrate the performance of our methods.
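To give a feel for this kind of changepoint inference, here is a deliberately simplified sketch: a piecewise-constant mean with Gaussian noise, segment levels integrated out analytically under a normal prior, and birth/death moves on the changepoint set. Because the levels are marginalized, the trans-dimensional moves reduce to ordinary Metropolis-Hastings on a discrete space; the paper's full reversible jump machinery is needed when segment parameters cannot be integrated out. All model settings below are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(3)
sigma, tau, theta = 0.5, 5.0, 0.05   # noise sd, level prior sd, per-changepoint penalty
y = np.concatenate([rng.normal(0.0, sigma, 60), rng.normal(3.0, sigma, 60)])
n = len(y)

def seg_logml(seg):
    """Marginal log-likelihood of a segment, level ~ N(0, tau^2) integrated out."""
    m, s, ss = len(seg), seg.sum(), (seg**2).sum()
    return (-0.5 * m * np.log(2 * np.pi)
            - 0.5 * ((m - 1) * np.log(sigma**2) + np.log(sigma**2 + m * tau**2))
            - 0.5 * (ss / sigma**2 - tau**2 * s**2 / (sigma**2 * (sigma**2 + m * tau**2))))

def log_post(cps):
    bounds = [0, *sorted(cps), n]
    return (sum(seg_logml(y[a:b]) for a, b in zip(bounds[:-1], bounds[1:]))
            + len(cps) * np.log(theta))

def propose(cps):
    """Birth/death proposal; returns new state and log Hastings ratio.
    (Ignores the negligible boundary case where every index is a changepoint.)"""
    free = [i for i in range(1, n) if i not in cps]
    if len(cps) == 0 or rng.random() < 0.5:          # birth
        new = sorted(cps + [free[rng.integers(len(free))]])
        log_fwd = np.log((1.0 if not cps else 0.5) / len(free))
        log_rev = np.log(0.5 / len(new))
    else:                                            # death
        new = sorted(cps)
        del new[rng.integers(len(new))]
        log_fwd = np.log(0.5 / len(cps))
        log_rev = np.log((1.0 if not new else 0.5) / (len(free) + 1))
    return new, log_rev - log_fwd

cps, k_counts = [], {}
for _ in range(20000):
    new, log_h = propose(cps)
    if np.log(rng.random()) < log_post(new) - log_post(cps) + log_h:
        cps = new
    k_counts[len(cps)] = k_counts.get(len(cps), 0) + 1

print("posterior on number of changepoints:",
      {k: round(v / 20000, 2) for k, v in sorted(k_counts.items())})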
Prediction via Orthogonalized Model Mixing
Journal of the American Statistical Association, 1994
"... In this paper we introduce an approach and algorithms for model mixing in large prediction problems with correlated predictors. We focus on the choice of predictors in linear models, and mix over possible subsets of candidate predictors. Our approach is based on expressing the space of models in ter ..."
Abstract

Cited by 50 (9 self)
In this paper we introduce an approach and algorithms for model mixing in large prediction problems with correlated predictors. We focus on the choice of predictors in linear models, and mix over possible subsets of candidate predictors. Our approach is based on expressing the space of models in terms of an orthogonalization of the design matrix. Advantages are both statistical and computational. Statistically, orthogonalization often leads to a reduction in the number of competing models by eliminating correlations. Computationally, large model spaces cannot be enumerated; recent approaches are based on sampling models with high posterior probability via Markov chains. Based on orthogonalization of the space of candidate predictors, we can approximate the posterior probabilities of models by products of predictor-specific terms. This leads to an importance sampling function for sampling directly from the joint distribution over the model space, without resorting to Markov chains. Comp...
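The following sketch shows why orthogonalization buys independence, under simplifying assumptions not in the abstract (known noise variance, a spike-and-slab prior on the orthogonalized coefficients): after a QR decomposition the sufficient statistics decouple across columns, so posterior inclusion probabilities are computed per component and whole models can be drawn i.i.d. as an importance sampling function:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n, p = 200, 6
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)     # strongly correlated predictors
beta = np.array([2.0, 0.0, 0.0, 1.5, 0.0, 0.0])
y = X @ beta + rng.normal(0.0, 1.0, n)

Q, _ = np.linalg.qr(X)      # orthonormal basis for the predictor space
z = Q.T @ y                 # independent N(gamma_j, sigma^2) statistics

sigma, tau, prior_p = 1.0, 3.0, 0.5
# Spike-and-slab on the orthogonalized coefficients gamma_j:
# gamma_j = 0 with prob 1 - prior_p, else gamma_j ~ N(0, tau^2).
log_odds = (np.log(prior_p / (1 - prior_p))
            + norm.logpdf(z, 0.0, np.sqrt(sigma**2 + tau**2))
            - norm.logpdf(z, 0.0, sigma))
incl = 1.0 / (1.0 + np.exp(-log_odds))
print("per-component inclusion probabilities:", incl.round(2))

# Product-form importance sampler: whole models drawn i.i.d., no Markov chain.
models = rng.random((1000, p)) < incl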
Joint Bayesian Model Selection and Estimation of Noisy Sinusoids via Reversible Jump MCMC
1999
"... In this paper, the problem of joint Bayesian model selection and parameter estimation for sinusoids in white Gaussian noise is addressed. An original Bayesian model is proposed that allows us to define a posterior distribution on the parameter space. All Bayesian inference is then based on this dist ..."
Abstract

Cited by 40 (3 self)
In this paper, the problem of joint Bayesian model selection and parameter estimation for sinusoids in white Gaussian noise is addressed. An original Bayesian model is proposed that allows us to define a posterior distribution on the parameter space. All Bayesian inference is then based on this distribution. Unfortunately, a direct evaluation of this distribution and of its features, including posterior model probabilities, requires evaluation of some complicated high-dimensional integrals. We develop an efficient stochastic algorithm based on reversible jump Markov chain Monte Carlo methods to perform the Bayesian computation. A convergence result for this algorithm is established. In simulation, detection based on posterior model probabilities outperforms conventional detection schemes.
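A toy version of the model selection step, under assumptions the abstract does not spell out (known noise variance, a g-prior on the amplitudes so they integrate out analytically, candidate frequencies fixed at periodogram peaks rather than explored by MCMC): compare marginal likelihoods across the number of sinusoids:

import numpy as np

rng = np.random.default_rng(6)
N, sigma = 128, 1.0
g = float(N)                               # unit-information g-prior scale (an assumption)
t = np.arange(N)
y = 1.5 * np.cos(2 * np.pi * 0.1 * t + 0.8) + rng.normal(0.0, sigma, N)

def design(freqs):
    cols = [f(2 * np.pi * fr * t) for fr in freqs for f in (np.cos, np.sin)]
    return np.column_stack(cols) if cols else np.zeros((N, 0))

def log_ml(freqs):
    """Marginal log-likelihood with amplitudes integrated out under the
    g-prior; constants common to all model orders are dropped."""
    D = design(freqs)
    fit = 0.0
    if D.shape[1]:
        coef = np.linalg.lstsq(D, y, rcond=None)[0]
        fit = y @ (D @ coef)               # y' P y, projection onto col(D)
    return (-0.5 * D.shape[1] * np.log(1 + g)
            - 0.5 * (y @ y - g / (1 + g) * fit) / sigma**2)

# Candidate frequencies: largest periodogram peaks (skipping DC).
per = np.abs(np.fft.rfft(y))**2
cand = (np.argsort(per[1:])[::-1] + 1) / N
for k in range(3):
    print(k, "sinusoid(s): relative log marginal =", round(log_ml(cand[:k]), 1))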
Efficient Construction of Reversible Jump Markov Chain Monte Carlo Proposal Distributions
Journal of the Royal Statistical Society: Series B (Statistical Methodology)
"... Summary. The major implementational problem for reversible jump Markov chain Monte Carlo methods is that there is commonly no natural way to choose jump proposals since there is no Euclidean structure in the parameter space to guide our choice. We consider mechanisms for guiding the choice of propos ..."
Abstract

Cited by 38 (2 self)
Summary. The major implementational problem for reversible jump Markov chain Monte Carlo methods is that there is commonly no natural way to choose jump proposals, since there is no Euclidean structure in the parameter space to guide our choice. We consider mechanisms for guiding the choice of proposal. The first group of methods is based on an analysis of acceptance probabilities for jumps. Essentially, these methods involve a Taylor series expansion of the acceptance probability around certain canonical jumps and turn out to have close connections to Langevin algorithms. The second group of methods generalizes the reversible jump algorithm by using the so-called saturated space approach. These allow the chain to retain some degree of memory so that, when proposing to move from a smaller to a larger model, information is borrowed from the last time that the reverse move was performed. The main motivation for this paper is that, in complex problems, the probability that the Markov chain moves between such spaces may be prohibitively small, as the probability mass can be very thinly spread across the space. Therefore, finding reasonable jump proposals becomes extremely important. We illustrate the procedure by using several examples of reversible jump Markov chain Monte Carlo applications, including the analysis of autoregressive time series, graphical Gaussian modelling and mixture modelling.
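For readers new to the acceptance probabilities being tuned here, the sketch below computes the reversible jump acceptance ratio, Jacobian included, for a hypothetical pair of toy models (one and two parameters, both posteriors normalized, equal prior model probabilities, so the chain should visit each model about half the time). The paper's contribution is choosing such proposals well; this shows only the baseline computation those methods refine:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def log_post(k, th):
    """Normalized posteriors of two toy models with equal prior probability."""
    if k == 1:
        return norm.logpdf(th[0], 0.0, 1.0)
    return norm.logpdf(th[0], -1.0, 1.0) + norm.logpdf(th[1], 1.0, 1.0)

k, th, visits = 1, np.array([0.0]), {1: 0, 2: 0}
for _ in range(50000):
    if k == 1:
        # Jump 1 -> 2: draw u ~ N(0,1), map th -> (th - u, th + u).
        u = rng.normal()
        prop = np.array([th[0] - u, th[0] + u])
        # target ratio minus proposal density plus log|Jacobian| (= log 2)
        log_a = (log_post(2, prop) - log_post(1, th)
                 - norm.logpdf(u, 0.0, 1.0) + np.log(2.0))
        if np.log(rng.random()) < log_a:
            k, th = 2, prop
    else:
        # Deterministic reverse jump 2 -> 1 recovers (th, u).
        u = (th[1] - th[0]) / 2.0
        prop = np.array([(th[0] + th[1]) / 2.0])
        log_a = (log_post(1, prop) - log_post(2, th)
                 + norm.logpdf(u, 0.0, 1.0) - np.log(2.0))
        if np.log(rng.random()) < log_a:
            k, th = 1, prop
    # A within-model random-walk move keeps the chain mixing.
    cand = th + rng.normal(0.0, 0.5, len(th))
    if np.log(rng.random()) < log_post(k, cand) - log_post(k, th):
        th = cand
    visits[k] += 1

print({m: round(v / 50000, 2) for m, v in visits.items()})  # about {1: 0.5, 2: 0.5}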
Sequential MCMC for Bayesian model selection
IEEE Higher Order Statistics Workshop, 1999
"... In this paper, we address the problem of sequential Bayesian model selection. This problem does not usually admit any closedform analytical solution. We propose here an original sequential simulationbased method to solve the associated Bayesian computational problems. This method combines sequenti ..."
Abstract

Cited by 36 (16 self)
In this paper, we address the problem of sequential Bayesian model selection. This problem does not usually admit any closed-form analytical solution. We propose here an original sequential simulation-based method to solve the associated Bayesian computational problems. This method combines sequential importance sampling, a resampling procedure, and reversible jump MCMC moves. We describe a generic algorithm and then apply it to the problem of sequential Bayesian model order estimation of autoregressive (AR) time series observed in additive noise.
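The core loop, reduced to its simplest possible instance (two fully specified candidate models with no unknown parameters, so the paper's reversible jump rejuvenation moves are unnecessary and omitted): particles carry model indicators, weights accumulate predictive likelihoods as data arrive, and resampling is triggered by the effective sample size. Everything concrete here is an illustrative assumption:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
data = rng.normal(1.0, 1.0, 50)               # truth matches model 2 below

n_part = 500
models = rng.integers(1, 3, n_part)           # particle = model indicator, uniform prior
logw = np.zeros(n_part)

for y in data:
    mu = np.where(models == 1, 0.0, 1.0)      # model 1: mean 0; model 2: mean 1
    logw += norm.logpdf(y, mu, 1.0)           # sequential importance weight update
    w = np.exp(logw - logw.max())
    w /= w.sum()
    if 1.0 / (w**2).sum() < n_part / 2:       # resample when the ESS degenerates
        models = models[rng.choice(n_part, n_part, p=w)]
        logw = np.zeros(n_part)

w = np.exp(logw - logw.max())
w /= w.sum()
print("P(model 2 | data) =", round(w[models == 2].sum(), 3))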
Semi-Supervised Linear Spectral Unmixing Using a Hierarchical Bayesian Model for Hyperspectral Imagery
IRIT/ENSEEIHT/TeSA, 2007
"... Abstract—This paper proposes a hierarchical Bayesian model that can be used for semisupervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by an additive Gaussian noise. The abundance parameters a ..."
Abstract

Cited by 31 (21 self)
This paper proposes a hierarchical Bayesian model that can be used for semi-supervised hyperspectral image unmixing. The model assumes that the pixel reflectances result from linear combinations of pure component spectra contaminated by an additive Gaussian noise. The abundance parameters appearing in this model satisfy positivity and additivity constraints. These constraints are naturally expressed in a Bayesian context by using appropriate abundance prior distributions. The posterior distributions of the unknown model parameters are then derived. A Gibbs sampler allows one to draw samples distributed according to the posteriors of interest and to estimate the unknown abundances. An extension of the algorithm is finally studied for mixtures with unknown numbers of spectral components belonging to a known library. The performance of the different unmixing strategies is evaluated via simulations conducted on synthetic and real data.
Index Terms: Gibbs sampler, hierarchical Bayesian analysis, hyperspectral images, linear spectral unmixing, Markov chain Monte Carlo (MCMC) methods, reversible jumps.
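A minimal sketch of the constrained inference problem, with a Metropolis-Hastings move standing in for the paper's Gibbs sampler: a Dirichlet proposal centred on the current abundance vector keeps every draw positive and summing to one, so the constraints hold by construction. The endmember matrix, noise level, and proposal concentration are all made-up illustrative values:

import numpy as np
from scipy.stats import dirichlet

rng = np.random.default_rng(9)
L, R, sigma = 30, 3, 0.02                 # spectral bands, endmembers, noise sd
M = rng.random((L, R))                    # stand-in endmember spectra
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true + rng.normal(0.0, sigma, L)

def log_lik(a):
    r = y - M @ a
    return -0.5 * (r @ r) / sigma**2

c = 200.0                                 # proposal concentration
a, samples = np.ones(R) / R, []
for _ in range(20000):
    prop = rng.dirichlet(c * a + 0.5)     # positive, sums to one by construction
    log_r = (log_lik(prop) - log_lik(a)   # uniform simplex prior cancels
             + dirichlet.logpdf(a, c * prop + 0.5)
             - dirichlet.logpdf(prop, c * a + 0.5))
    if np.log(rng.random()) < log_r:
        a = prop
    samples.append(a)

print("posterior mean abundances:", np.mean(samples[5000:], axis=0).round(3))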
A Note on Metropolis-Hastings Kernels for General State Spaces
The Annals of Applied Probability, 1998
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 30 (2 self)
 Add to MetaCart
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at