Results 1–10 of 25
Separation of Nonnegative Mixture of Nonnegative Sources using a Bayesian Approach and MCMC Sampling
, 2004
"... This paper considers the problem of blind source separation in the case where both the source signals and the mixing coefficients are nonnegatives. The problem is referred to as nonnegative source separation and the analysis is achieved in a Bayesian framework by taking the nonnegativity of sourc ..."
Abstract

Cited by 40 (16 self)
This paper considers the problem of blind source separation in the case where both the source signals and the mixing coefficients are nonnegative. The problem is referred to as nonnegative source separation, and the analysis is carried out in a Bayesian framework by taking the nonnegativity of the source signals and mixing coefficients as prior information. Since the main application concerns the analysis of spectral signals, Gamma densities are used as priors to jointly encode nonnegativity, sparsity and a possible background in the sources. The source signals and the mixing coefficients are estimated by running a Markov chain Monte Carlo (MCMC) algorithm that samples their joint posterior density. Synthetic and experimental results motivate the problem of nonnegative source separation and illustrate the effectiveness of the proposed method.
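The scheme described above can be sketched with a toy Metropolis sampler: a Gaussian likelihood for the mixtures, independent Gamma priors on every entry of the mixing matrix and the sources, and negative proposals rejected outright because their prior density is zero. All hyperparameter values below are illustrative assumptions, and the paper's actual sampler for the joint posterior is more elaborate than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2 nonnegative sources, nonnegative 2x2 mixing matrix.
T = 100
S_true = rng.gamma(shape=2.0, scale=1.0, size=(2, T))
A_true = np.array([[0.8, 0.3], [0.2, 0.9]])
X = A_true @ S_true + 0.05 * rng.standard_normal((2, T))

alpha, beta = 1.5, 1.0  # Gamma prior hyperparameters (assumed values)
sigma = 0.05            # noise level, taken as known here for simplicity

def log_post(A, S):
    # Gaussian likelihood + independent Gamma(alpha, beta) priors;
    # nonnegativity enters as prior density zero on negative values.
    if np.any(A < 0) or np.any(S < 0):
        return -np.inf
    resid = X - A @ S
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            + np.sum((alpha - 1) * np.log(A + 1e-300) - beta * A)
            + np.sum((alpha - 1) * np.log(S + 1e-300) - beta * S))

A = rng.gamma(2.0, 1.0, size=(2, 2))
S = rng.gamma(2.0, 1.0, size=(2, T))
cur = log_post(A, S)
for it in range(5000):
    # Alternate random-walk Metropolis updates of A and of S.
    A_prop = A + 0.02 * rng.standard_normal(A.shape)
    cand = log_post(A_prop, S)
    if np.log(rng.uniform()) < cand - cur:
        A, cur = A_prop, cand
    S_prop = S + 0.02 * rng.standard_normal(S.shape)
    cand = log_post(A, S_prop)
    if np.log(rng.uniform()) < cand - cur:
        S, cur = S_prop, cand
```

In practice one would tune the step sizes against the observed acceptance rates and report posterior means of A and S over post-burn-in draws.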
Supervised Learning of Quantizer Codebooks by Information Loss Minimization
, 2007
"... This paper proposes a technique for jointly quantizing continuous features and the posterior distributions of their class labels based on minimizing empirical information loss, such that the index K of the quantizer region to which a given feature X is assigned approximates a sufficient statistic fo ..."
Abstract

Cited by 33 (0 self)
This paper proposes a technique for jointly quantizing continuous features and the posterior distributions of their class labels based on minimizing empirical information loss, such that the index K of the quantizer region to which a given feature X is assigned approximates a sufficient statistic for its class label Y. We derive an alternating minimization procedure for simultaneously learning codebooks in the Euclidean feature space and in the simplex of posterior class distributions. The resulting quantizer can be used to encode unlabeled points outside the training set and to predict their posterior class distributions, and has an elegant interpretation in terms of lossless source coding. The proposed method is extensively validated on synthetic and real datasets, and is applied to two diverse problems: learning discriminative visual vocabularies for bag-of-features image classification, and image segmentation.
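The alternating minimization can be illustrated with a simplified sketch: each codeword carries a centroid in feature space and a centroid in the posterior simplex, points are assigned by a weighted sum of squared Euclidean distance and KL divergence, and both centroids are then updated as means over their assigned points (the mean is the KL centroid for this orientation of the divergence). The toy data, codebook size K, and trade-off weight lam are invented for illustration; the paper's actual objective is the empirical information loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 2-D features with soft class posteriors over 2 classes.
n = 200
X = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(4, 1, (n // 2, 2))])
P = np.vstack([np.tile([0.9, 0.1], (n // 2, 1)),
               np.tile([0.1, 0.9], (n // 2, 1))])

K, lam = 4, 1.0                          # codebook size, trade-off weight
mu = X[rng.choice(n, K, replace=False)]  # feature-space centroids
q = np.full((K, 2), 0.5)                 # posterior-simplex centroids

def kl(p, q):
    # KL(p || q) for every row of p against every row of q -> (n, K).
    return np.sum(p[:, None, :] * (np.log(p[:, None, :] + 1e-12)
                                   - np.log(q[None, :, :] + 1e-12)), axis=2)

for _ in range(20):
    # Assignment step: squared distance plus lam * divergence of posteriors.
    d = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2) + lam * kl(P, q)
    z = d.argmin(axis=1)
    # Update step: both centroids are means over the assigned points.
    for k in range(K):
        if np.any(z == k):
            mu[k] = X[z == k].mean(axis=0)
            q[k] = P[z == k].mean(axis=0)
```

Each codeword k then encodes a region of feature space together with a predicted class posterior q[k] for unlabeled points that fall into it.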
Transdimensional Markov Chains: A Decade of Progress and Future Perspectives
 Journal of the American Statistical Association
, 2005
"... The last ten years have witnessed the development of sampling frameworks that permit the construction of Markov chains which simultaneously traverse both parameter and model space. In this time substantial methodological progress has been made. In this article we present a survey of the current stat ..."
Abstract

Cited by 18 (2 self)
The last ten years have witnessed the development of sampling frameworks that permit the construction of Markov chains which simultaneously traverse both parameter and model space. In this time substantial methodological progress has been made. In this article we present a survey of the current state of the art and evaluate some of the most recent advances in this field. We also discuss future research perspectives in the context of the drive to develop sampling mechanisms with high degrees of both efficiency and automation.
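A minimal sketch of such a transdimensional chain: a reversible-jump sampler hopping between a one-parameter and a two-parameter model, each with standard normal targets and assumed weights 0.3 and 0.7. Because the birth proposal for the new coordinate matches its target density, the proposal terms cancel, the Jacobian is 1, and the acceptance ratio reduces to the ratio of model weights; everything in this example is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy transdimensional target: model 1 (one N(0,1) coordinate, weight 0.3)
# versus model 2 (two independent N(0,1) coordinates, weight 0.7).
w = {1: 0.3, 2: 0.7}

def log_phi(x):
    return -0.5 * x * x - 0.5 * np.log(2 * np.pi)

k, theta = 1, np.array([0.0])
visits2 = 0
n_iter = 20000
for _ in range(n_iter):
    if rng.uniform() < 0.5:
        # Within-model random-walk Metropolis update.
        prop = theta + 0.5 * rng.standard_normal(theta.shape)
        if np.log(rng.uniform()) < np.sum(log_phi(prop)) - np.sum(log_phi(theta)):
            theta = prop
    else:
        # Between-model birth/death move: the new coordinate is drawn from
        # its own target density, so the acceptance ratio is just w2/w1.
        if k == 1:
            u = rng.standard_normal()
            if np.log(rng.uniform()) < np.log(w[2] / w[1]):
                k, theta = 2, np.append(theta, u)
        else:
            if np.log(rng.uniform()) < np.log(w[1] / w[2]):
                k, theta = 1, theta[:1]
    visits2 += (k == 2)

frac2 = visits2 / n_iter  # should settle near w[2] / (w[1] + w[2]) = 0.7
```

The fraction of iterations spent in model 2 estimates its posterior model probability, which is the quantity these samplers are built to deliver.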
An Empirical Bayes Approach to Contextual Region Classification
 In CVPR
, 2009
"... This paper presents a nonparametric approach to labeling of local image regions that is inspired by recent developments in informationtheoretic denoising. The chief novelty of this approach rests in its ability to derive an unsupervised contextual prior over image classes from unlabeled test data. ..."
Abstract

Cited by 15 (0 self)
This paper presents a nonparametric approach to labeling of local image regions that is inspired by recent developments in information-theoretic denoising. The chief novelty of this approach rests in its ability to derive an unsupervised contextual prior over image classes from unlabeled test data. Labeled training data is needed only to learn a local appearance model for image patches (although additional supervisory information can optionally be incorporated when it is available). Instead of assuming a parametric prior such as a Markov random field for the class labels, the proposed approach uses the empirical Bayes technique of statistical inversion to recover a contextual model directly from the test data, either as a spatially varying or as a globally constant prior distribution over the classes in the image. Results on two challenging datasets convincingly demonstrate that useful contextual information can indeed be learned from unlabeled data.
Assessing the Distinguishability of Models and the Informativeness of Data
"... A difficulty in the development and testing of psychological models is that they are typically evaluated solely on their ability to fit experimental data, with little consideration given to their ability to fit other possible data patterns. By examining how well model A fits data generated by mod ..."
Abstract

Cited by 13 (2 self)
A difficulty in the development and testing of psychological models is that they are typically evaluated solely on their ability to fit experimental data, with little consideration given to their ability to fit other possible data patterns. By examining how well model A fits data generated by model B, and vice versa (a technique that we call landscaping), much safer inferences can be made about the meaning of a model's fit to data. We demonstrate the landscaping technique using four models of retention and 77 historical data sets, and show how the method can be used to (1) evaluate the distinguishability of models, (2) evaluate the informativeness of data in distinguishing between models, and (3) suggest new ways to distinguish between models. The generality of the method is demonstrated in two other research areas (information integration and categorization), and its relationship to the important notion of model complexity is discussed.
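A minimal landscaping sketch, using two classic retention models — the power model y = a t^(-b) and the exponential model y = a e^(-bt) — each fit by least squares in log space. The generating parameters, noise level, and number of simulated data sets are invented for illustration; the paper's study uses four retention models and 77 historical data sets.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(1.0, 11.0)  # ten retention intervals

def fit_sse(x, y):
    # SSE of a least-squares line predicting log y from x.
    A = np.vstack([np.ones_like(x), x]).T
    coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
    return np.sum((A @ coef - np.log(y)) ** 2)

def power_win_rate(gen, n_sets=200):
    # Fraction of datasets drawn from `gen` that the power model fits
    # better than the exponential model (multiplicative lognormal noise).
    count = 0
    for _ in range(n_sets):
        y = gen() * np.exp(0.05 * rng.standard_normal(t.shape))
        # power model: log y linear in log t; exponential: linear in t.
        count += fit_sse(np.log(t), y) < fit_sse(t, y)
    return count / n_sets

frac_power_data = power_win_rate(lambda: 0.9 * t ** -1.0)       # data from power model
frac_expon_data = power_win_rate(lambda: 0.9 * np.exp(-0.3 * t))  # data from exponential model
```

Counting which model wins on data generated by each model maps out the landscape: when each model reliably wins only on its own data, as here, the two models are distinguishable by these data.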
Computational advances for and from Bayesian analysis
 Statist. Sci
, 2004
"... Abstract. The emergence in the past years of Bayesian analysis in many methodological and applied fields as the solution to the modeling of complex problems cannot be dissociated from major changes in its computational implementation. We show in this review how the advances in Bayesian analysis and ..."
Abstract

Cited by 10 (0 self)
The emergence in recent years of Bayesian analysis in many methodological and applied fields as the solution to the modeling of complex problems cannot be dissociated from major changes in its computational implementation. We show in this review how the advances in Bayesian analysis and statistical computation are intermingled. Key words and phrases: Monte Carlo methods, importance sampling, Markov chain Monte Carlo (MCMC) algorithms.
Posterior propriety and admissibility of hyperpriors in normal hierarchical models
 The Annals of Statistics
, 2005
"... Hierarchical modeling is wonderful and here to stay, but hyperparameter priors are often chosen in a casual fashion. Unfortunately, as the number of hyperparameters grows, the effects of casual choices can multiply, leading to considerably inferior performance. As an extreme, but not uncommon, examp ..."
Abstract

Cited by 6 (2 self)
Hierarchical modeling is wonderful and here to stay, but hyperparameter priors are often chosen in a casual fashion. Unfortunately, as the number of hyperparameters grows, the effects of casual choices can multiply, leading to considerably inferior performance. As an extreme, but not uncommon, example, use of the wrong hyperparameter priors can even lead to impropriety of the posterior. For exchangeable hierarchical multivariate normal models, we first determine when a standard class of hierarchical priors results in proper or improper posteriors. We next determine which elements of this class lead to admissible estimators of the mean under quadratic loss; such considerations provide one useful guideline for choice among hierarchical priors. Finally, computational issues with the resulting posterior distributions are addressed.
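For orientation, exchangeable normal hierarchies of the kind studied here take a generic two-level form such as the following (an illustrative specification only, not the paper's exact model):

```latex
\begin{align*}
x_i \mid \theta_i &\sim N_p(\theta_i, \Sigma), \qquad i = 1, \dots, m,\\
\theta_i \mid \mu, \Lambda &\sim N_p(\mu, \Lambda),\\
(\mu, \Lambda) &\sim \pi(\mu)\,\pi(\Lambda),
\end{align*}
```

with the hyperprior \(\pi(\mu)\,\pi(\Lambda)\) being the object whose casual choice can break posterior propriety.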
Multivariate Bayesian Function Estimation
, 2006
"... Bayesian methods are developed for the multivariate nonparametric regression problem where the domain is taken to be a compact Riemannian manifold. In terms of the latter, the underlying geometry of the manifold induces certain symmetries on the multivariate nonparametric regression function. The Ba ..."
Abstract

Cited by 5 (1 self)
Bayesian methods are developed for the multivariate nonparametric regression problem where the domain is taken to be a compact Riemannian manifold. The underlying geometry of the manifold induces certain symmetries on the multivariate nonparametric regression function. The Bayesian approach then allows one to incorporate hierarchical Bayesian methods directly into the spectral structure, thus providing a symmetry-adaptive multivariate Bayesian function estimator. One can also diffuse away some of the prior information, with a smoothing spline on the manifold as the limiting case. This, together with the result that the smoothing spline solution attains the minimax rate of convergence in the multivariate nonparametric regression problem, provides good frequentist properties for the Bayes estimators. An application to astronomy is included.
Algorithms for Planning under Uncertainty in Prediction and Sensing
 Chapter 18 in Autonomous Mobile Robots: Sensing, Control, Decision-Making, and Applications
, 2005
"... ..."
Detecting Poor Convergence of Posterior Samplers due to Multimodality
"... Computation in Bayesian statistical models is often performed using sampling techniques such as Markov chain Monte Carlo (MCMC) or adaptive Monte Carlo methods. The convergence of the sampler to the posterior distribution is typically assessed using a set of standard diagnostics; recent draft Food ..."
Abstract

Cited by 3 (1 self)
Computation in Bayesian statistical models is often performed using sampling techniques such as Markov chain Monte Carlo (MCMC) or adaptive Monte Carlo methods. The convergence of the sampler to the posterior distribution is typically assessed using a set of standard diagnostics; recent draft Food and Drug Administration guidelines for the use of Bayesian statistics in medical device trials, for instance, advocate this approach for validating computations. We give several examples showing that this approach may be insufficient when the posterior distribution is multimodal: lack of convergence due to posterior multimodality can go undetected by the standard convergence diagnostics, including the Gelman-Rubin diagnostic that was introduced for exactly this problem. We show that the poor convergence can be detected by modifying a validation technique that was originally proposed for detecting coding errors in MCMC software.
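The failure mode can be reproduced in a few lines: run several Metropolis chains on a well-separated bimodal target, start them all in the same mode, and compute the basic Gelman-Rubin statistic. The chains agree with one another almost perfectly while jointly missing half the posterior mass. The target, step size, and chain length below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):
    # Well-separated bimodal density: 0.5*N(-10,1) + 0.5*N(10,1).
    return np.logaddexp(-0.5 * (x + 10) ** 2, -0.5 * (x - 10) ** 2)

def run_chain(x0, n=2000, step=1.0):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        xs[i] = x
    return xs

# All four chains start in the SAME mode; none will cross to -10.
chains = np.array([run_chain(10.0 + rng.standard_normal()) for _ in range(4)])

def gelman_rubin(chains):
    m, n = chains.shape
    means = chains.mean(axis=1)
    B = n * means.var(ddof=1)               # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    return np.sqrt(((n - 1) / n * W + B / n) / W)

rhat = gelman_rubin(chains)
# rhat sits near 1 even though the mode at -10 was never visited.
```

Overdispersed starting points spanning both modes would let the diagnostic fire; the point of the abstract is that in practice one rarely knows where the unvisited modes are.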