Results 1–10 of 12
Generalized SURE for exponential families: Applications to regularization
 IEEE Trans. on Signal Processing, 2009
Learning to be Bayesian without supervision
 in Adv. Neural Information Processing Systems (NIPS*06), 2007
Abstract

Cited by 25 (6 self)
Bayesian estimators are defined in terms of the posterior distribution. Typically, this is written as the product of the likelihood function and a prior probability density, both of which are assumed to be known. But in many situations, the prior density is not known, and is difficult to learn from data since one does not have access to uncorrupted samples of the variable being estimated. We show that for a wide variety of observation models, the Bayes least squares (BLS) estimator may be formulated without explicit reference to the prior. Specifically, we derive a direct expression for the estimator, and a related expression for the mean squared estimation error, both in terms of the density of the observed measurements. Each of these prior-free formulations allows us to approximate the estimator given a sufficient amount of observed data. We use the first form to develop practical nonparametric approximations of BLS estimators for several different observation processes, and the second form to develop a parametric family of estimators for use in the additive Gaussian noise case. We examine the empirical performance of these estimators as a function of the amount of observed data.
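For additive Gaussian noise, the prior-free form of the BLS estimator described in this abstract reduces to Miyasawa's identity, E[θ|y] = y + σ²·(d/dy) log p(y), which involves only the measurement density p(y). A minimal sketch of approximating it from data (the two-point prior, kernel bandwidth, and sample size below are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0

# Unknown prior (used only to simulate data): theta is +/-2 with equal probability.
theta = rng.choice([-2.0, 2.0], size=20000)
y = theta + sigma * rng.normal(size=theta.size)

# Kernel density estimate of the measurement density p(y).
def p_hat(t, h=0.2):
    return np.mean(np.exp(-0.5 * ((t - y) / h) ** 2)) / (h * np.sqrt(2 * np.pi))

# Prior-free BLS estimator via Miyasawa's identity:
#   E[theta | y] = y + sigma^2 * d/dy log p(y),
# with the log-derivative taken by finite differences on the KDE.
def bls(t, eps=1e-3):
    return t + sigma**2 * (np.log(p_hat(t + eps)) - np.log(p_hat(t - eps))) / (2 * eps)

# For this two-point prior the oracle posterior mean is 2*tanh(2*y/sigma^2),
# so bls(1.5) should land near 2*tanh(3).
print(bls(1.5), 2 * np.tanh(3.0))
```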
Eaton's Markov chain, its conjugate partner and P-admissibility
 Annals of Statistics, 1999
Abstract

Cited by 6 (5 self)
Suppose that X is a random variable with density f(x|θ) and that π(θ|x) is a proper posterior corresponding to an improper prior ν(θ). The prior is called P-admissible if the generalized Bayes estimator of every bounded function of θ is almost-admissible under squared error loss. Eaton (1992) showed that recurrence of the Markov chain with transition density R(η|θ) = ∫ π(η|x) f(x|θ) dx is a sufficient condition for P-admissibility of ν(θ). We show that Eaton's Markov chain is recurrent if and only if its conjugate partner, with transition density
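A concrete instance of Eaton's chain (this Gaussian example is our illustration, not the paper's): with X ~ N(θ, 1) and the flat improper prior, the proper posterior is N(x, 1), and one transition draws x from the model and then η from the posterior, so each step adds N(0, 2) noise and the chain is a one-dimensional Gaussian random walk, which is recurrent:

```python
import numpy as np

rng = np.random.default_rng(1)

# One transition of Eaton's chain for X ~ N(theta, 1) with the flat prior:
# draw x ~ f(.|theta) = N(theta, 1), then eta ~ pi(.|x) = N(x, 1).
def step(theta):
    x = theta + rng.normal()
    return x + rng.normal()

# The composed step is theta -> theta + N(0, 2): a recurrent random walk in 1-D.
chain = [0.0]
for _ in range(10000):
    chain.append(step(chain[-1]))
chain = np.array(chain)

# Empirical check: the increments should have mean near 0 and variance near 2.
print(np.diff(chain).mean(), np.diff(chain).var())
```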
Least Squares Estimation Without Priors or Supervision
2011
Abstract

Cited by 5 (1 self)
Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values. Here, we consider the problem of obtaining a least squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least squares (NEBLS) estimator, which expresses the optimal least squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an expectation over the measurement density alone, thus generalizing Stein’s unbiased risk estimator (SURE).
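In the additive Gaussian special case this generalizes, Stein's unbiased risk estimator expresses the MSE of an estimator using only the measurements. A hedged sketch for soft thresholding, where SURE(λ) = nσ² − 2σ²·#{|yᵢ| ≤ λ} + Σ min(yᵢ², λ²) (the sparse signal model and parameters are illustrative, and soft thresholding is a standard example, not the paper's NEBLS construction):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, n = 1.0, 50000
theta = np.where(rng.random(n) < 0.1, 5.0, 0.0)   # sparse true values
y = theta + sigma * rng.normal(size=n)

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# SURE for the soft-thresholding estimator: an unbiased estimate of the
# total squared error, computed from the measurements y alone.
def sure(lam):
    return (n * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= lam)
            + np.sum(np.minimum(y**2, lam**2)))

lam = 1.5
true_sse = np.sum((soft(y, lam) - theta) ** 2)   # uses theta, for comparison only
print(sure(lam), true_sse)  # the two totals should be close
```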
Efficient Multivariate Skellam Shrinkage for Denoising Photon-Limited Image Data: An Empirical Bayes Approach
 In Proceedings of the International Conference on Image Processing, 2009
Abstract

Cited by 3 (0 self)
In this article we address the issue of denoising photon-limited image data by deriving new and efficient multivariate Bayesian estimators that approximate the conditional expectation of Haar wavelet and filterbank transform coefficients of Poisson data—coefficients that take the so-called Skellam distribution. We show that in this setting, the posterior mean under a Bayesian model forms the solution to a linear differential equation, owing in part to the recursive property of the Skellam distribution. We then propose a practical approach to solve—approximately—this differential equation, and arrive at a near mean-square-optimal Skellam mean estimator that is both computationally efficient and amenable to an Empirical Bayes approach. We then derive three approaches to shrinkage based on smoothing the marginal likelihood of the data, and demonstrate their superior performance relative to state-of-the-art approaches for both natural test images and examples from nuclear medicine. Index Terms — Image denoising, Poisson distribution, wavelets.
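The Skellam model arises because an unnormalized Haar detail coefficient of Poisson counts is a difference of two independent Poisson variables, with mean λ₁ − λ₂ and variance λ₁ + λ₂. A quick empirical check of that fact (the intensities below are illustrative; this is not the paper's shrinkage estimator):

```python
import numpy as np

rng = np.random.default_rng(3)
lam1, lam2 = 7.0, 3.0

# Two independent Poisson counts, e.g. neighboring pixels in a photon-limited
# image; their difference is the unnormalized Haar detail coefficient.
x1 = rng.poisson(lam1, size=200000)
x2 = rng.poisson(lam2, size=200000)
d = x1 - x2  # Skellam(lam1, lam2): mean lam1 - lam2, variance lam1 + lam2

print(d.mean(), d.var())  # should be near 4 and 10 respectively
```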
C.P.: Inference from multinomial data based on a MLE-dominance criterion
 In: Proc. European Conf. on Symbolic and Quantitative Approaches to Reasoning and Uncertainty (ECSQARU), 2009
Abstract

Cited by 2 (1 self)
Abstract. We consider the problem of inference from multinomial data with chances θ, subject to the a priori information that the true parameter vector θ belongs to a known convex polytope Θ. The proposed estimator has the parametrized structure of the conditional-mean estimator with a prior Dirichlet distribution, whose parameters (s,t) are suitably designed via a dominance criterion so as to guarantee, for any θ ∈ Θ, an improvement of the Mean Squared Error over the Maximum Likelihood Estimator (MLE). The solution of this MLE-dominance problem allows us to give a different interpretation of: (1) the several Bayesian estimators proposed in the literature for the problem of inference from multinomial data; (2) the Imprecise Dirichlet Model (IDM) developed by Walley [13].
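The parametrized structure referred to here is the Dirichlet conditional-mean rule θ̂ = (counts + s·t) / (N + s). A sketch comparing its MSE with the MLE at one particular θ (the choice of s and t below is ad hoc for illustration; it is not the paper's dominance-based design, which guarantees improvement over the whole polytope):

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.array([0.6, 0.3, 0.1])   # true chances (simulation only)
N, s = 20, 2.0
t = np.ones(3) / 3                  # Dirichlet prior parameters (s, t): shrink toward uniform

# Many replications of a multinomial experiment with N trials.
counts = rng.multinomial(N, theta, size=20000)
mle = counts / N
cm = (counts + s * t) / (N + s)     # conditional-mean (Dirichlet posterior mean) estimator

mse_mle = np.mean(np.sum((mle - theta) ** 2, axis=1))
mse_cm = np.mean(np.sum((cm - theta) ** 2, axis=1))
print(mse_mle, mse_cm)  # the shrinkage estimator does better at this theta
```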
Improving on the maximum likelihood estimators of the means in Poisson decomposable graphical models
2005
Abstract

Cited by 2 (2 self)
The METR technical reports are published as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.
Learning least squares estimators without assumed priors or supervision
2009
Abstract

Cited by 2 (1 self)
The two standard methods of obtaining a least-squares optimal estimator are (1) Bayesian estimation, in which one assumes a prior distribution on the true values and combines this with a model of the measurement process to obtain an optimal estimator, and (2) supervised regression, in which one optimizes a parametric estimator over a training set containing pairs of corrupted measurements and their associated true values. But many real-world systems do not have access to either supervised training examples or a prior model. Here, we study the problem of obtaining an optimal estimator given a measurement process with known statistics, and a set of corrupted measurements of random values drawn from an unknown prior. We develop a general form of nonparametric empirical Bayesian estimator that is written as a direct function of the measurement density, with no explicit reference to the prior. We study the observation conditions under which such “prior-free” estimators may be obtained, and we derive specific forms for a variety of different corruption processes. Each of these prior-free estimators may also be used to express the mean squared estimation error as an expectation over the measurement density, thus generalizing Stein’s unbiased risk estimator (SURE) which provides such an expression for the additive Gaussian noise case. Minimizing this expression over measurement samples provides an “unsupervised
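Minimizing such an unbiased risk expression over an estimator's free parameter gives an unsupervised way to tune it, without ever seeing the true values. A sketch using SURE for a soft-thresholding estimator in additive Gaussian noise (the sparse signal model, grid, and parameters are illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(5)
sigma, n = 1.0, 20000
theta = np.where(rng.random(n) < 0.1, 4.0, 0.0)   # unknown to the estimator
y = theta + sigma * rng.normal(size=n)

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# SURE for soft thresholding: an unbiased MSE estimate from measurements alone.
def sure(lam):
    return (n * sigma**2 - 2 * sigma**2 * np.sum(np.abs(y) <= lam)
            + np.sum(np.minimum(y**2, lam**2)))

# Unsupervised selection: minimize SURE over a grid of thresholds, and compare
# with the oracle threshold chosen using the (normally unavailable) true values.
lams = np.linspace(0.0, 3.0, 61)
lam_sure = lams[np.argmin([sure(l) for l in lams])]
lam_oracle = lams[np.argmin([np.sum((soft(y, l) - theta) ** 2) for l in lams])]
print(lam_sure, lam_oracle)  # the two thresholds should be close
```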
Simultaneous estimation of the means in some Poisson log linear models
2004
Abstract

Cited by 1 (1 self)
In this article we study the simultaneous estimation of the Poisson means in J-way multiplicative models and a decomposable model for three-way layouts. The estimators which improve on the maximum likelihood estimators under the normalized squared error losses are provided for each model. The proposed estimators
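A classical estimator of this kind is the Clevenson–Zidek rule, which shrinks the componentwise MLE toward zero and dominates it under normalized squared error loss Σ(δᵢ − θᵢ)²/θᵢ; the paper extends such improvements to log-linear model structure. A simulation sketch of the classical rule (the means and replication count are illustrative, and this is not the paper's decomposable-model estimator):

```python
import numpy as np

rng = np.random.default_rng(6)
theta = np.array([1.0, 2.0, 0.5, 3.0])   # true Poisson means (simulation only)
p = theta.size

# Many replications of observing p independent Poisson counts.
x = rng.poisson(theta, size=(50000, p))

# Clevenson-Zidek shrinkage: scale the MLE x toward zero by a factor
# that depends on the total count z = sum_i x_i.
z = x.sum(axis=1, keepdims=True)
cz = (1 - (p - 1) / (z + p - 1)) * x

def norm_risk(est):
    # Normalized squared error loss, averaged over replications.
    return np.mean(np.sum((est - theta) ** 2 / theta, axis=1))

# The MLE's normalized risk equals p exactly; the shrinkage rule does better.
print(norm_risk(x), norm_risk(cz))
```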