Results 1–10 of 78
An analysis of transformations
 Journal of the Royal Statistical Society. Series B (Methodological)
, 1964
Cited by 404 (0 self)

Abstract
In the analysis of data it is often assumed that observations y₁, y₂, ..., yₙ are independently normally distributed with constant variance and with expectations specified by a model linear in a set of parameters θ. In this paper we make the less restrictive assumption that such a normal, homoscedastic, linear model is appropriate after some suitable transformation has been applied to the y's. Inferences about the transformation and about the parameters of the linear model are made by computing the likelihood function and the relevant posterior distribution. The contributions of normality, homoscedasticity and additivity to the transformation are separated. The relation of the present methods to earlier procedures for finding transformations is discussed. The methods are illustrated with examples.
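The transformation family analyzed in this paper is the Box–Cox power transform. As a minimal illustration of the idea (not the paper's full Bayesian treatment), the following numpy sketch evaluates the profile log-likelihood of the transformation parameter λ for the simplest mean-only model; the data values and grid are hypothetical:

```python
import numpy as np

def boxcox_profile_loglik(y, lam):
    # Box-Cox transform: z = (y^lam - 1) / lam for lam != 0, log(y) at lam = 0.
    # Profile log-likelihood of lam (up to an additive constant) for a
    # mean-only normal model; (lam - 1) * sum(log y) is the Jacobian term.
    y = np.asarray(y, dtype=float)
    n = y.size
    z = np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam
    sigma2 = z.var()  # MLE of the residual variance after transforming
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.log(y).sum()

# choose lambda by maximizing over a grid (requires strictly positive y)
y = np.array([1.2, 0.8, 2.5, 3.1, 0.9, 1.7, 2.2])
grid = np.linspace(-2.0, 2.0, 401)
lam_hat = grid[np.argmax([boxcox_profile_loglik(y, l) for l in grid])]
```

In practice one would use a library routine such as `scipy.stats.boxcox`, which performs the same maximization numerically.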
The minimum description length principle in coding and modeling
 IEEE Trans. Inform. Theory
, 1998
Cited by 305 (12 self)
Abstract — We review the principles of Minimum Description Length and Stochastic Complexity as used in data compression and statistical modeling. Stochastic complexity is formulated as the solution to optimum universal coding problems extending Shannon’s basic source coding theorem. The normalized maximized likelihood, mixture, and predictive codings are each shown to achieve the stochastic complexity to within asymptotically vanishing terms. We assess the performance of the minimum description length criterion both from the vantage point of quality of data compression and accuracy of statistical inference. Context tree modeling, density estimation, and model selection in Gaussian linear regression serve as examples. Index Terms—Complexity, compression, estimation, inference, universal modeling.
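The flavor of description-length model selection can be conveyed by a toy two-part code for Gaussian linear regression: a data cost (the negative maximized log-likelihood) plus a BIC-like parameter cost of (k/2) log n. This is a crude surrogate sketch, not the normalized-maximum-likelihood codes the paper develops; the data below are simulated:

```python
import numpy as np

def two_part_mdl(y, X):
    # Crude two-part code length (in nats) for a Gaussian linear model:
    # data cost = negative log-likelihood at the MLE,
    # model cost = (k/2) log n per fitted coefficient.
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n          # MLE of the noise variance
    nll = 0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return nll + 0.5 * k * np.log(n)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.1, size=200)
true_model = np.column_stack([np.ones(200), x])
overfit = np.column_stack([true_model, rng.normal(size=(200, 10))])
# the 10 junk columns should improve the fit by less than their
# description cost, so the smaller model wins the comparison
```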
Calibration and Empirical Bayes Variable Selection
 Biometrika
, 1997
Cited by 114 (19 self)

Abstract
... this paper, is that with F = 2 log p. This choice was proposed by Foster & George (1994), where it was called the Risk Inflation Criterion (RIC) because it asymptotically minimises the maximum predictive risk inflation due to selection when X is orthogonal. This choice and its minimax property were also discovered independently by Donoho & Johnstone (1994) in the wavelet regression context, where they refer to it as the universal hard thresholding rule.
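The universal hard-thresholding rule mentioned above keeps a coefficient only when its magnitude exceeds σ√(2 log n); a minimal sketch, assuming the noise level σ is known:

```python
import numpy as np

def universal_hard_threshold(y, sigma=1.0):
    # Donoho-Johnstone universal rule: zero out every coefficient whose
    # magnitude is at most sigma * sqrt(2 log n); keep the rest untouched.
    t = sigma * np.sqrt(2.0 * np.log(len(y)))
    return np.where(np.abs(y) > t, y, 0.0)
```

For n = 4 the threshold is √(2 log 4) ≈ 1.67, so small coefficients are zeroed while large ones pass through unchanged.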
Approximate Bayes Factors and Accounting for Model Uncertainty in Generalized Linear Models
, 1993
Cited by 96 (28 self)

Abstract
Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GLIM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the ...
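The Laplace method referenced here approximates a marginal likelihood by a Gaussian integral around the posterior mode. A generic sketch, assuming the mode, the Hessian of the log posterior at the mode, and the unnormalized log-posterior value there are already available from a standard model fit:

```python
import numpy as np

def laplace_log_marginal(log_post_at_mode, hessian):
    # Laplace approximation to the log marginal likelihood:
    #   log p(y) ~= log[p(y|th) p(th)] at the mode
    #               + (d/2) log(2 pi) - (1/2) log det(-H),
    # where H is the (negative-definite) Hessian of the log posterior.
    d = hessian.shape[0]
    _, logdet = np.linalg.slogdet(-hessian)
    return log_post_at_mode + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet
```

The approximation is exact when the posterior is Gaussian: for the unnormalized density exp(-θ²/2) (mode value 0, Hessian -1) it recovers log √(2π) exactly. A Bayes factor is then the exponentiated difference of two such log marginal likelihoods.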
Comparing Dynamic Equilibrium Models to Data: A Bayesian Approach
Cited by 52 (12 self)

Abstract
This paper studies the properties of the Bayesian approach to estimation and comparison of dynamic equilibrium economies. Both tasks can be performed even if the models are nonnested, misspecified, and nonlinear. First, we show that Bayesian methods have a classical interpretation: asymptotically, the parameter point estimates converge to their pseudo-true values, and the best model under the Kullback-Leibler distance will have the highest posterior probability. Second, we illustrate the strong small-sample behavior of the approach using a well-known application: the U.S. cattle cycle. Bayesian estimates outperform maximum likelihood results, and the proposed model is easily compared with a set of BVARs.
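Once each candidate model's log marginal likelihood is in hand, comparing even non-nested models by posterior probability reduces to a normalization; a minimal sketch with the usual max-subtraction guard against underflow:

```python
import numpy as np

def posterior_model_probs(log_marglik, log_prior=None):
    # Posterior model probabilities from log marginal likelihoods,
    # optionally weighted by log prior model probabilities.
    log_marglik = np.asarray(log_marglik, dtype=float)
    if log_prior is None:                  # flat prior over models
        log_prior = np.zeros_like(log_marglik)
    w = log_marglik + log_prior
    w -= w.max()                           # stabilize the exponentials
    p = np.exp(w)
    return p / p.sum()
```

For two models with marginal likelihoods in ratio 1:3 and equal priors, this returns probabilities 0.25 and 0.75.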
Joint Bayesian Endmember Extraction and Linear Unmixing for Hyperspectral Imagery
Cited by 37 (26 self)
Abstract—This paper studies a fully Bayesian algorithm for endmember extraction and abundance estimation for hyperspectral imagery. Each pixel of the hyperspectral image is decomposed as a linear combination of pure endmember spectra following the linear mixing model. The estimation of the unknown endmember spectra is conducted in a unified manner by generating the posterior distribution of abundances and endmember parameters under a hierarchical Bayesian model. This model assumes conjugate prior distributions for these parameters, accounts for non-negativity and full-additivity constraints, and exploits the fact that the endmember proportions lie on a lower-dimensional simplex. A Gibbs sampler is proposed to overcome the complexity of evaluating the resulting posterior distribution. This sampler generates samples distributed according to the posterior distribution and estimates the unknown parameters using these generated samples. The accuracy of the joint Bayesian estimator is illustrated by simulations conducted on synthetic and real AVIRIS images. Index Terms—Bayesian inference, endmember extraction, hyperspectral imagery, linear spectral unmixing, MCMC methods.
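The forward model underlying this sampler is simple to simulate. A sketch that draws abundances from a flat Dirichlet (so they satisfy the non-negativity and full-additivity constraints by construction) and mixes hypothetical endmember spectra with Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mixed_pixels(endmembers, n_pixels, noise_sd=0.01):
    # Linear mixing model: pixel = abundances @ endmember spectra + noise.
    # endmembers has shape (R, n_bands); each abundance vector lies on
    # the (R-1)-simplex because Dirichlet draws are nonnegative, sum to 1.
    R, n_bands = endmembers.shape
    A = rng.dirichlet(np.ones(R), size=n_pixels)            # (n_pixels, R)
    Y = A @ endmembers + rng.normal(0.0, noise_sd, (n_pixels, n_bands))
    return Y, A
```

The endmember spectra and noise level here are placeholders; the paper's contribution is inverting this model, i.e. recovering both A and the endmembers from Y alone.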
The Horseshoe Estimator for Sparse Signals
, 2008
Cited by 21 (6 self)

Abstract
This paper proposes a new approach to sparsity called the horseshoe estimator. The horseshoe is a close cousin of other widely used Bayes rules arising from, for example, double-exponential and Cauchy priors, in that it is a member of the same family of multivariate scale mixtures of normals. But the horseshoe enjoys a number of advantages over existing approaches, including its robustness, its adaptivity to different sparsity patterns, and its analytical tractability. We prove two theorems that formally characterize both the horseshoe's adeptness at large outlying signals, and its super-efficient rate of convergence to the correct estimate of the sampling density in sparse situations. Finally, using a combination of real and simulated data, we show that the horseshoe estimator corresponds quite closely to the answers one would get by pursuing a full Bayesian model-averaging approach using a discrete mixture prior to model signals and noise.
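The scale-mixture representation described here is easy to sample from a priori. A sketch of the prior (not the paper's posterior machinery), with one half-Cauchy local scale per coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)

def horseshoe_prior_draws(n, tau=1.0):
    # Horseshoe prior as a scale mixture of normals:
    #   beta_i | lambda_i ~ N(0, lambda_i^2 * tau^2),
    #   lambda_i ~ half-Cauchy(0, 1).
    lam = np.abs(rng.standard_cauchy(n))    # heavy-tailed local scales
    return rng.normal(0.0, lam * tau)
```

The half-Cauchy scales produce the characteristic behavior: most draws are shrunk near zero while occasional very large signals pass through almost untouched.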
Hierarchical Bayesian Sparse Image Reconstruction With Application to MRFM
Cited by 17 (8 self)
Abstract—This paper presents a hierarchical Bayesian model to reconstruct sparse images when the observations are obtained from linear transformations and corrupted by an additive white Gaussian noise. Our hierarchical Bayes model is well suited to such naturally sparse image applications as it seamlessly accounts for properties such as sparsity and positivity of the image via appropriate Bayes priors. We propose a prior that is based on a weighted mixture of a positive exponential distribution and a mass at zero. The prior has hyperparameters that are tuned automatically by marginalization over the hierarchical Bayesian model. To overcome the complexity of the posterior distribution, a Gibbs sampling strategy is proposed. The Gibbs samples can be used to estimate the image to be recovered, e.g., by maximizing the estimated posterior distribution. In our fully Bayesian approach, the posteriors of all the parameters are available. Thus, our algorithm provides more information than other previously proposed sparse reconstruction methods that only give a point estimate. The performance of the proposed hierarchical Bayesian sparse reconstruction method is illustrated on synthetic data and real data collected from a tobacco virus sample using a prototype MRFM instrument. Index Terms—Bayesian inference, deconvolution, Markov chain Monte Carlo (MCMC) methods, magnetic resonance force microscopy.
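The mass-at-zero-plus-positive-exponential prior on pixel intensities can be simulated directly. A minimal sketch; the mixture weight w and the exponential rate are hypothetical fixed values here, whereas the paper tunes such hyperparameters by marginalization:

```python
import numpy as np

rng = np.random.default_rng(0)

def spike_exponential_draws(n, w=0.8, rate=1.0):
    # Weighted mixture prior: with probability w the pixel is exactly
    # zero (the spike); otherwise it is a positive Exponential(rate) draw,
    # which enforces both sparsity and positivity of the image.
    x = rng.exponential(1.0 / rate, size=n)
    x[rng.random(n) < w] = 0.0
    return x
```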
Bayesian Multiple Comparisons Using Dirichlet Process Priors
 Journal of the American Statistical Association
, 1996
Cited by 15 (0 self)

Abstract
We consider the problem of multiple comparisons from a Bayesian viewpoint. The family of Dirichlet process priors is applied in the form of baseline prior/likelihood combinations, to obtain posterior probabilities for various hypotheses. The baseline prior/likelihood combinations considered here are beta/binomial, normal/inverted gamma with equal variances and a hierarchical nonconjugate normal/inverted gamma prior on treatment means. The prior probabilities of the hypotheses depend directly on the concentration parameter of the Dirichlet process prior. The problem is analytically intractable; we use Gibbs sampling. The posterior probabilities of the hypotheses are easily obtained as a byproduct in evaluating the marginal posterior distributions of the parameters. The proposed procedure is compared with Duncan's multiple range test and shown to be more powerful under certain alternative hypotheses. Keywords: Gibbs sampling, beta/binomial prior, normal/inverted gamma prior, hierarchical ...
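The role of the concentration parameter is concrete: under a DP(α) prior, two treatment means are equal a priori with probability 1/(1+α), since the Dirichlet process induces a Chinese-restaurant-process partition of the treatments. A simulation sketch of that partition (illustrative only, not the paper's Gibbs sampler):

```python
import numpy as np

rng = np.random.default_rng(0)

def crp_partition(n, alpha):
    # Draw a random partition of n items from the Chinese restaurant
    # process: item i joins an existing cluster with probability
    # proportional to its size, or opens a new cluster with weight alpha.
    labels = [0]
    for i in range(1, n):
        counts = np.bincount(labels)
        probs = np.append(counts, alpha) / (i + alpha)
        labels.append(int(rng.choice(len(probs), p=probs)))
    return labels

# empirical frequency with which the first two treatments share a
# cluster; should be close to 1 / (1 + alpha) = 0.5 for alpha = 1
draws = [crp_partition(5, 1.0) for _ in range(2000)]
ties = np.mean([p[0] == p[1] for p in draws])
```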
A tutorial introduction to decision theory
 IEEE Transactions on Systems Science and Cybernetics
, 1968
Cited by 14 (0 self)
Abstract—Decision theory provides a rational framework for choosing between alternative courses of action when the consequences resulting from this choice are imperfectly known. Two ...