Results 1–8 of 8
Nonlinear Wavelet Shrinkage With Bayes Rules and Bayes Factors
 Journal of the American Statistical Association
, 1998
"... this article a wavelet shrinkage by coherent ..."
Measures of Surprise in Bayesian Analysis
 Duke University
, 1997
"... Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H 0 without any reference to alternative models. Traditional measures of surprise have been the pvalues, which are however known to grossly overestimate the evidence against H 0 . Str ..."
Abstract

Cited by 2 (2 self)
Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H0, without any reference to alternative models. The traditional measures of surprise have been p-values, which are, however, known to grossly overestimate the evidence against H0. Strict Bayesian analysis calls for an explicit specification of all possible alternatives to H0, so Bayesians have not made routine use of measures of surprise. In this report we critically review the proposals that have been made in this regard. We propose new modifications, stress the connections with robust Bayesian analysis, and discuss the choice of suitable predictive distributions which allow surprise measures to play their intended role in the presence of nuisance parameters. We recommend either the use of appropriate likelihood-ratio type measures or else the careful calibration of p-values so that they are closer to Bayesian answers. Key words and phrases: Bayes factors; Bayesian p-values; Bayesian robustness; conditioning; model checking; predictive distributions.
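The "careful calibration of p-values" recommended in this abstract can be illustrated with the well-known lower bound −e·p·log(p) on the Bayes factor in favor of H0, valid for p < 1/e (due to Sellke, Bayarri, and Berger). The Python sketch below illustrates that calibration; it is an illustration of the general idea, not code from the report:

```python
import math

def bayes_factor_bound(p):
    """Lower bound -e * p * log(p) on the Bayes factor in favor of H0,
    valid for p-values below 1/e."""
    if not 0.0 < p < 1.0 / math.e:
        raise ValueError("the calibration applies for 0 < p < 1/e")
    return -math.e * p * math.log(p)

for p in (0.05, 0.01, 0.001):
    b = bayes_factor_bound(p)
    post = b / (1.0 + b)  # minimum P(H0 | data) under equal prior odds
    print(f"p = {p:<6} min Bayes factor = {b:.3f}  min P(H0|data) = {post:.3f}")
```

For p = 0.05 the bound is about 0.41, i.e. a posterior probability of H0 of roughly 0.29 under equal prior odds, which is the sense in which raw p-values overstate the evidence against H0.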
Full Bayesian Significance Test for Coefficients of Variation
"... New application of the Full Bayesian Significance Test (FBST) for precise hypotheses is presented. The FBST is an alternative to significance tests or, equivalently, to pvalues. In the FBST we compute the evidence of the precise hypothesis. This evidence is the complement of the probability of a cr ..."
Abstract

Cited by 2 (2 self)
A new application of the Full Bayesian Significance Test (FBST) for precise hypotheses is presented. The FBST is an alternative to significance tests or, equivalently, to p-values. In the FBST we compute the evidence of the precise hypothesis. This evidence is the complement of the probability of a credible set "tangent" to the submanifold (of the parameter space) that defines the null hypothesis. We use the FBST to compare coefficients of variation, in applications arising in finance and industrial engineering.
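For a one-dimensional point null, the FBST evidence described above can be approximated by Monte Carlo: sample from the posterior, evaluate the density at the null point, and measure the posterior mass of the "tangent" set where the density exceeds it. The Python sketch below shows the idea; the normal posterior and the function names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fbst_evidence(posterior_draws, log_post_density, theta0):
    """Monte Carlo sketch of the FBST e-value for the point null theta = theta0.
    The tangent set is T = {theta : posterior density > density at theta0};
    the evidence in favor of H0 is 1 - P(theta in T | data)."""
    in_tangent = log_post_density(posterior_draws) > log_post_density(theta0)
    return 1.0 - in_tangent.mean()

# Toy check with posterior N(1, 1) and null theta0 = 0:
rng = np.random.default_rng(1)
draws = rng.normal(1.0, 1.0, size=100_000)
log_pdf = lambda t: -0.5 * (t - 1.0) ** 2  # log density up to a constant
ev = fbst_evidence(draws, log_pdf, 0.0)
print(ev)  # close to the exact value 2 * Phi(-1), about 0.317
```

Because only density comparisons matter, the log density need only be known up to an additive constant, which is convenient when the posterior is explored by MCMC.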
ISyE8843A, Brani Vidakovic Handout 7 1 Estimation and Beyond in the Bayes Universe.
"... No Bayes estimate can be unbiased but Bayesians are not upset! No Bayes estimate with respect to the squared error loss can be unbiased, except in a trivial case when its Bayes ’ risk is 0. Suppose that for a proper prior π the Bayes estimator δπ(X) is unbiased, (∀θ)E Xθ δπ(X) = θ. This implies th ..."
Abstract
No Bayes estimate can be unbiased, but Bayesians are not upset! No Bayes estimate with respect to the squared error loss can be unbiased, except in the trivial case when its Bayes risk is 0. Suppose that for a proper prior π the Bayes estimator δπ(X) is unbiased, (∀θ) E^{X|θ} δπ(X) = θ. This implies that the Bayes risk is 0. The Bayes risk of δπ(X) can be calculated as a repeated expectation in two ways,

r(π, δπ) = E^θ E^{X|θ} (θ − δπ(X))² = E^X E^{θ|X} (θ − δπ(X))².

Thus, conveniently choosing either E^θ E^{X|θ} or E^X E^{θ|X} and using the properties of conditional expectation, we have

r(π, δπ) = E^θ E^{X|θ} θ² − E^θ E^{X|θ} θ δπ(X) − E^X E^{θ|X} θ δπ(X) + E^X E^{θ|X} δπ²(X)
= E^θ E^{X|θ} θ² − E^θ θ [E^{X|θ} δπ(X)] − E^X δπ(X) E^{θ|X} θ + E^X E^{θ|X} δπ²(X)
= E^θ E^{X|θ} θ² − E^θ θ·θ − E^X δπ(X)·δπ(X) + E^X E^{θ|X} δπ²(X) = 0.

Bayesians are not upset. To check for unbiasedness, the Bayes estimator is averaged with respect to the model measure (X|θ), and one of the Bayesian commandments is: thou shalt not average with respect to the sample space, unless you have Bayesian design in mind. Even frequentists agree that insisting on unbiasedness can lead to bad estimators, and that in their quest to minimize risk by trading off between variance and bias-squared, a small dosage of bias can help. The relationship between Bayes estimators and unbiasedness is discussed in Lehmann (1951), Girshick (1954), Bickel and Blackwell (1967), Noorbaloochi
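The bias of a Bayes estimator is easy to see numerically. In a normal–normal model with prior θ ~ N(0, 1) and X | θ ~ N(θ, 1), the posterior mean is δπ(X) = X/2, so E^{X|θ} δπ(X) = θ/2 ≠ θ unless θ = 0. The Python simulation below is a sketch of this check; the particular model and the value of θ are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                               # fixed true parameter
x = rng.normal(theta, 1.0, size=200_000)  # draws of X | theta ~ N(theta, 1)

# Under the prior theta ~ N(0, 1), the Bayes (posterior-mean) estimator
# shrinks X halfway toward the prior mean 0:
delta = x / 2.0

print(delta.mean())  # about 1.0 = theta/2, not theta: the estimator is biased
```

Averaging over the sample space for fixed θ is exactly the frequentist computation the handout warns against; averaged instead over the joint distribution of (θ, X), the posterior mean is correct on average.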
A Bayesian Approach to Large Scale Simultaneous Inference
, 2009
"... We discuss Bayesian decision rules for highly multiple comparisons in the context of differential expression in microarray studies. Some of the advantages of our Bayesian approach include: flexible modeling of gene expressions, many options for decision rules to control either expectation or nonexpe ..."
Abstract
We discuss Bayesian decision rules for highly multiple comparisons in the context of differential expression in microarray studies. Some of the advantages of our Bayesian approach include: flexible modeling of gene expression, many options for decision rules to control either expectation or non-expectation error rates, insensitivity to weak dependencies in the data, and pooling of results while retaining control of expected error rates. The proposed approach is demonstrated by the analysis of the spike-in HGU133 data. Key words: false discovery rate; false discovery percentage; mixture model; MCMC.
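A common way to operationalize the kind of Bayesian expected-FDR control mentioned in this abstract is to flag the genes with the smallest posterior null probabilities for as long as the running posterior expected FDR stays below a target α. The sketch below is a generic version of that rule; the names and toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bayes_fdr_rule(post_null, alpha=0.05):
    """Flag genes while keeping the posterior expected FDR <= alpha.
    post_null[i] is the posterior probability that gene i is null
    (not differentially expressed); rejecting gene i adds post_null[i]
    expected false discoveries."""
    order = np.argsort(post_null)
    running_fdr = np.cumsum(post_null[order]) / np.arange(1, len(post_null) + 1)
    # running_fdr is nondecreasing, so the largest prefix with FDR <= alpha
    # can be found by binary search:
    k = np.searchsorted(running_fdr, alpha, side="right")
    flagged = np.zeros(len(post_null), dtype=bool)
    flagged[order[:k]] = True
    return flagged

post_null = np.array([0.01, 0.90, 0.02, 0.50, 0.03])
print(bayes_fdr_rule(post_null))  # flags genes 0, 2 and 4
```

The expected FDR of the flagged set is the average of its posterior null probabilities, which is what the running mean tracks as genes are added in order of increasing post_null.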
A Weibull Wearout Test: Full Bayesian Approach Telba
, 2000
"... The Full Bayesian Signi cance Test (FBST) for precise hypotheses is presented, with some applications relevant to reliability theory. The FBST is an alternative to signi cance tests or, equivalently, to pvalues. In the FBST we compute the evidence of the precise hypothesis. This evidence is the pro ..."
Abstract
The Full Bayesian Significance Test (FBST) for precise hypotheses is presented, with some applications relevant to reliability theory. The FBST is an alternative to significance tests or, equivalently, to p-values. In the FBST we compute the evidence of the precise hypothesis. This evidence is the probability of the complement of a credible set "tangent" to the submanifold (of the parameter space) that defines the null hypothesis. We use the FBST in an application requiring quality control of used components, based on remaining-life statistics.
Full Bayesian Significance Test for Coefficients of Variation
"... Jul012000 rev. Oct102000 New application of the Full Bayesian Signi cance Test (FBST) for precise hypotheses are presented. The FBST is an alternative to signi cance tests or, equivalently, topvalues. In the FBST we compute the evidence of the precise hypothesis. This evidence is the probabilit ..."
Abstract
Jul 01 2000, rev. Oct 10 2000. A new application of the Full Bayesian Significance Test (FBST) for precise hypotheses is presented. The FBST is an alternative to significance tests or, equivalently, to p-values. In the FBST we compute the evidence of the precise hypothesis. This evidence is the probability of a credible set "tangent" to the submanifold (of the parameter space) that defines the null hypothesis. We use the FBST to compare coefficients of variation, in applications arising in finance and industrial engineering.