Results 1–10 of 42
Monte Carlo Statistical Methods
, 1998
Abstract
Cited by 931 (23 self)
This paper is also the originator of the Markov Chain Monte Carlo methods developed in the following chapters. The potential of these two simultaneous innovations was discovered much later by statisticians (Hastings 1970; Geman and Geman 1984) than by physicists (see also Kirkpatrick et al. 1983).
Statistical Methods for Eliciting Probability Distributions
 Journal of the American Statistical Association
, 2005
Abstract
Cited by 39 (2 self)
Elicitation is a key task for subjectivist Bayesians. While skeptics hold that it cannot (or perhaps should not) be done, in practice it brings statisticians closer to their clients and subject-matter-expert colleagues. This paper reviews the state of the art, reflecting the experience of statisticians informed by the fruits of a long line of psychological research into how people represent uncertain information cognitively, and how they respond to questions about that information. In a discussion of the elicitation process, the first issue to address is what it means for an elicitation to be successful, i.e. what criteria should be employed? Our answer is that a successful elicitation faithfully represents the opinion of the person being elicited. It is not necessarily "true" in some objectivistic sense, and cannot be judged that way. We see elicitation as simply part of the process of statistical modeling. Indeed, in a hierarchical model it is ambiguous at which point the likelihood ends and the prior begins. Thus the same kinds of judgment that inform statistical modeling in general also inform elicitation of prior distributions.
Using probability trees to compute marginals with imprecise probabilities
 INTERNATIONAL JOURNAL OF APPROXIMATE REASONING
, 2002
Abstract
Cited by 23 (2 self)
This paper presents an approximate algorithm to obtain a posteriori intervals of probability, when available information is also given with intervals. The algorithm uses probability trees as a means of representing and computing with the convex sets of
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
 Sandia Report SAND2007-0939; hal-00839639, version 1, 28 Jun 2013
Abstract
Cited by 21 (14 self)
Sandia is a multiprogram laboratory operated by Sandia Corporation,
Estimating Risk and Rate Levels, Ratios, and Differences in Case-Control Studies
, 2001
Abstract
Cited by 6 (2 self)
Classic (or "cumulative") case-control sampling designs do not admit inferences about quantities of interest other than risk ratios, and then only by making the rare events assumption. Probabilities, risk differences, and other quantities cannot be computed without knowledge of the population incidence fraction. Similarly, density (or "risk set") case-control sampling designs do not allow inferences about quantities other than the rate ratio. Rates, rate differences, cumulative rates, risks, and other quantities cannot be estimated unless auxiliary information about the underlying cohort, such as the number of controls in each full risk set, is available. Most scholars who have considered the issue recommend reporting more than just risk and rate ratios, but the auxiliary population information needed to do this is not usually available. We address this problem by developing methods that allow valid inferences about all relevant quantities of interest from either type of case-control study when completely ignorant of, or only partially knowledgeable about, relevant auxiliary population information.
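For background on why classic case-control data support only ratio-type inference: the odds ratio is directly estimable from the sampled 2×2 table without any population information, and under the rare events assumption it approximates the risk ratio. A minimal sketch (the counts are made up for illustration, not from the paper):

```python
def odds_ratio(cases, controls):
    """Odds ratio from a 2x2 case-control table: the cross-product
    (a*d)/(b*c). Unlike risks or risk differences, it needs no
    knowledge of the population incidence fraction."""
    a, b = cases      # exposed, unexposed cases
    c, d = controls   # exposed, unexposed controls
    return (a * d) / (b * c)

# Hypothetical study: 40/60 exposed/unexposed cases, 20/80 controls.
print(odds_ratio((40, 60), (20, 80)))  # 40*80 / (60*20) = 2.666...
```

If the disease is rare in both exposure groups, this 2.67 can be read as an approximate risk ratio; absolute risks remain unidentified without the incidence fraction, which is the gap the paper addresses.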
Objective Bayesian analysis of contingency tables
, 2002
Abstract
Cited by 6 (2 self)
The statistical analysis of contingency tables is typically carried out with a hypothesis test. In the Bayesian paradigm, default priors for hypothesis tests are typically improper, and cannot be used. Although such priors are available, and proper, for testing contingency tables, we show that for testing independence they can be greatly improved on by so-called intrinsic priors. We also argue that because there is no realistic situation that corresponds to the case of conditioning on both margins of a contingency table, the proper analysis of an a × b contingency table should only condition on either the table total or on only one of the margins. The posterior probabilities from the intrinsic priors provide reasonable answers in these cases. Examples using simulated and real data are given.
Robust Bayesianism: Imprecise and Paradoxical Reasoning
, 2004
Abstract
Cited by 6 (1 self)
We are interested in understanding the relationship between Bayesian inference and evidence theory, in particular imprecise and paradoxical reasoning. The concept of a set of probability distributions is central both in robust Bayesian analysis and in some versions of Dempster-Shafer theory. Most of the literature regards these two theories as incomparable. We interpret imprecise probabilities as imprecise posteriors obtainable from imprecise likelihoods and priors, both of which can be considered as evidence and represented with, e.g., DS structures. The natural and simple robust combination operator makes all pairwise combinations of elements from the two sets. The DS structures can represent one particular family of imprecise distributions, Choquet capacities. These are not closed under our combination rule, but can be made so by rounding. The proposed combination operator is unique, and has interesting normative and factual properties. We compare its behavior on Zadeh's example with other proposed fusion rules. We also show how the paradoxical reasoning method appears in the robust framework.
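The "all pairwise combinations" idea in the abstract can be sketched concretely: combine every likelihood in one set with every prior in the other, each pair via pointwise multiplication and renormalization on a shared finite frame. This is an illustrative reading with made-up numbers, not the paper's actual operator or its DS-structure representation:

```python
import itertools

def combine(p, q):
    """Bayes-style combination of two discrete distributions on the
    same finite frame: pointwise product, renormalized."""
    raw = [pi * qi for pi, qi in zip(p, q)]
    z = sum(raw)
    return [r / z for r in raw]

def robust_combine(likelihood_set, prior_set):
    """Robust combination: all pairwise combinations of elements
    drawn from the two imprecise sets."""
    return [combine(p, q)
            for p, q in itertools.product(likelihood_set, prior_set)]

# Two imprecise pieces of evidence over a 3-element frame (illustrative):
likelihoods = [[0.6, 0.3, 0.1], [0.5, 0.4, 0.1]]
priors = [[0.2, 0.5, 0.3], [0.3, 0.3, 0.4]]
posteriors = robust_combine(likelihoods, priors)
print(len(posteriors))  # 4 combined posteriors, one per pair
```

The resulting set of posteriors is itself an imprecise posterior; the paper's point about Choquet capacities is that a convenient representation of such sets need not be closed under this operation.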
Reconciling Frequentist Properties With The Likelihood Principle
, 1998
Abstract
Cited by 5 (0 self)
This paper is devoted primarily to a presentation of some main features of these developments, which seem to have intrinsic as well as historical interest. These developments include an apparently decisive negative outcome. It has seemed to some (including this writer) that any adequate concept of statistical evidence must meet at least certain minimum versions of both of the criteria just indicated. But the difficulties of developing such a concept have become increasingly apparent, and it now seems rather clear that no such adequate concept of statistical evidence can exist.
On the Foundations of Bayesianism
 Bayesian Inference and Maximum Entropy Methods in Science and Engineering, 20th International Workshop, Gif-sur-Yvette, 2000
Abstract
Cited by 4 (3 self)
We discuss precise assumptions entailing Bayesianism in the line of investigations started by Cox, and relate them to a recent critique by Halpern. We show that every finite model which cannot be rescaled to probability violates a natural and simple refinability principle. A new condition, separability, is found sufficient and necessary for rescalability of infinite models. We finally characterize the acceptable ways to handle uncertainty in infinite models based on Cox's assumptions. Certain closure properties must be assumed before all the axioms of ordered fields are satisfied. Once this is done, a proper plausibility model can be embedded in an ordered field containing the reals: namely either standard probability (the field of reals) for a real-valued plausibility model, or extended probability (the field of reals and infinitesimals) for an ordered plausibility model. The end result is that if our assumptions are accepted, all reasonable uncertainty management schemes must be based on sets of extended probability distributions and Bayes conditioning.
On a Global Sensitivity Measure for Bayesian Inference
Abstract
Cited by 2 (0 self)
We define a global sensitivity measure that is useful in assessing sensitivity to deviations from a specified prior. We argue that this measure has a common interpretation irrespective of the context of the problem or the unit of measurement, and is therefore easy to interpret. We also study the asymptotic behavior of this global sensitivity measure. We find that it does not always converge to 0 as the sample size goes to infinity; we also show that, under certain conditions, it does. Thus, unlike the usual global sensitivity measure, the range, this measure behaves asymptotically like the usual local sensitivity measure.
AMS 1991 subject classifications: Primary 62F35; secondary 62C10.
Key words and phrases: Bayesian robustness, global sensitivity, asymptotics.