Results 1 – 10 of 13
Diagnostic Measures for Model Criticism
Journal of the American Statistical Association, 1996
Abstract
Cited by 13 (1 self)
... In this article we present the general outlook and discuss general families of elaborations for use in practice; the exponential connection elaboration plays a key role. We then describe model elaborations for use in diagnosing: departures from normality, goodness of fit in generalized linear models, and variable selection in regression and outlier detection. We illustrate our approach with two applications.
A Review of Propagation Algorithms for Imprecise Probabilities, 1999
Abstract
Cited by 11 (1 self)
This paper reviews algorithms for local computation with imprecise probabilities. These algorithms try to solve problems of inference (calculation of conditional or unconditional probabilities) in cases in which there are a large number of variables. There are two main types, depending on the nature of the independence relationships assumed in each case. In both of them the global knowledge is composed of several pieces of local information. The objective is to carry out a sound global computation while mainly using the initial local representation. Keywords: propagation algorithms, valuation-based systems, imprecise probabilities.
On the Symbiosis of Two Concepts of Conditional Interval Probability
Munich, Ludwigstr., 2003
Abstract
Cited by 10 (3 self)
This paper argues in favor of the thesis that two different concepts of conditional interval probability are needed in order to serve the wide variety of tasks that conditional probability performs in the classical setting of precise probabilities.
Sensitivity in risk analyses with uncertain numbers, 2006
Abstract
Cited by 7 (0 self)
Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these calculations differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a “pinching” strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
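The “pinching” strategy described in this abstract can be sketched in miniature: hold the epistemic part of one input fixed and measure how much the output uncertainty shrinks. The toy model, interval, and distribution below are invented for illustration, not taken from the report.

```python
import random

# A hedged sketch of "pinching". In the hypothetical model
# y = load * factor, the variable `factor` has aleatory variation
# (uniform on 0.9..1.1), while `load` carries epistemic uncertainty:
# it is known only to lie in the interval [80, 120].

random.seed(0)

def output_interval(load_interval, n=10_000):
    """Bounds on the mean output when `load` is known only to an interval."""
    factors = [random.uniform(0.9, 1.1) for _ in range(n)]
    mean_factor = sum(factors) / n
    lo, hi = load_interval
    return lo * mean_factor, hi * mean_factor

# Baseline epistemic uncertainty in the output:
lo, hi = output_interval((80.0, 120.0))
baseline_width = hi - lo

# "Pinch" the epistemic input to a single value (its midpoint) and
# measure how much the output uncertainty would shrink:
lo_p, hi_p = output_interval((100.0, 100.0))
pinched_width = hi_p - lo_p

reduction = 1 - pinched_width / baseline_width
print(f"uncertainty reduction from pinching `load`: {reduction:.0%}")
```

Because the toy model's only epistemic uncertainty is in `load`, pinching it removes all of the output's epistemic width; in a richer model with several uncertain inputs, the same comparison ranks inputs by how much pinching each one would help.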
Symmetric, Coherent, Choquet Capacities
Abstract
Cited by 3 (0 self)
This paper is concerned with such characterizations. In particular, we are interested in the following question: what are the extreme points in the set of all distribution functions corresponding to symmetric capacities? A capacity is 2-alternating if

C(A ∪ B) ≤ C(A) + C(B) − C(A ∩ B)   (1)

for all A, B ∈ B. Many capacities used in statistics are 2-alternating. Furthermore, the 2-alternating condition is crucial for many important results. For example, a particular generalization of the Neyman-Pearson lemma holds if and only if the capacity generated by the underlying models is 2-alternating (Huber and Strassen 1973). Similarly, a particular generalization of Bayes' theorem for capacities holds if and only if the capacity is 2-alternating (Wasserman and Kadane 1990). In game theory, 2-alternating capacities represent certain convex games (Shapley 1971). Most work on coherent capacities has focused on the 2-alternating case. Little is known about the non-2-alternating case. We shall consider the general case in sections 2 and 3 and the 2-alternating case in section 4. The following is an outline of the paper and serves as a summary of its main contributions. In section two we give a majorization representation of symmetric capacities (Theorem 2.1) which generalizes a theorem in Wasserman and Kadane (1992). In section three, which is the main section of the paper, we study the distribution functions of symmetric capacities. There we establish (Lemmas 3.1 and 3.2) a one-to-one correspondence between distribution functions of symmetric capacities and functions α taking [0, 1] to [0, 1] that satisfy lim
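As a concrete illustration of condition (1), the sketch below brute-forces the 2-alternating check for a symmetric capacity on a small finite set. The particular capacity C(A) = sqrt(|A|/n) is a hypothetical example (a concave distortion of the uniform measure), not one taken from the paper.

```python
from itertools import combinations
from math import sqrt

# Brute-force check of the 2-alternating condition
#   C(A ∪ B) <= C(A) + C(B) - C(A ∩ B)
# for a symmetric capacity on a 4-element set.

n = 4
universe = frozenset(range(n))

def C(A):
    # Symmetric capacity: depends on A only through |A|.
    return sqrt(len(A) / n)

def subsets(s):
    # All subsets of s, as frozensets.
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def is_two_alternating(C, universe, tol=1e-12):
    subs = subsets(universe)
    return all(C(A | B) <= C(A) + C(B) - C(A & B) + tol
               for A in subs for B in subs)

print(is_two_alternating(C, universe))
```

Any concave distortion of a measure passes this check, which is why such capacities appear so often in robust statistics; replacing `sqrt` with a convex function such as squaring makes the check fail.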
Prior Density Ratio Class Robustness in Econometrics, 1998
Abstract
Cited by 2 (0 self)
This paper provides a generic, very fast method for computing exact density ratio class bounds on posterior expectations, given the output of a posterior simulator. It illustrates application of the method in an econometric model of typical complexity. In this model, the exact bounds for expectations of some functions of interest are well approximated by the established asymptotic approximation, but others are not. Software for the computations is publicly available in a variety of programming languages. Key words and phrases: Bayesian inference, Markov chain Monte Carlo, normal mixture, probit model.
1. INTRODUCTION
Good Bayesian investigators seek to report results in a way that will be most useful to their clients, that is, to those who read their work and go on to modify their opinions on the basis of the reported results, or who incorporate them in public or private decision making. Generally the investigator does not know what her clients' priors will be, or the posterior ex...
Alternatives to Lavine's algorithm for calculation of posterior bounds given convex sets of distributions, 1997
Abstract
This paper presents alternatives to Lavine's algorithm, currently the most popular method for calculation of expectation bounds induced by sets of probability distributions. The White-Snow algorithm is first analyzed and demonstrated to be superior to Lavine's algorithm in a variety of situations. The calculation of posterior bounds is then reduced to a fractional programming problem. From the unifying perspective of fractional programming, Lavine's algorithm is identical to Dinkelbach's algorithm, and the White-Snow algorithm is essentially identical to the Charnes-Cooper transformation. A novel algorithm for expectation bounds is given for the situation where both prior and likelihood functions are specified as convex sets of distributions.
1 Introduction
Models based on convex sets of distributions have been ...
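To make the fractional-programming connection concrete, here is a hedged sketch of Dinkelbach's algorithm applied to upper-bounding a posterior expectation over a finite set of candidate priors (e.g., the extreme points of a convex prior set). The states, likelihood, and priors are made-up toy data, not the paper's examples.

```python
# Dinkelbach's algorithm for the fractional program  max_p N(p)/D(p),
# here the posterior expectation
#   E_p[f | data] = (sum_i f_i * L_i * p_i) / (sum_i L_i * p_i),
# maximized over finitely many candidate priors.

f = [0.0, 1.0, 2.0]          # function of interest on 3 states (toy data)
L = [0.5, 0.2, 0.3]          # likelihood of the observed data per state
priors = [                   # extreme points of a hypothetical prior set
    [0.6, 0.2, 0.2],
    [0.2, 0.6, 0.2],
    [0.2, 0.2, 0.6],
]

def N(p):  # numerator: prior-weighted sum of f * likelihood
    return sum(fi * li * pi for fi, li, pi in zip(f, L, p))

def D(p):  # denominator: marginal likelihood under prior p
    return sum(li * pi for li, pi in zip(L, p))

def dinkelbach_upper_bound(tol=1e-12):
    lam = 0.0
    while True:
        # Each step solves the *linearized* problem max_p N(p) - lam*D(p).
        best = max(priors, key=lambda p: N(p) - lam * D(p))
        gap = N(best) - lam * D(best)
        if gap <= tol:
            return lam          # lam is now the optimal ratio
        lam = N(best) / D(best)

upper = dinkelbach_upper_bound()
brute = max(N(p) / D(p) for p in priors)
print(upper, brute)
```

Each iteration replaces the ratio objective with a linear one, which is what makes the scheme attractive when the inner maximization over priors is cheap; on this toy problem it converges in two iterations to the same bound as brute-force enumeration.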
Projecting uncertainty through black boxes, 2008
Abstract
Computational models whose internal details are not accessible to the analyst are called black boxes. They arise because of security restrictions or because of the loss of the source code for legacy software programs. Computational models whose internal details are extremely complex are also sometimes treated as black boxes. It is often important to assess the uncertainty that should be ascribed to the output from a black box owing to uncertainty about its input quantities, their statistical distributions, or interdependencies. Sensitivity or ‘what-if’ studies are commonly used for this purpose. In such studies, the space of possible inputs is sampled as a vector of real values which is then provided to the black box to compute the outputs that correspond to those inputs. Such studies are often cumbersome to implement and understand, and they generally require many samples, depending on the complexity of the model and the dimensionality of the inputs. This report reviews methods that can be used to propagate uncertainty about inputs through black boxes, especially ‘hard’ black boxes whose computational complexity restricts the total number of samples that can be evaluated. The focus is on methods that estimate the uncertainty of the outputs from the outside inward. That is, we are interested in methods that produce conservative characterizations of uncertainty that become tighter and tighter as the total computational effort increases.