Results 1–10 of 20
Sampling and Bayes inference in scientific modeling and robustness (with discussion)
 Journal of the Royal Statistical Society, Series A
, 1980
Bayesian model averaging
 Statistical Science
, 1999
Abstract

Cited by 62 (1 self)
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to overconfident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of
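The averaging mechanism this abstract describes can be sketched in a few lines. This is an illustrative sketch only, not the paper's own implementation: the function name `bma_predict`, the toy numbers, and the use of BIC weights to approximate posterior model probabilities are all assumptions.

```python
import math

def bma_predict(predictions, bics):
    """Average per-model predictions, weighting each model by an
    approximate posterior model probability.

    predictions: list of per-model prediction lists (same length each)
    bics: BIC value of each fitted model (lower = better fit)
    """
    best = min(bics)
    # exp(-0.5 * delta-BIC) approximates the posterior model probability
    # under equal prior model probabilities (Schwarz approximation).
    raw = [math.exp(-0.5 * (b - best)) for b in bics]
    total = sum(raw)
    weights = [r / total for r in raw]
    # The BMA prediction is the weighted average of the models' predictions.
    averaged = [sum(w * p[i] for w, p in zip(weights, predictions))
                for i in range(len(predictions[0]))]
    return weights, averaged

# Two hypothetical models whose BICs differ by 2:
weights, averaged = bma_predict([[1.0, 2.0], [1.4, 2.2]], [10.0, 12.0])
```

The better-scoring model dominates the average but the weaker one still contributes, which is exactly how BMA avoids the overconfidence of committing to a single selected model.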
Combining probability distributions from dependent information sources
 Management Science
, 1981
Cited by 59 (1 self)
Computational Experiments and Reality
, 1999
Abstract

Cited by 52 (0 self)
This study explores three alternative econometric interpretations of dynamic, stochastic general equilibrium (DSGE) models. (1) A strong econometric interpretation takes the model literally and directly produces a likelihood function for observed prices and quantities. It is widely recognized that under this interpretation, most DSGE models are rejected using classical econometrics and assigned zero probability in a Bayesian approach. (2) A weak econometric interpretation, commonly made in the calibration literature, confines attention to only a few functions of observed prices and interest rates and evaluates a model on its predictive distribution for these functions. This approach is equivalent to a Bayesian prior predictive analysis, developed by Box (1980) and predecessors. This study shows that the weak interpretation retains the implications of the strong interpretation, and therefore DSGEs fare no better under this approach. (3) Under a minimal econometric interpretation, DSGEs provide only prior distributions for specified population moments. When coupled with an econometric model (e.g., a vector autoregression) that includes the same moments, DSGEs may be compared and used for inference using conventional Bayesian methods. This interpretation extends and formalizes an approach suggested by DeJong, Ingram and Whiteman (1996). All three interpretations are illustrated using models of the equity premium, and it is shown that the conclusions from a minimal interpretation differ substantially from those under a weak interpretation. This revision was prepared for the DYNARE Conference, CEPREMAP, Paris, September 4–5, 2006. It is work in progress. Comments welcome. Please do not cite or quote without the author's permission.
A Theory Of Classifier Combination: The Neural Network Approach
, 1995
Abstract

Cited by 21 (0 self)
There is a trend in recent OCR development to improve system performance by combining recognition results of several complementary algorithms. This thesis examines the classifier combination problem under strict separation of the classifier and combinator design. Nothing is assumed about the training, design, or implementation of the classifiers other than the fact that every classifier has the same input and output specification. A general theory of combination should possess the following properties. It must be able to combine any type of classifiers regardless of the level of information content in the outputs. In addition, a general combinator must be able to combine any mixture of classifier types and utilize all information available. Since classifier independence is difficult to achieve and to detect, it is essential for a combinator to handle correlated classifiers robustly. Although the performance of a robust (against correlation) combinator can be improved by adding classifiers indiscriminately, it is generally of interest to achieve comparable performance with the minimum number of classifiers. Therefore, the combinator should have the ability to eliminate redundant classifiers. Furthermore, it is desirable to have a complexity control mechanism for the combinator. In the past, simplifications came from assumptions and constraints imposed by the system designers. In the general theory, there should be a mechanism to reduce solution complexity by exercising non-classifier-specific constraints. Finally, a combinator should capture classifier/image dependencies. Nearly all combination methods have ignored the fact that classifier performances (and outputs) depend on various image characteristics, and this dependency is manifested in classifier output patterns in relation to input imag...
Finitary Bayesian statistical inference through partitions tree distributions
 Sankhya
, 2007
Abstract

Cited by 2 (2 self)
According to the Bayesian theory, observations are usually considered to be part of an infinite sequence of random elements that are conditionally independent and identically distributed, given an unknown parameter. Such a parameter, which is the object of inference, depends on the entire sequence. Consequently, the unknown parameter cannot generally be observed, and any hypothesis about its realizations might be devoid of any empirical meaning. Therefore it becomes natural to focus attention on finite sequences of observations. The present paper introduces specific laws for finite exchangeable sequences and analyses some of their most relevant statistical properties. These laws, assessed through sequences of nested partitions, are strongly reminiscent of Pólya-tree distributions and allow forms of conjugate analysis. As a matter of fact, this family of distributions, called partitions tree distributions, contains the exchangeable laws directed by the more familiar Pólya-tree processes. Moreover, the paper gives an example of a partitions tree distribution connected with the hypergeometric urn scheme, where negative correlation between past and future observations is allowed. AMS (2000) subject classification. Primary 62C10, 62F15, 60G09.
Measures of Surprise in Bayesian Analysis
 Duke University
, 1997
Abstract

Cited by 2 (2 self)
Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H0, without any reference to alternative models. Traditional measures of surprise have been the p-values, which are however known to grossly overestimate the evidence against H0. Strict Bayesian analysis calls for an explicit specification of all possible alternatives to H0, so Bayesians have not made routine use of measures of surprise. In this report we critically review the proposals that have been made in this regard. We propose new modifications, stress the connections with robust Bayesian analysis, and discuss the choice of suitable predictive distributions which allow surprise measures to play their intended role in the presence of nuisance parameters. We recommend either the use of appropriate likelihood-ratio type measures or else the careful calibration of p-values so that they are closer to Bayesian answers. Key words and phrases. Bayes factors; Bayesian p-values; Bayesian robustness; Conditioning; Model checking; Predictive distributions.
Bayesian Case Studies in Nonparametrics
, 1991
"... Elements of Bayesian nonparametric statistical thought are explored in a series of case studies. Interpretation of a measurement as continuous, ordered, polychotomous, or dichotomous provides a framework in which examples are presented. Bayesian analogues to frequentist nonparametrics and overt Baye ..."
Abstract
Elements of Bayesian nonparametric statistical thought are explored in a series of case studies. Interpretation of a measurement as continuous, ordered, polychotomous, or dichotomous provides a framework in which examples are presented. Bayesian analogues to frequentist nonparametrics and overt Bayesian techniques are employed. Examples included are as follows: (1) averaging over families of distributions, (2) estimation of a single distribution function, (3) comparing several distribution functions, (4) estimating the coefficient of a concomitant variable affecting a distribution function, (5) monitoring compliance with a dichotomous measurement, and (6) using the multinomial for a categorization of any measurement's range. Lindley (1972, §12.2) provides an initial sketch. Hill's (1968) nonparametric Bayesian construct and Berliner and Hill's (1988) application to survival are also reviewed. A commonality in the mechanics of these examples is the calculation of a marginal distribution over model parameters. Many are predictive distributions, resulting from an average over a likelihood and vague prior, and leaving observables for the calculations, as described by Roberts (1965) and advocated by Geisser (1971). Other specific observations from these efforts include the following
Background
Abstract
Experiential learning is perhaps the most effective way to teach. One example is the scoring procedure used for exams in some decision analysis programs. Under this grading scheme, students take a multiple-choice exam, but rather than simply marking which answer they think is correct, they must assign a probability to each possible answer. The exam is then scored with a special scoring rule, under which students' best strategy is to avoid guessing and instead assign their true beliefs. Such a scoring function is known as a strictly proper scoring rule. In this paper, we discuss several different scoring rules and demonstrate how their use in testing situations provides insights for both students and instructors.
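As a concrete illustration of why a strictly proper rule rewards honest reporting, here is a minimal sketch using the logarithmic rule; the function name `log_score` and the toy probabilities are hypothetical, not taken from the paper, which discusses several rules.

```python
import math

def log_score(reported, correct_index):
    """Logarithmic scoring rule: the score is the log of the probability
    the student assigned to the option that turns out to be correct.
    This rule is strictly proper: truthful reporting uniquely maximizes
    expected score."""
    assert abs(sum(reported) - 1.0) < 1e-9, "probabilities must sum to 1"
    return math.log(reported[correct_index])

# A student believes option 0 is correct with probability 0.7.
# Expected score of three possible reports under that belief:
belief = [0.7, 0.3]
honest = sum(b * log_score([0.7, 0.3], i) for i, b in enumerate(belief))
hedged = sum(b * log_score([0.5, 0.5], i) for i, b in enumerate(belief))
exaggerated = sum(b * log_score([0.9, 0.1], i) for i, b in enumerate(belief))
```

Reporting the true belief yields the highest expected score; pushing all the probability onto one option would even risk a score of negative infinity when that option is wrong, which is exactly what deters guessing.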
Scoring Rules and Decision Analysis Education
 doi:10.1287/deca.1100.0184, ©2010 INFORMS
Abstract
Experiential learning is perhaps the most effective way to teach. One example is the scoring procedure used for exams in some decision analysis programs. Under this grading scheme, students take a multiple-choice exam, but rather than simply marking which answer they think is correct, they must assign a probability to each possible answer. The exam is then scored with a special scoring rule, under which students' best strategy is to avoid guessing and instead assign their true beliefs. Such a scoring function is known as a strictly proper scoring rule. In this paper, we discuss several different scoring rules and demonstrate how their use in testing situations provides insights for both students and instructors.