Results 1 – 8 of 8
Bayesian model averaging
 STAT.SCI
, 1999
Abstract

Cited by 42 (0 self)
Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to overconfident inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA have recently emerged. We discuss these methods and present a number of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of ...
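The mechanism this abstract describes, weighting each candidate model's predictions by its posterior model probability, can be sketched with the common BIC approximation to those probabilities. This is an illustration under assumed data and models, not code from the paper:

```python
import numpy as np

# Hypothetical illustration: approximate BMA over two nested linear models
# using BIC weights, exp(-0.5 * delta_BIC), as a stand-in for posterior
# model probabilities. Data and models are invented for the sketch.

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 2.0 * x1 + rng.normal(scale=0.5, size=n)  # x2 is irrelevant

def fit_bic(X, y):
    """OLS fit; return in-sample predictions and BIC under Gaussian errors."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    loglik = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)
    return X @ beta, -2 * loglik + X.shape[1] * np.log(len(y))

X_a = np.column_stack([np.ones(n), x1])        # model A: intercept + x1
X_b = np.column_stack([np.ones(n), x1, x2])    # model B: adds x2
(pred_a, bic_a), (pred_b, bic_b) = fit_bic(X_a, y), fit_bic(X_b, y)

# BIC -> approximate posterior model weights
bics = np.array([bic_a, bic_b])
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()

bma_pred = w[0] * pred_a + w[1] * pred_b  # model-averaged prediction
print(w)
```

Averaging predictions with these weights, rather than conditioning on a single selected model, is what produces the wider and better-calibrated predictive intervals the abstract alludes to.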
Combining probability distributions from dependent information sources
 Management Sci
, 1981
Abstract

Cited by 33 (1 self)
Computational Experiments and Reality
, 1999
Abstract

Cited by 26 (0 self)
This study explores three alternative econometric interpretations of dynamic, stochastic general equilibrium (DSGE) models. (1) A strong econometric interpretation takes the model literally and directly produces a likelihood function for observed prices and quantities. It is widely recognized that under this interpretation, most DSGE models are rejected using classical econometrics and assigned zero probability in a Bayesian approach. (2) A weak econometric interpretation, commonly made in the calibration literature, confines attention to only a few functions of observed prices and interest rates and evaluates a model on its predictive distribution for these functions. This approach is equivalent to a Bayesian prior predictive analysis, developed by Box (1980) and predecessors. This study shows that the weak interpretation retains the implications of the strong interpretation, and therefore DSGEs fare no better under this approach. (3) Under a minimal econometric interpretation, DSGEs provide only prior distributions for specified population moments. When coupled with an econometric model (e.g., a vector autoregression) that includes the same moments, DSGEs may be compared and used for inference using conventional Bayesian methods. This interpretation extends and formalizes an approach suggested by DeJong, Ingram and Whiteman (1996). All three interpretations are illustrated using models of the equity premium, and it is shown that the conclusions from a minimal interpretation differ substantially from those under a weak interpretation. This revision was prepared for the DYNARE Conference, CEPREMAP, Paris, September 4–5, 2006. It is work in progress. Comments welcome. Please do not cite or quote without the author’s permission.
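The prior predictive analysis behind the "weak" interpretation can be sketched as follows; the prior, the moment, and the observed value are hypothetical stand-ins, not numbers from the paper:

```python
import numpy as np

# Minimal sketch of a prior predictive check in the spirit of Box (1980):
# is an observed moment (here an invented mean equity premium of 6%)
# plausible under the model's prior predictive distribution?

rng = np.random.default_rng(2)
n_draws, n_obs = 5000, 100

# Prior over the model's mean-premium parameter (hypothetical)
mu = rng.normal(loc=0.02, scale=0.01, size=n_draws)
# For each prior draw, simulate a data set and record its sample mean
sim_means = mu + rng.normal(scale=0.15 / np.sqrt(n_obs), size=n_draws)

observed_mean = 0.06
# Prior predictive tail probability of a value at least as large as observed
p = np.mean(sim_means >= observed_mean)
print(p)
```

A small tail probability indicates that the observed moment is surprising under the model's priors, which is the sense in which a DSGE model can "fail" under the weak interpretation without ever writing down a full likelihood.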
A Theory Of Classifier Combination: The Neural Network Approach
, 1995
Abstract

Cited by 18 (0 self)
There is a trend in recent OCR development to improve system performance by combining recognition results of several complementary algorithms. This thesis examines the classifier combination problem under strict separation of the classifier and combinator design. Nothing other than the fact that every classifier has the same input and output specification is assumed about the training, design or implementation of the classifiers. A general theory of combination should possess the following properties. It must be able to combine any type of classifiers regardless of the level of information content in the outputs. In addition, a general combinator must be able to combine any mixture of classifier types and utilize all information available. Since classifier independence is difficult to achieve and to detect, it is essential for a combinator to handle correlated classifiers robustly. Although the performance of a robust (against correlation) combinator can be improved by adding classifiers indiscriminately, it is generally of interest to achieve comparable performance with the minimum number of classifiers. Therefore, the combinator should have the ability to eliminate redundant classifiers. Furthermore, it is desirable to have a complexity control mechanism for the combinator. In the past, simplifications have come from assumptions and constraints imposed by the system designers. In the general theory, there should be a mechanism to reduce solution complexity by exercising non-classifier-specific constraints. Finally, a combinator should capture classifier/image dependencies. Nearly all combination methods have ignored the fact that classifier performances (and outputs) depend on various image characteristics, and this dependency is manifested in classifier output patterns in relation to input imag...
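As a minimal illustration of measurement-level combination, here is a hypothetical weighted average of class posteriors; it is a baseline sketch, not the neural-network combinator developed in the thesis, and the weights are assumed reliabilities:

```python
import numpy as np

# Combine the class-posterior outputs of several classifiers for one input
# by a weighted average. Each row is one classifier's posterior over 3
# classes; the weights are invented per-classifier reliabilities.

posteriors = np.array([
    [0.6, 0.3, 0.1],   # classifier 1
    [0.5, 0.4, 0.1],   # classifier 2
    [0.2, 0.7, 0.1],   # classifier 3 (disagrees with 1 and 2)
])
weights = np.array([0.5, 0.3, 0.2])  # assumed reliabilities, sum to 1

combined = weights @ posteriors       # weighted-average posterior
decision = int(np.argmax(combined))   # final class decision
print(combined, decision)
```

Because the average is still a proper probability vector, this combinator accepts any classifiers that emit posteriors, regardless of how they were trained, which is the "same input and output specification" separation the abstract emphasizes.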
Measures of Surprise in Bayesian Analysis
 Duke University
, 1997
Abstract

Cited by 2 (2 self)
Measures of surprise refer to quantifications of the degree of incompatibility of data with some hypothesized model H0, without any reference to alternative models. Traditional measures of surprise have been p-values, which are however known to grossly overestimate the evidence against H0. Strict Bayesian analysis calls for an explicit specification of all possible alternatives to H0, so Bayesians have not made routine use of measures of surprise. In this report we critically review the proposals that have been made in this regard. We propose new modifications, stress the connections with robust Bayesian analysis and discuss the choice of suitable predictive distributions which allow surprise measures to play their intended role in the presence of nuisance parameters. We recommend either the use of appropriate likelihood-ratio type measures or else the careful calibration of p-values so that they are closer to Bayesian answers. Key words and phrases: Bayes factors; Bayesian p-values; Bayesian robustness; Conditioning; Model checking; Predictive distributions.
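The "careful calibration of p-values" recommended here can be illustrated with the -e·p·log(p) lower bound on the Bayes factor, later published by Sellke, Bayarri and Berger (2001); this is an illustrative connection drawn here, not a formula quoted from the 1997 report:

```python
import math

def bf_lower_bound(p):
    """Calibration of a p-value into a lower bound on the Bayes factor
    in favor of H0: -e * p * log(p), valid for 0 < p < 1/e."""
    if not 0 < p < 1 / math.e:
        raise ValueError("calibration is valid only for 0 < p < 1/e")
    return -math.e * p * math.log(p)

# A p-value of 0.05 corresponds to a Bayes factor of at least ~0.41 in
# favor of H0 -- far weaker evidence against H0 than "0.05" suggests.
print(round(bf_lower_bound(0.05), 3))
```

This makes the abstract's point concrete: a nominally significant p-value, once calibrated, often carries only mild evidence against H0.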
Bayesian Case Studies in Nonparametrics
, 1991
Abstract
Elements of Bayesian nonparametric statistical thought are explored in a series of case studies. Interpretation of a measurement as continuous, ordered, polychotomous, or dichotomous provides a framework in which examples are presented. Bayesian analogues to frequentist nonparametrics and overt Bayesian techniques are employed. Examples included are as follows: (1) averaging over families of distributions, (2) estimation of a single distribution function, (3) comparing several distribution functions, (4) estimating the coefficient of a concomitant variable affecting a distribution function, (5) monitoring compliance with a dichotomous measurement, and (6) using the multinomial for a categorization of any measurement's range. Lindley (1972, §12.2) provides an initial sketch. Hill's (1968) nonparametric Bayesian construct and Berliner and Hill's (1988) application to survival are also reviewed. A commonality in the mechanics of these examples is the calculation of a marginal distribution over model parameters. Many are predictive distributions, resulting from an average over a likelihood and vague prior, and leaving observables for the calculations, as described by Roberts (1965) and advocated by Geisser (1971). Other specific observations from these efforts include the following ...
Model Averaging in Economics: An Overview
 Enrique Moral-Benito
, 2010
Abstract
Standard practice in empirical research is based on two steps: first, researchers select a model from the space of all possible models; second, they proceed as if the selected model had generated the data. Uncertainty in the model selection step is therefore typically ignored. Model averaging, by contrast, accounts for this model uncertainty. In this paper, I review the literature on model averaging with special emphasis on its applications to economics. Finally, as an empirical illustration, I consider model averaging to examine the deterrent effect of capital punishment across states in the US. JEL Classification: C5, K4.
Modeling share returns an empirical study on the Variance Gamma model
Abstract
Because there has been little research on some essential issues of the Variance Gamma (VG) process, we have identified a gap in the literature regarding the performance of the various estimation methods for modeling empirical share returns. While some papers present only a few estimated parameters for a very small, selected empirical database, Finlay and Seneta (2008) compare most of the possible estimation methods using simulated data. In contrast to Finlay and Seneta (2008), we utilize a broad, daily, empirical data set consisting of the stocks of each company listed on the Dow Jones over the period from 1991 to 2011. We also apply a regime-switching model in order to identify normal and turbulent times within our data set and fit the VG process to the data in the respective period. We find that the VG process parameters vary over time and, in accordance with the regime-switching model, we observe significantly increasing fitting rates, which are due to the chosen periods.
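The VG process this abstract fits can be simulated via its standard construction as Brownian motion with drift run on a gamma time change; the parameter values below are hypothetical, not the paper's estimates:

```python
import numpy as np

# Simulate a Variance Gamma path: Brownian motion with drift theta and
# volatility sigma, subordinated by a gamma clock with variance rate nu.
# Parameters are illustrative placeholders, not fitted values.

rng = np.random.default_rng(1)
theta, sigma, nu = -0.1, 0.2, 0.5   # drift, volatility, gamma variance rate
T, n_steps = 1.0, 252               # one "year" of daily steps
dt = T / n_steps

# Gamma time increments with mean dt and variance nu * dt
dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)
# VG increments: the Brownian motion evaluated on the gamma clock
dX = theta * dG + sigma * np.sqrt(dG) * rng.normal(size=n_steps)
path = np.concatenate([[0.0], np.cumsum(dX)])
print(path.shape)
```

Fitting then amounts to choosing (theta, sigma, nu) so that simulated increments match the empirical return distribution, which is the estimation problem the paper compares methods for, per regime.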