Forecast Evaluation and Combination
 In G.S. Maddala and C.R. Rao (eds.), Handbook of Statistics
, 1996
"... It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately forecast users naturally have a keen interest in monitoring and ..."
Abstract

It is obvious that forecasts are of great importance and widely used in economics and finance. Quite simply, good forecasts lead to good decisions. The importance of forecast evaluation and combination techniques follows immediately: forecast users naturally have a keen interest in monitoring and improving forecast performance. More generally, forecast evaluation figures prominently in many questions in empirical economics and finance, such as: Are expectations rational? (e.g., Keane and Runkle, 1990; Bonham and Cohen, 1995) Are financial markets efficient? (e.g., Fama, 1970, 1991) Do macroeconomic shocks cause agents to revise their forecasts at all horizons, or just at short and medium-term horizons? (e.g., Campbell and Mankiw, 1987; Cochrane, 1988) Are observed asset returns "too volatile"? (e.g., Shiller, 1979; LeRoy and Porter, 1981) Are asset returns forecastable over long horizons? (e.g., Fama and French, 1988; Mark, 1995)
Asymptotic calibration
 Biometrika
, 1998
"... Can we forecast the probability of an arbitrary sequence of events happening so that the stated probability of an event happening is close to its empirical probability? We can view this prediction problem as a game played against nature, where at the beginning of the game Nature picks a data sequenc ..."
Abstract

Can we forecast the probability of an arbitrary sequence of events happening so that the stated probability of an event happening is close to its empirical probability? We can view this prediction problem as a game played against nature, where at the beginning of the game Nature picks a data sequence and the forecaster picks a forecasting algorithm. If the forecaster is not allowed to randomize, then Nature wins; there will always be data for which the forecaster does poorly. This paper shows that, if the forecaster can randomize, the forecaster wins in the sense that the forecasted probabilities and the empirical probabilities can be made arbitrarily close to each other.
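The notion of calibration in this abstract can be checked empirically: bin the stated probabilities and compare each bin's mean forecast with the observed frequency of the event. A minimal sketch (the binning scheme and tolerance are illustrative choices, not the paper's procedure):

```python
import random

def empirical_calibration(forecasts, outcomes, n_bins=10):
    """Group forecasts into probability bins and compare each bin's mean
    stated probability with the empirical frequency of the event."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(forecasts, outcomes):
        i = min(int(p * n_bins), n_bins - 1)
        bins[i].append((p, y))
    report = []
    for b in bins:
        if b:
            stated = sum(p for p, _ in b) / len(b)
            freq = sum(y for _, y in b) / len(b)
            report.append((stated, freq, len(b)))
    return report

# A well-calibrated forecaster: the event occurs with its stated probability.
random.seed(0)
forecasts = [random.random() for _ in range(10000)]
outcomes = [1 if random.random() < p else 0 for p in forecasts]
for stated, freq, n in empirical_calibration(forecasts, outcomes):
    assert abs(stated - freq) < 0.1   # stated and empirical probabilities agree
```

Against an adversarially chosen sequence, a deterministic forecaster cannot guarantee this agreement; the paper's point is that a randomized forecaster can.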
Evaluating and combining subjective probability estimates
 Journal of Behavioral Decision Making
, 1997
"... This paper concerns the evaluation and combination of subjective probability estimates for categorical events. We argue that the appropriate criterion for evaluating individual and combined estimates depends on the type of uncertainty the decision maker seeks to represent, which in turn depends on h ..."
Abstract

This paper concerns the evaluation and combination of subjective probability estimates for categorical events. We argue that the appropriate criterion for evaluating individual and combined estimates depends on the type of uncertainty the decision maker seeks to represent, which in turn depends on his or her model of the event space. Decision makers require accurate estimates in the presence of aleatory uncertainty about exchangeable events, diagnostic estimates given epistemic uncertainty about unique events, and some combination of the two when the events are not necessarily unique, but the best equivalence class definition for exchangeable events is not apparent. Following a brief review of the mathematical and empirical literature on combining judgments, we present an approach to the topic that derives from (1) a weak cognitive model of the individual that assumes subjective estimates are a function of underlying judgment perturbed by random error and (2) a classification of judgment contexts in terms of the underlying information structure. In support of our developments, we present new analyses of two sets of subjective probability estimates, one of exchangeable and the other of unique events. As predicted, mean estimates were more accurate than the individual values in the first case and more diagnostic in ...
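The "judgment perturbed by random error" model implies that averaging independent estimates cancels error and improves accuracy. A small simulation of that mechanism (the true probability, noise level, and panel size are hypothetical, not the paper's data):

```python
import random

random.seed(0)
true_p = 0.7            # hypothetical probability of an exchangeable event
n_judges, n_events = 20, 500

def estimate(p):
    """One judge's estimate: the underlying judgment perturbed by
    independent random error, clipped to the unit interval."""
    return min(1.0, max(0.0, p + random.gauss(0, 0.15)))

individual_err = 0.0
combined_err = 0.0
for _ in range(n_events):
    ests = [estimate(true_p) for _ in range(n_judges)]
    mean_est = sum(ests) / n_judges
    individual_err += sum(abs(e - true_p) for e in ests) / n_judges
    combined_err += abs(mean_est - true_p)

# Averaging cancels independent error, so the mean estimate is closer
# to the truth than a typical individual estimate.
assert combined_err / n_events < individual_err / n_events
```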
STATED BELIEFS AND PLAY IN NORMAL-FORM GAMES
, 2004
"... Using data on oneshot games, we investigate the assumption that players respond to underlying expectations about their opponent’s behavior. In our laboratory experiments, subjects play a set of 14 twoperson 3x3 games, and state first order beliefs about their opponent’s behavior. The sets of respo ..."
Abstract

Using data on one-shot games, we investigate the assumption that players respond to underlying expectations about their opponent’s behavior. In our laboratory experiments, subjects play a set of 14 two-person 3x3 games, and state first-order beliefs about their opponent’s behavior. The sets of responses in the two tasks are largely inconsistent. Rather, we find evidence that the subjects perceive the games differently when they (i) choose actions, and (ii) state beliefs – they appear to pay more attention to the opponent’s incentives when they state beliefs than when they play the games. On average, they fail to best respond to their own stated beliefs in almost half of the games. The inconsistency is confirmed by estimates of a unified statistical model that jointly uses the actions and the belief statements. There, we can control for noise, and formulate a statistical test that rejects consistency. Effects of the belief elicitation procedure on subsequent actions are mostly insignificant.
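The consistency check described here is mechanical: compute the best response to a subject's stated belief and compare it with the action actually chosen. A sketch with a hypothetical payoff matrix and belief (illustrative values, not the experiment's games):

```python
# Hypothetical 3x3 payoff matrix for the row player (rows: own actions,
# columns: opponent's actions); entries are the row player's payoffs.
payoffs = [
    [4, 0, 2],
    [1, 3, 1],
    [0, 2, 5],
]

def best_response(payoffs, belief):
    """Return the action maximizing expected payoff against a stated
    first-order belief (a distribution over the opponent's actions)."""
    expected = [sum(row[j] * belief[j] for j in range(3)) for row in payoffs]
    return max(range(3), key=lambda i: expected[i])

stated_belief = [0.2, 0.5, 0.3]   # hypothetical elicited belief
chosen_action = 1                  # hypothetical observed play
consistent = best_response(payoffs, stated_belief) == chosen_action
# Here the best response to the stated belief is action 2, so this
# subject fails to best respond to her own stated belief.
```

Repeating this comparison across all 14 games per subject yields the roughly-half inconsistency rate the abstract reports.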
Evaluating the predictive accuracy of volatility models
 Journal of Forecasting
, 2001
"... Statistical loss functions that generally lack economic content are commonly used for evaluating financial volatility forecasts. In this paper, an evaluation framework based on loss functions tailored to a user’s economic interests is proposed. According to these interests, the user specifies the ec ..."
Abstract

Statistical loss functions that generally lack economic content are commonly used for evaluating financial volatility forecasts. In this paper, an evaluation framework based on loss functions tailored to a user’s economic interests is proposed. According to these interests, the user specifies the economic events to be forecast, the criterion with which to evaluate these forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the specified criteria (i.e., a probability scoring rule and calibration tests). An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results.
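The transformation step can be sketched concretely: under an assumed conditional distribution, a volatility forecast implies a probability for a user-specified event, which a probability scoring rule then evaluates. A minimal example assuming conditionally normal, zero-mean returns and the Brier score (the distributional assumption, threshold, and data are illustrative, not the paper's specification):

```python
import math

def event_prob(sigma, c):
    """P(|return| > c) implied by a volatility forecast sigma, assuming
    conditionally normal returns with zero mean."""
    phi = 0.5 * (1 + math.erf(c / (sigma * math.sqrt(2))))  # normal CDF at c/sigma
    return 2 * (1 - phi)

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts; lower is better."""
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Hypothetical daily volatility forecasts (%) and realized returns (%).
sigmas = [0.8, 1.2, 0.5, 2.0]
returns = [0.3, -1.5, 0.1, 2.5]
threshold = 1.0   # user's event of interest: a move larger than 1%

probs = [event_prob(s, threshold) for s in sigmas]
outcomes = [1 if abs(r) > threshold else 0 for r in returns]
score = brier_score(probs, outcomes)
```

Different thresholds or scoring rules encode different economic interests, which is why the choice of loss function changes the evaluation results.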
Accuracy and Coherence: Prospects for an Alethic Epistemology of Partial Belief
"... Traditional epistemology is both dogmatic and alethic. It is dogmatic in the sense that it takes the fundamental doxastic attitude to be full belief, the state in which a person categorically accepts some proposition as true. It is alethic in the sense that it evaluates such categorical beliefs on t ..."
Abstract

Traditional epistemology is both dogmatic and alethic. It is dogmatic in the sense that it takes the fundamental doxastic attitude to be full belief, the state in which a person categorically accepts some proposition as true. It is alethic in the sense that it evaluates such categorical beliefs on the basis of what William James calls the ‘two great commandments’ of epistemology: Believe the truth! Avoid error! Other central concepts of dogmatic epistemology – knowledge, justification, reliability, sensitivity, and so on – are understood in terms of their relationships to this ultimate standard of truth or accuracy. Some epistemologists, inspired by Bayesian approaches in decision theory and statistics, have sought to replace the dogmatic model with a probabilistic one in which partial beliefs, or credences, play the leading role. A person’s credence in a proposition X is her level of confidence in its truth. This corresponds, roughly, to the degree to which she is disposed to presuppose X in her theoretical and practical reasoning. Credences are inherently gradational: the strength of a partial belief in X can range from certainty of truth, through maximal uncertainty (in which X and its negation ∼X are believed equally strongly), to complete certainty of falsehood. These variations in confidence are warranted by differing states of evidence, and they rationalize different choices among options whose outcomes depend on X. It is a central normative doctrine of probabilistic epistemology that rational credences should obey the laws of probability. In the idealized case where a believer has a numerically precise credence b(X) for every proposition X in some Boolean algebra of propositions, these laws are as follows:
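The excerpt breaks off before stating the laws. For a credence function $b$ on a Boolean algebra of propositions, the standard probability axioms the passage refers to are (a reconstruction of the textbook axioms, not the paper's exact wording):

```latex
\begin{align*}
\textbf{Non-negativity:} \quad & b(X) \ge 0 \text{ for every proposition } X;\\
\textbf{Normalization:}  \quad & b(\top) = 1 \text{ for the necessary proposition } \top;\\
\textbf{Additivity:}     \quad & b(X \vee Y) = b(X) + b(Y) \text{ whenever } X \text{ and } Y \text{ are mutually exclusive.}
\end{align*}
```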
Two-stage dynamic signal detection: A theory of choice, decision time, and confidence
 In
, 2010
"... The 3 most oftenused performance measures in the cognitive and decision sciences are choice, response or decision time, and confidence. We develop a random walk/diffusion theory—2stage dynamic signal detection (2DSD) theory—that accounts for all 3 measures using a common underlying process. The mo ..."
Abstract

The 3 most often-used performance measures in the cognitive and decision sciences are choice, response or decision time, and confidence. We develop a random walk/diffusion theory—2-stage dynamic signal detection (2DSD) theory—that accounts for all 3 measures using a common underlying process. The model uses a drift diffusion process to account for choice and decision time. To estimate confidence, we assume that evidence continues to accumulate after the choice. Judges then interrupt the process to categorize the accumulated evidence into a confidence rating. The model explains all known interrelationships between the 3 indices of performance. Furthermore, the model also accounts for the distributions of each variable in both a perceptual and general knowledge task. The dynamic nature of the model also reveals the moderating effects of time pressure on the accuracy of choice and confidence. Finally, the model specifies the optimal solution for giving the fastest choice and confidence rating for a given level of choice and confidence accuracy. Judges are found to act in a manner consistent with the optimal solution when making confidence judgments.
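The two-stage process described above can be sketched as a simulation: a diffusion to a choice threshold yields choice and decision time, then evidence continues to accumulate and is categorized into a confidence rating. A minimal sketch with illustrative parameter values and a simple rating rule (not the paper's fitted model):

```python
import random

def two_stage_trial(drift=0.1, threshold=2.0, extra_steps=30, noise=1.0):
    """One 2DSD-style trial: drift diffusion to a choice bound, then
    continued post-decision accumulation before a confidence judgment."""
    x, t = 0.0, 0
    while abs(x) < threshold:           # stage 1: accumulate to a bound
        x += drift + random.gauss(0, noise)
        t += 1
    choice = 1 if x > 0 else 0          # choice and decision time t
    for _ in range(extra_steps):        # stage 2: post-decision accumulation
        x += drift + random.gauss(0, noise)
    # Categorize the final evidence (in the direction of the choice)
    # into a 1-5 confidence rating; the mapping is illustrative.
    evidence = x if choice == 1 else -x
    confidence = min(5, max(1, round(3 + evidence / 2)))
    return choice, t, confidence

random.seed(0)
trials = [two_stage_trial() for _ in range(2000)]
# With positive drift, choice 1 is "correct", so accuracy exceeds chance.
accuracy = sum(c for c, _, _ in trials) / len(trials)
```

Because confidence depends on evidence accumulated after the choice, the model naturally produces trials where a fast choice is later held with low confidence, one of the interrelationships the theory is built to explain.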