Results 1–10 of 24
Calibrated Learning and Correlated Equilibrium
Games and Economic Behavior, 1996
Cited by 86 (5 self)
Abstract: Suppose two players meet each other in a repeated game where: (1) each uses a learning rule with the property that it is a calibrated forecast of the other's plays, and (2) each plays a best response to this forecast distribution.
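Step 2 of this recipe, best-responding to a forecast distribution over the opponent's play, is the easy part and can be sketched in a few lines (an illustrative helper with names of my choosing; the calibrated learning rule of step 1 is the substantive construction and is not shown):

```python
def best_response(payoff, actions, forecast):
    """Pick the action maximizing expected payoff against a forecast.

    payoff(a, b): this player's payoff for action a vs. opponent action b.
    forecast: dict mapping opponent actions to forecast probabilities.
    """
    return max(actions, key=lambda a: sum(
        prob * payoff(a, b) for b, prob in forecast.items()))

# Example: coordination payoff, forecast leaning toward 'H'.
choice = best_response(lambda a, b: 1.0 if a == b else 0.0,
                       ['H', 'T'], {'H': 0.7, 'T': 0.3})
```

The paper's result is that when both players' forecasts are calibrated, the empirical joint play of such best responders converges to the set of correlated equilibria.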
Asymptotic calibration
1998
Cited by 71 (4 self)
Abstract: Can we forecast the probability of an arbitrary sequence of events happening so that the stated probability of an event happening is close to its empirical probability? We can view this prediction problem as a game played against Nature, where at the beginning of the game Nature picks a data sequence and the forecaster picks a forecasting algorithm. If the forecaster is not allowed to randomise, then Nature wins; there will always be data for which the forecaster does poorly. This paper shows that, if the forecaster can randomise, the forecaster wins in the sense that the forecasted probabilities and the empirical probabilities can be made arbitrarily close to each other.
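The closeness of forecasted and empirical probabilities can be made concrete with a small check (a hedged sketch: the binning scheme and error measure here are illustrative choices of mine, not the paper's exact calibration criterion):

```python
def calibration_error(forecasts, outcomes, n_bins=10):
    """Weighted gap between stated probabilities and empirical frequencies.

    Forecast probabilities are grouped into bins; within each bin the mean
    forecast is compared to the fraction of outcomes that actually occurred,
    and the gaps are averaged with weights proportional to bin occupancy.
    """
    bins = [[] for _ in range(n_bins)]
    for p, x in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, x))
    total, n = 0.0, len(forecasts)
    for b in bins:
        if not b:
            continue
        mean_p = sum(p for p, _ in b) / len(b)
        freq = sum(x for _, x in b) / len(b)
        total += len(b) / n * abs(mean_p - freq)
    return total
```

A randomizing forecaster, in the sense of the paper, can drive this kind of error toward zero on any outcome sequence; a deterministic one cannot.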
Probabilistic forecasts, calibration and sharpness
Journal of the Royal Statistical Society Series B, 2007
Cited by 41 (15 self)
Abstract: Probabilistic forecasts of continuous variables take the form of predictive densities or predictive cumulative distribution functions. We propose a diagnostic approach to the evaluation of predictive performance that is based on the paradigm of maximizing the sharpness of the predictive distributions subject to calibration. Calibration refers to the statistical consistency between the distributional forecasts and the observations and is a joint property of the predictions and the events that materialize. Sharpness refers to the concentration of the predictive distributions and is a property of the forecasts only. A simple theoretical framework allows us to distinguish between probabilistic calibration, exceedance calibration and marginal calibration. We propose and study tools for checking calibration and sharpness, among them the probability integral transform histogram, marginal calibration plots, the sharpness diagram and proper scoring rules. The diagnostic approach is illustrated by an assessment and ranking of probabilistic forecasts of wind speed at the Stateline wind energy centre in the US Pacific Northwest. In combination with cross-validation or in the time series context, our proposal provides very general, nonparametric alternatives to the use of information criteria for model diagnostics and model selection.
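The probability integral transform (PIT) histogram mentioned in this abstract is straightforward to sketch (function names and the binning are my own; the paper's diagnostic toolkit is considerably richer):

```python
def pit_values(cdf_forecasts, observations):
    """Evaluate each predictive CDF at its realized observation.

    Under probabilistic calibration, the resulting PIT values should look
    uniform on [0, 1].
    """
    return [F(y) for F, y in zip(cdf_forecasts, observations)]

def pit_histogram(pit, n_bins=10):
    """Bin counts of PIT values. Roughly flat suggests calibration; a
    U-shape suggests underdispersed forecasts, a hump overdispersed ones."""
    counts = [0] * n_bins
    for u in pit:
        counts[min(int(u * n_bins), n_bins - 1)] += 1
    return counts
```

In practice one would plot the counts and inspect their shape, alongside sharpness diagnostics such as the width of central prediction intervals.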
Deterministic calibration and Nash equilibrium
Proceedings of the Seventeenth Annual Conference on Learning Theory, volume 3120 of Lecture Notes in Computer Science, 2004
Cited by 39 (2 self)
Abstract: We provide a natural learning process in which the joint frequency of empirical play converges into the set of convex combinations of Nash equilibria. In this process, all players rationally choose their actions using a public prediction made by a deterministic, weakly calibrated algorithm. Furthermore, the public predictions used in any given round of play are frequently close to some Nash equilibrium of the game.
Conditional Universal Consistency
1997
Cited by 33 (0 self)
Abstract: Each period, a player must choose an action without knowing the outcome that will be chosen by "Nature," according to an unknown and possibly history-dependent stochastic rule. We discuss a class of procedures that assign observations to categories, and prescribe a simple randomized variation of fictitious play within each category. These procedures are "conditionally consistent," in the sense of yielding almost as high a time-average payoff as could be obtained if the player chose knowing the conditional distributions of actions given categories. Moreover, given any alternative procedure, there is a conditionally consistent procedure whose performance is no more than epsilon worse, regardless of the discount factor. Cycles can persist if all players classify histories in the same way; however, in an example where players classify histories differently, the system converges to a Nash equilibrium. We also argue that in the long run the time-average of play should resemble a correlated equilibrium.
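The category-based procedure can be sketched roughly as follows (a minimal sketch assuming a finite action set and a user-supplied `categorize` function, both hypothetical interfaces of mine; the paper's randomization and consistency machinery are more involved):

```python
import random
from collections import Counter, defaultdict

class CategorizedFictitiousPlay:
    """Fictitious play run separately inside each category of histories.

    The player best-responds to the empirical distribution of Nature's past
    outcomes within the current category, with a small probability `eps` of
    randomizing (a crude stand-in for the paper's smoothing).
    """

    def __init__(self, actions, payoff, categorize, eps=0.05):
        self.actions = actions            # finite action set
        self.payoff = payoff              # payoff(action, outcome) -> float
        self.categorize = categorize      # history -> category label
        self.counts = defaultdict(Counter)  # category -> outcome counts
        self.eps = eps

    def act(self, history):
        counts = self.counts[self.categorize(history)]
        if not counts or random.random() < self.eps:
            return random.choice(self.actions)  # randomized variation
        total = sum(counts.values())
        # best response to the conditional empirical distribution
        return max(self.actions, key=lambda a: sum(
            c / total * self.payoff(a, o) for o, c in counts.items()))

    def observe(self, history, outcome):
        self.counts[self.categorize(history)][outcome] += 1
```

Conditional consistency then says the time-average payoff approaches what the player could earn knowing the true conditional outcome distributions given categories.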
Calibrated Forecasting and Merging
1996
Cited by 22 (4 self)
Abstract: Consider a general finite-state stochastic process governed by an unknown objective probability distribution. Observing the system, a forecaster assigns subjective probabilities to future states. The resulting subjective forecast merges to the objective distribution if, with time, the forecasted probabilities converge to the correct (but unknown) probabilities. The forecast is calibrated if observed long-run empirical distributions coincide with the forecasted probabilities. This paper links the unobserved reliability of forecasts to their observed empirical performance by demonstrating full equivalence between notions of merging and of calibration. It also indicates some implications of this equivalence for the literatures of forecasting and learning.
Any inspection is manipulable
Econometrica, 2001
Cited by 16 (1 self)
Abstract: A forecaster provides a probabilistic prediction regarding the following day's state of nature. To examine the forecaster, an inspector employs calibration tests that compare the average prediction and the empirical frequency of pre-specified events. This paper shows that any mixed test can be manipulated in the sense that, independently of the state realizations, the difference between the average prediction and the past empirical frequency that corresponds to almost any test employed diminishes to zero. In other words, a forecaster has a prediction scheme that passes almost any test. In particular, a forecaster can pass all the tests in a countable set simultaneously.
Comparative Testing of Experts
2006
Cited by 13 (3 self)
Abstract: We show that a simple "reputation-style" test can always identify which of two experts is informed about the true distribution. The test presumes no prior knowledge of the true distribution, achieves any desired degree of precision in some fixed finite time, and does not use "counterfactual" predictions. Our test relies on a simple reputation argument due to Fudenberg and Levine (1992). We then use our setup to shed some light on the apparent paradox that a strategically motivated expert can ignorantly pass any test. We point out that this paradox is a consequence of the fact that, in the single-expert setting, any mixed strategy for Nature is reducible to a pure strategy, thus eliminating any meaningful sense in which Nature can randomize. Comparative testing reverses the impossibility result because the presence of an informed expert eliminates the reducibility
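The comparative idea behind a reputation-style test can be illustrated with a sequential log-likelihood-ratio comparison (an illustrative stand-in, not the paper's actual test; the binary-outcome setting, threshold, and function names are my assumptions):

```python
import math

def reputation_test(probs_a, probs_b, outcomes, log_threshold=5.0):
    """Compare two experts' sequential binary forecasts.

    Tracks the log-likelihood ratio of expert A's predictions over expert
    B's, and declares a winner once it crosses a fixed threshold. Assumes
    forecasts are bounded away from 0 and 1.
    """
    llr = 0.0
    for pa, pb, x in zip(probs_a, probs_b, outcomes):
        # probability each expert assigned to the realized outcome
        la = pa if x == 1 else 1 - pa
        lb = pb if x == 1 else 1 - pb
        llr += math.log(la) - math.log(lb)
        if llr >= log_threshold:
            return "A"
        if llr <= -log_threshold:
            return "B"
    return None  # undecided within the sample
```

The point of the paper is that, unlike single-expert tests, a comparison of this flavor cannot be ignorantly manipulated: an uninformed expert loses the reputation race against an informed one.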
A Geometric Proof of Calibration
2009
Cited by 9 (6 self)
Abstract: We provide yet another proof of the existence of calibrated forecasters; it has two merits. First, it is valid for an arbitrary finite number of outcomes. Second, it is short and simple, following from a direct application of Blackwell's approachability theorem to a carefully chosen vector-valued payoff function and convex target set. Our proof captures the essence of existing proofs based on approachability (e.g., the proof by Foster [1999] in the case of binary outcomes) and highlights the intrinsic connection between approachability and calibration.
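The flavor of the approachability construction can be sketched as follows (a sketch under the assumption of a finite forecast grid, in the spirit of Foster's binary-outcome proof; the paper's exact payoff function and target set may differ):

```latex
% Restrict forecasts to a finite grid p_1, \dots, p_m in the simplex \Delta(K).
% If the forecaster announces p_i at time t and outcome x_t \in \{e_1,\dots,e_K\}
% occurs, define an m-block vector payoff whose only nonzero block is block i:
\[
  r_t \;=\; \bigl(0,\;\dots,\;\underbrace{x_t - p_i}_{\text{block } i},\;\dots,\;0\bigr).
\]
% Calibration says each forecast value is empirically correct on the rounds
% it was used, i.e. every block of the running average vanishes:
\[
  \frac{1}{T}\sum_{t=1}^{T} r_t \;\longrightarrow\; \{0\}.
\]
% Blackwell's approachability theorem supplies a randomized strategy that
% approaches this convex target set, hence a calibrated forecaster exists.
```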
Online calibrated forecasts: Memory efficiency versus universality for learning in games
Machine Learning, 2006
Cited by 8 (8 self)
Abstract: We provide a simple learning process that enables an agent to forecast a sequence of outcomes. Our forecasting scheme, termed tracking forecast, is based on tracking the past observations while emphasizing recent outcomes. As opposed to other forecasting schemes, we sacrifice universality in favor of significantly reduced memory requirements. We show that if the sequence of outcomes has certain properties, namely some internal (hidden) state that does not change too rapidly, then the tracking forecast is weakly calibrated so that the forecast appears to be correct most of the time. For binary outcomes, this result holds without any internal state assumptions. We consider learning in a repeated strategic game where each player attempts to compute some forecast of the opponent actions and play a best response to it. We show that if one of the players uses a tracking forecast, while the other player uses a standard learning algorithm (such as exponential regret matching or smooth fictitious play), then the player using the tracking forecast obtains the best response to the actual play of the other players. We further show that if both players use a tracking forecast, then under certain conditions on the game matrix play converges to a Nash equilibrium.
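In the binary case, a tracking-style forecast might look like the following (a minimal sketch with a constant step size of my choosing; the paper's step-size rule and weak-calibration guarantees are more delicate):

```python
def tracking_forecast(outcomes, step=0.1):
    """Exponentially weighted tracking of binary outcomes.

    Maintains a single number as state (hence the small memory footprint)
    and nudges it toward each new outcome, emphasizing recent observations.
    Returns the forecast issued *before* each outcome was seen.
    """
    f = 0.5  # uninformative initial forecast
    forecasts = []
    for x in outcomes:
        forecasts.append(f)
        f += step * (x - f)  # move the forecast toward the latest outcome
    return forecasts
```

A universal scheme would instead track statistics of the whole history; the trade-off described in the abstract is exactly this single-state memory footprint against universality.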