Results 1–10 of 25
If You’re So Smart, Why Aren’t You Rich? Belief Selection in Complete and Incomplete Markets
, 2001
Calibrated Forecasting and Merging
, 1996
Abstract

Cited by 22 (5 self)
Consider a general finite-state stochastic process governed by an unknown objective probability distribution. Observing the system, a forecaster assigns subjective probabilities to future states. The resulting subjective forecast merges to the objective distribution if, with time, the forecasted probabilities converge to the correct (but unknown) probabilities. The forecast is calibrated if observed long-run empirical distributions coincide with the forecasted probabilities. This paper links the unobserved reliability of forecasts to their observed empirical performance by demonstrating full equivalence between notions of merging and of calibration. It also indicates some implications of this equivalence for the literatures of forecasting and learning.
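The calibration notion in this abstract can be illustrated with a short sketch (not from the paper; the binning scheme and the i.i.d. coin example are assumptions for illustration): group rounds by forecasted probability and compare each group's empirical frequency to the forecast itself.

```python
import random

def calibration_by_bin(forecasts, outcomes, bins=10):
    """For each bin of forecasted probabilities, return the observed
    empirical frequency of the event; for a calibrated forecaster the
    long-run frequency in a bin matches the forecasts in that bin."""
    buckets = {}  # bin index -> (rounds, successes)
    for p, y in zip(forecasts, outcomes):
        b = min(int(p * bins), bins - 1)
        cnt, hits = buckets.get(b, (0, 0))
        buckets[b] = (cnt + 1, hits + y)
    return {b: hits / cnt for b, (cnt, hits) in buckets.items()}

# A forecaster that announces the true parameter of an i.i.d. coin is
# calibrated: the empirical frequency converges to the forecast 0.7.
rng = random.Random(0)
outcomes = [1 if rng.random() < 0.7 else 0 for _ in range(100_000)]
freqs = calibration_by_bin([0.7] * len(outcomes), outcomes)
```

Since all forecasts here are identical, a single bin results; a miscalibrated forecaster (say, always announcing 0.5 for the same coin) would show a bin frequency far from its forecast.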
Bayesian Representation of Stochastic Processes under Learning: de Finetti Revisited
 Econometrica
, 1998
Abstract

Cited by 18 (1 self)
A probability distribution governing the evolution of a stochastic process has infinitely many Bayesian representations of the form μ = ∫ μ_θ dλ(θ). Among these, a natural representation is one whose components (the μ_θ's) are 'learnable' (one can approximate μ_θ by conditioning μ on observation of the process) and 'sufficient for prediction' (μ_θ's predictions are not aided by conditioning on observation of the process). We show the existence and uniqueness of such a representation under a suitable asymptotic mixing condition on the process. This representation can be obtained by conditioning on the tail field of the process, and any learnable representation that is sufficient for prediction is asymptotically like the tail-field representation. This result is related to the celebrated de Finetti theorem, but with exchangeability ...
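The mixture representation and the learnability property stated in the abstract can be written compactly; the notation below (μ for the process law, μ_θ for the components, λ for the mixing measure) is a sketch assumed from context, restating only what the abstract says.

```latex
% Bayesian (mixture) representation of the process law:
\mu \;=\; \int_{\Theta} \mu_{\theta}\, d\lambda(\theta)

% `Learnable': the generating component can be approximated by
% conditioning the mixture on the observed history,
\mu(\,\cdot \mid x_1,\dots,x_n) \;\longrightarrow\; \mu_{\theta}
\qquad \mu_{\theta}\text{-a.s. as } n \to \infty .
```

Sufficiency for prediction is the complementary property: conditioning a component μ_θ on further observations does not improve its forecasts.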
Subjective Games And Equilibria
, 1994
Abstract

Cited by 18 (1 self)
Applying the concepts of Nash, Bayesian, and correlated equilibrium to the analysis of strategic interaction requires that players possess objective knowledge of the game and of opponents' strategies. Such knowledge is often unavailable. The proposed notions of subjective games, and of subjective Nash and correlated equilibria, replace essential but unavailable objective knowledge with subjective assessments. When playing a subjective game repeatedly, subjective optimizers converge to a subjective equilibrium. We apply this approach to some well-known examples, including a single multi-arm bandit player, multi-person multi-arm bandit games, and repeated Cournot oligopoly games.
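The single-player multi-arm bandit example can be sketched as a subjective optimizer: the player maintains subjective assessments (payoff estimates) of each arm and mostly plays the arm it believes is best. The epsilon-greedy learner below is an illustrative stand-in, not the paper's construction; the arm means and parameters are assumptions.

```python
import random

def eps_greedy_bandit(arm_means, rounds=50_000, eps=0.1, seed=1):
    """Subjective optimization in a Bernoulli multi-arm bandit: keep a
    running payoff estimate per arm, play the subjectively best arm,
    and explore a random arm with probability eps."""
    rng = random.Random(seed)
    n = [0] * len(arm_means)      # pulls per arm
    est = [0.0] * len(arm_means)  # subjective mean-payoff estimates
    for _ in range(rounds):
        if rng.random() < eps:
            a = rng.randrange(len(arm_means))          # explore
        else:
            a = max(range(len(arm_means)), key=lambda i: est[i])  # exploit
        r = 1.0 if rng.random() < arm_means[a] else 0.0
        n[a] += 1
        est[a] += (r - est[a]) / n[a]  # incremental mean update
    return est, n

est, pulls = eps_greedy_bandit([0.2, 0.5, 0.8])
# The arm the player comes to believe is best ends up pulled most often.
```

In the repeated play the subjective estimates settle down and the player's behavior becomes optimal against its own (converged) beliefs, which is the flavor of subjective equilibrium in the abstract.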
The Santa Fe Bar Problem Revisited: Theoretical and Practical Implications
 Festival on Game Theory: Interactive Dynamics and Learning, SUNY Stony
, 1998
Abstract

Cited by 11 (7 self)
This paper investigates the Santa Fe (i.e., El Farol) bar problem from both a theoretical and a practical perspective. Theoretically, it is shown that belief-based learning (e.g., Bayesian updating) yields unstable behavior in this repeated game. In particular, rationality and predictivity, two conditions sufficient for convergence to Nash equilibrium, are inherently incompatible. On the practical side, it is demonstrated via simulations that computational learning algorithms in which agents occasionally act irrationally do indeed give rise to near-equilibrium behavior.
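The near-equilibrium finding can be illustrated with a toy simulation (a stand-in dynamic, not the paper's algorithms; capacity, learning rate, and population size are assumed): each agent plays a mixed going-propensity and nudges it toward whichever action would have paid off, so play is randomized rather than a deterministic best response, and aggregate attendance settles near capacity instead of oscillating.

```python
import random

def el_farol(n_agents=100, capacity=60, rounds=2000, lr=0.05, seed=2):
    """Each agent i goes to the bar with propensity p[i]; after each
    night, propensities are nudged toward the action that would have
    paid off (go if the bar was uncrowded, stay home if crowded)."""
    rng = random.Random(seed)
    p = [rng.random() for _ in range(n_agents)]
    attendance = []
    for _ in range(rounds):
        tonight = sum(1 for i in range(n_agents) if rng.random() < p[i])
        attendance.append(tonight)
        target = 0.0 if tonight > capacity else 1.0
        for i in range(n_agents):
            p[i] += lr * (target - p[i])
    return attendance

hist = el_farol()
avg = sum(hist[-500:]) / 500  # hovers near the capacity of 60
```

A deterministic shared-forecast rule, by contrast, makes all rational agents act identically and attendance swing between nearly full and nearly empty, which is the instability the theoretical part of the paper describes.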
Social Learning in Recurring Games
 Games and Economic Behavior
, 1995
Abstract

Cited by 11 (1 self)
In a recurring game, a stage game is played sequentially by different groups of players. Each group receives publicly available information about the play of earlier groups. Not knowing the population distribution of player types (representing individual preferences and behavior), society members start with a prior probability distribution over a set of possible type-distributions. Late groups update their beliefs by considering the public information regarding the play of earlier groups. We study the limit beliefs and play of late groups, and their relationship to the true (realized) type-distribution and to equilibria of the true Bayesian stage game.
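A minimal sketch of the belief updating described above (the finite candidate set, the two type labels, and the data are assumptions for illustration): late groups form a Bayesian posterior over candidate type-distributions from the publicly observed types revealed by earlier groups' play.

```python
import math
import random

def posterior_over_type_dists(prior, type_dists, observed_types):
    """Bayes update of a prior over candidate type-distributions,
    given publicly observed types from earlier groups' play.
    Log-space weights avoid underflow over long histories."""
    logpost = [math.log(w) for w in prior]
    for t in observed_types:
        for i, dist in enumerate(type_dists):
            logpost[i] += math.log(dist[t])
    m = max(logpost)
    ws = [math.exp(l - m) for l in logpost]
    z = sum(ws)
    return [w / z for w in ws]

# Two candidate distributions over types {0, 1}; data drawn from the first.
rng = random.Random(5)
true_dist = [0.8, 0.2]
observed = [0 if rng.random() < true_dist[0] else 1 for _ in range(500)]
post = posterior_over_type_dists([0.5, 0.5], [true_dist, [0.3, 0.7]], observed)
# Late groups' beliefs concentrate on the realized type-distribution.
```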
Testing Theories with Learnable and Predictive Representations
, 2008
Abstract

Cited by 7 (0 self)
We study the problem of testing an expert whose theory has a learnable and predictive parametric representation, as do all standard processes used in Bayesian statistics. We design a test in which the expert is required to submit a date T by which he will have learned enough to deliver sharp predictions about future frequencies. His forecasts are then tested according to a simple hypothesis test. We show that this test passes an expert who knows the data-generating process and cannot be manipulated by an uninformed one. Such a test is not possible ...
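The frequency test the abstract describes can be sketched as follows (the tolerance, sample size, and data-generating coin are illustrative assumptions): after the expert's self-declared date T, compare the realized empirical frequency to the expert's sharp prediction.

```python
import random

def frequency_test(predicted_freq, outcomes_after_T, eps=0.05):
    """Pass the expert iff the empirical frequency observed after the
    announced date T lies within eps of the predicted frequency."""
    emp = sum(outcomes_after_T) / len(outcomes_after_T)
    return abs(emp - predicted_freq) <= eps

# An informed expert who knows the data-generating process predicts
# its true long-run frequency and passes.
rng = random.Random(4)
data = [1 if rng.random() < 0.3 else 0 for _ in range(20_000)]
passed = frequency_test(0.3, data)
```

An uninformed expert cannot game this: to pass he must commit in advance to a sharp frequency, and absent knowledge of the process his committed value will typically miss the realized frequency.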
Characterizing predictable classes of processes
 In Proc. 25th Conference on Uncertainty in Artificial Intelligence (UAI’09)
, 2009
Abstract

Cited by 5 (3 self)
The problem is sequence prediction in the following setting. A sequence x1, ..., xn, ... of discrete-valued observations is generated according to some unknown probabilistic law (measure) µ. After observing each outcome, it is required to give the conditional probabilities of the next observation. The measure µ belongs to an arbitrary class C of stochastic processes. We are interested in predictors ρ whose conditional probabilities converge to the “true” µ-conditional probabilities if any µ ∈ C is chosen to generate the data. We show that if such a predictor exists, then a predictor can also be obtained as a convex combination of countably many elements of C. In other words, it can be obtained as a Bayesian predictor whose prior is concentrated on a countable set. This result is established for two very different measures of performance of prediction, one of which is very strong, namely total variation, and the other very weak, namely prediction in expected average Kullback-Leibler divergence.
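The countable-mixture idea can be made concrete with a toy instance (the Bernoulli class, the grid prior, and the data are assumptions for illustration): a Bayesian predictor whose prior is concentrated on a countable (here finite) set of i.i.d. measures, whose predictive probabilities converge to the true conditionals when the truth lies in the class.

```python
import math
import random

def mixture_predictor(params, prior, data):
    """Bayesian predictor over a countable class of i.i.d. Bernoulli(p)
    measures: at each step, output the posterior-weighted mixture of
    the class's next-bit probabilities, then update the posterior on
    the observed bit (log-space to avoid underflow)."""
    logw = [math.log(w) for w in prior]
    preds = []
    for x in data:
        m = max(logw)
        ws = [math.exp(l - m) for l in logw]
        z = sum(ws)
        preds.append(sum(w * p for w, p in zip(ws, params)) / z)
        for i, p in enumerate(params):
            logw[i] += math.log(p if x == 1 else 1.0 - p)
    return preds

params = [i / 10 for i in range(1, 10)]     # countable class of measures
prior = [1.0 / len(params)] * len(params)   # prior concentrated on it
rng = random.Random(3)
data = [1 if rng.random() < 0.7 else 0 for _ in range(5000)]
preds = mixture_predictor(params, prior, data)
# The predictive probability approaches the true conditional, 0.7.
```

The posterior concentrates exponentially fast on the component with smallest Kullback-Leibler divergence to the truth, which is what drives the convergence of the mixture's conditional probabilities.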
On Finding Predictors for Arbitrary Families of Processes
Abstract

Cited by 2 (2 self)
The problem is sequence prediction in the following setting. A sequence x1, ..., xn, ... of discrete-valued observations is generated according to some unknown probabilistic law (measure) µ. After observing each outcome, it is required to give the conditional probabilities of the next observation. The measure µ belongs to an arbitrary but known class C of stochastic process measures. We are interested in predictors ρ whose conditional probabilities converge (in some sense) to the “true” µ-conditional probabilities if any µ ∈ C is chosen to generate the sequence. The contribution of this work is in characterizing the families C for which such predictors exist, and in providing a specific and simple form in which to look for a solution. We show that if any predictor works, then there exists a Bayesian predictor, whose prior is discrete, which works too. We also find several necessary and sufficient conditions for the existence of a predictor, in terms of topological characterizations of the family C as well as in terms of the local behaviour of the measures in C, which in some cases lead to procedures for constructing such predictors. It should be emphasized that the framework is completely general: the stochastic processes considered are not required to be i.i.d., stationary, or to belong to any parametric or countable family.