Results 1–10 of 136
SCHAPIRE: Adaptive game playing using multiplicative weights
 Games and Economic Behavior
, 1999
Abstract

Cited by 134 (14 self)
We present a simple algorithm for playing a repeated game. We show that a player using this algorithm suffers average loss that is guaranteed to come close to the minimum loss achievable by any fixed strategy. Our bounds are nonasymptotic and hold for any opponent. The algorithm, which uses the multiplicative-weight methods of Littlestone and Warmuth, is analyzed using the Kullback–Leibler divergence. This analysis yields a new, simple proof of the min–max theorem, as well as a provable method of approximately solving a game. A variant of our game-playing algorithm is proved to be optimal in a very strong sense.
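The multiplicative-weight strategy this abstract describes can be sketched in a few lines. The best-responding opponent, the fixed learning rate `eta`, and the matching-pennies payoff matrix below are illustrative choices, not the paper's exact construction:

```python
import math

def multiplicative_weights(loss_matrix, rounds, eta=0.1):
    """Sketch of a multiplicative-weights strategy for a repeated game.

    loss_matrix[i][j] is the row player's loss in [0, 1] for playing
    row i against column j.  The opponent here best-responds each
    round; the paper's guarantee holds against any opponent.
    """
    n = len(loss_matrix)
    m = len(loss_matrix[0])
    weights = [1.0] * n
    total = 0.0
    for _ in range(rounds):
        s = sum(weights)
        p = [w / s for w in weights]  # current mixed strategy
        # Adversary picks the column maximizing our expected loss.
        col = max(range(m), key=lambda j: sum(p[i] * loss_matrix[i][j]
                                              for i in range(n)))
        total += sum(p[i] * loss_matrix[i][col] for i in range(n))
        for i in range(n):  # exponential (multiplicative) update
            weights[i] *= math.exp(-eta * loss_matrix[i][col])
    return total / rounds

# Matching pennies: the value of this game for the row player is 0.5.
avg_loss = multiplicative_weights([[0.0, 1.0], [1.0, 0.0]], rounds=2000)
```

Against this adversary the average loss stays within a small regret term of the game value 0.5, illustrating the nonasymptotic bound the abstract claims.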
Game Theory, Online Prediction and Boosting
 In Proceedings of the Ninth Annual Conference on Computational Learning Theory
, 1996
Abstract

Cited by 133 (13 self)
We study the close connections between game theory, online prediction and boosting. After a brief review of game theory, we describe an algorithm for learning to play repeated games based on the online prediction methods of Littlestone and Warmuth. The analysis of this algorithm yields a simple proof of von Neumann's famous min-max theorem, as well as a provable method of approximately solving a game. We then show that the online prediction model is obtained by applying this game-playing algorithm to an appropriate choice of game and that boosting is obtained by applying the same algorithm to the "dual" of this game.

1 INTRODUCTION

The purpose of this paper is to bring out the close connections between game theory, online prediction and boosting. Briefly, game theory is the study of games and other interactions of various sorts. Online prediction is a learning model in which an agent predicts the classification of a sequence of items and attempts to minimize the total number of pre...
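The "dual" use of multiplicative weights mentioned here — boosting — can be sketched as a reweighting of training examples rather than of strategies. The step size `eta` and the toy labels below are illustrative, not AdaBoost's tuned values:

```python
import math

def boost_weights(correct, eta=0.5):
    """Sketch of boosting as the dual game: examples the current weak
    hypothesis got wrong are upweighted multiplicatively, so the next
    weak learner concentrates on them.  correct[i] is True if example
    i was classified correctly."""
    w = [math.exp(-eta if c else eta) for c in correct]
    z = sum(w)
    return [x / z for x in w]  # normalized distribution over examples

# Examples 0 and 2 were misclassified, so they gain weight.
weights = boost_weights([False, True, False, True, True])
```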
Measurement and modeling of depth cue combination: in defense of weak fusion
 Vision Research
, 1995
Abstract

Cited by 130 (20 self)
Various visual cues provide information about depth and shape in a scene. When several of these cues are simultaneously available in a single location in the scene, the visual system attempts to combine them. In this paper, we discuss three key issues relevant to the experimental analysis of depth cue combination in human vision: cue promotion, dynamic weighting of cues, and robustness of cue combination. We review recent psychophysical studies of human depth cue combination in light of these issues. We organize the discussion and review as the development of a model of the depth cue combination process termed modified weak fusion (MWF). We relate the MWF framework to Bayesian theories of cue combination. We argue that the MWF model is consistent with previous experimental results and is a parsimonious summary of these results. While the MWF model is motivated by normative considerations, it is primarily intended to guide experimental analysis of depth cue combination in human vision. We describe experimental methods, analogous to perturbation analysis, that permit us to analyze depth cue combination in novel ways. In particular these methods allow us to investigate the key issues we have raised. We summarize recent experimental tests of the MWF framework that use these methods.

Keywords: depth, multiple cues, sensor fusion
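The weighted-average ("weak fusion") core that MWF-style models build on can be sketched directly. Inverse-variance weighting is the standard Bayesian baseline; the cue names and numbers below are invented for illustration:

```python
def combine_cues(estimates, variances):
    """Linear cue combination with inverse-variance (reliability)
    weights: more reliable cues pull the combined estimate harder."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total

# Hypothetical cues: stereo says 2.0 m (low variance), texture 2.6 m.
depth = combine_cues([2.0, 2.6], [0.04, 0.16])  # pulled toward stereo
```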
On the coherence of expected shortfall
 In: Szegö, G. (Ed.), “Beyond VaR” (Special Issue). Journal of Banking & Finance
, 2002
Abstract

Cited by 95 (7 self)
Expected Shortfall (ES) in several variants has been proposed as a remedy for the deficiencies of Value-at-Risk (VaR), which in general is not a coherent risk measure. In fact, most definitions of ES lead to the same results when applied to continuous loss distributions. Differences may appear when the underlying loss distributions have discontinuities. In this case even the coherence property of ES can be lost unless one takes care of the details in its definition. We compare some of the definitions of Expected Shortfall, pointing out that there is one which is robust in the sense of yielding a coherent risk measure regardless of the underlying distributions. Moreover, this Expected Shortfall can be estimated effectively even in cases where the usual estimators for VaR fail.
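The tail-average idea behind ES can be sketched with a simple sample estimator. This shows only the basic form; the coherent definition the paper singles out additionally handles the probability atom at the quantile when the distribution has jumps:

```python
import math

def expected_shortfall(losses, alpha=0.05):
    """Tail-average sketch of Expected Shortfall: the mean of the
    worst ceil(n * alpha) observed losses."""
    worst = sorted(losses, reverse=True)
    k = max(1, math.ceil(alpha * len(losses)))
    return sum(worst[:k]) / k

# A VaR-style 20% quantile here would report roughly 9; ES also
# averages in the extreme 100 beyond it.
es = expected_shortfall([1, 2, 3, 4, 5, 6, 7, 8, 9, 100], alpha=0.2)
```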
Error Bands for Impulse Responses
 Econometrica
, 1999
Abstract

Cited by 87 (3 self)
We show how to correctly extend known methods for generating error bands in reduced-form VARs to overidentified models. We argue that the conventional pointwise bands common in the literature should be supplemented with measures of shape uncertainty, and we show how to generate such measures. We focus on bands that characterize the shape of the likelihood. Such bands are not classical confidence regions. We explain that classical confidence regions mix information about parameter location with information about model fit, and hence can be misleading as summaries of the implications of the data for the location of parameters. Because classical confidence regions also present conceptual and computational problems in multivariate time series models, we suggest that likelihood-based bands, rather than approximate confidence bands based on asymptotic theory, be standard in reporting results for this type of model.
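The "conventional pointwise bands" the abstract contrasts itself with can be sketched as per-horizon quantiles over Monte Carlo draws of an impulse response. The draws below are a synthetic stand-in (a decaying response plus noise), not output of an actual VAR:

```python
import math, random

random.seed(0)
# Stand-in "draws" of an impulse response over 8 horizons; in practice
# these would come from Monte Carlo on the VAR's likelihood/posterior.
draws = [[math.exp(-0.3 * h) + random.gauss(0.0, 0.1) for h in range(8)]
         for _ in range(1000)]

def pointwise_band(draws, h, lo=0.16, hi=0.84):
    """Conventional pointwise 68% band at horizon h: empirical
    quantiles of the draws.  The paper argues such bands should be
    supplemented with shape-uncertainty measures; only the pointwise
    part is shown here."""
    vals = sorted(d[h] for d in draws)
    n = len(vals)
    return vals[int(lo * n)], vals[int(hi * n)]

band0 = pointwise_band(draws, 0)  # brackets the true value 1.0
```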
The Asymptotic Efficiency Of Simulation Estimators
 Operations Research
, 1992
Abstract

Cited by 43 (14 self)
A decision-theoretic framework is proposed for evaluating the efficiency of simulation estimators. The framework includes the cost of obtaining the estimate as well as the cost of acting based on the estimate. The cost of obtaining the estimate and the estimate itself are represented as realizations of jointly distributed stochastic processes. In this context, the efficiency of a simulation estimator based on a given computational budget is defined as the reciprocal of the risk (the overall expected cost). This framework is appealing philosophically, but it is often difficult to apply in practice (e.g., to compare the efficiency of two different estimators) because only rarely can the efficiency associated with a given computational budget be calculated. However, a useful practical framework emerges in a large sample context when we consider the limiting behavior as the computational budget increases. A limit theorem established for this model supports and extends a fairly well known e...
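The efficiency-as-reciprocal-of-risk definition can be made concrete with a toy Monte Carlo mean estimator. The linear sampling cost, the squared-error acting cost, and all constants below are invented for illustration:

```python
import random

def risk(n, cost_per_sample=1e-4, reps=2000):
    """Overall expected cost in the abstract's sense: the cost of
    obtaining the estimate (linear in sample size n here) plus the
    cost of acting on it (squared error here), estimated by
    repetition."""
    random.seed(0)
    sq_err = 0.0
    for _ in range(reps):
        est = sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
        sq_err += est ** 2  # true mean is 0, so this is squared error
    return cost_per_sample * n + sq_err / reps

def efficiency(n):
    """The framework's efficiency for computational budget n."""
    return 1.0 / risk(n)
```

In this toy, efficiency first rises with the budget (error shrinks) and eventually falls (sampling cost dominates), which is the tradeoff the large-sample analysis formalizes.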
Multihypothesis sequential probability ratio tests – Part II: Accurate asymptotic . . .
 IEEE Transactions on Information Theory
, 2000
Abstract

Cited by 42 (14 self)
In a companion paper [13], we proved that two specific constructions of multihypothesis sequential tests, which we refer to as Multihypothesis Sequential Probability Ratio Tests (MSPRT’s), are asymptotically optimal as the decision risks (or error probabilities) go to zero. The MSPRT’s asymptotically minimize not only the expected sample size but also any positive moment of the stopping time distribution, under very general statistical models for the observations. In this paper, based on nonlinear renewal theory we find accurate asymptotic approximations (up to a vanishing term) for the expected sample size that take into account the “overshoot” over the boundaries of decision statistics. The approximations are derived for the scenario where the hypotheses are simple, the observations are independent and identically distributed (i.i.d.) according to one of the underlying distributions, and the decision risks go to zero. Simulation results for practical examples show that these approximations are fairly accurate not only for large but also for moderate sample sizes. The asymptotic results given here complete the analysis initiated in [4], where first-order asymptotics were obtained for the expected sample size under a specific restriction on the Kullback–Leibler distances between the hypotheses.
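The stop-when-one-hypothesis-dominates idea behind such tests can be sketched as follows. This accept-when-dominant form echoes one of the MSPRT constructions only loosely: per-pair thresholds, priors, and the overshoot analysis are all simplified away, and the observation stream is a fixed toy sequence:

```python
import math

def msprt(observations, densities, threshold=100.0):
    """Toy multihypothesis sequential test: stop and accept hypothesis
    k as soon as its log-likelihood exceeds every rival's by
    log(threshold).  Returns (accepted index, samples used)."""
    logs = [0.0] * len(densities)
    for n, x in enumerate(observations, start=1):
        for k, f in enumerate(densities):
            logs[k] += math.log(f(x))
        for k in range(len(densities)):
            if all(logs[k] - logs[j] >= math.log(threshold)
                   for j in range(len(densities)) if j != k):
                return k, n
    return None, n  # stream exhausted without a decision

def gaussian(mu):
    """Unit-variance Gaussian density with mean mu."""
    return lambda x: math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# Three hypothesized means; this fixed stream is centered on mu = 1.
obs = [1.1, 0.9, 1.2, 0.8, 1.0, 1.3, 0.7, 1.0, 1.1, 0.9, 1.0, 1.0]
decision, n_used = msprt(obs, [gaussian(0.0), gaussian(1.0), gaussian(2.0)])
```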
Predicting a Binary Sequence Almost as Well as the Optimal Biased Coin
, 1996
Abstract

Cited by 40 (5 self)
We apply the exponential weight algorithm, introduced by Littlestone and Warmuth [17] and by Vovk [24], to the problem of predicting a binary sequence almost as well as the best biased coin. We first show that for the case of the logarithmic loss, the derived algorithm is equivalent to the Bayes algorithm with Jeffreys' prior, which was studied by Xie and Barron under probabilistic assumptions [26]. We derive a uniform bound on the regret which holds for any sequence. We also show that if the empirical distribution of the sequence is bounded away from 0 and from 1, then, as the length of the sequence increases to infinity, the difference between this bound and a corresponding bound on the average-case regret of the same algorithm (which is asymptotically optimal in that case) is only 1/2. We show that this gap of 1/2 is necessary by calculating the regret of the min-max optimal algorithm for this problem and showing that the asymptotic upper bound is tight. We also study the application...
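The Bayes algorithm under the Jeffreys prior that the abstract identifies with exponential weights is the Krichevsky–Trofimov "add-1/2" rule; a minimal sketch of its sequential log loss, with an invented example sequence:

```python
import math

def kt_predict(ones, total):
    """Bayes predictor under the Jeffreys prior Beta(1/2, 1/2): the
    Krichevsky-Trofimov add-1/2 rule for P(next bit = 1)."""
    return (ones + 0.5) / (total + 1.0)

def kt_log_loss(bits):
    """Cumulative log loss (in nats) of the sequential KT predictor."""
    loss, ones = 0.0, 0
    for n, b in enumerate(bits):
        p = kt_predict(ones, n)
        loss -= math.log(p if b else 1.0 - p)
        ones += b
    return loss

loss = kt_log_loss([1, 0, 1, 1, 0, 1, 1, 1])
```

The regret of this rule against the best fixed coin grows like (1/2) log n plus a constant, which is the kind of uniform bound the paper analyzes.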
Macroeconomics and Methodology
 Journal of Economics Perspectives
, 1996
Abstract

Cited by 37 (1 self)
This essay begins with a sketch of some ways I find it useful to think about science and its uses. Following that, the essay applies the framework it has sketched to discussion of several aspects of the recent history of macroeconomics. It considers skeptically the effort by some economists in the real business cycle school to define a quantitative methodology that stands in opposition to, or at least ignores, econometrics “in the modern (narrow) sense of the term.” It connects this effort to the concurrent tendency across much of social science for scholars to question the value of statistical rigor and increasingly to see their disciplines as searches for persuasive arguments rather than as searches for objective truth. The essay points to lines of substantive progress in macroeconomics that apparently flout the methodological prescriptions of the real business cycle school purists, yet are producing advances in understanding at least as important as what purist research has in fact achieved.

Science as Data Reduction

Advances in the natural sciences are discoveries of ways to compress data concerning the natural world, both data that already exists and potential data, with minimal loss of information. For example, Tycho Brahe accumulated large amounts of reliable data on the movements