Results 1–10 of 13
Online regression competitive with reproducing kernel Hilbert spaces
, 2005
Abstract

Cited by 6 (3 self)
We consider the problem of online prediction of real-valued labels of new objects. The prediction algorithm's performance is measured by the squared deviation of the predictions from the actual labels. No probabilistic assumptions are made about the way the labels and objects are generated. Instead, we are given a benchmark class of prediction rules, some of which are hoped to produce good predictions. We show that for a wide range of infinite-dimensional benchmark classes one can construct a prediction algorithm whose cumulative loss over the first N examples does not exceed the cumulative loss of any prediction rule in the class plus O(√N). Our proof technique is based on the recently developed method of defensive forecasting.
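The regret guarantee in this abstract can be illustrated with a toy example: an online learner's cumulative square loss stays within roughly O(√N) of the best fixed prediction rule in hindsight. The sketch below uses plain online gradient descent on a scalar linear rule (our choice for illustration), not the paper's defensive-forecasting construction; all names and the synthetic data are assumptions made for the demonstration.

```python
import random

random.seed(0)

# Synthetic stream: labels come from a hidden linear rule plus noise.
N = 2000
data = []
for _ in range(N):
    x = random.uniform(-1.0, 1.0)
    y = 0.7 * x + random.gauss(0.0, 0.1)
    data.append((x, y))

w = 0.0          # online learner's current coefficient
alg_loss = 0.0   # cumulative square loss of the online predictions
for t, (x, y) in enumerate(data, start=1):
    pred = w * x
    alg_loss += (pred - y) ** 2
    # gradient step on the square loss, step size ~ 1/sqrt(t)
    w -= (1.0 / t ** 0.5) * 2.0 * (pred - y) * x

# Best fixed coefficient in hindsight (one-dimensional least squares).
w_star = sum(x * y for x, y in data) / sum(x * x for x, y in data)
best_loss = sum((w_star * x - y) ** 2 for x, y in data)

regret = alg_loss - best_loss   # small relative to N, on the order of sqrt(N)
```

The point of the comparison is the one made in the abstract: the online algorithm never sees the data in advance, yet its cumulative loss trails the best rule chosen with full hindsight by only a sublinear amount.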
Competitive online learning with a convex loss function
, 2005
Abstract

Cited by 5 (5 self)
We consider the problem of sequential decision making under uncertainty in which the loss caused by a decision depends on the following binary observation. In competitive online learning, the goal is to design decision algorithms that are almost as good as the best decision rules in a wide benchmark class, without making any assumptions about the way the observations are generated. However, standard algorithms in this area can only deal with finite-dimensional (often countable) benchmark classes. In this paper we give similar results for decision rules ranging over an arbitrary reproducing kernel Hilbert space. For example, it is shown that for a wide class of loss functions (including the standard square, absolute, and log loss functions) the average loss of the master algorithm, over the first N observations, does not exceed the average loss of the best decision rule with a bounded norm plus O(N^(-1/2)). Our proof technique is very different from the standard ones and is based on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have good resolution in the long run, we use the expected loss minimization principle to find a suitable decision.
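The last sentence of the abstract describes a generic recipe: given a forecast probability p for a binary outcome, pick the decision d minimizing the expected loss p·loss(d, 1) + (1 − p)·loss(d, 0). A minimal sketch with a grid search, using the standard square and absolute losses mentioned in the abstract; the function names are ours, not the paper's.

```python
def best_decision(p, loss, steps=1000):
    """Decision in [0, 1] minimizing expected loss under forecast p."""
    grid = [i / steps for i in range(steps + 1)]
    return min(grid, key=lambda d: p * loss(d, 1) + (1 - p) * loss(d, 0))

def square(d, y):
    return (d - y) ** 2

def absolute(d, y):
    return abs(d - y)
```

For square loss the expected loss is (d − p)² + p(1 − p), so the minimizer is the forecast p itself; for absolute loss the expected loss is linear in d, so the minimizer snaps to 0 or 1 depending on whether p exceeds 1/2.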
Implications of contrarian and one-sided strategies for the fair-coin game
, 2007
Continuous and randomized defensive forecasting: unified view
, 2008
Abstract
Defensive forecasting is a method of transforming laws of probability (stated in game-theoretic terms as strategies for Sceptic) into forecasting algorithms. There are two known varieties of defensive forecasting: “continuous”, in which Sceptic's moves are assumed continuous and which produces deterministic forecasts, and “randomized”, in which Sceptic's moves are allowed to be discontinuous and Forecaster's moves are allowed to be randomized. This note shows that the randomized variety can be obtained from the continuous variety by smearing Sceptic's moves to make them continuous.
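A toy rendering of the "smearing" idea (our own construction, not the note's exact definition): a discontinuous move, here a step function, becomes continuous once it is averaged over a small perturbation of its argument.

```python
def step(x):
    # discontinuous move: jumps from -1 to +1 at 0
    return 1.0 if x >= 0 else -1.0

def smeared(x, width=0.1, n=2001):
    # average of step(x + u) over an evenly spaced grid of perturbations
    # u in [-width, width]; approximates the expectation under uniform noise
    us = [-width + 2 * width * i / (n - 1) for i in range(n)]
    return sum(step(x + u) for u in us) / n
```

The smeared move ramps (approximately) linearly from −1 to +1 across [−width, width] instead of jumping, so small changes in x produce small changes in its value, which is the continuity property the continuous variety of defensive forecasting needs.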
A game-theoretic version of Oakes' example for randomized forecasting
, 808
Abstract
Using the game-theoretic framework for probability, Vovk and Shafer [10] have shown that it is always possible, using randomization, to make sequential probability forecasts that pass any countable set of well-behaved statistical tests. This result generalizes work by other authors, who consider only tests of calibration. We complement this result with a lower bound. We show that Vovk and Shafer's result is valid only when the forecasts are computed with unrestrictedly increasing degree of accuracy. When some level of discreteness is fixed, we present a game-theoretic generalization of Oakes' example for randomized forecasting, that is, a test failing any given method of deterministic forecasting; originally, this example was presented for deterministic calibration.
MATHEMATICAL ENGINEERING TECHNICAL REPORTS
, 2006
Abstract
Game-theoretic versions of the strong law of large numbers for unbounded variables
MATHEMATICAL ENGINEERING TECHNICAL REPORTS Implications of contrarian and one-sided strategies for the fair-coin game
, 2007
Abstract
The METR technical reports are published as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder. Implications of contrarian and one-sided strategies for the fair-coin game
Predictions as statements and decisions (draft: comments welcome)
, 2007
Abstract
Prediction is a complex notion, and different predictors (such as people, computer programs, and probabilistic theories) can pursue very different goals. In this paper I will review some popular kinds of prediction and argue that the theory of competitive online learning can benefit from the kinds of prediction that are now foreign to it. The standard goal for the predictor in learning theory is to incur a small loss for a given loss function measuring the discrepancy between the predictions and the actual outcomes. Competitive online learning concentrates on a “relative” version of this goal: the predictor is to perform almost as well as the best strategies in a given benchmark class of prediction strategies. Such predictions can be interpreted as decisions made by a “small” decision maker (i.e., one whose decisions do not affect the future outcomes). Predictions, or probability forecasts, considered in the foundations of …