Results 1 - 3 of 3
Online regression competitive with reproducing kernel Hilbert spaces, 2005
Cited by 12 (4 self)
Abstract:
We consider the problem of online prediction of real-valued labels of new objects. The prediction algorithm's performance is measured by the squared deviation of the predictions from the actual labels. No probabilistic assumptions are made about the way the labels and objects are generated. Instead, we are given a benchmark class of prediction rules, some of which are hoped to produce good predictions. We show that for a wide range of infinite-dimensional benchmark classes one can construct a prediction algorithm whose cumulative loss over the first N examples does not exceed the cumulative loss of any prediction rule in the class plus O(√N). Our proof technique is based on the recently developed method of defensive forecasting.
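The O(√N) cumulative-loss guarantee described in this abstract can be illustrated in a much simpler setting than the paper's defensive-forecasting construction. The sketch below (an assumption of this listing, not the paper's algorithm) runs online gradient descent on squared loss against a fixed linear comparator, using the standard decaying step size that yields O(√N) regret; the comparator norm bound `B` and the synthetic data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 500, 3
w_true = np.array([0.5, -0.2, 0.3])            # a good fixed prediction rule
X = rng.normal(size=(N, d))
y = X @ w_true + 0.1 * rng.normal(size=N)      # labels = rule + small noise

B = 1.0            # assumed bound on the comparator's norm
w = np.zeros(d)
learner_loss = 0.0
for t in range(N):
    x_t, y_t = X[t], y[t]
    pred = w @ x_t                             # predict before seeing the label
    learner_loss += (pred - y_t) ** 2          # squared-deviation loss
    grad = 2 * (pred - y_t) * x_t
    eta = B / np.sqrt(t + 1)                   # decaying step size -> O(sqrt(N)) regret
    w = w - eta * grad
    norm = np.linalg.norm(w)
    if norm > B:
        w *= B / norm                          # project back onto the norm ball

best_loss = ((X @ w_true - y) ** 2).sum()      # cumulative loss of the fixed rule
regret = learner_loss - best_loss
print(regret)
```

In this toy run the excess loss over the fixed rule stays small relative to N, consistent with a sublinear regret bound; the paper's contribution is obtaining such guarantees for infinite-dimensional (RKHS) benchmark classes.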
Convex games in Banach spaces, 2009
Cited by 12 (6 self)
Abstract:
We study the regret of an online learner playing a multi-round game in a Banach space B against an adversary that plays a convex function at each round. We characterize the minimax regret when the adversary plays linear functions in terms of the martingale type of the dual of B. The cases when the adversary plays bounded and uniformly convex functions, respectively, are also considered. Our results connect online convex learning to the study of the geometry of Banach spaces. We also show that appropriate modifications of the Mirror Descent algorithm from convex optimization can be used to achieve our regret upper bounds. Finally, we provide a version of Mirror Descent that adapts to the changing exponent of uniform convexity of the adversary's functions. This adaptive Mirror Descent strategy provides new algorithms even for the more familiar Hilbert space case where the loss functions on each round have varying exponents of uniform convexity (curvature).
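The Mirror Descent algorithm this abstract builds on has a classic concrete instance: with the negative-entropy mirror map on the probability simplex, the update reduces to multiplicative weights. The sketch below (an illustrative instance chosen for this listing, not the paper's Banach-space or adaptive variant) plays linear losses against the best fixed coordinate; the step size `eta` follows the standard √(log d / T) tuning.

```python
import numpy as np

def mirror_descent_simplex(grads, eta):
    """Mirror descent with the negative-entropy mirror map on the
    probability simplex; the resulting update is multiplicative weights."""
    d = grads.shape[1]
    w = np.full(d, 1.0 / d)           # start at the uniform distribution
    iterates = []
    for g in grads:
        iterates.append(w.copy())     # play w, then observe the loss gradient
        w = w * np.exp(-eta * g)      # gradient step in the dual (log) space
        w /= w.sum()                  # Bregman projection back onto the simplex
    return np.array(iterates)

# Toy game: linear losses <g_t, w>; compare to the best fixed coordinate.
rng = np.random.default_rng(1)
T, d = 200, 5
G = rng.uniform(0, 1, size=(T, d))
G[:, 2] -= 0.3                        # coordinate 2 is best on average
W = mirror_descent_simplex(G, eta=np.sqrt(np.log(d) / T))
learner = (W * G).sum()               # learner's cumulative linear loss
best = G.sum(axis=0).min()            # cumulative loss of the best coordinate
regret = learner - best
print(regret)
```

The paper's point is that the exponent in such regret bounds is governed by the geometry (martingale type, uniform convexity) of the underlying Banach space, and that the mirror map and step sizes can be adapted when that geometry varies across rounds.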
Leading strategies in competitive online prediction, 2007
"... Project web site: ..."