Results 1–10 of 11
Supermartingales in Prediction with Expert Advice
In: ALT 2008 Proceedings, LNCS (LNAI), 2008
Online regression competitive with changing predictors
In: Proceedings of the 18th International Conference on Algorithmic Learning Theory (ALT 2007), Lecture Notes in Computer Science
Abstract

Abstract. This paper deals with the problem of making predictions in the online mode of learning where the dependence of the outcome y_t on the signal x_t can change with time. The Aggregating Algorithm (AA) is a technique that optimally merges experts from a pool, so that the resulting strategy suffers a cumulative loss that is almost as good as that of the best expert in the pool. We apply the AA to the case where the experts are all the linear predictors that can change with time. KAARCh is the kernel version of the resulting algorithm. In the kernel case, the experts are all the decision rules in some reproducing kernel Hilbert space that can change over time. We show that KAARCh suffers a cumulative square loss that is almost as good as that of any expert that does not change very rapidly.
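The exponential-weights merging step underlying the AA can be sketched as follows. This is a hypothetical minimal version for a finite expert pool and square loss on [0, 1], using a plain weighted-average prediction rather than the AA's exact substitution function; the learning rate eta = 2 is the standard choice for square loss on the unit interval, but the toy data are illustrative assumptions.

```python
import math

def merge_predictions(weights, expert_preds):
    """Weighted-average merge of the experts' predictions.
    (The AA proper uses a loss-specific substitution function;
    the normalized weighted mean is a simpler stand-in that is
    adequate for illustrating the weighting scheme.)"""
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, expert_preds)) / total

def update_weights(weights, expert_preds, outcome, eta=2.0):
    """After the outcome is revealed, each expert's weight shrinks
    exponentially in its square loss: w_i <- w_i * exp(-eta * loss_i)."""
    return [w * math.exp(-eta * (p - outcome) ** 2)
            for w, p in zip(weights, expert_preds)]

# Toy run: two constant experts predicting 0.0 and 1.0, outcomes near 1,
# so the mixture's mass drifts toward the second expert.
weights = [1.0, 1.0]
for outcome in [0.9, 1.0, 0.8, 1.0]:
    preds = [0.0, 1.0]
    merged = merge_predictions(weights, preds)
    weights = update_weights(weights, preds, outcome)
```

The predict-then-update order matters: the merged prediction must be issued before the outcome is revealed, as in the online protocol the abstract describes.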
Competing with wild prediction rules
In: Machine Learning
Abstract

We consider the problem of online prediction competitive with a benchmark class of continuous but highly irregular prediction rules. It is known that if the benchmark class is a reproducing kernel Hilbert space, there exists a prediction algorithm whose average loss over the first N examples does not exceed the average loss of any prediction rule in the class plus a "regret term" of O(N^{-1/2}). The elements of some natural benchmark classes, however, are so irregular that these classes are not Hilbert spaces. In this paper we develop Banach-space methods to construct a prediction algorithm with a regret term of O(N^{-1/p}), where p ∈ [2, ∞) and p − 2 reflects the degree to which the benchmark class fails to be a Hilbert space. Only the square loss function is considered.
Dimension-free exponentiated gradient
In: NIPS, 2013
Abstract

I present a new online learning algorithm that extends the exponentiated gradient framework to infinite-dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, U, achieving a regret bound of the order of O(U log(UT + 1)√T), instead of the standard O((U^2 + 1)√T) achievable without knowing U. For this analysis, I introduce novel tools for algorithms with time-varying regularizers, through the use of local smoothness. Through a lower bound, I also show that the algorithm is optimal up to a log(UT) term for linear and Lipschitz losses.
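For context, the classical finite-dimensional exponentiated gradient update that this paper generalizes can be sketched as follows; this is a minimal illustration on the probability simplex, and the learning rate and toy gradient are arbitrary choices, not taken from the paper.

```python
import math

def eg_step(w, grad, eta=0.1):
    """One exponentiated-gradient step on the probability simplex:
    multiply each coordinate by exp(-eta * gradient) and renormalize,
    so the weights stay positive and sum to one."""
    scaled = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(scaled)
    return [wi / z for wi in scaled]

# Toy run with a linear loss <w, g> and a fixed gradient g:
# mass concentrates on the coordinate with the smallest g_i.
w = [1 / 3, 1 / 3, 1 / 3]
g = [0.9, 0.1, 0.5]
for _ in range(50):
    w = eg_step(w, g)
```

The multiplicative form is what ties EG to a fixed norm bound on the competitor; the paper's contribution is removing the need to know that bound in advance.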
Online Learning with Multiple Operator-valued Kernels
Abstract

We consider the problem of learning a vector-valued function f in an online learning setting. The function f is assumed to lie in a reproducing Hilbert space of operator-valued kernels. We describe two online algorithms for learning f while taking into account the output structure. A first contribution is an algorithm, ONORMA, that extends the standard kernel-based online learning algorithm NORMA from the scalar-valued to the operator-valued setting. We report a cumulative error bound that holds both for classification and regression. We then define a second algorithm, MONORMA, which addresses the limitation of predefining the output structure in ONORMA by sequentially learning a linear combination of operator-valued kernels. Our experiments show that the proposed algorithms achieve good performance results with low computational cost.
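A hypothetical minimal sketch of a NORMA-style update in the vector-valued case follows, assuming the simplest separable operator-valued kernel K(x, z) = k(x, z)·I and square loss; the kernel choice, parameter values, and toy data are illustrative assumptions, not taken from the paper.

```python
import math

def k(x, z, sigma=0.5):
    """Scalar Gaussian kernel; the operator-valued kernel is taken
    to be K(x, z) = k(x, z) * I, the simplest separable choice."""
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

class VectorNORMA:
    """NORMA-style regularized kernel SGD for vector-valued outputs:
    f_{t+1} = (1 - eta*lam) * f_t - eta * (f_t(x_t) - y_t) * K(x_t, .),
    stored as vector coefficients on the observed points."""

    def __init__(self, dim, eta=0.2, lam=0.05):
        self.dim, self.eta, self.lam = dim, eta, lam
        self.points, self.coeffs = [], []

    def predict(self, x):
        out = [0.0] * self.dim
        for a, z in zip(self.coeffs, self.points):
            kz = k(z, x)
            for i in range(self.dim):
                out[i] += a[i] * kz
        return out

    def step(self, x, y):
        pred = self.predict(x)
        # shrink all past coefficients (the regularization term) ...
        self.coeffs = [[(1 - self.eta * self.lam) * ai for ai in a]
                       for a in self.coeffs]
        # ... then add a new expansion term opposing the residual
        self.points.append(x)
        self.coeffs.append([-self.eta * (p - yi) for p, yi in zip(pred, y)])

# Toy stream with two inputs whose 2-dimensional targets are swapped.
model = VectorNORMA(dim=2)
for x, y in [(0.0, [0.0, 1.0]), (1.0, [1.0, 0.0])] * 40:
    model.step(x, y)
```

With a non-separable operator-valued kernel the scalar factor kz would be replaced by a matrix applied to the coefficient vector, which is where the output structure the abstract mentions enters.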
Leading strategies in competitive online prediction
, 2007
"... Project web site: ..."
Online gradient descent learning algorithm
Abstract
This paper considers the least-squares online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity-independent approach to derive error bounds and convergence results for this algorithm. The essential element in our analysis is the interplay between the generalization error and a weighted cumulative error which we define in the paper. We show that, although the algorithm does not involve an explicit RKHS regularization term, choosing the step sizes appropriately yields error rates competitive with those in the literature.
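As a rough sketch of the algorithm being analyzed (not the paper's code; the Gaussian kernel, step-size schedule, and toy data are illustrative assumptions), unregularized least-squares online gradient descent in an RKHS keeps its hypothesis as kernel coefficients on the observed points:

```python
import math

def gaussian_kernel(x, z, sigma=0.5):
    """Gaussian kernel on the real line (an illustrative choice)."""
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

class OnlineKernelGD:
    """Unregularized online gradient descent for square loss in an RKHS:
    f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .),
    so f_t is a kernel expansion over the points seen so far."""

    def __init__(self, kernel=gaussian_kernel):
        self.kernel = kernel
        self.points = []   # support points x_s
        self.coeffs = []   # coefficients a_s, f(x) = sum_s a_s K(x_s, x)

    def predict(self, x):
        return sum(a * self.kernel(z, x)
                   for a, z in zip(self.coeffs, self.points))

    def step(self, x, y, t):
        eta = 1.0 / math.sqrt(t)          # decaying step size (illustrative)
        residual = self.predict(x) - y    # square-loss gradient factor
        self.points.append(x)
        self.coeffs.append(-eta * residual)
        return residual

# Toy stream cycling through three (x, y) pairs with y = x.
model = OnlineKernelGD()
stream = [(0.0, 0.0), (1.0, 1.0), (0.5, 0.5)] * 30
for t, (x, y) in enumerate(stream, start=1):
    model.step(x, y, t)
```

The decaying step sizes play the role the abstract attributes to "choosing the step sizes appropriately": without any explicit regularization term, they are what controls the growth of the expansion's norm.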
Internal Examiner: TBA
Abstract
I declare that this dissertation was composed by myself, that the work contained herein is my own except where explicitly stated otherwise in the text, and that this work has not been submitted for any other degree or professional qualification except as specified.