Results 1 - 10 of 111

Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization
In Proceedings of the 24th Annual Conference on Learning Theory, volume 19 of JMLR Workshop and Conference Proceedings, 2011. Cited by 58 (3 self).
"... We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. ..."
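The O(log(T)/T) baseline the abstract improves on can be made concrete with plain stochastic gradient descent: for a λ-strongly-convex objective, step size 1/(λt) with iterate averaging attains that slower rate. The sketch below is this textbook baseline, not the paper's optimal algorithm; the toy objective and noise model are illustrative.

```python
import random

def sgd_averaged(grad_oracle, lam, T, x0=0.0):
    """Baseline SGD for a lam-strongly-convex objective: step size
    eta_t = 1/(lam*t), returning the average iterate."""
    x, avg = x0, 0.0
    for t in range(1, T + 1):
        g = grad_oracle(x)        # noisy gradient at the current point
        x -= g / (lam * t)        # eta_t = 1 / (lam * t)
        avg += (x - avg) / t      # running mean of the iterates
    return avg

# Toy objective f(x) = (x - 3)^2 / 2 (1-strongly convex), with Gaussian
# noise standing in for the stochastic gradient oracle.
random.seed(0)
xbar = sgd_averaged(lambda x: (x - 3.0) + random.gauss(0.0, 0.1), lam=1.0, T=2000)
```

The averaged iterate `xbar` lands near the minimizer 3; the paper's contribution is an algorithm whose last-phase iterate removes the log(T) factor from this rate.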
Logarithmic regret algorithms for strongly convex repeated games
The Hebrew University, 2007. Cited by 28 (3 self).
"... Many problems arising in machine learning can be cast as a convex optimization problem, in which a sum of a loss term and a regularization term is minimized. For example, in Support Vector Machines the loss term is the average hinge-loss of a vector over a training set of examples and the regularization term is the squared Euclidean norm of this vector. In this paper we study an algorithmic framework for strongly convex repeated games and apply it for solving regularized loss minimization problems. In a convex repeated game, a predictor chooses a sequence of vectors from a convex set. After each ..."
A Stochastic View of Optimal Regret through Minimax Duality
Cited by 47 (21 self).
"... We study the regret of optimal strategies for online convex optimization games. Using von Neumann’s minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to ..."
Stochastic linear optimization under bandit feedback
In submission, 2008. Cited by 100 (8 self).
"... In the classical stochastic k-armed bandit problem, in each of a sequence of T rounds, a decision maker chooses one of k arms and incurs a cost chosen from an unknown distribution associated with that arm. The goal is to minimize regret, defined as the difference between the cost incurred by the algorithm ..."
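For the classical k-armed setting this abstract starts from, the standard UCB1 index (empirical mean plus a sqrt(2·log t / n_i) confidence bonus) is the usual reference point. The sketch below is that textbook policy in reward form, not the linear-bandit algorithm of the paper; the arm probabilities are made up.

```python
import math
import random

def ucb1(means, T, seed=0):
    """UCB1 on a k-armed Bernoulli bandit: pull the arm maximizing
    empirical mean + sqrt(2 * log(t) / pulls)."""
    rnd = random.Random(seed)
    k = len(means)
    pulls = [0] * k
    sums = [0.0] * k
    for t in range(1, T + 1):
        if t <= k:
            i = t - 1                         # initialize: pull each arm once
        else:
            i = max(range(k), key=lambda j: sums[j] / pulls[j]
                    + math.sqrt(2.0 * math.log(t) / pulls[j]))
        r = 1.0 if rnd.random() < means[i] else 0.0   # Bernoulli reward
        pulls[i] += 1
        sums[i] += r
    return pulls

pulls = ucb1([0.2, 0.5, 0.8], T=5000)
```

After 5000 rounds almost all pulls concentrate on the best arm (mean 0.8), which is exactly the regret-minimization behavior the abstract describes.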
Accelerated gradient methods for stochastic optimization and online learning
Advances in Neural Information Processing Systems 22, 2009. Cited by 32 (1 self).
"... Regularized risk minimization often involves nonsmooth optimization, either because of the loss function (e.g., hinge loss) or the regularizer (e.g., ℓ1-regularizer). Gradient methods, though highly scalable and easy to implement, are known to converge slowly. In this paper, we develop a novel accelerated gradient method for stochastic optimization while still preserving their computational simplicity and scalability. The proposed algorithm, called SAGE (Stochastic Accelerated GradiEnt), exhibits fast convergence rates on stochastic composite optimization with convex or strongly convex objectives ..."
The KL-UCB algorithm for bounded stochastic bandits and beyond
In Proceedings of COLT, 2011. Cited by 58 (5 self).
"... This paper presents a finite-time analysis of the KL-UCB algorithm, an online, horizon-free index policy for stochastic bandit problems. We prove two distinct results: first, for arbitrary bounded rewards, the KL-UCB algorithm satisfies a uniformly better regret bound than UCB and its variants; second, ..."
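The KL-UCB index for a Bernoulli arm is the largest mean q that is still statistically plausible given the empirical mean p̂: the largest q with pulls · KL(p̂, q) ≤ log t (lower-order exploration terms omitted here). Since the Bernoulli KL divergence is increasing in q above p̂, bisection finds it. A minimal sketch under those simplifications, with illustrative parameter values:

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)
    q = min(max(q, eps), 1.0 - eps)
    return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))

def klucb_index(p_hat, pulls, t):
    """Largest q with pulls * KL(p_hat, q) <= log(t), by bisection
    (KL(p_hat, .) is increasing on [p_hat, 1])."""
    budget = math.log(t) / pulls
    lo, hi = p_hat, 1.0
    for _ in range(50):                 # 50 halvings: ~1e-15 precision
        mid = 0.5 * (lo + hi)
        if kl_bernoulli(p_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

idx = klucb_index(p_hat=0.5, pulls=100, t=1000)
```

The index shrinks toward the empirical mean as the arm accumulates pulls, which is what drives the sharper regret bound compared to UCB's fixed sqrt-shaped bonus.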
Stochastic convex optimization with bandit feedback
arXiv preprint (1107). Cited by 16 (2 self).
"... This paper addresses the problem of minimizing a convex, Lipschitz function f over a convex, compact set X under a stochastic bandit feedback model. In this model, the algorithm is allowed to observe noisy realizations of the function value f(x) at any query point x ∈ X. The quantity of interest is ..."
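A classical way to work with function-value-only (bandit) feedback is the one-point gradient estimator: perturb the query point randomly and scale the observed value. The paper's algorithm is more refined than this, but the estimator conveys how bandit feedback can substitute for a gradient oracle; everything below is an illustrative one-dimensional sketch.

```python
import random

def one_point_grad(f, x, delta, rng):
    """Bandit gradient estimate: query f once at x + delta*u for a random
    sign u and return f(x + delta*u) * u / delta. Its expectation equals
    the symmetric difference quotient (f(x+delta) - f(x-delta)) / (2*delta)."""
    u = 1.0 if rng.random() < 0.5 else -1.0
    return f(x + delta * u) * u / delta

# Average many estimates of f'(1) for f(x) = x^2 (true derivative 2).
rng = random.Random(0)
est = sum(one_point_grad(lambda x: x * x, 1.0, 0.1, rng)
          for _ in range(20000)) / 20000
```

Each single estimate is very noisy (its magnitude scales like 1/delta), which is exactly the bias-variance tension that bandit convex optimization algorithms have to manage.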
Stochastic ADMM for Nonsmooth Optimization
"... We present a stochastic setting for optimization problems with nonsmooth convex separable objective functions over linear equality constraints. To solve such problems, we propose a stochastic Alternating Direction Method of Multipliers (ADMM) algorithm. Our algorithm applies to a more general ..."
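The classical (deterministic) ADMM updates that the stochastic variant builds on alternate a smooth x-minimization, a z-minimization that absorbs the nonsmooth term via its proximal operator, and a dual update. A minimal sketch on the toy split problem min ½(x−b)² + λ|z| subject to x = z, whose solution is the soft-thresholding of b; all parameter values here are illustrative.

```python
def soft_threshold(v, k):
    """Proximal operator of k*|.|: shrink v toward zero by k."""
    return max(v - k, 0.0) if v >= 0 else min(v + k, 0.0)

def admm_l1(b, lam, rho=1.0, iters=200):
    """ADMM for min 0.5*(x-b)^2 + lam*|z| subject to x = z,
    with penalty parameter rho and scaled dual variable u."""
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)   # smooth x-step
        z = soft_threshold(x + u, lam / rho)    # nonsmooth z-step (prox)
        u += x - z                              # scaled dual update
    return z

x_star = admm_l1(b=3.0, lam=1.0)
```

For b = 3 and λ = 1 the iterates converge to the soft-thresholded value 2; the stochastic variant in the paper replaces the exact x-step with one driven by sampled data.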
Minimizing Finite Sums with the Stochastic Average Gradient
2013. Cited by 42 (2 self).
"... We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method’s iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient ..."
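The "memory of previous gradients" idea can be sketched directly: keep the last gradient seen for each term, and at each step refresh one randomly chosen entry while moving along the average of all stored gradients, so each iteration still touches only one term. The toy problem and step size below are illustrative; the paper's analysis prescribes its own constants.

```python
import random

def sag(grads, x0, step, iters, seed=0):
    """Stochastic average gradient for f(x) = (1/n) * sum_i f_i(x):
    store the last gradient of each f_i, step along their average."""
    rng = random.Random(seed)
    n = len(grads)
    x = x0
    memory = [0.0] * n          # last stored gradient per term
    total = 0.0                 # running sum of the stored gradients
    for _ in range(iters):
        i = rng.randrange(n)
        g = grads[i](x)
        total += g - memory[i]  # refresh term i inside the sum
        memory[i] = g
        x -= step * total / n   # move along the average gradient
    return x

# Toy sum: f_i(x) = (x - a_i)^2 / 2, so the minimizer is mean(a) = 2.5.
a = [1.0, 2.0, 3.0, 4.0]
grads = [lambda x, ai=ai: x - ai for ai in a]
x_star = sag(grads, x0=0.0, step=0.1, iters=1000)
```

Because the stored gradients are exact (only stale), the method converges to the true minimizer of the finite sum rather than hovering at noise level, which is the key contrast with plain SG that the abstract highlights.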