Results 1–10 of 9,593
Stochastic Subgradient Methods
"... Stochastic subgradient methods play an important role in machine learning. We introduced the concepts of subgradient methods and stochastic subgradient methods in this project, discussed their convergence conditions as well as their strengths and weaknesses relative to competing methods. We demonstrated the ..."
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
, 2010
"... Stochastic subgradient methods are widely used, well analyzed, and constitute effective tools for optimization and online learning. Stochastic gradient methods' popularity and appeal are largely due to their simplicity, as they largely follow predetermined procedural schemes. However, most common ..."
Cited by 287 (3 self)
Stochastic Subgradient Methods
, 2007
"... Suppose f : R^n → R is a convex function. We say that a random vector g̃ ∈ R^n is a noisy (unbiased) subgradient of f at x ∈ dom f if g = E g̃ ∈ ∂f(x), i.e., we have f(z) ≥ f(x) + (E g̃)^T (z − x) for all z. Thus, g̃ is a noisy unbiased subgradient of f at x if it can be written as g̃ = g + v, where v can represent (presumably small) error in computing a true subgradient, error that arises in Monte Carlo evaluation of a function defined as an expected value, or measurement error. Some references for stochastic subgradient methods are [Sho98, §2.4] and [Pol87, Chap. 5]. Some books on stochastic ..."
Cited by 3 (0 self)
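The definition in the snippet above can be sketched in a few lines. This is a minimal illustration, not code from the paper: the objective f(x) = |x| and the Gaussian noise model are illustrative choices, so that the noisy subgradient g̃ = g + v has E g̃ ∈ ∂f(x) as the definition requires.

```python
import random

def noisy_subgradient(x, noise_scale=0.1):
    # Subgradient of f(x) = |x| is sign(x) (any value in [-1, 1] works at 0);
    # adding zero-mean noise v keeps E[g_tilde] a valid subgradient.
    g = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
    v = random.gauss(0.0, noise_scale)
    return g + v

def stochastic_subgradient_descent(x0, steps=5000):
    x, best = x0, abs(x0)
    for k in range(1, steps + 1):
        x -= (1.0 / k) * noisy_subgradient(x)  # diminishing step size 1/k
        best = min(best, abs(x))               # subgradient methods are not descent
                                               # methods, so track the best iterate
    return best

random.seed(0)
best_value = stochastic_subgradient_descent(5.0)
# best_value approaches the optimal value f(x*) = 0
```

Tracking the best iterate matters: unlike gradient descent on a smooth function, an individual subgradient step can increase the objective.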
Distributed Subgradient Methods for Multiagent Optimization
, 2007
"... We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent ..."
Cited by 234 (24 self)
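A scheme of the kind this abstract describes can be sketched as follows: each agent mixes its neighbors' iterates through an averaging (consensus) step and then takes a step along a subgradient of its own local objective. This is an illustrative sketch, not the paper's exact algorithm; the local objectives f_i(x) = |x − a_i| and the complete-graph mixing matrix are assumptions for the example, and the sum of these objectives is minimized at the median of the a_i.

```python
def sign(u):
    return (u > 0) - (u < 0)

def distributed_subgradient(targets, rounds=2000):
    n = len(targets)
    x = [0.0] * n                          # each agent's local estimate of x
    W = [[1.0 / n] * n for _ in range(n)]  # doubly stochastic mixing matrix
                                           # (complete graph, uniform weights)
    for k in range(1, rounds + 1):
        # consensus step: average over neighbors
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        step = 1.0 / k                     # diminishing step size
        # local step: each agent follows a subgradient of its own f_i(x) = |x - a_i|
        x = [mixed[i] - step * sign(mixed[i] - targets[i]) for i in range(n)]
    return x

estimates = distributed_subgradient([1.0, 2.0, 9.0])
# all agents' estimates cluster near the median of the targets (2.0)
```

No agent ever sees another agent's objective, only their iterates; the consensus step is what keeps the local estimates from drifting apart.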
Dynamic subgradient methods
, 2007
"... Lagrangian relaxation is commonly used to generate bounds for mixed-integer linear programming problems. However, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization is no longer possible. In order to reduce the ..."
Cited by 1 (0 self)
"... dual convergence of such a strategy when using an adapted subgradient method for the dual step. ..."
Incremental Subgradient Methods For Nondifferentiable Optimization
, 2001
"... We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to ..."
Cited by 118 (10 self)
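The idea the abstract introduces — for f(x) = Σ_i f_i(x), take a cheap step along one component subgradient at a time instead of assembling the full subgradient — can be sketched as below. This is a hedged illustration, not the paper's method: the component functions f_i(x) = |x − a_i| and the fixed cyclic order are assumptions for the example.

```python
def incremental_subgradient(anchors, x0=0.0, cycles=3000):
    x = x0
    for k in range(1, cycles + 1):
        step = 1.0 / k                 # diminishing step size, shared by the cycle
        for a in anchors:              # one cheap sub-step per component f_i
            g = (x > a) - (x < a)      # subgradient of f_i(x) = |x - a|
            x -= step * g
    return x

x_final = incremental_subgradient([0.0, 1.0, 4.0])
# x_final hovers near the median of the anchors (1.0), the minimizer of sum_i |x - a_i|
```

Each pass over the components costs the same as one full subgradient step, but the iterate is updated len(anchors) times per pass, which is the source of the speedup when the number of components is large.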
Subgradient methods for saddlepoint problems
Journal of Optimization Theory and Applications, 2009
"... We study subgradient methods for computing the saddle points of a convex-concave function. Our motivation comes from networking applications, where dual and primal-dual subgradient methods have attracted much attention in designing decentralized network protocols. We first present a subgradient ..."
Cited by 39 (0 self)
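A basic primal-dual subgradient iteration for a saddle point, of the general kind this abstract studies, descends in the convex variable and ascends in the concave one. The sketch below is an illustration under assumptions, not the paper's algorithm: the convex-concave function φ(x, y) = x² − y² + 2xy and the fixed step size are placeholders chosen so the iteration provably contracts to the saddle point (0, 0).

```python
def saddle_point_iteration(x0, y0, step=0.1, iters=300):
    x, y = x0, y0
    for _ in range(iters):
        gx = 2 * x + 2 * y                   # d(phi)/dx for phi = x^2 - y^2 + 2xy
        gy = 2 * x - 2 * y                   # d(phi)/dy
        x, y = x - step * gx, y + step * gy  # descend in x, ascend in y
    return x, y

x_star, y_star = saddle_point_iteration(1.0, 1.0)
# the iterates spiral in toward the saddle point (0, 0)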
Distributed Subgradient Methods and Quantization Effects
"... We consider a convex unconstrained optimization problem that arises in a network of agents whose goal is to cooperatively optimize the sum of the individual agent objective functions through local computations and communications. For this problem, we use averaging algorithms to develop distributed subgradient methods that can operate over a time-varying topology. Our focus is on the convergence rate of these methods and the degradation in performance when only quantized information is available. Based on our recent results on the convergence time of distributed averaging algorithms, we ..."
Cited by 14 (1 self)
Hedge Algorithm and Subgradient Methods
, 2010
"... We show that the Hedge Algorithm, a method widely used in Machine Learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely a Mirror Descent Scheme. Using this reformulation, we can slightly improve the worst-case convergence guarantees of ..."
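The Hedge update the abstract refers to is the multiplicative-weights rule: each expert's weight shrinks exponentially in its loss, and the weights are renormalized. A minimal sketch, with illustrative losses and learning rate eta (the paper's own constants are not reproduced here):

```python
import math

def hedge_update(weights, losses, eta=0.5):
    # Exponential down-weighting by loss, then renormalization to a distribution.
    scaled = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(scaled)
    return [s / total for s in scaled]

w = [1 / 3] * 3  # uniform prior over three experts
for losses in ([1.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.5, 0.0, 1.0]):
    w = hedge_update(w, losses)
# the weight mass concentrates on expert 1, which has the smallest cumulative loss
```

This update is exactly mirror descent with the entropy mirror map on the simplex: each step minimizes the linearized loss plus a KL-divergence penalty to the current weights, which is the reformulation the abstract exploits.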