A Parallel Stochastic Approximation Method for Nonconvex Multi-Agent Optimization Problems
Abstract

Cited by 1 (1 self)
Abstract—Consider the problem of minimizing the expected value of a (possibly nonconvex) cost function parameterized by a random (vector) variable, when the expectation cannot be computed accurately (e.g., because the statistics of the random variables are unknown and/or the computational complexity is prohibitive). Classical sample stochastic gradient methods for solving this problem may empirically suffer from slow convergence. In this paper, we propose for the first time a stochastic parallel Successive Convex Approximation-based (best-response) algorithmic framework for general nonconvex stochastic sum-utility optimization problems, which arise naturally in the design of multi-agent systems. The proposed novel decomposition enables all users to update their optimization variables in parallel by solving a sequence of strongly convex subproblems, one for each user. Almost sure convergence to stationary points is proved. We then customize our algorithmic framework to solve the stochastic sum-rate maximization problem over Single-Input-Single-Output (SISO) frequency-selective interference channels, multiple-input-multiple-output (MIMO) interference channels, and MIMO multiple-access channels. Numerical results show that our algorithms are much faster than state-of-the-art stochastic gradient schemes while achieving the same (or better) sum-rates. Index Terms—Multi-agent systems, parallel optimization, stochastic approximation.
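The best-response idea this abstract describes can be sketched in a deliberately simplified single-variable form. The recursive gradient average, the quadratic surrogate, and the step-size schedules below are illustrative assumptions for a toy problem, not the paper's actual algorithm or parameter choices:

```python
import random

def stochastic_sca(grad_sample, x0, iters=2000, tau=1.0, seed=0):
    """Toy sketch of a stochastic SCA (best-response) update for min_x E[f(x, xi)].

    At each iteration t we draw one sampled gradient, fold it into a
    recursive average d_t, solve the strongly convex surrogate
        min_x  d_t * x + (tau/2) * (x - x_t)^2   (closed form: x_t - d_t/tau),
    and smooth the iterate with a diminishing step size gamma_t.
    """
    rng = random.Random(seed)
    x, d = x0, 0.0
    for t in range(1, iters + 1):
        rho = 1.0 / t ** 0.6              # weight on the newest sampled gradient
        gamma = 1.0 / t ** 0.7            # diminishing smoothing step, gamma_t -> 0
        d = (1 - rho) * d + rho * grad_sample(x, rng)
        x_hat = x - d / tau               # best response of the quadratic surrogate
        x = x + gamma * (x_hat - x)       # smoothed update toward the best response
    return x

# Toy problem: minimize E[(x - xi)^2 / 2] with xi ~ N(1, 0.5); the minimizer is x* = 1.
x_star = stochastic_sca(lambda x, rng: x - rng.gauss(1.0, 0.5), x0=5.0)
```

The gradient averaging plays the variance-reduction role that makes the best-response step stable despite using only one sample per iteration; in the multi-agent setting of the paper, each user would solve its own surrogate subproblem in parallel.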
Distributed Gradient Methods with Variable Number of Working Nodes
Abstract
Abstract—We consider distributed optimization where N nodes in a connected network minimize the sum of their local costs subject to a common constraint set. We propose a distributed projected gradient method where each node, at each iteration k, performs an update (is active) with probability p_k, and stays idle (is inactive) with probability 1 − p_k. Whenever active, each node performs an update by weight-averaging its solution estimate with the estimates of its active neighbors, taking a negative gradient step with respect to its local cost, and performing a projection onto the constraint set; inactive nodes perform no updates. Assuming that nodes' local costs are strongly convex, with Lipschitz continuous gradients, we show that, as long as the activation probability p_k grows to one asymptotically, our algorithm converges in the mean square sense (MSS) to the same solution as the standard distributed gradient method, i.e., as if all the nodes were active at all iterations. Moreover, when p_k grows to one linearly, with an appropriately set convergence factor, the algorithm has a linear MSS convergence, with practically the same factor as the standard distributed gradient method. Simulations demonstrate that, when compared with the standard distributed gradient method, the proposed algorithm significantly reduces the overall number of per-node communications and per-node gradient evaluations (computational cost) for the same required accuracy.
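The randomized activation scheme can be sketched on a toy problem. The fully connected topology, uniform averaging weights, activation schedule p_k = 1 − 0.9^k, and step-size rule below are illustrative assumptions, not the paper's exact setup:

```python
import random

def distributed_pg(a, lo, hi, iters=1000, seed=1):
    """Toy sketch of a distributed projected gradient method with random activation.

    Node i holds the local cost f_i(x) = (x - a[i])^2 / 2; the network
    minimizes sum_i f_i(x) over the interval [lo, hi].  At iteration k each
    node is active with probability p_k = 1 - 0.9**k (growing to one).
    Active nodes weight-average their estimates with the other active
    nodes, take a local gradient step, and project onto the constraint
    set; inactive nodes perform no updates.
    """
    rng = random.Random(seed)
    n = len(a)
    x = [rng.uniform(lo, hi) for _ in range(n)]
    for k in range(1, iters + 1):
        p = 1.0 - 0.9 ** k                            # activation probability -> 1
        active = [i for i in range(n) if rng.random() < p]
        if not active:
            continue
        avg = sum(x[i] for i in active) / len(active)  # uniform weight-averaging
        alpha = 1.0 / (k + 10)                         # diminishing step size
        for i in active:
            grad = x[i] - a[i]                         # gradient of the local cost
            x[i] = min(hi, max(lo, avg - alpha * grad))  # gradient step + projection
    return x

# The minimizer of sum_i (x - a_i)^2 / 2 over [0, 10] is the (clipped) mean of a, here 3.0.
estimates = distributed_pg([1.0, 2.0, 6.0], lo=0.0, hi=10.0)
```

Because averaging within the active set preserves the sum of the active estimates, the scheme behaves like the standard distributed gradient method once p_k is close to one, while skipping most communications and gradient evaluations in the early iterations.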