Results 1–5 of 5
A Class of Randomized Primal-Dual Algorithms for Distributed Optimization, arXiv preprint arXiv:1406.6404v3, 2014
Cited by 2 (0 self)
Abstract: Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in ...
Success and Failure of Adaptation-Diffusion Algorithms for Consensus in Multi-Agent Networks
Cited by 1 (1 self)
Abstract—This paper investigates the problem of distributed stochastic approximation in multi-agent systems. The algorithm under study consists of two steps: a local stochastic approximation step and a diffusion step which drives the network to a consensus. The diffusion step uses row-stochastic matrices to weight the network exchanges. As opposed to previous works, the exchange matrices are not assumed to be doubly stochastic, and may also depend on the past estimate. We prove that non-doubly stochastic matrices generally influence the limit points of the algorithm. Nevertheless, the limit points are not affected by the choice of the matrices provided that the latter are doubly stochastic in expectation. This conclusion legitimates the use of broadcast-like diffusion protocols, which are easier to implement. Next, by means of a central limit theorem, we prove that doubly stochastic protocols perform asymptotically as well as centralized algorithms, and we quantify the degradation caused by the use of non-doubly stochastic matrices. Throughout the paper, special emphasis is put on the case of distributed non-convex optimization as an illustration of our results.
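The two-step scheme described in this abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's algorithm: the scalar local costs, noise level, and the particular broadcast-like protocol (a random agent broadcasts, the others average) are all assumptions made for the example. Each realized exchange matrix is row-stochastic but not doubly stochastic, while its expectation is doubly stochastic, which is exactly the condition under which the abstract says the limit points are unaffected.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 1  # number of agents, dimension of each estimate

def broadcast_matrix(N, rng):
    """One broadcast-like exchange: a random agent k broadcasts its estimate
    and every other agent averages it with its own. Each realization is
    row-stochastic but NOT doubly stochastic; E[W] is doubly stochastic."""
    k = rng.integers(N)
    W = np.eye(N)
    for i in range(N):
        if i != k:
            W[i, i] = 0.5
            W[i, k] = 0.5
    return W

targets = np.arange(N, dtype=float).reshape(N, d)  # agent n's local minimizer is n
x = rng.normal(size=(N, d))                        # initial local estimates

for t in range(1, 5001):
    gamma = 1.0 / t                                           # vanishing step size
    noisy_grad = (x - targets) + 0.1 * rng.normal(size=(N, d))
    x = x - gamma * noisy_grad                                # local stochastic approximation step
    x = broadcast_matrix(N, rng) @ x                          # diffusion (consensus) step

# The agents should reach (approximate) consensus near the average target, 2.0.
print(float(x.mean()), float(np.std(x)))
```

Under the doubly-stochastic-in-expectation condition the estimates cluster around the minimizer of the sum of local costs (here, the mean of the targets); with matrices violating it, the consensus value would generally be biased.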
Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators
Abstract: We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in Hilbert spaces. Our general setting features stochastic approximations of the cocoercive operator and stochastic perturbations in the evaluation of the resolvents of the set-valued operator. In addition, relaxations and not necessarily vanishing proximal parameters are allowed. Weak and strong almost sure convergence properties of the iterates are established under mild conditions on the underlying stochastic processes. Leveraging these results, we also establish the almost sure convergence of the iterates of a stochastic variant of a primal-dual proximal splitting method for composite minimization problems.
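For intuition, a stochastic forward-backward iteration of this kind can be sketched on a toy composite problem. The instance below is an illustrative assumption, not the paper's setting: f is a least-squares term (whose gradient is cocoercive), g = λ‖·‖₁ (whose proximal operator, soft-thresholding, is the resolvent of the maximally monotone operator λ∂‖·‖₁), the forward step is perturbed by summable noise, and the proximal parameter is constant rather than vanishing.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy composite problem: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
lam = 0.1

def soft_threshold(v, t):
    """Resolvent (prox) of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # constant (non-vanishing) proximal parameter

# Stochastic forward-backward: noisy forward (gradient) step, exact backward step.
x = np.zeros(5)
for t in range(1, 3001):
    noise = (1.0 / t) * rng.normal(size=5)   # summable stochastic approximation errors
    x = soft_threshold(x - gamma * (A.T @ (A @ x - b) + noise), gamma * lam)

# Deterministic reference run (noise-free forward-backward) for comparison.
x_ref = np.zeros(5)
for _ in range(3000):
    x_ref = soft_threshold(x_ref - gamma * A.T @ (A @ x_ref - b), gamma * lam)
```

With summable perturbations the noisy iterates land essentially on top of the deterministic ones, consistent with the almost sure convergence claimed in the abstract.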
A Robust Block-Jacobi Algorithm for Quadratic Programming under Lossy Communications
Abstract: We address the problem of distributed quadratic programming under lossy communications, where the global cost function is the sum of coupled local cost functions, as is typical in localization problems and partition-based state estimation. We propose a novel solution based on a generalized gradient descent strategy, namely a Block-Jacobi descent algorithm, which is amenable to a distributed implementation and which is provably robust to communication failures if the step size is sufficiently small. Interestingly, robustness to packet loss also implies robustness of the algorithm to broadcast communication protocols, asynchronous computation, and bounded random communication delays. The theoretical analysis relies on the separation of time scales and singular perturbation theory. Our algorithm is studied numerically in the context of partition-based state estimation in smart grids, based on the IEEE 123-node distribution feeder benchmark. The proposed algorithm is observed to exhibit a convergence rate similar to that of the well-known ADMM algorithm when there are no packet losses, while it performs considerably better under moderate packet losses.
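A Block-Jacobi descent under random packet drops can be sketched as follows. The 3-variable quadratic cost, the loss probability, and the step size are made-up for illustration (the paper's partition-based estimation setting and benchmark are not reproduced); the sketch only shows the mechanism the abstract describes: each agent takes a gradient step in its own block using the last blocks it received, and stale values caused by lost packets are tolerated because the step size is small.

```python
import numpy as np

rng = np.random.default_rng(2)

# Global quadratic cost 0.5*x'Qx - c'x, split so that agent i owns block x[i].
Q = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 5.0]])
c = np.array([1.0, 2.0, 3.0])

alpha = 0.05       # small step size: the key to robustness under losses
loss_prob = 0.3    # probability that a transmitted block is dropped
x = np.zeros(3)
beliefs = np.tile(x, (3, 1))   # beliefs[i] = agent i's last received copy of x

for _ in range(2000):
    new_x = x.copy()
    for i in range(3):
        grad_i = Q[i] @ beliefs[i] - c[i]   # agent i's gradient w.r.t. its block,
        new_x[i] = x[i] - alpha * grad_i    # computed from possibly stale neighbors
    x = new_x
    # Lossy broadcast: agent j's new block reaches agent i with prob. 1 - loss_prob.
    for i in range(3):
        for j in range(3):
            if i == j or rng.random() > loss_prob:
                beliefs[i][j] = x[j]

# x should approach the minimizer Q^{-1} c despite the packet drops.
print(x)
```

Since the minimizer is a fixed point regardless of which packets arrive, the iteration still converges to Q⁻¹c; only the rate degrades with the loss probability.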
Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers
Abstract—Consider a set of N agents seeking to solve in a distributed manner the minimization problem inf_x ∑_{n=1}^N f_n(x), where the convex functions f_n are local to the agents. The popular Alternating Direction Method of Multipliers has the potential to handle distributed optimization problems of this kind. We provide a general reformulation of the problem and obtain a class of distributed algorithms which encompass various network architectures. The rate of convergence of our method is considered. It is assumed that the infimum of the problem is attained at a point x⋆, that the functions f_n are twice differentiable at this point, and that ∑_n ∇²f_n(x⋆) > 0 in the positive definite ordering of symmetric matrices. Under these assumptions, it is shown that the convergence to the consensus x⋆ is linear and the exact rate is provided. Application examples where this rate can be optimized with respect to the ADMM free parameter ρ are also given.
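The consensus problem inf_x ∑_n f_n(x) handled by such methods can be illustrated with a minimal consensus-ADMM sketch. The quadratic local costs f_n(x) = 0.5·a_n·(x − b_n)², the value ρ = 1, and the star (full-averaging) architecture below are illustrative assumptions, not the paper's general network reformulation; they are chosen so every update is in closed form and ∑_n f_n'' = ∑_n a_n > 0 holds, matching the positive-definiteness assumption under which linear convergence is obtained.

```python
import numpy as np

# Local costs f_n(x) = 0.5 * a[n] * (x - b[n])^2; consensus minimizer is
# the curvature-weighted average sum(a*b)/sum(a).
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 2.0])

rho = 1.0            # ADMM free parameter; tunes the linear convergence rate
N = len(a)
x = np.zeros(N)      # local copies, one per agent
z = 0.0              # consensus variable
u = np.zeros(N)      # scaled dual variables

for _ in range(100):
    x = (a * b + rho * (z - u)) / (a + rho)   # local x-updates (closed form)
    z = np.mean(x + u)                        # consensus (averaging) step
    u = u + x - z                             # dual ascent step

# All local copies agree on the weighted average (8/6 here) at a linear rate.
print(float(z))
```

Sweeping ρ in this toy example changes the contraction factor of the error, which is the kind of rate optimization over the free parameter ρ that the abstract refers to.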