Results 1–10 of 18
On the O(1/k) Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers
2013
"... We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases o ..."
Abstract

Cited by 22 (0 self)
 Add to MetaCart
(Show Context)
We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases of this formulation and studied their distributed solution through either subgradient based methods with O(1/√k) rate of convergence (where k is the iteration number) or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM based distributed method for the general formulation and show that it converges at the rate O(1/k).
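The consensus special case of this formulation can be illustrated with the standard (synchronous) consensus-ADMM iteration. The sketch below is a hypothetical toy instance with quadratic local objectives f_i(x) = 0.5(x − a_i)² and the constraint that all local copies agree; it is not the paper's asynchronous method:

```python
import numpy as np

# Hypothetical local data: agent i holds a_i and the private objective
# f_i(x) = 0.5 * (x - a_i)^2; the consensus constraint is x_i = z.
a = np.array([1.0, 4.0, 7.0, 10.0])
N, rho = len(a), 1.0

x = np.zeros(N)   # local primal copies
u = np.zeros(N)   # scaled dual variables
z = 0.0           # global consensus variable

for k in range(200):
    # Local x-updates (closed form for the quadratic f_i); in a distributed
    # run each agent computes its own entry independently.
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Global averaging step (the only coordination needed).
    z = np.mean(x + u)
    # Dual ascent on the consensus violation.
    u = u + x - z

print(z)  # ≈ 5.5, the average of a (the global minimizer)
```

For this quadratic instance the iteration contracts geometrically, so a few hundred iterations reach machine precision; the paper's contribution is removing the synchronous averaging barrier while retaining an O(1/k) guarantee.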
Local Linear Convergence of the Alternating Direction Method of Multipliers on Quadratic or Linear Programs
"... We introduce a novel matrix recurrence yielding a new spectral analysis of the local transient convergence behavior of the Alternating Direction Method of Multipliers (ADMM), for the particular case of a quadratic program or a linear program. We identify a particular combination of vector iterates w ..."
Abstract

Cited by 15 (1 self)
 Add to MetaCart
(Show Context)
We introduce a novel matrix recurrence yielding a new spectral analysis of the local transient convergence behavior of the Alternating Direction Method of Multipliers (ADMM), for the particular case of a quadratic program or a linear program. We identify a particular combination of vector iterates whose convergence can be analyzed via a spectral analysis. The theory predicts that ADMM should go through up to four convergence regimes, such as constant step convergence or linear convergence, ending with the latter when close enough to the optimal solution if the optimal solution is unique and satisfies strict complementarity.
Distributed ADMM for Model Predictive Control and Congestion Control
"... Abstract — Many problems in control can be modeled as an optimization problem over a network of nodes. Solving them with distributed algorithms provides advantages over centralized solutions, such as privacy and the ability to process data locally. In this paper, we solve optimization problems in ne ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
(Show Context)
Abstract — Many problems in control can be modeled as optimization problems over a network of nodes. Solving them with distributed algorithms provides advantages over centralized solutions, such as privacy and the ability to process data locally. In this paper, we solve optimization problems in networks where each node requires only partial knowledge of the problem’s solution. We exploit this feature to design a decentralized algorithm that allows a significant reduction in the total number of communications. Our algorithm is based on the Alternating Direction Method of Multipliers (ADMM), and we apply it to distributed Model Predictive Control (MPC) and TCP/IP congestion control. Simulation results show that the proposed algorithm requires fewer communications than previous work for the same solution accuracy.
Parallel Multi-Block ADMM with o(1/k) Convergence. Preprint, arXiv:1312.3040
2014
"... Abstract. This paper introduces a parallel and distributed extension to the alternating direction method of multipliers (ADMM) for solving convex problem: minimize f1(x1) + · · ·+ fN (xN) subject to A1x1 + · · ·+ANxN = c, x1 ∈ X1,..., xN ∈ XN. The algorithm decomposes the original problem into N ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
(Show Context)
Abstract. This paper introduces a parallel and distributed extension of the alternating direction method of multipliers (ADMM) for solving the convex problem: minimize f1(x1) + · · · + fN(xN) subject to A1x1 + · · · + ANxN = c, x1 ∈ X1, ..., xN ∈ XN. The algorithm decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This Jacobian-type algorithm is well suited for distributed computing and is particularly attractive for solving certain large-scale problems. This paper introduces a few novel results. Firstly, it shows that extending ADMM straightforwardly from the classic Gauss-Seidel setting to the Jacobian setting, from 2 blocks to N blocks, preserves convergence if the matrices Ai are mutually near-orthogonal and have full column rank. Secondly, for general matrices Ai, this paper proposes adding proximal terms of different kinds to the N subproblems so that the subproblems can be solved in flexible and efficient ways and the algorithm converges globally at a rate of o(1/k). Thirdly, a simple technique is introduced to improve some existing convergence rates from O(1/k) to o(1/k). In practice, some conditions in our convergence theorems are conservative. Therefore, we introduce a strategy for dynamically tuning the parameters in the algorithm, leading to substantial acceleration of the convergence in practice. Numerical results are presented to demonstrate the efficiency of the proposed method in comparison with several existing parallel algorithms. We implemented our algorithm on Amazon EC2, an on-demand public computing cloud, and report its performance on very large-scale basis pursuit problems with distributed data. Key words. alternating direction method of multipliers, ADMM, parallel and distributed computing, convergence rate
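The Jacobi-style parallel update with proximal terms can be sketched on a hypothetical toy problem with scalar blocks and Ai = 1. The proximal weight tau below is a deliberately conservative choice to keep the simultaneous updates stable; the paper derives sharper conditions:

```python
import numpy as np

# Hypothetical scalar blocks: f_i(x_i) = 0.5 * (x_i - b_i)^2,
# coupled by the single constraint x_1 + x_2 + x_3 = c (each A_i = 1).
b = np.array([1.0, 2.0, 6.0])
c = 3.0
N, rho = len(b), 1.0
tau = rho * N        # proximal weight; a conservative choice that keeps
                     # the parallel (Jacobi) updates stable

x = np.zeros(N)      # block variables, all updated in parallel
u = 0.0              # scaled dual variable for the coupling constraint

for k in range(300):
    s = x.sum()
    # All N subproblems use the *previous* iterate x (Jacobi style),
    # so they could run on separate machines simultaneously.
    # Each solves: min f_i + (rho/2)(x_i + s_{-i} - c + u)^2 + (tau/2)(x_i - x_i_old)^2
    x = (b + rho * (c - (s - x) - u) + tau * x) / (1.0 + rho + tau)
    # Dual update on the residual of the coupling constraint.
    u = u + (x.sum() - c)

print(x)  # ≈ [-1, 0, 4]: the minimizer b_i + (c - b.sum()) / N
```

Without the proximal terms, the plain Jacobi extension of ADMM can diverge for N > 2 blocks, which is exactly the failure mode the paper's near-orthogonality and proximal conditions address.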
Distributed Maximum Likelihood Sensor Network Localization
 IEEE Transactions on Signal Processing
2014
"... Abstract—We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements.We deri ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
Abstract—We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and we design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM), it converges to the centralized solution, it can run asynchronously, and it is computation error-resilient. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue the added value of ADMM, especially for large-scale networks. Index Terms—Distributed optimization, convex relaxations, sensor network localization, distributed algorithms, ADMM, distributed localization, sensor networks, maximum likelihood.
Linear Convergence of ADMM on a Model Problem
"... In this short report, we analyze the convergence of ADMM as a matrix recurrence for the particular case of a quadratic program or a linear program. We identify a particular combination of the vector iterates in the standard ADMM iteration that exhibits monotonic convergence. We present an analysis w ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
(Show Context)
In this short report, we analyze the convergence of ADMM as a matrix recurrence for the particular case of a quadratic program or a linear program. We identify a particular combination of the vector iterates in the standard ADMM iteration that exhibits monotonic convergence. We present an analysis indicating that convergence depends on the eigenvalues of a particular matrix operator. The theory predicts that ADMM should exhibit linear convergence when close enough to the optimal solution, but when far away can exhibit slow “constant step” convergence. This is illustrated with a convergence trace from a linear program.
Solving Systems of Monotone Inclusions via Primal-Dual Splitting Techniques
"... Abstract. In this paper we propose an algorithm for solving systems of coupled monotone inclusions in Hilbert spaces. The operators arising in each of the inclusions of the system are processed in each iteration separately, namely, the singlevalued are evaluated explicitly (forward steps), while t ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
(Show Context)
Abstract. In this paper we propose an algorithm for solving systems of coupled monotone inclusions in Hilbert spaces. The operators arising in each of the inclusions of the system are processed separately in each iteration: the single-valued operators are evaluated explicitly (forward steps), while the set-valued ones are processed via their resolvents (backward steps). In addition, most of the steps in the iterative scheme can be executed simultaneously, making the method applicable to a variety of convex minimization problems. The numerical performance of the proposed splitting algorithm is demonstrated through applications in average consensus on colored networks and image classification via support vector machines.
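The forward/backward pattern described here appears in miniature in forward-backward splitting for a single inclusion 0 ∈ ∂g(x) + ∇f(x). The lasso instance below is a hypothetical stand-in (not the paper's system solver): ∇f is evaluated explicitly, and ∂g is handled through its resolvent, the proximal map:

```python
import numpy as np

# Forward-backward splitting on 0 ∈ ∂g(x) + ∇f(x) with
# f(x) = 0.5 * ||Ax - y||^2 (single-valued, evaluated explicitly) and
# g(x) = lam * ||x||_1 (set-valued subdifferential, handled via resolvent).
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
lam = 0.1

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2/L, L = ||A||^2

def prox_l1(v, t):
    """Resolvent of t * ∂||.||_1, i.e. soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(5)
for k in range(1000):
    grad = A.T @ (A @ x - y)                   # forward (explicit) step
    x = prox_l1(x - gamma * grad, gamma * lam) # backward (resolvent) step
```

At convergence x is a fixed point of the forward-backward map, which is exactly the optimality condition 0 ∈ ∂g(x) + ∇f(x).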
Asynchronous Distributed ADMM for Consensus Optimization
"... Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multiplier ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this approach suffers from the straggler problem, as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm that uses two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate reduces to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time spent waiting on the network and achieves faster convergence than the synchronous counterpart in terms of wall-clock time.
Local Linear Convergence of ADMM on Quadratic or Linear Programs
"... In this paper, we analyze the convergence of the Alternating Direction Method of Multipliers (ADMM) as a matrix recurrence for the particular case of a quadratic program or a linear program. We identify a particular combination of the vector iterates in the standard ADMM iteration that exhibits almo ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
(Show Context)
In this paper, we analyze the convergence of the Alternating Direction Method of Multipliers (ADMM) as a matrix recurrence for the particular case of a quadratic program or a linear program. We identify a particular combination of the vector iterates in the standard ADMM iteration that exhibits almost monotonic convergence. We present an analysis indicating that convergence depends on the eigenvalues of a particular matrix operator. The theory predicts that ADMM should exhibit linear convergence when close enough to the optimal solution, but when far away can exhibit slow “constant step” convergence. This is illustrated with a convergence trace from a linear program.
Distributed Compressed Sensing for Static and Time-Varying Networks
"... Abstract—We consider the problem of innetwork compressed sensing from distributed measurements. Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements using only communication with neighbors in the network. Our distri ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
Abstract—We consider the problem of in-network compressed sensing from distributed measurements. Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements using only communication with neighbors in the network. Our distributed approach to this problem is based on the centralized Iterative Hard Thresholding (IHT) algorithm. We first present a distributed IHT algorithm for static networks that leverages standard tools from distributed computing to execute in-network computations with minimized bandwidth consumption. Next, we address distributed signal recovery in networks with time-varying topologies. The network dynamics necessarily introduce inaccuracies into our in-network computations. To accommodate these inaccuracies, we show how centralized IHT can be extended to include inexact computations while still providing the same recovery guarantees as the original IHT algorithm. We then leverage these new theoretical results to develop a distributed version of IHT for time-varying networks. Evaluations show that our distributed algorithms for both static and time-varying networks outperform previously proposed solutions in time and bandwidth by several orders of magnitude. Index Terms—compressed sensing, distributed algorithm, iterative hard thresholding, distributed consensus
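The centralized IHT iteration that this paper distributes alternates a gradient step on the measurement misfit with a projection onto s-sparse vectors. A minimal sketch, with hypothetical problem sizes and random data:

```python
import numpy as np

# Minimal sketch of centralized IHT (the building block the paper
# distributes). Sizes and data below are hypothetical illustration values.
rng = np.random.default_rng(0)
n, m, s = 64, 48, 3                  # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2)            # unit spectral norm keeps IHT stable
y = A @ x_true

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

x = np.zeros(n)
for t in range(300):
    # Gradient step on 0.5 * ||y - Ax||^2, then projection onto the
    # (nonconvex) set of s-sparse vectors.
    x = hard_threshold(x + A.T @ (y - A @ x), s)
```

The distributed versions in the paper replace the global products Ax and A.T r with in-network computations, which is where the bandwidth savings and the inexact-computation analysis come in.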