Results 1–10 of 35
Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation
Abstract

Cited by 53 (8 self)
Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which could be efficiently solved by the alternating direction method (ADM). However, usually the subproblems in ADM are easily solvable only when the linear mappings in the constraints are identities. To address this issue, we propose a linearized ADM (LADM) method by linearizing the quadratic penalty term and adding a proximal term when solving the subproblems. For fast convergence, we also allow the penalty to change adaptively according to a novel update rule. We prove the global convergence of LADM with adaptive penalty (LADMAP). As an example, we apply LADMAP to solve low-rank representation (LRR), which is an important subspace clustering technique yet suffers from high computation cost. By combining LADMAP with a skinny SVD representation technique, we are able to reduce the complexity O(n^3) of the original ADM-based method to O(rn^2), where r and n are the rank and size of the representation matrix, respectively, hence making LRR possible for large-scale applications. Numerical experiments verify that for LRR our LADMAP-based methods are much faster than state-of-the-art algorithms.
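As an illustration of the linearization device this abstract describes, the following is a minimal sketch of a linearized-ADM step with a growing penalty, applied to basis pursuit (min ||x||_1 s.t. Ax = b). The simplified penalty-update rule and all parameter values are assumptions for the sketch, not the paper's exact LADMAP algorithm.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ladmap_bp(A, b, beta=1.0, rho=1.5, beta_max=1e4, iters=500):
    """Sketch of a linearized ADM with adaptive penalty for
    min ||x||_1  s.t.  Ax = b.
    The quadratic penalty is linearized, so the x-subproblem reduces to
    soft-thresholding; beta grows geometrically by rho up to beta_max
    (a crude stand-in for the paper's adaptive update rule)."""
    m, n = A.shape
    x = np.zeros(n)
    lam = np.zeros(m)                        # Lagrange multiplier
    eta = np.linalg.norm(A, 2) ** 2 * 1.01   # needs eta > ||A||_2^2
    for _ in range(iters):
        grad = A.T @ (lam + beta * (A @ x - b))   # gradient of penalty at x
        x = soft_threshold(x - grad / (beta * eta), 1.0 / (beta * eta))
        lam = lam + beta * (A @ x - b)            # multiplier update
        beta = min(rho * beta, beta_max)          # adaptive penalty
    return x
```

The key point is that the subproblem never requires inverting anything involving A: one matrix-vector product and an elementwise shrinkage per iteration.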
Distributed basis pursuit
 IEEE Trans. Sig. Proc.
, 2012
Abstract

Cited by 28 (7 self)
Abstract—We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least-norm solution of an underdetermined linear system and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire system matrix at any time. We consider two scenarios in which either the columns or the rows of the matrix are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably less communication between the nodes than state-of-the-art algorithms. Index Terms—Augmented Lagrangian, basis pursuit (BP), distributed optimization, sensor networks.
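For orientation, here is a minimal sketch of the textbook centralized-averaging consensus ADMM with the rows of the system split across nodes. Note two deliberate simplifications, both assumptions of the sketch: a least-squares objective stands in for BP's l1 objective so each local update has a closed form, and the copies are tied together through a global average rather than the paper's decentralized, neighbor-only D-ADMM communication pattern.

```python
import numpy as np

def consensus_admm(As, bs, rho=1.0, iters=500):
    """Consensus-ADMM sketch for  min_x  sum_i 0.5*||A_i x - b_i||^2,
    with row blocks (A_i, b_i) held by different nodes.  Each node i keeps
    a local copy x_i and a dual u_i; copies are tied together through a
    global average z (the only inter-node communication here)."""
    n = As[0].shape[1]
    xs = [np.zeros(n) for _ in As]
    us = [np.zeros(n) for _ in As]
    z = np.zeros(n)
    # Pre-factorable local systems: A_i^T A_i + rho I.
    Hs = [A.T @ A + rho * np.eye(n) for A in As]
    for _ in range(iters):
        # Local updates: independent, one per node, parallelizable.
        xs = [np.linalg.solve(H, A.T @ b + rho * (z - u))
              for H, A, b, u in zip(Hs, As, bs, us)]
        # Averaging (consensus) step.
        z = np.mean([x + u for x, u in zip(xs, us)], axis=0)
        # Dual updates enforce x_i -> z.
        us = [u + x - z for u, x in zip(us, xs)]
    return z
```

The communication cost per iteration is one vector exchange per node, which is the quantity D-ADMM is designed to reduce further.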
The Direct Extension of ADMM for Multi-block Convex Minimization Problems is Not Necessarily Convergent
, 2013
Abstract

Cited by 27 (2 self)
Abstract. The alternating direction method of multipliers (ADMM) is now widely used in many fields, and its convergence was proved for the case where two blocks of variables are alternately updated. It is strongly desirable and practically valuable to extend ADMM directly to the case of a multi-block convex minimization problem, where the objective function is the sum of more than two separable convex functions. However, the convergence of this direct extension remained unclear for a long time: neither an affirmative convergence proof nor a counterexample showing failure of convergence was known in the literature. In this paper we answer this long-standing open question: the direct extension of ADMM is not necessarily convergent. We present an example showing its failure of convergence, and a sufficient condition ensuring its convergence.
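The divergence question can be explored numerically. Below is a sketch of the direct (Gauss-Seidel) 3-block extension applied to a feasibility problem of the form minimize 0 s.t. A1x1 + A2x2 + A3x3 = 0 with scalar blocks, returning the residual history so (non)convergence can be inspected. The 3x3 matrix used in the test is only a placeholder of the kind such counterexamples use; the paper's actual counterexample should be taken from the paper itself.

```python
import numpy as np

def direct_3block_admm(A, beta=1.0, iters=100, x0=None):
    """Direct (Gauss-Seidel) 3-block extension of ADMM for
        minimize 0  s.t.  a1*x1 + a2*x2 + a3*x3 = 0,
    where a_i = A[:, i] and each block x_i is a scalar.  Returns the
    constraint-residual norm at every iteration."""
    a1, a2, a3 = A[:, 0], A[:, 1], A[:, 2]
    x = np.ones(3) if x0 is None else np.asarray(x0, float)
    lam = np.zeros(A.shape[0])
    history = []
    for _ in range(iters):
        # Each scalar subproblem minimizes the augmented Lagrangian over
        # x_i with the other two blocks fixed at their latest values.
        x[0] = a1 @ (lam / beta - a2 * x[1] - a3 * x[2]) / (a1 @ a1)
        x[1] = a2 @ (lam / beta - a1 * x[0] - a3 * x[2]) / (a2 @ a2)
        x[2] = a3 @ (lam / beta - a1 * x[0] - a2 * x[1]) / (a3 @ a3)
        r = a1 * x[0] + a2 * x[1] + a3 * x[2]   # constraint residual
        lam = lam - beta * r                     # multiplier update
        history.append(np.linalg.norm(r))
    return np.array(history)
```

For a well-chosen nonsingular A, the residual history fails to decay for any beta, which is the paper's point; for other matrices the same iteration converges.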
D-ADMM: A communication-efficient distributed algorithm for separable optimization
 IEEE Trans. Sig. Proc.
, 2013
Temperature Aware Workload Management in Geo-distributed Datacenters
Abstract

Cited by 13 (2 self)
Datacenters consume an enormous amount of energy with significant financial and environmental costs. For geo-distributed datacenters, a workload management approach that routes user requests to locations with cheaper and cleaner electricity has recently been shown to be promising. We consider two key aspects that have not been explored in this approach. First, through empirical studies, we find that the energy efficiency of the cooling system depends directly on the ambient temperature, which exhibits a significant degree of geographical diversity. Temperature diversity can be exploited by workload management to reduce the overall cooling energy overhead. Second, energy consumption comes not only from interactive workloads driven by user requests, but also from delay-tolerant batch workloads that run in the backend. The elastic nature of batch workloads can be exploited to further reduce the energy cost. In this work, we propose to make workload management for geo-distributed datacenters temperature aware. We formulate the problem as a joint optimization of request routing for interactive workloads and capacity allocation for batch workloads. We develop a distributed algorithm based on an m-block alternating direction method of multipliers (ADMM) algorithm that extends the classical two-block algorithm. We prove convergence and rate-of-convergence results under general assumptions. Trace-driven simulations demonstrate that our approach provides 5%–20% overall cost savings for geo-distributed datacenters.
Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers
, 2013
Parallel multi-block ADMM with o(1/k) convergence," Preprint, arXiv:1312.3040
, 2014
Abstract

Cited by 8 (1 self)
Abstract. This paper introduces a parallel and distributed extension to the alternating direction method of multipliers (ADMM) for solving the convex problem: minimize f1(x1) + · · · + fN(xN) subject to A1x1 + · · · + ANxN = c, x1 ∈ X1, ..., xN ∈ XN. The algorithm decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This Jacobi-type algorithm is well suited for distributed computing and is particularly attractive for solving certain large-scale problems. This paper introduces a few novel results. Firstly, it shows that extending ADMM straightforwardly from the classic Gauss-Seidel setting to the Jacobi setting, from 2 blocks to N blocks, preserves convergence if the matrices Ai are mutually near-orthogonal and have full column rank. Secondly, for general matrices Ai, this paper proposes to add proximal terms of different kinds to the N subproblems so that the subproblems can be solved in flexible and efficient ways and the algorithm converges globally at a rate of o(1/k). Thirdly, a simple technique is introduced to improve some existing convergence rates from O(1/k) to o(1/k). In practice, some conditions in our convergence theorems are conservative. Therefore, we introduce a strategy for dynamically tuning the parameters in the algorithm, leading to substantial acceleration of the convergence in practice. Numerical results are presented to demonstrate the efficiency of the proposed method in comparison with several existing parallel algorithms. We implemented our algorithm on Amazon EC2, an on-demand public computing cloud, and report its performance on very large-scale basis pursuit problems with distributed data. Key words. alternating direction method of multipliers, ADMM, parallel and distributed computing, convergence rate
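A minimal sketch of a Jacobi-type proximal N-block update of this kind is given below, for an illustrative quadratic objective so that each subproblem has a closed form. The proximal weights mimic the shape of sufficient conditions for such schemes (tau_i proportional to beta * N * ||A_i||^2), but the exact constants here are heuristic assumptions, not the paper's.

```python
import numpy as np

def jacobi_admm(As, c, beta=1.0, iters=300):
    """Jacobi-type (parallel) N-block proximal ADMM sketch for
        minimize  sum_i 0.5*||x_i||^2   s.t.  sum_i A_i x_i = c.
    All blocks are updated simultaneously from the previous iterate, and
    a proximal term (weight tau_i) is added to each subproblem, which is
    the device the abstract describes for restoring convergence."""
    N = len(As)
    xs = [np.zeros(A.shape[1]) for A in As]
    lam = np.zeros(c.shape[0])
    # Heuristic proximal weights: tau_i = beta * N * ||A_i||_2^2.
    taus = [beta * N * np.linalg.norm(A, 2) ** 2 for A in As]
    for _ in range(iters):
        Ax = sum(A @ x for A, x in zip(As, xs))
        new_xs = []
        for A, x, tau in zip(As, xs, taus):       # independent: parallelizable
            S = Ax - A @ x                         # other blocks' contribution
            H = (1.0 + tau) * np.eye(A.shape[1]) + beta * A.T @ A
            g = beta * A.T @ (c - lam / beta - S) + tau * x
            new_xs.append(np.linalg.solve(H, g))
        xs = new_xs
        Ax = sum(A @ x for A, x in zip(As, xs))
        lam = lam + beta * (Ax - c)                # dual ascent step
    return xs
```

Unlike the Gauss-Seidel sweep, the inner loop reads only the previous iterate, so the N solves can run on N machines with one residual exchange per iteration.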
Structured Learning of Gaussian Graphical Models
Abstract

Cited by 7 (0 self)
We consider estimation of multiple high-dimensional Gaussian graphical models corresponding to a single set of nodes under several distinct conditions. We assume that most aspects of the networks are shared, but that there are some structured differences between them. Specifically, the network differences are generated from node perturbations: a few nodes are perturbed across networks, and most or all edges stemming from such nodes differ between networks. This corresponds to a simple model for the mechanism underlying many cancers, in which the gene regulatory network is disrupted due to the aberrant activity of a few specific genes. We propose to solve this problem using the perturbed-node joint graphical lasso, a convex optimization problem based on a row-column overlap norm penalty. We then solve the convex problem using an alternating direction method of multipliers algorithm. Our proposal is illustrated on synthetic data and on an application to brain cancer gene expression data.
Linearized alternating direction method with parallel splitting and adaptive penalty for separable convex programs in machine learning
 in ACML, 2013
Abstract

Cited by 6 (3 self)
Please note that the numbering of equations, propositions, and theorems in the supplemental materials differs from that in the manuscript. The problem we are interested in is as follows: min