Results 1–7 of 7
Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
Abstract

Cited by 3 (2 self)
We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable), convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each iteration, a subset of (block) variables is updated at the same time by minimizing a convex surrogate of the original nonconvex function. To tackle huge-scale problems, the (block) variables to be updated are chosen according to a mixed random and deterministic procedure, which captures the advantages of both pure deterministic and random update-based schemes. Almost sure convergence of the proposed scheme is established. Numerical results show that on huge-scale problems the proposed hybrid random/deterministic algorithm compares favorably to random and deterministic schemes on both convex and nonconvex problems.
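The hybrid random/deterministic block-selection rule can be sketched on an ℓ1-regularized least-squares instance. This is a minimal sketch, not the paper's exact scheme: the block partition, the greedy score (block gradient norm), and the proximal-gradient surrogate step are illustrative assumptions.

```python
import numpy as np

def hybrid_block_prox_grad(A, b, lam, n_blocks=8, n_rand=2, n_greedy=2,
                           iters=200, seed=0):
    """Sketch: each iteration updates a mix of randomly chosen and greedily
    chosen (largest block-gradient) blocks via a proximal gradient step on
    f(x) = 0.5*||Ax - b||^2 + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    blocks = np.array_split(np.arange(n), n_blocks)
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the smooth part
    for _ in range(iters):
        g = A.T @ (A @ x - b)                       # gradient of the smooth term
        scores = np.array([np.linalg.norm(g[blk]) for blk in blocks])
        greedy = set(np.argsort(-scores)[:n_greedy])                     # deterministic picks
        random_ = set(rng.choice(n_blocks, size=n_rand, replace=False))  # random picks
        for i in greedy | random_:                  # these updates could run in parallel
            blk = blocks[i]
            z = x[blk] - step * g[blk]
            x[blk] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of l1
    return x
```

On a small random instance this drives the composite objective below its value at the origin; the mixed selection mimics the abstract's combination of deterministic and random update rules.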
Iteration complexity analysis of multi-block ADMM for a family of convex minimization without strong convexity
, 2015
Abstract

Cited by 1 (1 self)
The alternating direction method of multipliers (ADMM) is widely used in solving structured convex optimization problems due to its superior practical performance. On the theoretical side, however, a counterexample was shown in …
MOCCA: Mirrored Convex/Concave Optimization for Nonconvex Composite Functions
, 2016
Abstract
Many optimization problems arising in high-dimensional statistics decompose naturally into a sum of several terms, where the individual terms are relatively simple but the composite objective function can only be optimized with iterative algorithms. In this paper, we are interested in optimization problems of the form F(Kx) + G(x), where K is a fixed linear transformation, while F and G are functions that may be nonconvex and/or nondifferentiable. In particular, if either of the terms is nonconvex, existing alternating minimization techniques may fail to converge; other types of existing approaches may instead be unable to handle nondifferentiability. We propose the MOCCA (mirrored convex/concave) algorithm, a primal/dual optimization approach that takes a local convex approximation to each term at every iteration. Inspired by optimization problems arising in computed tomography (CT) imaging, this algorithm can handle a range of nonconvex composite optimization problems, and offers theoretical guarantees for convergence when the overall problem is approximately convex (that is, any concavity in one term is balanced out by convexity in the other term). Empirical results show fast convergence for several structured signal recovery problems.
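For reference, the primal/dual structure of F(Kx) + G(x) can be sketched in the fully convex case with a Chambolle–Pock-style iteration for F(y) = ½‖y − b‖² and G(x) = λ‖x‖₁. This is a convex baseline only; MOCCA's local convex approximations for nonconvex F and G are not attempted here, and the chosen F, G, and step sizes are illustrative assumptions.

```python
import numpy as np

def primal_dual_l1(K, b, lam, iters=500):
    """Convex-baseline primal-dual sketch for min_x F(Kx) + G(x) with
    F(y) = 0.5*||y - b||^2 and G(x) = lam*||x||_1 (not MOCCA itself)."""
    m, n = K.shape
    tau = sigma = 0.9 / np.linalg.norm(K, 2)    # ensures tau*sigma*||K||^2 < 1
    x = np.zeros(n); xbar = x.copy(); y = np.zeros(m)
    for _ in range(iters):
        # dual step: prox of sigma*F^*, where F^*(y) = 0.5*||y||^2 + <y, b>
        y = (y + sigma * (K @ xbar - b)) / (1.0 + sigma)
        # primal step: prox of tau*G is soft-thresholding
        x_new = x - tau * (K.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        xbar = 2 * x_new - x                    # extrapolation step
        x = x_new
    return x
```

Each iteration touches each term only through a proximal map, which is the structural feature the abstract's local-approximation scheme builds on.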
Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization
Abstract
In this paper, we consider a wide class of constrained nonconvex regularized minimization problems, where the constraints are linear. It has been reported in the literature that nonconvex regularization usually yields a solution with more desirable sparse structural properties than convex regularization does. However, it is not easy to obtain the proximal mapping associated with nonconvex regularization, due to the imposed linear constraints. In this paper, the optimization problem with linear constraints is solved by the Linearized Alternating Direction Method of Multipliers (LADMM). Moreover, we present a detailed convergence analysis of the LADMM algorithm for solving nonconvex compositely regularized optimization with a large class of nonconvex penalties. Experimental results on several real-world datasets validate the efficacy of the proposed algorithm.
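The linearization step at the heart of LADMM can be sketched on a convex instance, the lasso with the split z = Ax. This is a minimal sketch under stated assumptions: the nonconvex penalties and the convergence conditions analyzed in the paper are not reproduced, and the penalty/linearization constants are illustrative.

```python
import numpy as np

def ladmm_lasso(A, b, lam, beta=1.0, iters=500):
    """Linearized ADMM sketch for min_x lam*||x||_1 + 0.5*||z - b||^2
    subject to Ax - z = 0.  The quadratic coupling in the x-subproblem is
    linearized, so the x-update reduces to soft-thresholding."""
    m, n = A.shape
    mu = 1.01 * beta * np.linalg.norm(A, 2) ** 2   # linearization constant >= beta*||A||^2
    x = np.zeros(n); z = np.zeros(m); u = np.zeros(m)   # u: scaled multiplier
    for _ in range(iters):
        # linearized x-update: gradient step on (beta/2)*||Ax - z + u||^2, then prox of l1
        v = x - (beta / mu) * (A.T @ (A @ x - z + u))
        x = np.sign(v) * np.maximum(np.abs(v) - lam / mu, 0.0)
        # exact z-update: min 0.5*||z - b||^2 + (beta/2)*||Ax - z + u||^2
        z = (b + beta * (A @ x + u)) / (1.0 + beta)
        u = u + A @ x - z                           # multiplier update
    return x, z
```

The point of the linearization is visible in the x-update: instead of solving a coupled quadratic subproblem, only a matrix-vector product and a proximal (shrinkage) step are needed.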
NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization
Abstract
We study a stochastic and distributed algorithm for nonconvex problems whose objective consists of a sum of N nonconvex L_i/N-smooth functions, plus a nonsmooth regularizer. The proposed NonconvEx primal-dual SpliTTing (NESTT) algorithm splits the problem into N subproblems, and utilizes an augmented Lagrangian based primal-dual scheme to solve it in a distributed and stochastic manner. With a special non-uniform sampling, a version of NESTT achieves an ε-stationary solution using O((∑_i √(L_i/N))²/ε) gradient evaluations, which can be up to O(N) times better than (proximal) gradient descent methods. It also achieves a Q-linear convergence rate for nonconvex ℓ1-penalized quadratic problems with polyhedral constraints. Further, we reveal a fundamental connection between primal-dual based methods and a few primal-only methods such as IAG/SAG/SAGA.
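The role of the non-uniform sampling can be illustrated in isolation: drawing component i with probability proportional to √(L_i/N) and importance-weighting its gradient keeps the stochastic gradient unbiased, which is what lets the complexity bound depend on ∑√(L_i/N) rather than max L_i. The quadratic components below are an illustrative assumption, not the paper's test problems.

```python
import numpy as np

# Illustrative components f_i(x) = 0.5 * L_i * x^2, so grad f_i(x) = L_i * x;
# the target gradient is that of the average (1/N) * sum_i f_i(x).
L = np.array([1.0, 4.0, 9.0, 16.0])
N = len(L)

# NESTT-style non-uniform sampling: p_i proportional to sqrt(L_i / N)
p = np.sqrt(L / N)
p /= p.sum()

def stoch_grad(x, i):
    """Importance-weighted estimate of the average gradient mean(L) * x."""
    return (L[i] * x) / (N * p[i])
```

Taking the expectation over i (i.e., ∑_i p_i · stoch_grad(x, i)) recovers mean(L)·x exactly, so the re-weighting compensates for the skewed sampling.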
Global Convergence of Unmodified 3-Block ADMM for a Class of Convex Minimization Problems
, 2015
Abstract
The alternating direction method of multipliers (ADMM) has been successfully applied to solve structured convex optimization problems due to its superior practical performance. The convergence properties of the 2-block ADMM have been studied extensively in the literature. Specifically, it has been proven that the 2-block ADMM globally converges for any penalty parameter γ > 0. In this sense, the 2-block ADMM allows the parameter to be free, i.e., there is no need to restrict the value for the parameter when implementing this algorithm in order to ensure convergence. However, for the 3-block ADMM, Chen et al. [4] recently constructed a counterexample showing that it can diverge if no further condition is imposed. The existing results on studying further sufficient conditions on guaranteeing the convergence of the 3-block ADMM usually require γ to be smaller than a certain bound, which is usually either difficult to compute or too small to make it a practical algorithm. In this paper, we show that the 3-block ADMM still globally converges with any penalty parameter γ > 0 when applied to solve a class of commonly encountered problems to be called regularized least squares decomposition (RLSD) in this paper, which covers many important applications in practice.
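The unmodified 3-block iteration is easy to state on a toy strongly convex instance. A minimal sketch, assuming scalar blocks, γ = 1, and a simple quadratic objective; it does not reproduce the paper's RLSD class:

```python
def admm_3block(b=3.0, gamma=1.0, iters=100):
    """Unmodified 3-block ADMM sketch for
       min 0.5*(x1^2 + x2^2 + x3^2)  s.t.  x1 + x2 + x3 = b,
    with Gauss-Seidel block updates and one multiplier update per sweep."""
    x1 = x2 = x3 = 0.0
    u = 0.0                                     # scaled multiplier
    for _ in range(iters):
        # each block minimizes 0.5*xi^2 + (gamma/2)*(x1+x2+x3 - b + u)^2 over xi
        x1 = gamma * (b - x2 - x3 - u) / (1.0 + gamma)
        x2 = gamma * (b - x1 - x3 - u) / (1.0 + gamma)
        x3 = gamma * (b - x1 - x2 - u) / (1.0 + gamma)
        u += x1 + x2 + x3 - b                   # dual (multiplier) update
    return x1, x2, x3, u
```

On this instance the iterates approach x1 = x2 = x3 = b/3; the counterexample of Chen et al. shows that such convergence cannot be taken for granted for general convex 3-block problems.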
Convergence of Bregman Alternating Direction Method with Multipliers for Nonconvex Composite Problems
Abstract
The alternating direction method with multipliers (ADMM) has been one of the most powerful and successful methods for solving various convex or nonconvex composite problems that arise in the fields of image & signal processing and machine learning. In convex settings, numerous convergence results have been established for ADMM as well as its variants. However, there have been few studies on the convergence properties of ADMM under nonconvex frameworks, since the convergence analysis of nonconvex algorithms is generally very difficult. In this paper we study the Bregman modification of ADMM (BADMM), which includes the conventional ADMM as a special case and can significantly improve the performance of the algorithm. Under some assumptions, we show that the iterative sequence generated by BADMM converges to a stationary point of the associated augmented Lagrangian function. The obtained results underline the feasibility of ADMM in applications under nonconvex settings.
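The Bregman modification replaces quadratic penalty/proximal terms with a Bregman distance D_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩; choosing φ(x) = ½‖x‖² recovers the Euclidean term of conventional ADMM. A minimal sketch of the distance itself, with two generating functions chosen for illustration:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    """Bregman distance D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

# phi = 0.5*||x||^2 recovers the squared Euclidean distance of conventional ADMM
sq = lambda v: 0.5 * (v @ v)
sq_grad = lambda v: v

# phi = sum x*log(x) (negative entropy) yields the generalized KL divergence
ent = lambda v: np.sum(v * np.log(v))
ent_grad = lambda v: np.log(v) + 1.0
```

Swapping the generating function φ is exactly the degree of freedom BADMM exploits: a well-chosen φ can make subproblems easier or better conditioned while the conventional ADMM remains the Euclidean special case.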