Results 1–10 of 46
A Singular Value Thresholding Algorithm for Matrix Completion
, 2008
"... This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications as in the task of reco ..."
Abstract

Cited by 192 (12 self)
 Add to MetaCart
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and arises in many important applications, as in the task of recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior point methods are not directly amenable to large problems of this kind, with over a million unknown entries. This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, produces a sequence of matrices {X^k, Y^k}, and at each step mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. There are two remarkable features making this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both these facts allow the algorithm to use very minimal storage space and keep the computational cost of each iteration low. …
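To make the iteration concrete, here is a minimal dense-SVD sketch of singular value thresholding for matrix completion; the function and parameter names (svt_complete, mask, tau, delta) are illustrative, not the paper's, and the paper's efficiency comes from computing only a few singular values of the sparse Y^k, which this toy version omits.

```python
import numpy as np

def svt_complete(M, mask, tau, delta, n_iters=200):
    # X^k = D_tau(Y^{k-1}): soft-threshold the singular values of Y.
    # Y^k = Y^{k-1} + delta * P_Omega(M - X^k): step on observed residuals.
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # D_tau(Y)
        Y += delta * np.where(mask, M - X, 0.0)   # projection onto observed entries
    return X
```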
NESTA: A Fast and Accurate First-Order Method for Sparse Recovery
, 2009
"... Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel firstorder ..."
Abstract

Cited by 71 (1 self)
 Add to MetaCart
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems, as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, …
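The "subtle averaging of sequences of iterates" refers to Nesterov-style schemes that blend a plain gradient step with a weighted average of all past gradients. Below is a generic sketch of such a scheme for minimizing a smooth convex function with L-Lipschitz gradient; it illustrates the averaging idea only and is not NESTA itself, which additionally applies smoothing and handles the recovery constraints. All names are ours.

```python
import numpy as np

def nesterov_averaged(grad, x0, L, n_iters=100):
    # y: plain gradient-step sequence; z: weighted-average-of-gradients
    # sequence anchored at x0; x: convex combination of the two.
    x, y = x0.copy(), x0.copy()
    g_sum = np.zeros_like(x0)
    for k in range(n_iters):
        g = grad(x)
        y = x - g / L                                   # gradient step
        g_sum += 0.5 * (k + 1) * g                      # weighted gradient sum
        z = x0 - g_sum / L                              # averaged step
        x = 2.0 / (k + 3) * z + (k + 1.0) / (k + 3) * y # averaging
    return y
```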
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
 SIAM J. Imaging Sci
, 2008
"... Abstract. We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1: Au = f,u ∈ R n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number o ..."
Abstract

Cited by 59 (13 self)
 Add to MetaCart
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖1 : Au = f, u ∈ R^n}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈R^n} μ‖u‖1 + (1/2)‖Au − f^k‖_2^2 for given matrix A and vector f^k. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results demonstrating that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A^⊤ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
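A hedged sketch of the outer Bregman loop follows; for self-containedness the inner unconstrained subproblem is solved here by plain ISTA, whereas the paper uses a fast fixed-point continuation solver. All names are illustrative.

```python
import numpy as np

def shrink(x, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def bregman_basis_pursuit(A, f, mu, n_outer=6, n_inner=500):
    # Outer loop: Bregman "add back the residual" updates of f^k.
    # Inner loop: solve min_u mu*||u||_1 + 0.5*||A u - f^k||^2 by ISTA.
    u = np.zeros(A.shape[1])
    fk = f.copy()
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the quadratic term
    for _ in range(n_outer):
        for _ in range(n_inner):
            u = shrink(u - step * A.T @ (A @ u - fk), step * mu)
        fk = fk + (f - A @ u)                # f^{k+1} = f + (f^k - A u^k)
    return u
```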
The Convex Geometry of Linear Inverse Problems
, 2010
"... In applications throughout science and engineering one is often faced with the challenge of solving an illposed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constr ..."
Abstract

Cited by 38 (10 self)
 Add to MetaCart
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others, including sums of a few permutation matrices (e.g., ranked elections, multi-object tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. …
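Two familiar special cases make the atomic-norm construction concrete: when the atoms are the signed canonical basis vectors ±e_i, the induced norm is the ℓ1 norm; when the atoms are unit-norm rank-one matrices, it is the nuclear norm. A quick numerical illustration of both facts:

```python
import numpy as np

# Atomic norm w.r.t. atoms {+-e_i} is the l1 norm.
x = np.array([3.0, -4.0, 0.0])
print(np.abs(x).sum())                           # 7.0

# Atomic norm w.r.t. unit-norm rank-one matrices is the nuclear norm
# (the sum of the singular values).
X = np.outer([1.0, 2.0], [3.0, 4.0])
print(np.linalg.svd(X, compute_uv=False).sum())  # 11.180...
```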
Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix
In Intl. Workshop on Comp. Adv. in Multi-Sensor Adapt. Processing, Aruba, Dutch Antilles
, 2009
"... Abstract. This paper studies algorithms for solving the problem of recovering a lowrank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust version of classical PCA, and arises in a number of application domains, including image processing, web data r ..."
Abstract

Cited by 33 (6 self)
 Add to MetaCart
This paper studies algorithms for solving the problem of recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust version of classical PCA, and arises in a number of application domains, including image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, it can be exactly solved via a convex programming surrogate that combines nuclear-norm minimization and ℓ1-norm minimization. This paper develops and compares two complementary approaches for solving this convex program. The first is an accelerated proximal gradient algorithm applied directly to the primal problem, while the second is a gradient algorithm applied to the dual. Both are several orders of magnitude faster than the previous state-of-the-art algorithm for this problem, which was based on iterative thresholding. Simulations demonstrate the performance improvement that can be obtained via these two algorithms, and clarify their relative merits.
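For illustration, here is a plain (non-accelerated) proximal-gradient sketch for the relaxed objective ‖A‖_* + λ‖E‖_1 + (1/(2μ))‖M − A − E‖_F^2; the paper's primal algorithm adds Nesterov acceleration and continuation, and all names here are our own.

```python
import numpy as np

def shrink(x, t):
    """Entrywise soft-thresholding (prox of t * ||.||_1)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(Y, t):
    """Soft-threshold the singular values of Y (prox of t * ||.||_*)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def rpca_prox_grad(M, lam, mu, n_iters=300):
    A, E = np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iters):
        # Joint gradient of the coupling term is (A + E - M)/mu; with
        # step size mu/2 (its Lipschitz constant is 2/mu) the step is:
        G = 0.5 * (A + E - M)
        A = svt(A - G, mu / 2.0)            # prox on the nuclear-norm block
        E = shrink(E - G, lam * mu / 2.0)   # prox on the l1 block
    return A, E
```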
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
, 2010
"... This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, app ..."
Abstract

Cited by 31 (2 self)
 Add to MetaCart
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖_1 where W is arbitrary, or a combination thereof. In addition, the paper introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
Keywords: optimal first-order methods, Nesterov’s accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclear-norm minimization.
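As a small, self-contained illustration of the "apply smoothing" step (smoothing by conjugation, as named in the keywords): perturbing the conjugate of ‖·‖_1 by a strongly convex term (μ/2)‖z‖^2 yields the entrywise Huber function. A sketch under that assumption, with illustrative names:

```python
import numpy as np

def smoothed_l1(x, mu):
    # sup_{|z_i|<=1} <z, x> - (mu/2)*||z||^2 evaluates, entrywise, to
    # the Huber function: quadratic near zero, linear in the tails.
    a = np.abs(x)
    return np.where(a <= mu, a**2 / (2 * mu), a - mu / 2).sum()

print(smoothed_l1(np.array([0.05, -2.0]), mu=0.1))  # smooth surrogate of ||x||_1
```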
Alternating direction algorithms for ℓ1-problems in compressive sensing
, 2009
"... Abstract. In this paper, we propose and study the use of alternating direction algorithms for several ℓ1norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basispursuit denoising problems of both unconstrained and constr ..."
Abstract

Cited by 23 (2 self)
 Add to MetaCart
In this paper, we propose and study the use of alternating direction algorithms for several ℓ1-norm minimization problems arising from sparse solution recovery in compressive sensing, including the basis pursuit problem, the basis-pursuit denoising problems of both unconstrained and constrained forms, and others. We present and investigate two classes of algorithms derived from either the primal or the dual forms of the ℓ1-problems. The construction of the algorithms consists of two main steps: (1) reformulate an ℓ1-problem into one having a partially separable objective function by adding new variables and constraints; and (2) apply an exact or inexact alternating direction method to the resulting problem. The derived alternating direction algorithms can be regarded as first-order primal-dual algorithms because both primal and dual variables are updated at every iteration. Convergence properties of these algorithms are established or restated when they already exist. Extensive numerical results in comparison with several state-of-the-art algorithms are given to demonstrate that the proposed algorithms are efficient, stable and robust. Moreover, we present numerical results to emphasize two practically important but perhaps overlooked points: one is that algorithm speed should always be evaluated relative to appropriate solution accuracy; the other is that whenever erroneous measurements possibly exist, the ℓ1-norm fidelity should be the fidelity of choice in compressive sensing.
Key words: sparse solution recovery, compressive sensing, ℓ1-minimization, primal, dual, alternating direction method.
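A generic textbook alternating-direction sketch for the unconstrained problem min_x μ‖x‖1 + (1/2)‖Ax − b‖^2 using the splitting x = z is given below; the paper's own algorithms use problem-specific primal and dual splittings, so this illustrates the construction rather than reproducing them.

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def admm_l1(A, b, mu, rho=1.0, n_iters=300):
    # Splitting x = z makes the objective partially separable: a smooth
    # quadratic in x and a separable l1 term in z, tied by a multiplier u.
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once
    Atb = A.T @ b
    for _ in range(n_iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic step
        z = shrink(x + u, mu / rho)                        # l1 prox step
        u = u + x - z                                      # dual update
    return z
```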
Convergence of the Linearized Bregman Iteration for ℓ1-norm Minimization
, 2008
"... Abstract. One of the key steps in compressed sensing is to solve the basis pursuit problem minu∈R n{�u�1: Au = f}. Bregman iteration was very successfully used to solve this problem in [40]. Also, a simple and fast iterative algorithm based on linearized Bregman iteration was proposed in [40], which ..."
Abstract

Cited by 21 (7 self)
 Add to MetaCart
One of the key steps in compressed sensing is to solve the basis pursuit problem min_{u∈R^n} {‖u‖1 : Au = f}. Bregman iteration was used very successfully to solve this problem in [40]. Also, a simple and fast iterative algorithm based on linearized Bregman iteration was proposed in [40], and is described in detail with numerical simulations in [35]. A convergence analysis of the smoothed version of this algorithm was given in [11]. The purpose of this paper is to prove that the linearized Bregman iteration proposed in [40] for the basis pursuit problem indeed converges.
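For reference, one common form of the linearized Bregman iteration for basis pursuit is sketched below; parameter placement varies across papers, so treat this as illustrative, and convergence requires a suitably small step size δ.

```python
import numpy as np

def linearized_bregman(A, f, mu, delta, n_iters=5000):
    # Each step costs two matrix-vector products and one shrinkage:
    # v^{k+1} = v^k + A^T (f - A u^k), u^{k+1} = delta * shrink(v^{k+1}, mu).
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(n_iters):
        v = v + A.T @ (f - A @ u)                                 # residual step
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrinkage
    return u
```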
A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation
 SIAM Journal on Scientific Computing
, 2010
"... Abstract. We propose a fast algorithm for solving the ℓ1regularized minimization problem minx∈R n µ‖x‖1 + ‖Ax − b ‖ 2 2 for recovering sparse solutions to an undetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage a ..."
Abstract

Cited by 21 (7 self)
 Add to MetaCart
We propose a fast algorithm for solving the ℓ1-regularized minimization problem min_{x∈R^n} μ‖x‖1 + ‖Ax − b‖_2^2 for recovering sparse solutions to an underdetermined system of linear equations Ax = b. The algorithm is divided into two stages that are performed repeatedly. In the first stage, a first-order iterative method called “shrinkage” yields an estimate of the subset of components of x likely to be nonzero in an optimal solution. Restricting the decision variables x to this subset and fixing their signs at their current values reduces the ℓ1-norm ‖x‖1 to a linear function of x. The resulting subspace problem, which involves the minimization of a smaller, smooth quadratic function, is solved in the second stage. Our code FPC_AS embeds this basic two-stage algorithm in a continuation (homotopy) approach by assigning a decreasing sequence of values to μ. This code exhibits state-of-the-art performance both in terms of its speed and its ability to recover sparse signals. It can even recover signals that are not as sparse as required by current compressive sensing theory.
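A heavily simplified sketch of one two-stage step in the spirit of the abstract follows; the real FPC_AS adds safeguards, sign-consistency checks, and continuation in μ, and all names here are ours.

```python
import numpy as np

def shrink(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def two_stage_step(A, b, x, mu, tau=0.25):
    # Stage 1: one shrinkage (ISTA) step estimates the likely support.
    x = shrink(x - tau * 2.0 * A.T @ (A @ x - b), tau * mu)
    S = np.flatnonzero(x)
    if S.size == 0:
        return x
    # Stage 2: with signs frozen at s = sign(x_S), ||x||_1 becomes the
    # linear function s . x_S, so   min mu * s.x_S + ||A_S x_S - b||_2^2
    # is a smooth quadratic; solve its normal equations directly.
    As, s = A[:, S], np.sign(x[S])
    xs = np.linalg.lstsq(As.T @ As, As.T @ b - 0.5 * mu * s, rcond=None)[0]
    x_new = np.zeros_like(x)
    x_new[S] = xs
    return x_new
```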
Analysis and generalizations of the linearized Bregman method
SIAM J. Imaging Sci
, 2010
"... Abstract. This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit ..."
Abstract

Cited by 19 (5 self)
 Add to MetaCart
This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradient-based optimization techniques such as line search, Barzilai–Borwein steps, limited-memory BFGS (L-BFGS), nonlinear conjugate gradients, and Nesterov’s methods. In the numerical simulations, the two proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with a so-called kicking technique.
Key words: Bregman, linearized Bregman, compressed sensing, ℓ1-minimization, basis pursuit.
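As a generic illustration of the kind of step-size rule the paper plugs into the dual formulation, here is plain gradient descent with Barzilai–Borwein (BB1) steps; grad and all names are illustrative, not the paper's code.

```python
import numpy as np

def bb_descent(grad, x0, n_iters=100, t0=1e-3):
    # Gradient descent with the BB1 step length t = (s.s)/(s.y), where
    # s is the change in the iterate and y the change in the gradient.
    x, g, t = x0.copy(), grad(x0), t0
    for _ in range(n_iters):
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        t = (s @ s) / sy if sy > 0 else t0   # fall back if curvature <= 0
        x, g = x_new, g_new
    return x
```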