Results 1–10 of 91
Parallel stochastic gradient algorithms for large-scale matrix completion
Mathematical Programming Computation, 2013
"... This paper develops Jellyfish, an algorithm for solving dataprocessing problems with matrixvalued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and leastsquares problems regularized by the nuclear norm or ..."
Abstract

Cited by 74 (8 self)
Abstract: This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or γ2-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the increments. This biased ordering allows for a parallel implementation that admits a speedup nearly proportional to the number of processors. On large-scale matrix completion tasks, Jellyfish is orders of magnitude more efficient than existing codes. For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same problem in under 3 minutes on a 12-core workstation.
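The core update in this family of methods is easy to sketch. Below is a minimal serial sketch of incremental (stochastic) gradient descent on the factored model min over L, R of the sum over observed (i, j) of (L[i] · R[j] − M[i, j])², assuming a plain uniform shuffle rather than Jellyfish's biased ordering and without its parallel block partitioning; the function name, step size, and initialization scale are illustrative.

```python
import numpy as np

def sgd_complete(obs, shape, rank=10, lr=0.05, epochs=20, seed=0):
    """Incremental gradient descent on the factored completion model.

    obs   -- list of (i, j, value) samples of the unknown matrix
    shape -- (m, n) dimensions of the matrix being completed
    Minimizes sum over observed (i, j) of (L[i] . R[j] - value)^2.
    """
    rng = np.random.default_rng(seed)
    m, n = shape
    L = rng.standard_normal((m, rank)) / np.sqrt(rank)
    R = rng.standard_normal((n, rank)) / np.sqrt(rank)
    for _ in range(epochs):
        rng.shuffle(obs)  # uniform reshuffle; Jellyfish instead uses a biased
                          # ordering so disjoint blocks can be run in parallel
        for i, j, v in obs:
            err = L[i] @ R[j] - v  # residual on one observed entry
            L[i], R[j] = L[i] - lr * err * R[j], R[j] - lr * err * L[i]
    return L, R
```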
Low-rank matrix completion by Riemannian optimization
ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de Lausanne
"... The matrix completion problem consists of finding or approximating a lowrank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least square distance on the sampling set over the Riemannian manifold of fixedrank matrices. The algorit ..."
Abstract

Cited by 40 (4 self)
Abstract: The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton's method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
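To make the geometry concrete, here is a minimal sketch of plain Riemannian gradient descent (not the paper's conjugate gradient scheme) on the fixed-rank manifold, using truncated SVD as the metric-projection retraction; for brevity it retracts the full Euclidean gradient step rather than first projecting onto the tangent space. Names and the step size are illustrative.

```python
import numpy as np

def retract(X, r):
    """Metric projection onto rank-r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def fixed_rank_gd(M, mask, r, steps=300, lr=1.0):
    """Minimize 0.5 * ||mask * (X - M)||_F^2 over rank-r matrices."""
    X = retract(mask * M, r)        # rank-r starting guess
    for _ in range(steps):
        G = mask * (X - M)          # Euclidean gradient on the samples
        X = retract(X - lr * G, r)  # step, then project back to rank r
    return X
```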
Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization
 Mathematics of Computation
"... Abstract. The nuclear norm is widely used to induce lowrank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arisi ..."
Abstract

Cited by 29 (4 self)
Abstract: The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to have closed-form solutions. In this paper, we are interested in applying the ALM and the ADM to some minimization problems involving the nuclear norm. When the resulting subproblems do not have closed-form solutions, we propose to linearize these subproblems so that closed-form solutions of the linearized subproblems can be easily derived. Global convergence of the linearized ALM and ADM is established under standard assumptions. Finally, we verify the effectiveness and efficiency of these new methods by numerical experiments.
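The closed-form solutions in question come from the proximal mapping of the nuclear norm, i.e. singular value soft-thresholding, and linearizing a subproblem reduces it to exactly this prox step. A minimal sketch (illustrative names; not the paper's full ALM/ADM iterations):

```python
import numpy as np

def svt(Y, tau):
    """Prox of tau * ||.||_* : soft-threshold the singular values of Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def linearized_step(X, grad_f, L, tau):
    """One linearized update for min tau * ||X||_* + f(X): replace f by its
    quadratic model at X (Lipschitz constant L), whose minimizer is an SVT."""
    return svt(X - grad_f(X) / L, tau / L)
```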
Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization
2011
"... The matrix separation problem aims to separate a lowrank matrix and a sparse matrix from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications. Nuclearnorm minimization models have been proposed for matrix separation and prov ..."
Abstract

Cited by 29 (2 self)
Abstract: The matrix separation problem aims to separate a low-rank matrix and a sparse matrix from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications. Nuclear-norm minimization models have been proposed for matrix separation and proved to yield exact separations under suitable conditions. These models, however, typically require the calculation of a full or partial singular value decomposition (SVD) at every iteration, which becomes increasingly costly as matrix dimensions and rank grow. To improve scalability, in this paper we propose and investigate an alternative approach based on solving a nonconvex, low-rank factorization model by an augmented Lagrangian alternating direction method. Numerical studies indicate that the effectiveness of the proposed model is limited to problems where the sparse matrix does not dominate the low-rank one in magnitude, though this limitation can be alleviated by certain data preprocessing techniques. On the other hand, extensive numerical results show that, within its applicability range, the proposed method is generally much faster than nuclear-norm minimization algorithms and often provides better recoverability.
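As a rough illustration of the SVD-free idea, the sketch below alternates exact least-squares updates of a low-rank factorization U @ V with entrywise soft-thresholding of the sparse term. This is a simplified alternating-minimization loop under illustrative names and constants, not the paper's augmented Lagrangian scheme.

```python
import numpy as np

def soft(X, t):
    """Entrywise soft-thresholding, the prox of t * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def separate(D, r, t=0.1, iters=100, seed=0):
    """Split D into a rank-r part U @ V and a sparse part S, with no SVDs."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    U = rng.standard_normal((m, r))
    S = np.zeros_like(D)
    for _ in range(iters):
        V = np.linalg.lstsq(U, D - S, rcond=None)[0]           # update V (r x n)
        U = np.linalg.lstsq(V.T, (D - S).T, rcond=None)[0].T   # update U (m x r)
        S = soft(D - U @ V, t)                                 # sparse residual
    return U @ V, S
```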
Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization
SIAM J. Numer. Anal., 2013
"... Abstract. In this paper, we first study ℓq minimization and its associated iterative reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained ℓq minimization, for which we show a few advantages on noisy measurements and/or approximately sparse vectors. ..."
Abstract

Cited by 26 (2 self)
Abstract: In this paper, we first study ℓq minimization and its associated iterative reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained ℓq minimization, for which we show several advantages on noisy measurements and/or approximately sparse vectors. Inspired by the results in [Daubechies et al., Comm. Pure Appl. Math., 63 (2010), pp. 1–38] for constrained ℓq minimization, we start with a preliminary yet novel analysis for unconstrained ℓq minimization, which includes convergence, an error bound, and local convergence behavior. Then, the algorithm and analysis are extended to the recovery of low-rank matrices. The algorithms for both vector and matrix recovery have been compared to some state-of-the-art algorithms and show superior performance on recovering sparse vectors and low-rank matrices.
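In the unconstrained vector setting, each reweighted step is just a ridge-type linear solve. A minimal sketch for min over x of ||Ax − b||² + λ Σ (x_i² + ε)^(q/2), with a simple geometric schedule for the smoothing parameter ε; the schedule and constants are illustrative, not the paper's update rules.

```python
import numpy as np

def irls_lq(A, b, lam=0.1, q=0.5, eps=1.0, iters=50):
    """IRLS for unconstrained smoothed lq minimization.

    Each iterate solves the weighted ridge system obtained by setting the
    gradient of ||Ax - b||^2 + lam * sum (x_i^2 + eps)^(q/2) to zero with
    the weights frozen at the previous iterate.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares warm start
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = (q / 2.0) * (x**2 + eps) ** (q / 2.0 - 1.0)
        x = np.linalg.solve(AtA + lam * np.diag(w), Atb)
        eps = max(eps / 10.0, 1e-12)           # gradually tighten the smoothing
    return x
```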
Scaled Gradients on Grassmann Manifolds for Matrix Completion
"... This paper describes gradient methods based on a scaled metric on the Grassmann manifold for lowrank matrix completion. The proposed methods significantly improve canonical gradient methods, especially on illconditioned matrices, while maintaining established global convegence and exact recovery g ..."
Abstract

Cited by 19 (0 self)
Abstract: This paper describes gradient methods based on a scaled metric on the Grassmann manifold for low-rank matrix completion. The proposed methods significantly improve canonical gradient methods, especially on ill-conditioned matrices, while maintaining established global convergence and exact recovery guarantees. A connection between a form of subspace iteration for matrix completion and the scaled gradient descent procedure is also established. The proposed conjugate gradient method based on the scaled gradient outperforms several existing algorithms for matrix completion and is competitive with recently proposed methods.
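The scaling is easiest to see on the factored objective f(U, V) = ½ ||mask * (U Vᵀ − M)||²: the plain partial gradients are right-multiplied by the Gram-matrix inverses (VᵀV)⁻¹ and (UᵀU)⁻¹, which acts as a preconditioner on ill-conditioned problems. A minimal sketch with illustrative names and step size; the paper works with the Grassmann-manifold formulation and adds conjugate directions.

```python
import numpy as np

def scaled_gd(M, mask, r, steps=500, lr=0.5, seed=0):
    """Scaled gradient descent on the factored completion objective."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, r)) / np.sqrt(r)
    V = rng.standard_normal((n, r)) / np.sqrt(r)
    for _ in range(steps):
        E = mask * (U @ V.T - M)                  # residual on observed entries
        gU = E @ V @ np.linalg.inv(V.T @ V)       # scaled gradient w.r.t. U
        gV = E.T @ U @ np.linalg.inv(U.T @ U)     # scaled gradient w.r.t. V
        U, V = U - lr * gU, V - lr * gV
    return U, V
```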
LOW-RANK OPTIMIZATION WITH TRACE NORM PENALTY
"... Abstract. The paper addresses the problem of lowrank trace norm minimization. We propose an algorithm that alternates between fixedrank optimization and rankone updates. The fixedrank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the sear ..."
Abstract

Cited by 19 (5 self)
Abstract: The paper addresses the problem of low-rank trace norm minimization. We propose an algorithm that alternates between fixed-rank optimization and rank-one updates. The fixed-rank optimization is characterized by an efficient factorization that makes the trace norm differentiable in the search space and the computation of the duality gap numerically tractable. The search space is nonlinear but is equipped with a Riemannian structure that leads to efficient computations. We present a second-order trust-region algorithm with a guaranteed quadratic rate of convergence. Overall, the proposed optimization scheme converges superlinearly to the global solution while maintaining complexity that is linear in the number of rows and columns of the matrix. To compute a set of solutions efficiently for a grid of regularization parameters, we propose a predictor-corrector approach that outperforms the naive warm-restart approach on the fixed-rank quotient manifold. The performance of the proposed algorithm is illustrated on problems of low-rank matrix completion and multivariate linear regression.
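The differentiability claim rests on a standard variational identity: for X = U Vᵀ, the trace norm satisfies ||X||_* = min over factorizations of ½ (||U||_F² + ||V||_F²), attained at the balanced factorization from the SVD. A quick numerical check of that identity (illustrative only; this is the ingredient, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
U0, s, Vt = np.linalg.svd(X, full_matrices=False)

# Balanced factors: X = U @ V.T with each singular value split evenly.
U = U0 * np.sqrt(s)
V = Vt.T * np.sqrt(s)

print(np.sum(s))                             # trace (nuclear) norm of X
print(0.5 * (np.sum(U**2) + np.sum(V**2)))   # equals the trace norm
print(np.linalg.norm(X - U @ V.T))           # factorization is exact (~0)
```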
An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors
"... Abstract. This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to the two existing problem ..."
Abstract

Cited by 17 (3 self)
Abstract: This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to two existing problems, nonnegative matrix factorization and low-rank matrix completion, in the sense that it kills two birds with one stone. Since it takes advantage of both nonnegativity and low rank, its results can be superior to those of the two problems alone. Our algorithm minimizes a nonconvex constrained least-squares formulation and is based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties and numerical simulation results are presented. Compared to a recent algorithm for nonnegative random matrix factorization, the proposed algorithm yields comparable factorizations while accessing only half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the results of the proposed algorithm have overall better quality than those of two recent algorithms for matrix completion.
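A much-simplified sketch of the underlying idea: alternate nonnegative least-squares-style factor updates against a surrogate matrix Z that always agrees with the known entries. The names, the projection-based nonnegativity handling, and the surrogate initialization are illustrative; the paper's actual method is an alternating direction augmented Lagrangian scheme.

```python
import numpy as np

def nmf_complete(M, mask, r, iters=200, seed=0):
    """Nonnegative X (m x r), Y (r x n) with X @ Y matching M on the mask."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = np.abs(rng.standard_normal((m, r)))
    Z = np.where(mask, M, M[mask].mean())    # surrogate full matrix
    for _ in range(iters):
        Y = np.maximum(np.linalg.lstsq(X, Z, rcond=None)[0], 0)
        X = np.maximum(np.linalg.lstsq(Y.T, Z.T, rcond=None)[0].T, 0)
        Z = X @ Y
        Z[mask] = M[mask]                    # re-impose the known entries
    return X, Y
```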
A Compressive Sensing and Unmixing Scheme for Hyperspectral Data Processing
"... Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation and I/O throughputs, especially when realtime processing is desired. In this paper, we investigate a lowcomplexity scheme for hyperspectral data compression and reconstruction. In this ..."
Abstract

Cited by 16 (2 self)
Abstract: Hyperspectral data processing typically demands enormous computational resources in terms of storage, computation, and I/O throughput, especially when real-time processing is desired. In this paper, we investigate a low-complexity scheme for hyperspectral data compression and reconstruction. In this scheme, compressed hyperspectral data are acquired directly by a device similar to the single-pixel camera [5], based on the principle of compressive sensing. To decode the compressed data, we propose a numerical procedure that directly computes the unmixed abundance fractions of given endmembers, completely bypassing high-complexity tasks involving the hyperspectral data cube itself. The reconstruction model minimizes the total variation of the abundance fractions subject to a preprocessed fidelity equation with a significantly reduced size, along with other side constraints. An augmented Lagrangian type algorithm is developed to solve this model. We conduct extensive numerical experiments to demonstrate the feasibility and efficiency of the proposed approach, using both synthetic data and hardware-measured data. The experimental and computational evidence obtained from this study indicates that the proposed scheme has high potential in real-world applications.
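The size reduction is worth seeing in symbols: if the cube H (bands × pixels) mixes p endmember signatures W as H = W S, and each pixel's spectrum is compressed by the same sensing matrix A, then the measurements F = A H satisfy F = (A W) S, so the fidelity equation involves only the p × pixels abundances S, never H itself. A toy demonstration under hypothetical sizes; the paper additionally imposes total variation and simplex constraints on S rather than solving plain least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
bands, pixels, p, m = 50, 400, 4, 120

W = np.abs(rng.standard_normal((bands, p)))        # known endmember signatures
S = rng.dirichlet(np.ones(p), size=pixels).T       # true abundances (sum to 1)
H = W @ S                                          # hyperspectral cube
A = rng.standard_normal((m, bands)) / np.sqrt(m)   # compressive sensing matrix
F = A @ H                                          # what the device records

# Reduced fidelity equation F = (A @ W) @ S: the unknown is the small
# abundance matrix S, not the full cube H.
S_hat = np.linalg.lstsq(A @ W, F, rcond=None)[0]
print(np.linalg.norm(S_hat - S) / np.linalg.norm(S))  # ~0 in this noiseless toy
```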
Structured low-rank approximation with missing data
SIAM J. Matrix Anal. Appl., 2013
"... We consider lowrank approximation of affinely structured matrices with missing elements. The method proposed is based on reformulation of the problem as inner and outer optimization. The inner minimization is a singular linear leastnorm problem and admits an analytic solution. The outer problem i ..."
Abstract

Cited by 14 (10 self)
Abstract: We consider low-rank approximation of affinely structured matrices with missing elements. The proposed method is based on reformulating the problem as inner and outer optimization. The inner minimization is a singular linear least-norm problem and admits an analytic solution. The outer problem is a nonlinear least-squares problem and is solved by local optimization methods: minimization subject to quadratic equality constraints, and unconstrained minimization with a regularized cost function. The method is generalized to weighted low-rank approximation with missing values and is illustrated on approximate low-rank matrix completion, system identification, and data-driven simulation problems. An extended version of the paper is a literate program implementing the method and reproducing the presented results.
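The inner/outer split is the classical variable projection idea. The sketch below shows it for the simpler unstructured case: the outer variable is a column-space basis U, and for fixed U the inner problem, fitting each column of the approximation on its observed entries, is a small linear least-squares solve with an analytic solution. For affinely structured matrices, as in the paper, the inner problem becomes a least-norm problem instead; names here are illustrative.

```python
import numpy as np

def varpro_objective(U, M, mask):
    """Outer objective f(U) = min over V of 0.5 * ||mask * (M - U @ V)||_F^2.

    The inner minimization over V decouples column by column and is solved
    analytically by least squares on each column's observed entries.
    """
    n = M.shape[1]
    V = np.zeros((U.shape[1], n))
    for j in range(n):
        idx = mask[:, j]  # observed rows of column j
        V[:, j] = np.linalg.lstsq(U[idx], M[idx, j], rcond=None)[0]
    resid = mask * (M - U @ V)
    return 0.5 * np.sum(resid**2), V
```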