Results 1–10 of 41
Rank-sparsity incoherence for matrix decomposition
, 2009
"... Abstract. Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown lowrank matrix. Our goal is to decompose the given matrix into its sparse and lowrank components. Such a problem arises in a number of applications in model and system identification, and is int ..."
Abstract

Cited by 80 (10 self)
Abstract. Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is intractable to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components, by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and use it to characterize both fundamental identifiability and (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature, with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.
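The convex program described in this abstract — minimizing a weighted sum of the nuclear norm and the ℓ1 norm subject to the components adding up to the observed matrix — can be sketched with a standard augmented-Lagrangian (ADMM-style) iteration. This is an illustrative implementation with assumed parameter choices (the weight `lam` and penalty `mu` are common heuristics from the robust-PCA literature), not the authors' code:

```python
import numpy as np

def svt(X, tau):
    # singular value thresholding: proximal operator of tau * ||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # soft thresholding: proximal operator of tau * ||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def sparse_lowrank_split(M, lam=None, mu=None, iters=300):
    """Iterate for  min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # common heuristic weight
    if mu is None:
        mu = m * n / (4.0 * np.abs(M).sum())  # assumed penalty parameter
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)     # nuclear-norm prox step
        S = shrink(M - L + Y / mu, lam / mu)  # l1 prox step
        Y = Y + mu * (M - L - S)              # dual ascent on L + S = M
    return L, S
```

The dual update drives the constraint residual `M - L - S` to zero, so the returned pair approximately sums to the input.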
Matrix Completion from Noisy Entries
"... Given a matrix M of lowrank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the ‘Netflix problem’) to structurefrommotion and positioning. We study a low ..."
Abstract

Cited by 42 (2 self)
Given a matrix M of low rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the ‘Netflix problem’) to structure-from-motion and positioning. We study a low-complexity algorithm introduced in [1], based on a combination of spectral techniques and manifold optimization, that we call here OPTSPACE. We prove performance guarantees that are order-optimal in a number of circumstances.
Guaranteed rank minimization via singular value projection
 In NIPS 2010
, 2010
"... Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm SVP (Singular Value Projection) for rank minimization under affine constraints (ARMP) and s ..."
Abstract

Cited by 38 (2 self)
Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics. In this paper we propose a simple and fast algorithm, SVP (Singular Value Projection), for rank minimization under affine constraints (ARMP) and show that SVP recovers the minimum-rank solution for affine constraints that satisfy a restricted isometry property (RIP). Our method guarantees a geometric convergence rate even in the presence of noise and requires strictly weaker assumptions on the RIP constants than existing methods. We also introduce a Newton step into our SVP framework to speed up convergence, with substantial empirical gains. Next, we address a practically important application of ARMP: the problem of low-rank matrix completion, for which the defining affine constraints do not directly obey RIP, so the guarantees of SVP do not hold. However, we provide partial progress towards a proof of exact recovery for our algorithm by showing a more restricted isometry property, and observe empirically that our algorithm recovers low-rank incoherent matrices from an almost optimal number of uniformly sampled entries. We also demonstrate empirically that our algorithms outperform existing methods, such as those of [5, 18, 14], for ARMP and the matrix completion problem by an order of magnitude, and are also more robust to noise and sampling schemes. In particular, results show that our SVP-Newton method is significantly more robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem.
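For the matrix-completion instance of ARMP, the projected-gradient idea is especially simple: take a gradient step on the observed entries, then project back onto the set of rank-r matrices via a truncated SVD. A minimal sketch, assuming a step size of roughly 1/p where p is the sampling rate (the Newton acceleration the abstract mentions is omitted):

```python
import numpy as np

def svp_complete(M, mask, r, eta=1.0, iters=100):
    """Singular Value Projection sketch for matrix completion.

    Iterates X <- P_r(X + eta * P_Omega(M - X)), where P_r is the
    rank-r SVD projection and P_Omega keeps only observed entries.
    Illustrative, not the authors' implementation.
    """
    X = np.zeros_like(M)
    for _ in range(iters):
        G = mask * (M - X)                    # negative gradient on Omega
        U, s, Vt = np.linalg.svd(X + eta * G, full_matrices=False)
        X = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]   # rank-r projection
    return X
```

Each iteration costs one truncated SVD, which is what makes the method simple and fast on moderate-size problems.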
Non-Parametric Bayesian Dictionary Learning for Sparse Image Representations
"... Nonparametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this nonparametric method naturally infers ..."
Abstract

Cited by 36 (23 self)
Non-parametric Bayesian techniques are considered for learning dictionaries for sparse image representations, with applications in denoising, inpainting and compressive sensing (CS). The beta process is employed as a prior for learning the dictionary, and this non-parametric method naturally infers an appropriate dictionary size. The Dirichlet process and a probit stick-breaking process are also considered to exploit structure within an image. The proposed method can learn a sparse dictionary in situ; training images may be exploited if available, but they are not required. Further, the noise variance need not be known, and can be non-stationary. Another virtue of the proposed method is that sequential inference can be readily employed, thereby allowing scaling to large images. Several example results are presented, using both Gibbs and variational Bayesian inference, with comparisons to other state-of-the-art approaches.
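The key property the abstract relies on — that a beta-process prior lets the model infer an appropriate dictionary size — can be illustrated with the standard finite beta-Bernoulli approximation: even with a large nominal number of atoms K, each sample activates only a handful. The hyperparameters below are illustrative defaults, not the paper's settings, and this sketch shows only the prior, not the full inference:

```python
import numpy as np

def sample_beta_bernoulli_usage(N, K, a=1.0, b=1.0, rng=None):
    """Finite approximation to a beta-process prior on dictionary usage.

    pi_k ~ Beta(a/K, b*(K-1)/K) for each of K candidate atoms; each of
    N samples uses atom k with probability pi_k. As K grows, the
    expected number of atoms used per sample stays small (roughly a/b),
    which is how such priors induce an effective dictionary size.
    """
    rng = rng or np.random.default_rng(0)
    pi = rng.beta(a / K, b * (K - 1) / K, size=K)  # atom activation probs
    Z = rng.random((N, K)) < pi                    # binary usage indicators
    return pi, Z
```

Running this with K in the hundreds shows each sample typically touching only a few atoms, regardless of K.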
Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix
 In Intl. Workshop on Comp. Adv. in Multi-Sensor Adapt. Processing, Aruba, Dutch Antilles
, 2009
"... Abstract. This paper studies algorithms for solving the problem of recovering a lowrank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust version of classical PCA, and arises in a number of application domains, including image processing, web data r ..."
Abstract

Cited by 33 (6 self)
Abstract. This paper studies algorithms for solving the problem of recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust version of classical PCA, and arises in a number of application domains, including image processing, web data ranking, and bioinformatic data analysis. It was recently shown that, under surprisingly broad conditions, it can be exactly solved via a convex programming surrogate that combines nuclear-norm minimization and ℓ1-norm minimization. This paper develops and compares two complementary approaches for solving this convex program. The first is an accelerated proximal gradient algorithm applied directly to the primal, while the second is a gradient algorithm applied to the dual problem. Both are several orders of magnitude faster than the previous state-of-the-art algorithm for this problem, which was based on iterative thresholding. Simulations demonstrate the performance improvement that can be obtained via these two algorithms, and clarify their relative merits.
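The primal approach the abstract describes — accelerated proximal gradient on a relaxed, penalized form of the convex program — can be sketched as a FISTA-style loop on 0.5·||M − L − S||²_F + μ||L||_* + μλ||S||₁. The parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def svt(X, tau):
    # singular value thresholding: prox of tau * ||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):
    # soft thresholding: prox of tau * ||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def apg_rpca(M, mu=1e-2, lam=None, iters=300):
    """Accelerated proximal gradient sketch for
    min 0.5*||M - L - S||_F^2 + mu*||L||_* + mu*lam*||S||_1."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M); S = np.zeros_like(M)
    Lp, Sp = L.copy(), S.copy()
    t, tp = 1.0, 1.0
    for _ in range(iters):
        # Nesterov extrapolation points
        YL = L + (tp - 1.0) / t * (L - Lp)
        YS = S + (tp - 1.0) / t * (S - Sp)
        G = YL + YS - M                   # gradient of the smooth term
        Lp, Sp = L, S
        L = svt(YL - 0.5 * G, mu / 2.0)   # step 1/Lf with Lf = 2
        S = shrink(YS - 0.5 * G, mu * lam / 2.0)
        tp, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return L, S
```

As μ shrinks, the penalized problem approaches the equality-constrained program, at the cost of more iterations.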
Robust Photometric Stereo via Low-Rank Matrix Completion and Recovery
"... Abstract. We present a new approach to robustly solve photometric stereo problems. We cast the problem of recovering surface normals from multiple lighting conditions as a problem of recovering a lowrank matrix with both missing entries and corrupted entries, which model all types of nonLambertian ..."
Abstract

Cited by 11 (3 self)
Abstract. We present a new approach to robustly solve photometric stereo problems. We cast the problem of recovering surface normals from multiple lighting conditions as a problem of recovering a low-rank matrix with both missing entries and corrupted entries, which model all types of non-Lambertian effects such as shadows and specularities. Unlike previous approaches that use least squares or heuristic robust techniques, our method uses advanced convex optimization techniques that are guaranteed to find the correct low-rank matrix by simultaneously fixing its missing and erroneous entries. Extensive experimental results demonstrate that our method achieves unprecedentedly accurate estimates of surface normals in the presence of a significant amount of shadows and specularities. The new technique can be used to improve virtually any photometric stereo method, including uncalibrated photometric stereo.
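For context, the least-squares baseline the abstract contrasts against is a few lines under the Lambertian model: the image matrix factors as lighting directions times scaled normals, so each pixel's normal comes from a per-pixel least-squares solve. This is only the non-robust baseline; the paper replaces it with low-rank matrix completion/recovery that also handles shadows and specularities:

```python
import numpy as np

def normals_least_squares(I, Ldirs):
    """Classical Lambertian photometric-stereo baseline.

    I     : (num_lights x num_pixels) stacked intensity measurements,
    Ldirs : (num_lights x 3) known lighting directions.
    Solves I = Ldirs @ B for B (3 x num_pixels) by least squares;
    the unit normals are the normalized columns of B.
    """
    B, *_ = np.linalg.lstsq(Ldirs, I, rcond=None)
    n = B / np.maximum(np.linalg.norm(B, axis=0), 1e-12)
    return n
```

With at least three lights in general position and no shadows/specularities this is exact; it is precisely the shadowed and specular pixels that break it and motivate the robust formulation.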
Bayesian Robust Principal Component Analysis
, 2010
"... A hierarchical Bayesian model is considered for decomposing a matrix into lowrank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly nonstationary noise statistics. The Bayesian framework infers an approximate r ..."
Abstract

Cited by 11 (0 self)
A hierarchical Bayesian model is considered for decomposing a matrix into low-rank and sparse components, assuming the observed matrix is a superposition of the two. The matrix is assumed noisy, with unknown and possibly non-stationary noise statistics. The Bayesian framework infers an approximate representation for the noise statistics while simultaneously inferring the low-rank and sparse-outlier contributions; the model is robust to a broad range of noise levels, without having to change model hyperparameter settings. In addition, the Bayesian framework allows exploitation of additional structure in the matrix. For example, in video applications each row (or column) corresponds to a video frame, and we introduce a Markov dependency between consecutive rows in the matrix (corresponding to consecutive frames in the video). The properties of this Markov process are also inferred based on the observed matrix, while simultaneously denoising and recovering the low-rank and sparse components. We compare the Bayesian model to a state-of-the-art optimization-based implementation of robust PCA; considering several examples, we demonstrate competitive performance of the proposed model.
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
"... We consider the problem of learning incoherent sparse and lowrank patterns from multiple tasks. Our approach is based on a linear multitask learning formulation, in which the sparse and lowrank patterns are induced by a cardinality regularization term and a lowrank constraint, respectively. This f ..."
Abstract

Cited by 11 (4 self)
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of the projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained by solving an unconstrained optimization subproblem and a Euclidean projection subproblem. In addition, we present two projected gradient algorithms and discuss their rates of convergence. Experimental results on benchmark data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms.
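A representative instance of the Euclidean projection subproblem mentioned above is projection onto a nuclear-norm ball, which by a standard result reduces to projecting the singular values onto an ℓ1 ball. This is a generic sketch of that kind of projection, not the paper's specific subproblem:

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of a nonnegative vector v onto
    {x : x >= 0, sum(x) <= tau} (standard sorting-based algorithm)."""
    if v.sum() <= tau:
        return v
    u = np.sort(v)[::-1]                       # descending
    css = np.cumsum(u)
    j = np.arange(len(u)) + 1.0
    rho = np.nonzero(u - (css - tau) / j > 0)[0][-1]
    theta = (css[rho] - tau) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_nuclear_ball(W, tau):
    """Euclidean projection onto {X : ||X||_* <= tau}: keep the singular
    vectors and project the singular values onto the l1 ball."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, tau)) @ Vt
```

The cost is one SVD plus a sort, so this fits naturally inside a projected gradient loop.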
An introduction to a class of matrix cone programming
"... In this paper, we define a class of linear conic programming (which we call matrix cone ..."
Abstract

Cited by 9 (3 self)
In this paper, we define a class of linear conic programming (which we call matrix cone …
Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization
, 2011
"... The matrix separation problem aims to separate a lowrank matrix and a sparse matrix from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications. Nuclearnorm minimization models have been proposed for matrix separation and prov ..."
Abstract

Cited by 6 (2 self)
The matrix separation problem aims to separate a low-rank matrix and a sparse matrix from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications. Nuclear-norm minimization models have been proposed for matrix separation and proved to yield exact separations under suitable conditions. These models, however, typically require the calculation of a full or partial singular value decomposition (SVD) at every iteration, which can become increasingly costly as matrix dimensions and rank grow. To improve scalability, in this paper we propose and investigate an alternative approach based on solving a non-convex, low-rank factorization model by an augmented Lagrangian alternating direction method. Numerical studies indicate that the effectiveness of the proposed model is limited to problems where the sparse matrix does not dominate the low-rank one in magnitude, though this limitation can be alleviated by certain data preprocessing techniques. On the other hand, extensive numerical results show that, within its applicability range, the proposed method in general has a much faster solution speed than nuclear-norm minimization algorithms, and often provides better recoverability.
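The SVD-free idea — represent the low-rank part explicitly as a product U·V and alternate cheap least-squares updates — can be sketched as below. This is an illustrative alternating scheme in the spirit of the abstract, with an assumed shrinkage threshold `tau`, not the authors' augmented Lagrangian method:

```python
import numpy as np

def shrink(X, tau):
    # soft thresholding: keeps only entries larger than tau in magnitude
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def factor_separate(M, r, tau=0.1, iters=50, rng=None):
    """Alternating sketch of low-rank-factorization matrix separation.

    Maintains L = U @ V with U (m x r) and V (r x n); alternates
    least-squares updates of U and V against M - S, then shrinks the
    residual to obtain the sparse part S. No SVD is needed, which is
    the scalability advantage the abstract describes.
    """
    rng = rng or np.random.default_rng(0)
    m, n = M.shape
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    S = np.zeros_like(M)
    for _ in range(iters):
        A = M - S
        U = A @ np.linalg.pinv(V)     # argmin_U ||U V - A||_F
        V = np.linalg.pinv(U) @ A     # argmin_V ||U V - A||_F
        S = shrink(M - U @ V, tau)    # sparse residual
    return U @ V, S
```

By construction the final residual M − L − S has entries no larger than `tau` in magnitude, so the fit quality is controlled directly by the threshold.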