Results 1–10 of 29
SOLVING A LOW-RANK FACTORIZATION MODEL FOR MATRIX COMPLETION BY A NONLINEAR SUCCESSIVE OVER-RELAXATION ALGORITHM
Abstract

Cited by 91 (10 self)
Abstract. The matrix completion problem is to recover a low-rank matrix from a subset of its entries. The main solution strategy for this problem has been based on nuclear-norm minimization, which requires computing singular value decompositions, a task that is increasingly costly as matrix sizes and ranks increase. To improve the capacity for solving large-scale problems, we propose a low-rank factorization model and construct a nonlinear successive over-relaxation (SOR) algorithm that only requires solving a linear least squares problem per iteration. Convergence of this nonlinear SOR algorithm is analyzed. Numerical results show that the algorithm can reliably solve a wide range of problems at a speed at least several times faster than many nuclear-norm minimization algorithms.
Key words. Matrix completion, alternating minimization, nonlinear GS method, nonlinear SOR method.
AMS subject classifications. 65K05, 90C06, 93C41, 68Q32
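The alternating scheme the abstract describes can be sketched in a few lines. Below is a minimal sketch of the Gauss-Seidel special case (SOR with ω = 1) for an assumed model min ||XY − Z||_F² with Z pinned to M on the observed entries; `lmafit_gs` is an illustrative name, not the authors' code, and each update is indeed a linear least squares solve:

```python
import numpy as np

def lmafit_gs(M, mask, r, iters=300):
    """Alternating minimization (nonlinear Gauss-Seidel, i.e. SOR with
    omega = 1) for min ||X Y - Z||_F^2 with Z fixed to M on 'mask'."""
    m, n = M.shape
    rng = np.random.default_rng(0)
    X = rng.standard_normal((m, r))
    Y = rng.standard_normal((r, n))
    Z = mask * M                         # start from the observed entries
    for _ in range(iters):
        X = Z @ np.linalg.pinv(Y)        # linear least-squares update for X
        Y = np.linalg.pinv(X) @ Z        # linear least-squares update for Y
        XY = X @ Y
        Z = XY + mask * (M - XY)         # re-impose the observed entries
    return X @ Y
```

On an easy random instance (correct rank, most entries observed) this iteration typically recovers the matrix to high accuracy; the SOR variant of the paper accelerates the same updates with an over-relaxation weight.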
GROUP SPARSE OPTIMIZATION BY ALTERNATING DIRECTION METHOD
, 2011
Abstract

Cited by 25 (3 self)
Abstract. This paper proposes efficient algorithms for group sparse optimization with mixed ℓ2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The ℓ2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional ℓ1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the ℓ2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.
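The piece of per-iteration work that the variable splitting isolates is the proximal map of the ℓ2,1 norm, which is a closed-form group-wise soft-thresholding. A sketch of that standard shrinkage step (assuming non-overlapping groups for simplicity; `group_shrink` is an illustrative name):

```python
import numpy as np

def group_shrink(x, groups, tau):
    """Block soft-thresholding: the closed-form proximal map of
    tau * sum_g ||x_g||_2, applied group by group."""
    z = np.zeros_like(x)
    for g in groups:                  # each g is an array of indices
        ng = np.linalg.norm(x[g])
        if ng > tau:                  # groups below the threshold vanish
            z[g] = (1.0 - tau / ng) * x[g]
    return z
```

Entire groups with small energy are set exactly to zero, which is how the mixed norm promotes group sparsity rather than entry-wise sparsity.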
Online robust subspace tracking from partial information, arXiv preprint arXiv:1109.3827
Abstract

Cited by 19 (1 self)
This paper presents GRASTA (Grassmannian Robust Adaptive Subspace Tracking Algorithm), an efficient and robust online algorithm for tracking subspaces from highly incomplete information. The algorithm uses a robust ℓ1-norm cost function in order to estimate and track nonstationary subspaces when the streaming data vectors are corrupted with outliers. We apply GRASTA to the problems of robust matrix completion and real-time separation of background from foreground in video. In this second application, we show that GRASTA performs high-quality separation of moving objects from background at exceptional speeds: in one popular benchmark video example [28], GRASTA achieves a rate of 57 frames per second, even when run in MATLAB on a personal laptop.
Fixed-rank representation for unsupervised visual learning
Abstract

Cited by 14 (1 self)
Subspace clustering and feature extraction are two of the most commonly used unsupervised learning techniques in computer vision and pattern recognition. State-of-the-art techniques for subspace clustering make use of recent advances in sparsity and rank minimization. However, existing techniques are computationally expensive and may result in degenerate solutions that degrade clustering performance in the case of insufficient data sampling. To partially solve these problems, and inspired by existing work on matrix factorization, this paper proposes fixed-rank representation (FRR) as a unified framework for unsupervised visual learning. FRR is able to reveal the structure of multiple subspaces in closed form when the data is noiseless. Furthermore, we prove that under some suitable conditions, even with insufficient observations, FRR can still reveal the true subspace memberships. To achieve robustness to outliers and noise, a sparse regularizer is introduced into the FRR framework. Beyond subspace clustering, FRR can be used for unsupervised feature extraction. As a nontrivial byproduct, a fast numerical solver is developed for FRR. Experimental results on both synthetic data and real applications validate our theoretical analysis and demonstrate the benefits of FRR for unsupervised visual learning.
Convergence Analysis of Alternating Direction Method of Multipliers for a Family of Nonconvex Problems
Robust Locally Linear Analysis with Applications to Image Denoising and Blind Inpainting
, 2011
Abstract

Cited by 5 (1 self)
We study the related problems of denoising images corrupted by impulsive noise and blind inpainting (i.e., inpainting when the deteriorated region is unknown). Our basic approach is to model the set of patches of pixels in an image as a union of low-dimensional subspaces, corrupted by sparse but perhaps large-magnitude noise. For this purpose, we develop a robust and iterative RANSAC-like method for single-subspace modeling and extend it to an iterative algorithm for modeling multiple subspaces. We prove convergence for both algorithms and carefully compare our methods with other recent ideas for such robust modeling. We demonstrate state-of-the-art performance of our method for both imaging problems.
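The single-subspace step can be illustrated with a generic RANSAC-style fit: repeatedly span a few random columns and keep the span that captures the most data. This is a plain sketch of the sampling idea only, not the authors' specific iterative method:

```python
import numpy as np

def ransac_subspace(P, d, trials=200, tol=1e-6, seed=0):
    """Fit a d-dimensional subspace to the columns of P by random sampling:
    span d random columns and keep the span that captures the most columns."""
    rng = np.random.default_rng(seed)
    best_basis, best_count = None, -1
    for _ in range(trials):
        idx = rng.choice(P.shape[1], size=d, replace=False)
        Q, _ = np.linalg.qr(P[:, idx])               # orthonormal basis of sample
        resid = np.linalg.norm(P - Q @ (Q.T @ P), axis=0)
        count = int((resid < tol).sum())             # columns near the span
        if count > best_count:
            best_basis, best_count = Q, count
    return best_basis, best_count
```

Because outliers rarely land near a low-dimensional span, the consensus count reliably separates inlier patches from sparse corruption on clean instances.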
A PROXIMAL POINT ALGORITHM FOR LOG-DETERMINANT OPTIMIZATION WITH GROUP LASSO REGULARIZATION
Abstract

Cited by 3 (0 self)
We consider the covariance selection problem where variables are clustered into groups and the inverse covariance matrix is expected to have a blockwise sparse structure. This problem is realized via penalizing the maximum likelihood estimation of the inverse covariance matrix by group Lasso regularization. We propose to solve the resulting log-determinant optimization problem by the classical proximal point algorithm (PPA). At each iteration, as it is difficult to update the primal variables directly, we first solve the dual subproblem by a Newton-CG method and then update the primal variables by explicit formulas based on the computed dual variables. We also propose to accelerate the PPA by an inexact generalized Newton's method when the iterate is close to the solution. Theoretically, we prove that, at the optimal solution, the negative definiteness of the generalized Hessian matrices of the dual objective function is equivalent to the constraint nondegeneracy condition for the primal problem. Global and local convergence results are also presented for the proposed PPA. Moreover, based on the augmented Lagrangian function of the dual problem we derive an alternating direction method (ADM), which is easily implementable, and demonstrated to be efficient for some random problems. Numerical results, including comparisons with the ADM, are presented to demonstrate that the proposed Newton-CG based PPA is stable, efficient and, in particular, outperforms the ADM, especially when higher accuracy is required.
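A building block that splitting methods for such log-determinant problems rely on is the proximal map of X ↦ −t log det X, which has a closed form through an eigenvalue decomposition. The sketch below shows that standard fact only; it is not the paper's Newton-CG solver:

```python
import numpy as np

def prox_neg_logdet(A, t):
    """Closed-form proximal map of X -> -t*log(det X) at symmetric A:
    each eigenvalue lam of A maps to (lam + sqrt(lam**2 + 4t)) / 2."""
    lam, U = np.linalg.eigh((A + A.T) / 2)           # symmetrize, then decompose
    lam_new = (lam + np.sqrt(lam**2 + 4.0 * t)) / 2.0
    return (U * lam_new) @ U.T                       # always positive definite
```

The output satisfies the optimality condition X − t·X⁻¹ = A, so the log-det term never needs to be handled by a general-purpose solver inside the iteration.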
Factor Matrix Trace Norm Minimization for Low-Rank Tensor Completion
Abstract

Cited by 3 (1 self)
Most existing low-n-rank minimization algorithms for tensor completion suffer from high computational cost due to involving multiple singular value decompositions (SVDs) at each iteration. To address this issue, we propose a novel factor matrix trace norm minimization method for tensor completion problems. Based on the CANDECOMP/PARAFAC (CP) decomposition, we first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, which leads to a convex combination problem of much smaller scale matrix nuclear norm minimization. Finally, we develop an efficient alternating direction method of multipliers (ADMM) scheme to solve the proposed problem. Experimental results on both synthetic and real-world data validate the effectiveness of our approach. Moreover, our method is significantly faster than the state-of-the-art approaches and scales well to handle large datasets.
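The small-scale nuclear-norm subproblems such ADMM schemes produce are solved in closed form by singular value thresholding (SVT), the standard proximal operator of the nuclear norm, sketched here on a generic matrix:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: the proximal map of tau * ||.||_*,
    i.e. shrink each singular value of A by tau and rebuild."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

The cost advantage the abstract claims comes from applying this operator to the small factor matrices rather than to the large unfoldings of the tensor itself.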
ROML: A robust feature correspondence approach for matching objects in a set of images
, 2014
Abstract

Cited by 3 (2 self)
Feature-based object matching is a fundamental problem for many applications in computer vision, such as object recognition, 3D reconstruction, tracking, and motion segmentation. In this work, we consider simultaneously matching object instances in a set of images, where both inlier and outlier features are extracted. The task is to identify the inlier features and establish their consistent correspondences across the image set. This is a challenging combinatorial problem, and the problem complexity grows exponentially
Numerical Algorithms for a Class of Matrix Norm Approximation Problems
, 2012
Abstract

Cited by 2 (2 self)
This thesis focuses on designing robust and efficient algorithms for a class of matrix norm approximation (MNA) problems, which seek an affine combination of given matrices having the minimal spectral norm subject to prescribed linear equality and inequality constraints. These problems arise often in numerical algebra,