Results 1–10 of 23
Sparse Representation or Collaborative Representation: Which Helps Face Recognition?
Abstract

Cited by 107 (16 self)
As a recently proposed technique, sparse representation based classification (SRC) has been widely used for face recognition (FR). SRC first codes a testing sample as a sparse linear combination of all the training samples, and then classifies the testing sample by evaluating which class leads to the minimum representation error. While the importance of sparsity is much emphasized in SRC and many related works, the role of collaborative representation (CR) in SRC is ignored by most of the literature. However, is it really the ℓ1-norm sparsity that improves the FR accuracy? This paper analyzes the working mechanism of SRC and shows that it is the CR, not the ℓ1-norm sparsity, that makes SRC powerful for face classification. Consequently, we propose a very simple yet much more efficient face classification scheme, namely CR based classification with regularized least square (CRC_RLS). Extensive experiments clearly show that CRC_RLS achieves very competitive classification results with significantly lower complexity than SRC.
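The appeal of CRC_RLS is that the coding step has a closed form: a ridge (regularized least squares) solve over all training samples, followed by a class-wise regularized residual. A minimal toy sketch of that idea (not the authors' code; data layout and the score formula are our reading of the abstract):

```python
import numpy as np

def crc_rls(X, labels, y, lam=1e-3):
    """Toy CRC_RLS-style classifier.

    X      : (d, n) matrix whose columns are training samples
    labels : (n,) class label for each column
    y      : (d,) test sample
    """
    n = X.shape[1]
    # Closed-form ridge coding over ALL training samples:
    # alpha = (X^T X + lam I)^{-1} X^T y  -- no iterative l1 solver needed.
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best, best_score = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        # Regularized residual: reconstruction error of class c's
        # coefficients, scaled by their energy.
        resid = np.linalg.norm(y - X[:, idx] @ alpha[idx])
        score = resid / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best, best_score = c, score
    return best
```

Because the system matrix depends only on the training set, the inverse can be precomputed once and reused for every test sample, which is where the speedup over per-sample ℓ1-minimization comes from.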
Multi-task low-rank affinity pursuit for image segmentation
 In ICCV
Abstract

Cited by 29 (1 self)
This paper investigates how to boost region-based image segmentation by pursuing a new solution to fuse multiple types of image features. A collaborative image segmentation framework, called multi-task low-rank affinity pursuit, is presented for this purpose. Given an image described with multiple types of features, we aim at inferring a unified affinity matrix that implicitly encodes the segmentation of the image. This is achieved by seeking the sparsity-consistent low-rank affinities from the joint decompositions of multiple feature matrices into pairs of sparse and low-rank matrices, the latter of which is expressed as the product of the image feature matrix and its corresponding image affinity matrix. The inference process is formulated as a constrained nuclear norm and ℓ2,1-norm minimization problem, which is convex and can be solved efficiently with the Augmented Lagrange Multiplier method. Compared to previous methods, which are usually based on a single type of feature, the proposed method seamlessly integrates multiple types of features to jointly produce the affinity matrix within a single inference step, and produces more accurate and reliable segmentation results. Experiments on the MSRC dataset and the Berkeley segmentation dataset validate the superiority of using multiple features over a single feature, as well as the superiority of our method over conventional feature fusion methods. Moreover, our method is very competitive when compared to other state-of-the-art methods.
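ALM-style solvers for this kind of nuclear-norm plus ℓ2,1-norm program alternate between two standard proximal operators: singular-value thresholding for the low-rank term and row-wise shrinkage for the ℓ2,1 term. A generic sketch of those two building blocks (not the paper's full multi-task solver):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of tau*||.||_*.
    Shrinks each singular value toward zero by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def row_shrink(M, tau):
    """Row-wise shrinkage: proximal operator of tau*||.||_{2,1}.
    Scales each row toward zero, zeroing rows with small l2 norm."""
    norms = np.linalg.norm(M, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale
```

In an ALM loop these prox steps are applied to shifted versions of the variables, with Lagrange multipliers closing the gap between the constraint and the current iterates.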
Robust low-rank subspace segmentation with semi-definite guarantees
 In ICDM Workshop
, 2010
Abstract

Cited by 17 (3 self)
Recently there is a line of research proposing to employ Spectral Clustering (SC) to segment (group) high-dimensional structural data such as those (approximately) lying on subspaces or low-dimensional manifolds. By learning the affinity matrix in the form of sparse reconstruction, techniques proposed in this vein often considerably boost performance in subspace settings where traditional SC can fail. Despite this success, fundamental problems remain unsolved: the spectral properties of the learned affinity matrix cannot be gauged in advance, and there is often an ugly symmetrization step that post-processes the affinity before it is fed to SC. Hence we advocate enforcing the symmetric positive semi-definite constraint explicitly during learning (Low-Rank Representation with Positive Semi-Definite constraint, or LRR-PSD), and show that it can in fact be solved efficiently with a specialized scheme instead of general-purpose SDP solvers, which usually scale poorly. We provide rigorous mathematical derivations to show that, in its canonical form, LRR-PSD is equivalent to the recently proposed Low-Rank Representation (LRR) scheme [1], and hence offer theoretical and practical insights into both LRR-PSD and LRR, inviting future research. As for computational cost, our proposal is at most comparable to that of LRR, if not less. We validate our theoretical analysis and optimization scheme with experiments on both synthetic and real data sets.
Keywords: spectral clustering, affinity matrix learning, rank minimization, robust estimation, eigenvalue thresholding
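The "symmetrization step" the abstract objects to is the common post-processing that turns a learned, generally asymmetric representation matrix Z into a symmetric nonnegative affinity for spectral clustering. A sketch of that conventional step (which LRR-PSD avoids by constraining Z to be symmetric PSD during learning):

```python
import numpy as np

def symmetrize_affinity(Z):
    """Conventional post-processing for spectral clustering input:
    take absolute values (affinities must be nonnegative) and average
    Z with its transpose to force symmetry."""
    A = np.abs(Z)
    return (A + A.T) / 2.0
```

The point of the paper is that the spectral properties of an affinity produced this way cannot be controlled in advance, whereas a symmetric PSD constraint on Z guarantees them by construction.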
X.W.: Nonnegative sparse coding for discriminative semi-supervised learning
 In: CVPR
, 2011
Abstract

Cited by 10 (1 self)
An informative and discriminative graph plays an important role in graph-based semi-supervised learning methods. This paper introduces a nonnegative sparse algorithm, and an approximation of it based on the ℓ0–ℓ1 equivalence theory, to compute the nonnegative sparse weights of a graph; the proposed method is termed the sparse probability graph (SPG). The nonnegative sparse weights in the graph naturally serve as clustering indicators, which benefits semi-supervised learning. More importantly, our approximation algorithm speeds up the computation of the nonnegative sparse coding, which has been a bottleneck in previous attempts at sparse nonnegative graph learning, and it is much more efficient than ℓ1-norm sparsity techniques for learning large-scale sparse graphs. Finally, for discriminative semi-supervised learning, an adaptive label propagation algorithm is proposed to iteratively predict the labels of data on the SPG. Promising experimental results show that nonnegative sparse coding is efficient and effective for discriminative semi-supervised learning.
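The kind of weights an SPG-style graph needs can be illustrated with a basic nonnegative sparse coding problem, solved here by projected proximal gradient. This is a generic sketch of the objective, not the paper's approximation algorithm:

```python
import numpy as np

def nonneg_sparse_code(D, y, lam=0.1, steps=500):
    """Minimal nonnegative sparse coding sketch:
        min_w 0.5*||y - D w||^2 + lam * sum(w)   s.t.  w >= 0.
    Solved by gradient steps followed by projection onto the
    nonnegative orthant."""
    n = D.shape[1]
    w = np.zeros(n)
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    lr = 1.0 / L
    for _ in range(steps):
        grad = D.T @ (D @ w - y) + lam   # lam enters linearly since w >= 0
        w = np.maximum(w - lr * grad, 0.0)
    return w
```

Because the weights are constrained to be nonnegative, small coefficients are driven exactly to zero, which is why they can double as soft clustering indicators.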
Scalable sparse subspace clustering
 CVPR
Abstract

Cited by 6 (2 self)
In this paper, we address two problems in the Sparse Subspace Clustering (SSC) algorithm: the scalability issue and the out-of-sample problem. SSC constructs a sparse similarity graph for spectral clustering using ℓ1-minimization based coefficients, and has achieved state-of-the-art results for image clustering and motion segmentation. However, the time complexity of SSC is proportional to the cube of the problem size, which makes it inefficient in large-scale settings. Moreover, SSC cannot handle out-of-sample data that were not used to construct the similarity graph: for each new datum, SSC must recompute the cluster membership of the whole data set, which makes it uncompetitive for fast online clustering. To address these problems, this paper proposes an out-of-sample extension of SSC, named Scalable Sparse Subspace Clustering (SSSC), which makes it feasible to cluster large-scale data sets. SSSC adopts a "sampling, clustering, coding, and classifying" strategy. Extensive experimental results on several popular data sets demonstrate the effectiveness and efficiency of our method compared with state-of-the-art algorithms.
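The last two steps of the "sampling, clustering, coding, and classifying" strategy can be sketched as follows: an out-of-sample point is coded over the already-clustered in-sample data, then assigned to whichever cluster reconstructs it best. This is our hedged reading of the strategy, using a simple ridge coder rather than the paper's actual coding scheme:

```python
import numpy as np

def classify_out_of_sample(X_in, cluster_of, y, lam=1e-2):
    """Assign a new sample y to a cluster of the in-sample data.

    X_in       : (d, n) in-sample data, one column per sample
    cluster_of : (n,) cluster index for each in-sample column
    y          : (d,) out-of-sample point
    """
    n = X_in.shape[1]
    # Coding step: represent y over the in-sample data (ridge stand-in).
    alpha = np.linalg.solve(X_in.T @ X_in + lam * np.eye(n), X_in.T @ y)
    # Classifying step: pick the cluster with the smallest residual.
    best, best_r = None, np.inf
    for c in np.unique(cluster_of):
        idx = cluster_of == c
        r = np.linalg.norm(y - X_in[:, idx] @ alpha[idx])
        if r < best_r:
            best, best_r = c, r
    return best
```

Only the sampled subset ever goes through the expensive ℓ1 graph construction and spectral clustering; every remaining point is handled by this cheap coding-and-classifying step, which is what makes the scheme scalable.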
Correlation adaptive subspace segmentation by trace lasso
 In ICCV
, 2013
Abstract

Cited by 6 (2 self)
This paper studies the subspace segmentation problem. Given a set of data points drawn from a union of subspaces, the goal is to partition them into the underlying subspaces they were drawn from. The spectral clustering method is used as the framework. It requires finding an affinity matrix that is close to block diagonal, with nonzero entries corresponding to pairs of data points from the same subspace. In this work, we argue that both sparsity and the grouping effect are important for subspace segmentation. A sparse affinity matrix tends to be block diagonal, with fewer connections between data points from different subspaces. The grouping effect ensures that highly correlated data, which usually come from the same subspace, can be grouped together. Sparse Subspace Clustering (SSC), by using ℓ1-minimization, encourages sparsity in data selection but lacks the grouping effect. In contrast, Low-Rank Representation (LRR), by rank minimization, and Least Squares Regression (LSR), by ℓ2-regularization, exhibit a strong grouping effect but fall short in subset selection. Thus the obtained affinity matrix is usually very sparse for SSC, yet very dense for LRR and LSR. In this work, we propose the Correlation Adaptive Subspace Segmentation (CASS) method based on the trace Lasso. CASS is a data-correlation dependent method that simultaneously performs automatic data selection and groups correlated data together. It can be regarded as a method that adaptively balances SSC and LSR. Both theoretical and experimental results show the effectiveness of CASS.
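The adaptive behavior comes from the trace Lasso norm ||X diag(w)||_* itself: when the columns of X are orthonormal it reduces to the ℓ1 norm of w (SSC-like sparsity), and when all columns are identical it reduces to the ℓ2 norm (LSR-like grouping). A small sketch verifying both extremes numerically:

```python
import numpy as np

def trace_lasso(X, w):
    """Trace Lasso norm ||X diag(w)||_*: the sum of singular values of
    the data matrix with its columns rescaled by the coefficients w."""
    return np.linalg.svd(X @ np.diag(w), compute_uv=False).sum()
```

Between the two extremes, the norm interpolates according to how correlated the columns of X actually are, which is exactly the data-dependent balancing CASS exploits.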
Learning stable multilevel dictionaries for sparse representation of images. arXiv:1303.0448
, 2013
Abstract

Cited by 4 (1 self)
Sparse representations using learned dictionaries are being used with increasing success in several data processing and machine learning applications. The availability of abundant training data necessitates the development of efficient, robust, and provably good dictionary learning algorithms. Algorithmic stability and generalization are desirable characteristics for dictionary learning algorithms that aim to build global dictionaries which can efficiently model any test data similar to the training samples. In this paper, we propose an algorithm to learn dictionaries for sparse representations from large-scale data, and prove that the proposed learning algorithm is asymptotically stable and generalizable. The algorithm employs a 1-D subspace clustering procedure, K-hyperline clustering, to learn a hierarchical dictionary with multiple levels. We also propose an information-theoretic scheme to estimate the number of atoms needed at each level of learning, and develop an ensemble approach to learn robust dictionaries. Using the proposed dictionaries, the sparse code for novel test data can be computed with a low-complexity pursuit procedure. We demonstrate the stability and generalization characteristics of the proposed algorithm using simulations, and evaluate the utility of the multilevel dictionaries in compressed recovery and subspace learning applications.
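K-hyperline clustering, the building block here, is K-means with clusters replaced by 1-D subspaces (lines through the origin): samples are assigned by largest absolute correlation with each unit direction, and each direction is updated to the principal direction of its assigned samples. A minimal sketch (our reading of the standard procedure, not the paper's code):

```python
import numpy as np

def k_hyperline(X, K, iters=20, seed=0):
    """K-hyperline clustering sketch.

    X : (d, n) data, one column per sample
    K : number of 1-D subspaces (lines through the origin)
    Returns unit directions D (d, K) and assignments (n,).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[0]
    D = rng.normal(size=(d, K))
    D /= np.linalg.norm(D, axis=0)           # random unit directions
    assign = np.zeros(X.shape[1], dtype=int)
    for _ in range(iters):
        # Assign each sample to the line with the largest |correlation|.
        assign = np.argmax(np.abs(D.T @ X), axis=0)
        for k in range(K):
            Xk = X[:, assign == k]
            if Xk.shape[1] == 0:
                continue                      # keep empty clusters as-is
            # Update: top left singular vector of the assigned samples.
            U, _, _ = np.linalg.svd(Xk, full_matrices=False)
            D[:, k] = U[:, 0]
    return D, assign
```

Stacking several such clustering levels, each fitted to the residuals of the previous one, yields the multilevel dictionary the paper analyzes.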
Half-quadratic based iterative minimization for robust sparse representation
 IEEE TPAMI
, 2013
Abstract

Cited by 2 (1 self)
Robust sparse representation has shown significant potential in solving challenging problems in computer vision such as biometrics and visual surveillance. Although several robust sparse models have been proposed and promising results have been obtained, they target either error correction or error detection; a general framework that systematically unifies these two aspects and explores their relation is still an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework can perform both error correction and error detection. More specifically, using the additive form of HQ, we propose an ℓ1-regularized error correction method that iteratively recovers corrupted data from errors incurred by noise and outliers; using the multiplicative form of HQ, we propose an ℓ1-regularized error detection method that learns from uncorrupted data iteratively. We also show that the ℓ1-regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
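The soft-thresholding/Huber relationship mentioned in the abstract is a standard fact: the Moreau envelope of λ|·| (the value of the ℓ1-proximal problem at its soft-thresholded minimizer) is exactly the Huber function. A small numerical sketch of both operators:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*|.|: shrink toward zero, clip at zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def huber(x, lam):
    """Huber M-estimator loss: quadratic near zero, linear in the tails."""
    return np.where(np.abs(x) <= lam,
                    0.5 * x ** 2,
                    lam * np.abs(x) - 0.5 * lam ** 2)
```

Checking the duality directly: for any x, 0.5*(x - z)^2 + lam*|z| evaluated at z = soft_threshold(x, lam) equals huber(x, lam), which is why an ℓ1-regularized correction step implicitly performs Huber-type robust estimation.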
Robust Subspace Segmentation with Block-diagonal Prior
Abstract

Cited by 2 (0 self)
The subspace segmentation problem is addressed in this paper by effectively constructing an exactly block-diagonal sample affinity matrix. The block-diagonal structure is highly desirable for accurate sample clustering but is rather difficult to obtain. Most current state-of-the-art subspace segmentation methods (such as SSC [4] and LRR [12]) resort to alternative structural priors (such as sparseness and low-rankness) to construct the affinity matrix. In this work, we directly pursue the block-diagonal structure by proposing a graph-Laplacian-constraint based formulation, and then develop an efficient stochastic subgradient algorithm for optimization. Moreover, two new subspace segmentation methods, block-diagonal SSC and block-diagonal LRR, are devised in this work. To the best of our knowledge, this is the first research attempt to explicitly pursue such a block-diagonal structure. Extensive experiments on face clustering, motion segmentation, and graph construction for semi-supervised learning clearly demonstrate the superiority of the proposed subspace segmentation methods.
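The spectral fact that makes a graph Laplacian a natural handle on block-diagonal structure: for a symmetric nonnegative affinity W, the multiplicity of the zero eigenvalue of its Laplacian equals the number of connected components, i.e. the number of diagonal blocks. An illustrative check (not the paper's optimization algorithm):

```python
import numpy as np

def num_blocks(W, tol=1e-8):
    """Count the diagonal blocks of a symmetric affinity matrix W by
    counting (near-)zero eigenvalues of its graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)
    return int(np.sum(eigvals < tol))
```

Constraining the Laplacian to have exactly k zero eigenvalues therefore forces exactly k blocks, which is the structure the paper's formulation pursues directly.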
Graph-based Learning via auto-grouped sparse regularization and kernelized extension
 IEEE Trans. Knowl. Data Eng
, 2013
Abstract

Cited by 1 (0 self)
The key task in developing graph-based learning algorithms is constructing an informative graph that expresses the contextual information of a data manifold. Since traditional graph construction methods are sensitive to noise and less datum-adaptive to changes in density, a new method called the ℓ1-graph was proposed recently. A graph construction needs two important properties: sparsity and locality. The ℓ1-graph has a strong sparsity property but a weak locality property. Thus, we propose a new method for constructing an informative graph using auto-grouped sparse regularization based on the ℓ1-graph, called the Group Sparse graph (GS-graph). We also show how to efficiently construct a GS-graph in reproducing kernel Hilbert space with the kernel trick. The new methods, the GS-graph and its kernelized version (KGS-graph), have the same noise-insensitivity as the ℓ1-graph while preserving the properties of sparsity and locality simultaneously. Furthermore, we integrate the proposed graph with several graph-based learning algorithms to demonstrate the effectiveness of our method. Empirical studies on benchmarks show that the proposed methods outperform the ℓ1-graph and other traditional graph construction methods in various learning tasks.
Index Terms: graph-based learning, sparse representation, spectral embedding, subspace learning, nonnegative matrix factorization
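The kernel trick that makes a kernelized graph construction possible is that the coding step can be written entirely in terms of inner products. A minimal sketch using ridge coding as a stand-in for the paper's auto-grouped sparse regularizer (the regularizer differs, but the Gram-matrix-only structure is the point):

```python
import numpy as np

def kernel_ridge_code(K, k_y, lam=1e-2):
    """Code a test point over training data in RKHS using only kernel
    evaluations: solve (K + lam*I) alpha = k_y, where K is the training
    Gram matrix and k_y[i] = kernel(test point, training point i)."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), k_y)
```

With a linear kernel this reproduces ordinary ridge coding of the raw features; swapping in a nonlinear kernel changes the geometry of the graph without ever materializing the feature map.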