Results 1 - 5 of 5
Robust face recognition via sparse representation (preprint)
 IEEE Trans. Pattern Analysis and Machine Intelligence
Abstract

Cited by 321 (22 self)
Abstract — We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models, and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly, by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and to corroborate the above claims.
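The classification rule this abstract describes — code the test image sparsely against all training images, then pick the class whose training columns best reconstruct it — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact ℓ1 program is approximated here by iterative soft-thresholding (ISTA), and all function names are my own.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage operator used by ISTA."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(A, y, lam=0.01, n_iter=500):
    """Approximate min_x 0.5*||A x - y||^2 + lam*||x||_1 via ISTA,
    a stand-in for the exact l1-minimization in the paper."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x

def src_classify(A, labels, y):
    """Assign y to the class whose training columns best reconstruct it:
    keep only the coefficients belonging to each class, compare residuals."""
    x = sparse_code(A, y)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        x_c = np.where(mask, x, 0.0)       # coefficients of class c only
        residuals[c] = np.linalg.norm(y - A @ x_c)
    return min(residuals, key=residuals.get)
```

With a toy dictionary whose first two columns span class 0 and last two span class 1, a test vector lying in the class-0 subspace picks up nonzero coefficients only on the class-0 columns, so the class-0 residual is smallest.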
Simultaneous image transformation and sparse representation recovery
 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008
Abstract

Cited by 19 (1 self)
Sparse representation in compressive sensing is gaining increasing attention due to its success in various applications. As we demonstrate in this paper, however, image sparse representation is sensitive to image-plane transformations, such that existing approaches cannot reconstruct the sparse representation of a geometrically transformed image. We introduce a simple technique for obtaining transformation-invariant image sparse representation. It is rooted in two observations: 1) if the aligned model images of an object span a linear subspace, their transformed versions with respect to some group of transformations can still span a linear subspace in a higher dimension; 2) if a target (or test) image, aligned with the model images, lives in the above subspace, its pre-alignment versions move closer to the subspace as estimated transformations with increasingly accurate parameters are applied. These observations motivate us to project a potentially unaligned target image onto random projection manifolds defined by the model images and the transformation model. Each projection is then separated into the aligned projection target and a residue due to misalignment. The desired aligned projection target is then iteratively optimized by gradually diminishing the residue. In this framework, we can simultaneously recover the sparse representation of a target image and the image-plane transformation between the target and the model images. We have applied the proposed methodology to two applications: face recognition and dynamic texture registration. The improved performance we obtain over previous methods demonstrates the effectiveness of the proposed approach.
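Observation 1 above — that transformed versions of the model images still span a (larger) linear subspace — can be illustrated on a toy 1-D "image" with circular shifts standing in for the transformation group. This is only a sketch of the observation, not the paper's random-projection algorithm, and every name here is my own.

```python
import numpy as np

def shifts(v, max_shift):
    """Stack all circular shifts of v up to +/- max_shift as columns:
    the 'transformed versions' that span a larger subspace."""
    return np.stack([np.roll(v, s) for s in range(-max_shift, max_shift + 1)],
                    axis=1)

def residual(D, y):
    """Distance from y to the column span of D (least-squares projection)."""
    coef, *_ = np.linalg.lstsq(D, y, rcond=None)
    return np.linalg.norm(y - D @ coef)

model = np.zeros(16)
model[5:8] = 1.0                      # a toy 1-D 'model image'
test = np.roll(model, 2)              # the same image, shifted 2 pixels

plain = residual(model[:, None], test)       # model image alone: large residual
augmented = residual(shifts(model, 3), test) # shift-augmented span: ~zero
```

The shifted test image is far from the span of the single model image but lies exactly inside the span of the shift-augmented dictionary, which is why augmenting with transformed versions restores reconstructability.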
Feature selection in face recognition: A sparse representation perspective
, 2007
Abstract

Cited by 19 (1 self)
In this paper, we examine the role of feature selection in face recognition from the perspective of sparse representation. We cast the recognition problem as finding a sparse representation of the test image features with respect to the training set. The sparse representation can be accurately and efficiently computed by ℓ1-minimization. The proposed simple algorithm generalizes conventional face recognition classifiers such as nearest neighbors and nearest subspaces. Using face recognition under varying illumination and expression as an example, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficient and whether the sparse representation is correctly found. We conduct extensive experiments to validate the significance of imposing sparsity using the Extended Yale B database and the AR database. Our thorough evaluation shows that, using conventional features such as Eigenfaces and facial parts, the proposed algorithm achieves much higher recognition accuracy on face images with variation in either illumination or expression. Furthermore, unconventional features such as severely downsampled images and randomly projected features perform almost equally well as the feature dimension increases. The differences in performance between different features become insignificant once the feature-space dimension is sufficiently large.
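One reason randomly projected features can work as well as hand-designed ones, as the abstract reports, is that a random Gaussian map approximately preserves pairwise distances once the feature dimension is large enough (the Johnson–Lindenstrauss effect). The sketch below illustrates only that geometric fact; the projection dimension and names are my own choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_features(X, d):
    """Map each column of X to d random-projection features,
    the 'unconventional' features discussed in the abstract."""
    R = rng.standard_normal((d, X.shape[0])) / np.sqrt(d)  # scaled Gaussian map
    return R @ X

# Five toy 'face vectors' of 100 pixels each; with d = 500 features,
# pairwise distances are roughly preserved, so distance-based classifiers
# behave almost the same in the projected space.
X = rng.standard_normal((100, 5))
F = random_features(X, 500)
```

As the feature dimension d shrinks, the distortion of pairwise distances grows, which matches the abstract's point that performance differences vanish only once the feature-space dimension is sufficiently large.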
Sparse Brain Network Recovery under Compressed Sensing
, 2010
Abstract

Cited by 6 (3 self)
Partial correlation is a useful connectivity measure for brain networks, especially when it is necessary to remove confounding effects in highly correlated networks. Since it is difficult to estimate the exact partial correlation in the small-n, large-p situation, a sparseness constraint is generally introduced. In this paper, we consider the sparse linear regression model with an ℓ1-norm penalty, a.k.a. the least absolute shrinkage and selection operator (LASSO), for estimating sparse brain connectivity. LASSO is a well-known decoding algorithm in compressed sensing (CS). The CS theory states that LASSO can reconstruct the exact sparse signal even from a small set of noisy measurements. We briefly show that the penalized linear regression for partial correlation estimation is related to CS. This opens a new possibility: the proposed framework can be used for sparse brain network recovery. As an illustration, we construct sparse brain networks of 97 regions of interest (ROIs) obtained from FDG-PET data for autism spectrum disorder (ASD) children and pediatric control (PedCon) subjects. As a model validation, we check their reproducibility by leave-one-out cross-validation and compare the clustered structures derived from the brain networks of ASD and PedCon. Keywords: Brain Connectivity, Compressed Sensing, Partial Correlation, LASSO.
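A common way to realize "penalized linear regression for partial correlation" is neighborhood selection: regress each region on all the others with a LASSO penalty and draw an edge wherever a coefficient survives. The sketch below assumes that formulation (the paper's exact estimator may differ), solves each LASSO by iterative soft-thresholding, and uses made-up names and penalty values throughout.

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=1000):
    """min_b 0.5*||y - X b||^2 + lam*||b||_1 via iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2          # step size from the Lipschitz constant
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

def sparse_network(data, lam):
    """Neighborhood-selection sketch: regress each region (column) on the
    rest; nonzero coefficients mark edges of the connectivity graph."""
    n, p = data.shape
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = np.delete(np.arange(p), j)
        b = lasso_ista(data[:, others], data[:, j], lam)
        adj[j, others] = np.abs(b) > 1e-6
    return adj | adj.T                     # symmetrize with the OR rule
```

On synthetic data where region 1 is a noisy copy of region 0 and region 2 is independent, the recovered graph should contain the 0–1 edge and exclude region 2; the penalty lam trades off exactly this sparsity against sensitivity.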
Compressive sensing: a paradigm shift in signal processing
, 812
Abstract
We survey a new paradigm in signal processing known as "compressive sensing". Contrary to old practices of data acquisition and reconstruction based on the Shannon–Nyquist sampling principle, the new theory shows that it is possible to reconstruct images or signals of scientific interest accurately, and even exactly, from a number of samples far smaller than the desired resolution of the image/signal, e.g., the number of pixels in the image. This new technique draws from results in several fields of mathematics, including algebra, optimization, probability theory, and harmonic analysis. We will discuss some of the key mathematical ideas behind compressive sensing, as well as its implications for other fields: numerical analysis, information theory, theoretical computer science, and engineering.
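The core claim of compressive sensing — exact recovery of a sparse signal from far fewer random measurements than its length — can be demonstrated in a few lines. The sketch below uses orthogonal matching pursuit, a simple greedy stand-in for the ℓ1-based decoders the survey discusses; the dimensions and names are illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the measurement-matrix
    column most correlated with the residual, re-fit the selected columns
    by least squares, and repeat k times."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# A length-256 signal with only 3 nonzeros, measured with 96 random
# Gaussian projections -- far fewer samples than the signal length.
n, m, k = 256, 96, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.0, -2.0, 1.5]
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
```

Recovery succeeds here because the random Gaussian matrix is incoherent with the sparsity basis; with too few measurements or too dense a signal, the same decoder fails, which is exactly the trade-off the theory quantifies.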