Results 1–10 of 44
Robust face recognition via sparse representation
IEEE Trans. Pattern Analysis and Machine Intelligence, 2008
Abstract

Cited by 731 (32 self)
We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models, and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by ℓ1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as Eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses a certain threshold predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly, by exploiting the fact that these errors are often sparse w.r.t. the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm, and corroborate the above claims.
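The SRC decision rule summarized above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: a plain ISTA loop stands in for the ℓ1 solver, and the function name, λ, and iteration counts are assumptions.

```python
import numpy as np

def src_classify(A, labels, y, lam=0.1, n_iter=500):
    """Sparse Representation-based Classification (illustrative sketch).
    A: (d, n) dictionary whose columns are training samples,
    labels: class label of each column, y: (d,) test sample."""
    # ISTA to approximately solve min_x 0.5*||Ax - y||^2 + lam*||x||_1
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    # assign the class whose coefficients alone best reconstruct y
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        residuals[c] = np.linalg.norm(y - A @ np.where(mask, x, 0.0))
    return min(residuals, key=residuals.get)
```

With a toy dictionary whose class-0 columns lie near one axis and class-1 columns near another, the rule picks the class with the smallest class-restricted residual.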
Sparse representation for signal classification
In Adv. NIPS, 2006
Abstract

Cited by 64 (0 self)
In this paper, the application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) to signal classification is discussed. Searching for the sparse representation of a signal over an overcomplete dictionary is achieved by optimizing an objective function that includes two terms: one that measures the signal reconstruction error and another that measures the sparsity. This objective function works well in applications where signals need to be reconstructed, like coding and denoising. On the other hand, discriminative methods, such as linear discriminant analysis (LDA), are better suited for classification tasks. However, discriminative methods are usually sensitive to corruption in signals because they lack the properties crucial for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of discriminative methods with the reconstruction property and the sparsity of the sparse representation, which enables one to deal with signal corruptions: noise, missing data and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated with signal classification tasks, showing that the proposed approach outperforms the standard discriminative methods and the standard sparse representation in the case of corrupted signals.
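The two-term objective described above is conventionally written as follows (the symbols D for the dictionary, x for the coefficients, and λ for the sparsity weight are standard choices, not taken from the abstract):

```latex
\min_{x} \; \underbrace{\lVert y - D x \rVert_2^2}_{\text{reconstruction error}}
\;+\; \lambda \, \underbrace{\lVert x \rVert_1}_{\text{sparsity}}
```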
Support vector machines in face recognition with occlusions
 in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition
Abstract

Cited by 22 (0 self)
Support Vector Machines (SVMs) are among the most useful techniques for classification problems. One clear example is face recognition. However, an SVM cannot be applied when the feature vectors defining our samples have missing entries. This is clearly the case in face recognition when occlusions are present in the training and/or testing sets. When k features are missing in a sample vector of class 1, these define a k-dimensional affine subspace. The goal of the SVM is to maximize the margin between the vectors of class 1 and class 2 on those dimensions with no missing elements and, at the same time, maximize the margin between the vectors in class 2 and the affine subspace of class 1. This second term of the SVM criterion minimizes the overlap between the classification hyperplane and the subspace of solutions in class 1, because we do not know which values in this subspace a test vector can take. The hyperplane minimizing this overlap is obviously the one parallel to the missing dimensions. However, this condition is too restrictive, because its solution will generally contradict that obtained when maximizing the margin of the visible data. To resolve this problem, we define a criterion which minimizes the probability of overlap. The resulting optimization problem can be solved efficiently, and we show how the global minimum of the error term is guaranteed under mild conditions. We provide extensive experimental results, demonstrating the superiority of the proposed approach over the state of the art.
Maximum correntropy criterion for robust face recognition
 IEEE Trans. Pattern Anal. Mach. Intell
Abstract

Cited by 22 (7 self)
In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art ℓ1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique that approximately maximizes the objective function in an alternating way, so that the complex optimization problem reduces to learning a sparse representation through a weighted linear least-squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition as compared to the related state-of-the-art methods. In particular, it shows that the proposed method can improve both recognition accuracy and receiver operating characteristic (ROC) curves, while the computational cost is much lower than the SRC algorithms. Index Terms—Information theoretic learning, correntropy, linear least squares, half-quadratic optimization, sparse representation, M-estimator, face recognition, occlusion and corruption.
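A crude sketch of the half-quadratic alternation the abstract describes: reweight residuals with a Gaussian kernel (so outliers get near-zero weight), then solve the resulting weighted nonnegative least-squares step, here by projected gradient. All names, the kernel width, and the iteration counts are assumptions, not the authors' implementation.

```python
import numpy as np

def correntropy_code(A, y, sigma=2.0, outer=10, inner=150):
    """Half-quadratic sketch: alternate Gaussian residual weights with a
    weighted nonnegative least-squares update (projected gradient)."""
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        r = y - A @ x
        w = np.exp(-r**2 / (2 * sigma**2))    # correntropy weights: big residual -> ~0
        H = A.T @ (A * w[:, None])            # weighted Gram matrix
        L = np.linalg.norm(H, 2) + 1e-9       # step size from its spectral norm
        for _ in range(inner):
            g = -A.T @ (w * (y - A @ x))      # gradient of the weighted LS term
            x = np.maximum(x - g / L, 0.0)    # projection onto x >= 0
    return x
```

On a small overdetermined system with one grossly corrupted measurement, the reweighting effectively discards the outlier and recovers the clean solution.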
Face recognition with contiguous occlusion using markov random fields
 in Proceedings of IEEE International Conference on Computer Vision, 2009
Abstract

Cited by 17 (5 self)
Partially occluded faces are common in many applications of face recognition. While algorithms based on sparse representation have demonstrated promising results, they achieve their best performance on occlusions that are not spatially correlated (i.e., random pixel corruption). We show that such sparsity-based algorithms can be significantly improved by harnessing prior knowledge about the pixel error distribution. We show how a Markov Random Field model for spatial continuity of the occlusion can be integrated into the computation of a sparse representation of the test image with respect to the training images. Our algorithm efficiently and reliably identifies the corrupted regions and excludes them from the sparse representation. Extensive experiments on both laboratory and real-world datasets show that our algorithm tolerates much larger fractions and varieties of occlusion than current state-of-the-art algorithms.
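The paper performs true MRF inference (e.g., via graph cuts). Purely as a toy illustration of the alternation it describes — estimate the error support, smooth it for spatial continuity, refit on the trusted pixels — here is a 1-D stand-in that uses a majority vote over neighbors instead of an MRF, with the assumed clean-pixel fraction as a parameter:

```python
import numpy as np

def occlusion_aware_fit(A, y, n_rounds=5, clean_frac=70.0):
    """Toy stand-in for MRF-regularized error-support estimation.
    A: (d, k) training matrix, y: (d,) test signal with a contiguous
    occlusion; clean_frac: assumed percentage of uncorrupted pixels."""
    trusted = np.ones(y.size, dtype=bool)
    coef = np.zeros(A.shape[1])
    for _ in range(n_rounds):
        # refit using only currently trusted pixels
        coef, *_ = np.linalg.lstsq(A[trusted], y[trusted], rcond=None)
        err = np.abs(y - A @ coef)
        raw = err <= np.percentile(err, clean_frac)   # pixels the model explains
        # neighbor majority vote: encourages a spatially contiguous support
        pad = np.pad(raw.astype(float), 1, mode="edge")
        trusted = (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0 > 0.5
    return coef, trusted
```

On a constant signal with a contiguous block of corrupted entries, the alternation isolates the occluded block and fits the clean pixels exactly.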
Outlier Detection with the Kernelized Spatial Depth Function
, 2008
Abstract

Cited by 16 (3 self)
Statistical depth functions provide a “center-outward ordering” of multidimensional data from the “deepest” point. In this sense, depth functions can measure the “extremeness” or “outlyingness” of a data point with respect to a given data set. Hence they can detect outliers – observations that appear extreme relative to the rest of the observations. Of the various statistical depths, the spatial depth is especially appealing because of its computational efficiency and mathematical tractability. In this article, we propose a novel statistical depth, the kernelized spatial depth (KSD), which generalizes the spatial depth via positive definite kernels. By choosing a proper kernel, the KSD can capture the local structure of a data set where the spatial depth fails. We demonstrate this on the half-moon data and the ring-shaped data. Based on the KSD, we propose a novel outlier detection algorithm, by which an observation with a depth value less than a threshold is declared an outlier. The proposed algorithm is simple in structure: the threshold is the only parameter for a given kernel. It applies to a one-class learning setting, in which “normal” observations are given as the training data, as well as to a missing-label scenario where the training set consists of a mixture of normal observations and outliers with unknown labels. We give upper bounds on the false alarm probability of a depth-based detector. These upper bounds can be used to determine the threshold. We perform extensive experiments on synthetic data and data sets from real applications. The proposed outlier detector is compared with existing methods. The KSD outlier detector demonstrates competitive performance.
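The plain (non-kernelized) spatial depth, which the KSD generalizes, is easy to sketch: it is one minus the norm of the average unit vector from the data points to the query point, so a central point gets depth near 1 and a far-away point gets depth near 0. This is only the base construction; the KSD replaces the Euclidean geometry here with kernel-induced distances.

```python
import numpy as np

def spatial_depth(x, X):
    """Spatial depth of point x (d,) w.r.t. data X (n, d)."""
    diffs = x - X                                 # vectors from each datum to x
    norms = np.linalg.norm(diffs, axis=1)
    keep = norms > 1e-12                          # skip exact coincidences
    units = diffs[keep] / norms[keep, None]       # unit directions
    return 1.0 - np.linalg.norm(units.mean(axis=0))

def is_outlier(x, X, threshold=0.1):
    """Depth-based detector: low depth => outlier."""
    return spatial_depth(x, X) < threshold
```

For four points placed symmetrically on the axes, the center has depth exactly 1, while a distant point has depth near 0 and is flagged.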
Composite Binary Losses
, 2009
Abstract

Cited by 16 (9 self)
We study losses for binary classification and class probability estimation and extend the understanding of them from margin losses to general composite losses, which are the composition of a proper loss with a link function. We characterise when margin losses can be proper composite losses, explicitly show how to determine a symmetric loss in full from half of one of its partial losses, introduce an intrinsic parametrisation of composite binary losses and give a complete characterisation of the relationship between proper losses and “classification calibrated” losses. We also consider the question of the “best” surrogate binary loss. We introduce a precise notion of “best” and show there exist situations where two convex surrogate losses are incommensurable. We provide a complete explicit characterisation of the convexity of composite binary losses in terms of the link function and the weight function associated with the proper loss which make up the composite loss. This characterisation suggests new ways of “surrogate tuning”. Finally, in an appendix we present some new algorithm-independent results on the relationship between properness, convexity and robustness to misclassification noise for binary losses and show that all convex proper losses are non-robust to misclassification noise.
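A concrete instance of the proper-loss-plus-link composition described above: the log loss (a proper loss for class probability estimation) composed with the sigmoid (the inverse of the logit link) recovers the familiar logistic margin loss log(1 + e^(−yv)). This is a standard textbook example chosen for illustration, not one worked in the abstract.

```python
import math

def sigmoid(v):
    """Inverse of the logit link: maps a real score to a probability."""
    return 1.0 / (1.0 + math.exp(-v))

def log_loss(y, p):
    """Proper loss for class probability estimation, y in {0, 1}."""
    return -math.log(p) if y == 1 else -math.log(1.0 - p)

def composite_loss(y, v):
    """Composite loss: proper loss composed with the inverse link."""
    return log_loss(y, sigmoid(v))

def logistic_margin(y, v):
    """Margin form of the logistic loss, y in {-1, +1}."""
    return math.log1p(math.exp(-y * v))
```

The two forms agree: composite_loss(1, v) equals logistic_margin(+1, v), and composite_loss(0, v) equals logistic_margin(−1, v).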
Face Recognition with Occlusions in the Training and Testing Sets
Abstract

Cited by 15 (1 self)
Partial occlusions in face images pose a great problem for most face recognition algorithms. Several solutions to this problem have been proposed over the years – ranging from dividing the face image into a set of local regions to sophisticated statistical methods. In the present paper, we pose the problem as one of reconstruction. In this approach, each test image is described as a linear combination of the training samples in each class. The class samples providing the best reconstruction determine the class label. Here, “best reconstruction” means the reconstruction providing the smallest matching error when an appropriate metric is used to compare the reconstructed and test images. A key point in our formulation is to base this reconstruction solely on the visible data in the training and testing sets. This allows partial occlusions in both the training and testing samples, whereas previous methods dealt only with occlusions in the testing set. We show extensive experimental results using a large variety of comparative studies, demonstrating the superiority of the proposed approach over the state of the art.
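The per-class reconstruction rule restricted to visible pixels can be sketched as follows; plain Euclidean error stands in for the paper's “appropriate metric”, and the function and variable names are assumptions.

```python
import numpy as np

def classify_visible(class_samples, y, visible):
    """class_samples: dict mapping label -> (d, n_c) matrix whose columns are
    that class's training images; y: (d,) test image; visible: (d,) boolean
    mask of unoccluded pixels. Returns the label whose samples reconstruct
    the visible part of y with the smallest error."""
    best_label, best_err = None, np.inf
    for label, A in class_samples.items():
        Av, yv = A[visible], y[visible]          # restrict to visible rows
        coef, *_ = np.linalg.lstsq(Av, yv, rcond=None)
        err = np.linalg.norm(yv - Av @ coef)     # matching error on visible data
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

Because the occluded entries are simply dropped from the fit, a corrupted pixel cannot pull the reconstruction toward the wrong class.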
The Complete GaborFisher Classifier for Robust Face Recognition
Abstract

Cited by 10 (0 self)
This paper develops a novel face recognition technique