Results 1 – 8 of 8
Discriminative common vectors for face recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
Abstract

Cited by 67 (7 self)
In face recognition tasks, the dimension of the sample space is typically larger than the number of samples in the training set. As a consequence, the within-class scatter matrix is singular and the Linear Discriminant Analysis (LDA) method cannot be applied directly. This problem is known as the “small sample size” problem. In this paper, we propose a new face recognition method called the Discriminative Common Vector method based on a variation of Fisher’s Linear Discriminant Analysis for the small sample size case. Two different algorithms are given to extract the discriminative common vectors representing each person in the training set of the face database. One algorithm uses the within-class scatter matrix of the samples in the training set while the other uses the subspace methods and the Gram-Schmidt orthogonalization procedure to obtain the discriminative common vectors. Then, the discriminative common vectors are used for classification of new faces. The proposed method yields an optimal solution for maximizing the modified Fisher’s Linear Discriminant criterion given in the paper. Our test results show that the Discriminative Common Vector method is superior to other methods in terms of recognition accuracy, efficiency, and numerical stability.
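The subspace/Gram–Schmidt route described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction under the stated setup (dimension larger than sample count, singular within-class scatter), not the authors' code; the function name and interface are invented for the example.

```python
import numpy as np

def discriminative_common_vectors(X, y):
    """Sketch of the DCV idea: project each class onto the null space of the
    within-class scatter, then take the principal directions of the resulting
    common vectors. X: (n, d) samples, y: class labels."""
    classes = np.unique(y)
    diffs = []
    for c in classes:
        Xc = X[y == c]
        diffs.append(Xc[1:] - Xc[0])        # within-class differences span range(S_w)
    # Orthonormal basis of range(S_w); QR = Gram-Schmidt in exact arithmetic.
    Q, _ = np.linalg.qr(np.vstack(diffs).T)
    # Any sample of a class gives the same null-space projection: its common vector.
    common = np.array([X[y == c][0] - Q @ (Q.T @ X[y == c][0]) for c in classes])
    # Final projection: principal directions of the common vectors' scatter.
    mean = common.mean(axis=0)
    U, s, _ = np.linalg.svd((common - mean).T, full_matrices=False)
    W = U[:, :len(classes) - 1]
    return common, W
```

A new face would be classified by projecting it the same way and picking the nearest common vector.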
An optimization criterion for generalized discriminant analysis on undersampled problems
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004
Abstract

Cited by 28 (8 self)
Abstract—An optimization criterion is presented for discriminant analysis. The criterion extends the optimization criteria of the classical Linear Discriminant Analysis (LDA) through the use of the pseudoinverse when the scatter matrices are singular. It is applicable regardless of the relative sizes of the data dimension and sample size, overcoming a limitation of classical LDA. The optimization problem can be solved analytically by applying the Generalized Singular Value Decomposition (GSVD) technique. The pseudoinverse has been suggested and used for undersampled problems in the past, where the data dimension exceeds the number of data points. The criterion proposed in this paper provides a theoretical justification for this procedure. An approximation algorithm for the GSVD-based approach is also presented. It reduces the computational complexity by finding subclusters of each cluster and uses their centroids to capture the structure of each cluster. This reduced problem yields much smaller matrices to which the GSVD can be applied efficiently. Experiments on text data, with up to 7,000 dimensions, show that the approximation algorithm produces results that are close to those produced by the exact algorithm. Index Terms—Classification, clustering, dimension reduction, generalized singular value decomposition, linear discriminant analysis, text mining.
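The pseudoinverse idea the criterion justifies can be illustrated directly: replace the inverse of the (singular) total scatter with the Moore–Penrose pseudoinverse. This sketch is illustrative only; the paper solves the criterion exactly via the GSVD, which is not shown here, and the function name is invented.

```python
import numpy as np

def pseudoinverse_lda(X, y, k):
    """Top-k discriminant directions from the eigenvectors of pinv(S_t) @ S_b,
    well defined even when S_t is singular (d > n)."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mu - mean, mu - mean)   # between-class scatter
    St = (X - mean).T @ (X - mean)                       # total scatter (singular if d > n)
    vals, vecs = np.linalg.eig(np.linalg.pinv(St) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:k]].real
```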
Efficient model selection for regularized linear discriminant analysis
In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, 2006
Abstract

Cited by 6 (1 self)
Classical Linear Discriminant Analysis (LDA) is not applicable for small sample size problems due to the singularity of the scatter matrices involved. Regularized LDA (RLDA) provides a simple strategy to overcome the singularity problem by applying a regularization term, which is commonly estimated via cross-validation from a set of candidates. However, cross-validation may be computationally prohibitive when the candidate set is large. An efficient algorithm for RLDA is presented that computes the optimal transformation of RLDA for a large set of parameter candidates, with approximately the same cost as running RLDA a small number of times. Thus it facilitates efficient model selection for RLDA. An intrinsic relationship between RLDA and Uncorrelated LDA (ULDA), which was recently proposed for dimension reduction, is also presented.
Face Recognition by Regularized Discriminant Analysis
Abstract

Cited by 5 (0 self)
Abstract—When the feature dimension is larger than the number of samples, the small sample-size problem occurs; it is of great concern within the face recognition community. We point out that optimizing the Fisher index in linear discriminant analysis does not necessarily give the best performance for a face recognition system. We propose a new regularization scheme. The proposed method is evaluated using the Olivetti Research Laboratory database, the Yale database, and the FERET database. Index Terms—Face recognition, optimization, regularized discriminant analysis (RDA), small sample-size problem.
Boosting Kernel Discriminative Common Vectors for Face Recognition, 2009
Abstract

Cited by 1 (0 self)
Problem statement: Kernel Discriminative Common Vector (KDCV) was one of the most effective nonlinear techniques for feature extraction from high-dimensional data, including images and text data. Approach: This study presented a new algorithm called Boosting Kernel Discriminative Common Vector (BKDCV) to further improve the overall performance of KDCV by integrating the boosting and KDCV techniques. Results: In BKDCV, the feature selection and the classifier training were conducted by KDCV and AdaBoost.M2, respectively. To reduce the dependency between classifier outputs and to speed up the learning, each classifier was trained in a different feature space, obtained by applying KDCV to a small set of hard-to-classify training samples. The proposed BKDCV method possessed several appealing properties. First, like all kernel methods, it handled nonlinearity in a disciplined manner. Second, by introducing pairwise class discriminant information into the discriminant criterion, it further increased the classification accuracy. Third, by extracting significant discriminant information from the within-class scatter space, it also effectively contended with the small sample size problem. Fourth, it constituted a strong ensemble-based KDCV framework by taking advantage of the boosting and KDCV techniques. Conclusion: This new method was applied to the Extended Yale B face database and achieved better classification accuracy. Experimental results demonstrated the promising performance of the proposed method as compared to the other methods.
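The boosting loop around per-round classifiers can be sketched generically. This is a simplified SAMME-style multiclass AdaBoost, not the AdaBoost.M2 procedure the abstract names, and a weighted nearest-mean classifier stands in for the KDCV feature-extraction stage; all names are invented for the example.

```python
import numpy as np

class NearestMean:
    """Weighted nearest-mean classifier: toy stand-in for the KDCV stage."""
    def fit(self, X, y, w):
        self.classes = np.unique(y)
        self.means = np.array([np.average(X[y == c], axis=0, weights=w[y == c])
                               for c in self.classes])
        return self

    def predict(self, X):
        dists = np.linalg.norm(X[:, None, :] - self.means[None, :, :], axis=2)
        return self.classes[np.argmin(dists, axis=1)]

def boost(X, y, rounds=5):
    """SAMME-style boosting: reweight hard-to-classify samples each round."""
    n, K = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)
    learners, alphas = [], []
    for _ in range(rounds):
        h = NearestMean().fit(X, y, w)
        pred = h.predict(X)
        err = float(np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10))
        alpha = np.log((1 - err) / err) + np.log(K - 1)  # SAMME learner weight
        w *= np.exp(alpha * (pred != y))                 # up-weight mistakes
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)

    classes = np.unique(y)
    def predict(Xt):
        votes = np.zeros((len(Xt), K))
        for h, a in zip(learners, alphas):
            p = h.predict(Xt)
            for k, c in enumerate(classes):
                votes[p == c, k] += a
        return classes[np.argmax(votes, axis=1)]
    return predict
```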
Two-stage Methods for Linear Discriminant Analysis: Equivalent Results at a Lower Cost
Abstract

Cited by 1 (0 self)
Linear discriminant analysis (LDA) has been used for decades to extract features that preserve class separability. It is classically defined as an optimization problem involving covariance matrices that represent the scatter within and between clusters. The requirement that one of these matrices be nonsingular restricts its application to data sets in which the dimension of the data does not exceed the sample size. Recently, the applicability of LDA has been extended by using the generalized singular value decomposition (GSVD) to circumvent the nonsingularity requirement. Alternatively, many studies have taken a two-stage approach in which the first stage reduces the dimension of the data enough so that it can be followed by classical LDA. In this paper, we justify a two-stage approach that uses either principal component analysis or latent semantic indexing before the LDA/GSVD method. We show that it is equivalent to single-stage LDA/GSVD. We also present a computationally simpler choice for the first stage, and conclude with a discussion of the relative merits of each approach.
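The two-stage construction can be sketched as follows: stage 1 projects onto the principal components of the centered data (removing the null space of the total scatter), and stage 2 runs LDA in the reduced space. This is an illustration of the construction the paper analyzes, not its GSVD algorithm; a pseudoinverse covers any remaining singularity in stage 2, and the function name is invented.

```python
import numpy as np

def two_stage_lda(X, y, k):
    """PCA-then-LDA: reduce to the rank of the centered data, then run LDA
    in the reduced space; returns the combined (d, k) transformation."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Stage 1: PCA via SVD, keeping every direction with nonzero variance.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[s > 1e-10 * s[0]].T            # (d, r) projection onto range of S_t
    Z = Xc @ P                            # reduced data, r <= n - 1 dimensions
    # Stage 2: LDA in the r-dimensional space (pinv: S_w may still be singular).
    r = Z.shape[1]
    Sw = np.zeros((r, r))
    Sb = np.zeros((r, r))
    zmean = Z.mean(axis=0)
    for c in np.unique(y):
        Zc = Z[y == c]
        mu = Zc.mean(axis=0)
        Sw += (Zc - mu).T @ (Zc - mu)
        Sb += len(Zc) * np.outer(mu - zmean, mu - zmean)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    G = vecs[:, order[:k]].real           # (r, k) second-stage transformation
    return P @ G                          # composed first- and second-stage maps
```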
Implementation of a Feature Extraction Module using Two-Dimensional Maximum Margin Criteria which removes …
Abstract
Illumination variation is a challenging problem in the face recognition research area. The same person can appear greatly different under varying lighting conditions. This paper presents a face recognition system which is invariant to illumination variations. A face recognition system which uses Linear Discriminant Analysis (LDA) as the feature extractor suffers from the Small Sample Size (SSS) problem. It consists of …
Abstract
In some pattern recognition tasks, the dimension of the sample space is larger than the number of samples in the training set. This is known as the “small sample size problem”. The Linear Discriminant Analysis (LDA) techniques cannot be applied directly to the small sample size case. The small sample size problem is also encountered when kernel approaches are used for recognition. In this paper we attempt to answer the question of “How should one choose the optimal projection vectors for feature extraction in the small sample size case?” Based on our findings, we propose a new method called the Kernel Discriminative Common Vector (Kernel DCV) method. In this method, we first nonlinearly map the original input space to an implicit higher-dimensional feature space, in which the data are hoped to be linearly separable. Then, the optimal projection vectors are computed in this transformed space. The proposed method yields an optimal solution for maximizing a modified Fisher’s Linear Discriminant criterion, discussed in the paper. Thus, under certain conditions, a 100% recognition rate is guaranteed for the training set samples. Experiments on test data also show that in many situations the generalization performance of the proposed method compares favorably with other kernel approaches. Index Terms: Discriminative common vectors, feature extraction, Fisher’s linear discriminant analysis, kernel methods, small sample size.
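The guaranteed-training-accuracy property can be illustrated with an explicit feature map standing in for the implicit kernel-induced one (the actual method works through kernel evaluations and never forms feature vectors explicitly). Every name below is invented for the sketch.

```python
import numpy as np

def poly2_map(X):
    """Explicit degree-2 feature map: a toy stand-in for the implicit map
    induced by a kernel."""
    n, d = X.shape
    quad = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)
    return np.hstack([X, quad])

def kernel_dcv_toy(X, y):
    """Common vector of each class in the mapped feature space, via projection
    onto the null space of the within-class scatter there."""
    Phi = poly2_map(X)
    diffs = [Phi[y == c][1:] - Phi[y == c][0] for c in np.unique(y)]
    Q, _ = np.linalg.qr(np.vstack(diffs).T)     # basis of range(S_w) in feature space
    common = np.array([Phi[y == c][0] - Q @ (Q.T @ Phi[y == c][0])
                       for c in np.unique(y)])
    return Q, common

def classify(x, Q, common, classes):
    """Map, apply the same null-space projection, pick the nearest common vector."""
    phi = poly2_map(x[None, :])[0]
    p = phi - Q @ (Q.T @ phi)
    return classes[np.argmin(np.linalg.norm(common - p, axis=1))]
```

Because every training sample of a class lands exactly on that class's common vector after projection, the training set is classified perfectly, mirroring the 100% training-rate property stated above.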