Results 1–10 of 29
Graph embedding and extension: A general framework for dimensionality reduction
 IEEE Trans. Pattern Anal. Mach. Intell.
, 2007
Abstract

Cited by 103 (17 self)
Abstract—Over the past few decades, a large family of algorithms—supervised or unsupervised; stemming from statistics or geometry theory—has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions. Index Terms—Dimensionality reduction, manifold learning, subspace learning, graph embedding framework.
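The linearized graph embedding described in this abstract reduces to a generalized eigenvalue problem over an intrinsic and a penalty graph Laplacian. A minimal sketch, assuming synthetic toy data and fully connected same-class/different-class graphs in place of the paper's k-NN intrinsic and marginal penalty graphs:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical toy data: 20 samples in 5 dimensions, two classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])  # (n, d)
y = np.array([0] * 10 + [1] * 10)

# Intrinsic graph W: connect same-class pairs (compactness to be minimized).
# Penalty graph Wp: connect different-class pairs (separability to keep).
W = (y[:, None] == y[None, :]).astype(float)
Wp = 1.0 - W
np.fill_diagonal(W, 0.0)
np.fill_diagonal(Wp, 0.0)

L = np.diag(W.sum(1)) - W      # intrinsic graph Laplacian
B = np.diag(Wp.sum(1)) - Wp    # penalty graph Laplacian

# Linearization: find w minimizing w^T X^T L X w subject to
# w^T X^T B X w = 1, i.e. a generalized symmetric eigenproblem.
A1 = X.T @ L @ X
A2 = X.T @ B @ X + 1e-6 * np.eye(X.shape[1])  # regularize for stability
evals, evecs = eigh(A1, A2)    # eigenvalues returned in ascending order
V = evecs[:, :2]               # two smallest eigenvectors -> 2-D embedding
Y = X @ V
print(Y.shape)                 # (20, 2)
```

With class-indicator graphs this objective coincides with an LDA-like criterion; the paper's MFA differs only in which pairs each graph connects.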
A Unified Framework for Subspace Face Recognition
 IEEE Trans. PAMI
, 2004
Abstract

Cited by 66 (24 self)
Abstract—PCA, LDA, and Bayesian analysis are the three most representative subspace face recognition approaches. In this paper, we show that they can be unified under the same framework. We first model face difference with three components: intrinsic difference, transformation difference, and noise. A unified framework is then constructed by using this face difference model and a detailed subspace analysis on the three components. We explain the inherent relationship among different subspace methods and their unique contributions to the extraction of discriminating information from the face difference. Based on the framework, a unified subspace analysis method is developed using PCA, Bayes, and LDA as three steps. A 3D parameter space is constructed using the three subspace dimensions as axes. Searching through this parameter space, we achieve better recognition performance than standard subspace methods.
Dual-space linear discriminant analysis for face recognition
 Proc. IEEE Conf. Computer Vision and Pattern Recognition
, 2004
Abstract

Cited by 54 (16 self)
Linear Discriminant Analysis (LDA) is a popular feature extraction technique for face recognition. However, it often suffers from the small sample size problem when dealing with the high-dimensional face data. Some approaches have been proposed to overcome this problem, but they are often unstable and have to discard some discriminative information. In this paper, a dual-space LDA approach for face recognition is proposed to take full advantage of the discriminative information in the face space. Based on a probabilistic visual model, the eigenvalue spectrum in the null space of the within-class scatter matrix is estimated, and discriminant analysis is simultaneously applied in the principal and null subspaces of the within-class scatter matrix. The two sets of discriminative features are then combined for recognition. It outperforms existing LDA approaches.
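The dual-space idea — splitting the face space by the eigen-spectrum of the within-class scatter and running discriminant analysis in both halves — can be sketched as below. The toy data, threshold, and single-direction extraction are illustrative assumptions, not the paper's probabilistic spectrum model:

```python
import numpy as np

# Hypothetical toy data: 3 classes, 4 samples each, in 10-D, so the
# within-class scatter Sw has rank at most n - C = 9 < 10 and therefore
# a non-trivial null space (the small sample size situation).
rng = np.random.default_rng(1)
means = rng.normal(0, 5, (3, 10))
X = np.vstack([m + rng.normal(0, 1, (4, 10)) for m in means])
y = np.repeat(np.arange(3), 4)
mu = X.mean(0)

Sw = sum((X[y == c] - X[y == c].mean(0)).T @ (X[y == c] - X[y == c].mean(0))
         for c in range(3))
Sb = sum(4 * np.outer(X[y == c].mean(0) - mu, X[y == c].mean(0) - mu)
         for c in range(3))

# Split the space by the eigen-spectrum of Sw (threshold is an assumption).
evals, evecs = np.linalg.eigh(Sw)          # ascending eigenvalues
null_space = evecs[:, evals < 1e-8]        # null space of Sw
principal  = evecs[:, evals >= 1e-8]       # principal space of Sw

# In the null space, within-class scatter vanishes, so maximizing the
# projected between-class scatter alone gives a discriminant direction.
w_null = null_space @ np.linalg.eigh(null_space.T @ Sb @ null_space)[1][:, -1]
print(null_space.shape[1])                 # at least 1 when n - C < d
```

Features from both subspaces would then be concatenated for recognition, which is the combination step the abstract refers to.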
Random Sampling LDA for Face Recognition
 Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR'04), Washington D.C., USA
, 2004
Abstract

Cited by 43 (8 self)
Linear Discriminant Analysis (LDA) is a popular feature extraction technique for face recognition. However, it often suffers from the small sample size problem when dealing with the high-dimensional face data. Fisherface and Null Space LDA (NLDA) are two conventional approaches to address this problem. But in many cases, these LDA classifiers are overfitted to the training set and discard some useful discriminative information. In this paper, by analyzing the different overfitting problems of the two kinds of LDA classifiers, we propose an approach using random subspace and bagging to improve them respectively. By random sampling on feature vectors and training samples, multiple stabilized Fisherface and NLDA classifiers are constructed. The two kinds of complementary classifiers are integrated using a fusion rule, so nearly all the discriminative information is preserved. We also apply this approach to the integration of multiple features. A robust face recognition system integrating shape, texture and Gabor responses is finally developed.
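The random subspace plus bagging construction with a fusion rule can be sketched with simple nearest-centroid weak classifiers standing in for the paper's Fisherface/NLDA components; the data, ensemble size, and majority-vote fusion here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical toy data: 2 classes in 50-D, 30 training samples each.
X0 = rng.normal(0, 1, (30, 50)); X1 = rng.normal(1, 1, (30, 50))
X = np.vstack([X0, X1]); y = np.array([0] * 30 + [1] * 30)

def nearest_centroid_fit(Xs, ys):
    return np.vstack([Xs[ys == c].mean(0) for c in (0, 1)])

def nearest_centroid_predict(centroids, Xs):
    d = ((Xs[:, None, :] - centroids[None]) ** 2).sum(-1)
    return d.argmin(1)

# Random subspace: each weak classifier sees a random subset of features.
# Bagging: each weak classifier sees a bootstrap resample of the samples.
votes = []
for _ in range(15):
    feats = rng.choice(50, size=20, replace=False)       # random subspace
    idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap sample
    c = nearest_centroid_fit(X[idx][:, feats], y[idx])
    votes.append(nearest_centroid_predict(c, X[:, feats]))

# Fusion rule: simple majority vote over the ensemble.
pred = (np.mean(votes, 0) > 0.5).astype(int)
acc = (pred == y).mean()
print(acc)
```

Each weak classifier is trained on a different random view, so the ensemble stabilizes the otherwise overfitting-prone single classifier, which is the effect both of these random sampling papers exploit.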
Random sampling for subspace face recognition
 International Journal of Computer Vision
, 2006
Abstract

Cited by 34 (14 self)
Abstract. Subspace face recognition often suffers from two problems: (1) the training sample set is small compared with the high dimensional feature vector; (2) the performance is sensitive to the subspace dimension. Instead of pursuing a single optimal subspace, we develop an ensemble learning framework based on random sampling on all three key components of a classification system: the feature space, training samples, and subspace parameters. Fisherface and Null Space LDA (NLDA) are two conventional approaches to address the small sample size problem. But in many cases, these LDA classifiers are overfitted to the training set and discard some useful discriminative information. By analyzing different overfitting problems for the two kinds of LDA classifiers, we use random subspace and bagging to improve them respectively. By random sampling on feature vectors and training samples, multiple stabilized Fisherface and NLDA classifiers are constructed and the two groups of complementary classifiers are integrated using a fusion rule, so nearly all the discriminative information is preserved. In addition, we further apply random sampling on parameter selection in order to overcome the difficulty of selecting optimal parameters in our algorithms. Then, we use the developed random sampling framework for the integration of multiple features. A robust random sampling face recognition system integrating shape, texture, and Gabor responses is finally constructed.
Graph embedding: a general framework for dimensionality reduction
 CVPR
, 2005
Abstract

Cited by 34 (5 self)
In the last few decades, a large family of algorithms—supervised or unsupervised, stemming from statistics or geometry theory—has been proposed to provide different solutions to the problem of dimensionality reduction. In this paper, beyond the different motivations of these algorithms, we propose a general framework, graph embedding along with its linearization and kernelization, which in theory reveals the underlying objective shared by most previous algorithms. It presents a unified perspective for understanding these algorithms; that is, each algorithm can be considered as the direct graph embedding or its linear/kernel extension of some specific graph characterizing certain statistical or geometric properties of a data set. Furthermore, this framework is a general platform for developing new dimensionality reduction algorithms. To this end, we propose a new supervised algorithm, Marginal Fisher Analysis (MFA), for dimensionality reduction by designing two graphs that characterize the intraclass compactness and interclass separability, respectively. MFA measures the intraclass compactness with the distance between each data point and its neighboring points of the same class, and measures the interclass separability with the class margins; thus it overcomes the limitations of the traditional Linear Discriminant Analysis algorithm in terms of data distribution assumptions and available projection directions. The toy problem on artificial data and the real face recognition experiments both show the superiority of our proposed MFA in comparison to LDA.
Gabor wavelets and General Discriminant Analysis for face identification and verification
, 2007
Hallucinating face by eigentransformation
 IEEE Trans. SMCC
, 2005
Cited by 20 (2 self)
Semi-Supervised Discriminant Analysis Using Robust Path-Based Similarity
 Proc. IEEE Conf. Computer Vision and Pattern Recognition
, 2008
Abstract

Cited by 8 (2 self)
Linear Discriminant Analysis (LDA), which works by maximizing the within-class similarity and minimizing the between-class similarity simultaneously, is a popular dimensionality reduction technique in pattern recognition and machine learning. In real-world applications when labeled data are limited, LDA does not work well. Under many situations, however, it is easy to obtain unlabeled data in large quantities. In this paper, we propose a novel dimensionality reduction method, called Semi-Supervised Discriminant Analysis (SSDA), which can utilize both labeled and unlabeled data to perform dimensionality reduction in the semi-supervised setting. Our method uses a robust path-based similarity measure to capture the manifold structure of the data and then uses the obtained similarity to maximize the separability between different classes. A kernel extension of the proposed method for nonlinear dimensionality reduction in the semi-supervised setting is also presented. Experiments on face recognition demonstrate the effectiveness of the proposed method.
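A path-based (minimax) similarity of the kind this abstract relies on can be computed with a Floyd–Warshall-style recursion: the effective distance between two points is the smallest possible "largest hop" over all paths connecting them. A minimal sketch on assumed toy points:

```python
import numpy as np

# Hypothetical toy data: 4 points on a line, 0-1-2-3, with unit gaps.
P = np.array([[0.0], [1.0], [2.0], [3.0]])
n = len(P)
D = np.abs(P - P.T)  # pairwise Euclidean distances as initial edge weights

# Minimax path distance: for each intermediate point k, a path i -> k -> j
# costs the larger of its two halves; keep the cheaper of the direct edge
# and the best detour. Chains of close neighbors thus become similar,
# which is how the measure captures manifold structure.
M = D.copy()
for k in range(n):
    M = np.minimum(M, np.maximum(M[:, k:k+1], M[k:k+1, :]))

print(M[0, 3])  # direct distance is 3.0, but the chain 0-1-2-3 gives 1.0
```

The resulting matrix M (or a similarity derived from it) would then feed the class-separability objective; the paper's robust variant additionally suppresses the influence of outlier points, which this sketch omits.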
A framework of 2D Fisher discriminant analysis: application to face recognition with small number of training samples
 in IEEE Conference on Computer Vision and Pattern Recognition
, 2005
Abstract

Cited by 8 (1 self)
A novel framework called 2D Fisher Discriminant Analysis (2D-FDA) is proposed to deal with the Small Sample Size (SSS) problem in conventional One-Dimensional Linear Discriminant Analysis (1D-LDA). Different from the 1D-LDA based approaches, 2D-FDA is based on 2D image matrices rather than column vectors, so the image matrix does not need to be transformed into a long vector before feature extraction. The advantage arising in this way is that the SSS problem no longer exists, because the between-class and within-class scatter matrices constructed in 2D-FDA are both of full rank. This framework contains unilateral and bilateral 2D-FDA. It is applied to face recognition where only a few training images exist for each subject. Both the unilateral and bilateral 2D-FDA achieve excellent performance on two public databases: the ORL database and the Yale face database B.
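The unilateral 2D-FDA scatter construction can be sketched directly on image matrices: the scatter matrices have the size of the image width, so a handful of samples already makes them full-rank. The toy "images", their dimensions, and the use of a generalized eigensolver are assumptions of this sketch:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical toy "images": two classes of 8x6 matrices, 3 samples each.
rng = np.random.default_rng(3)
A0 = [rng.normal(0, 1, (8, 6)) for _ in range(3)]
A1 = [rng.normal(2, 1, (8, 6)) for _ in range(3)]
classes = [A0, A1]
M = np.mean([a for c in classes for a in c], axis=0)  # global mean image

# 2D scatter matrices are built from image matrices without vectorization;
# they are only 6x6 here, so six samples suffice to make them full-rank.
Gb = sum(len(c) * (np.mean(c, 0) - M).T @ (np.mean(c, 0) - M)
         for c in classes)
Gw = sum((a - np.mean(c, 0)).T @ (a - np.mean(c, 0))
         for c in classes for a in c)

# Unilateral 2D-FDA: right-project each image A -> A @ W with the top
# generalized eigenvectors of (Gb, Gw).
evals, evecs = eigh(Gb, Gw)   # ascending order
W = evecs[:, ::-1][:, :2]     # two most discriminative directions
Y = A0[0] @ W
print(Y.shape)                # (8, 2)
```

The bilateral variant mentioned in the abstract would additionally learn a left projection acting on rows, applied as L.T @ A @ W.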