## Discriminant common vectors versus neighbourhood components analysis and Laplacianfaces: A comparative study in small sample size problem

Venue: | Image and Vision Computing |

Citations: | 4 - 4 self |

### BibTeX

@ARTICLE{Liu_discriminantcommon,

author = {Jun Liu and Songcan Chen},

title = {Discriminant common vectors versus neighbourhood components analysis and Laplacianfaces: A comparative study in small sample size problem},

journal = {Image and Vision Computing},

year = {2006},

volume = {24}

}

### Abstract

Discriminant Common Vectors (DCV), Neighbourhood Components Analysis (NCA) and Laplacianfaces (LAP) are three recently proposed methods that can effectively learn linear projection matrices for dimensionality reduction in face recognition, where the dimension of the sample space is typically larger than the number of samples in the training set and consequently the so-called small sample size (SSS) problem arises. The three methods obtain their respective projection matrices from different objective functions, and all claim superiority over methods such as Principal Component Analysis (PCA) and PCA plus Linear Discriminant Analysis (PCA+LDA) in terms of classification accuracy. However, no comparative study among them has yet been carried out in the literature. In this paper, we compare the three methods in face recognition (and, more generally, in the SSS problem), and argue that the projection matrix yielded by DCV is the optimal solution to both NCA and LAP in terms of their respective objective functions, whereas neither NCA nor LAP may reach its own optimal solution. In addition, we show that DCV is more efficient than both NCA and LAP for both linear dimensionality reduction and the subsequent classification in the SSS problem. Finally, experiments are conducted on the ORL, AR and YALE face databases to verify our arguments and to offer some insights for future study.
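The SSS setting the abstract describes, where the dimension d of the sample space exceeds the number M of training samples, can be illustrated in a few lines of NumPy: the total scatter matrix of M centred samples has rank at most M-1, so it is singular whenever d > M. A minimal sketch with made-up sizes:

```python
import numpy as np

# Illustrative sizes only: d-dimensional sample space, M training samples, d >> M.
rng = np.random.default_rng(0)
d, M = 1024, 40
X = rng.normal(size=(d, M))              # columns play the role of vectorised face images

# Centre the samples; the total scatter matrix S_t = Xc @ Xc.T is d x d,
# but rank(S_t) = rank(Xc) <= M - 1, so S_t is singular whenever d > M.
Xc = X - X.mean(axis=1, keepdims=True)
rank = np.linalg.matrix_rank(Xc)
print(rank)                              # at most M - 1 = 39
```

This singularity is what forces methods like PCA+LDA, DCV, NCA and LAP to avoid inverting scatter matrices directly.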

### Citations

9946 | Statistical Learning Theory - Vapnik - 1998 |

2386 | Support-vector networks - Cortes, Vapnik - 1995 |

1654 | Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection - Belhumeur, Hespanha, et al. |
Citation Context: ...YDY^T is a d×d matrix, which implies that YDY^T is singular. To overcome the singularity of YDY^T, LAP employs a procedure similar to the PCA+LDA or the Fisherface method proposed by Belhumeur et al. [5], namely applying a PCA projection first. More specifically, LAP operates as follows: Step 1: PCA projection. Project the face images yi, i=1, 2, …, M to the PCA subspace by keeping the 98 percent inf... |
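LAP's Step 1, a PCA projection that retains 98 percent of the variance, can be sketched as follows; `pca_projection` and the 100×20 toy data are illustrative names and sizes, not the authors' code:

```python
import numpy as np

def pca_projection(X, energy=0.98):
    """Project the columns of X (d x M) onto the leading principal
    components that retain `energy` of the total variance (LAP's Step 1)."""
    Xc = X - X.mean(axis=1, keepdims=True)
    # Thin SVD: columns of U span the PCA subspace, s are singular values.
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2
    # Smallest k whose leading components reach the requested energy fraction.
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), energy)) + 1
    W = U[:, :k]                         # d x k PCA projection matrix
    return W.T @ Xc, W

# Toy data standing in for M = 20 vectorised face images of dimension d = 100.
Y, W = pca_projection(np.random.default_rng(1).normal(size=(100, 20)))
```

Working with the thin SVD of the d×M centred data matrix (rather than the d×d scatter matrix) keeps the computation feasible when d is large.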

1422 | A training algorithm for optimal margin classifiers - Boser, Guyon, Vapnik - 1992 |

1045 | Face recognition using eigenfaces - Turk, Pentland - 1991 |
Citation Context: ...In face recognition, we usually employ appearance-based methods [1, 2]. One primary advantage of appearance-based methods is that it is not necessary to create representations or models for face images since, for a given face image, its model is now implicitly defined i... |

1007 | Visual learning and recognition of 3-D objects from appearance - Murase, Nayar - 1995 |
Citation Context: ...In face recognition, we usually employ appearance-based methods [1, 2]. One primary advantage of appearance-based methods is that it is not necessary to create representations or models for face images since, for a given face image, its model is now implicitly defined i... |

462 | Laplacian eigenmaps and spectral techniques for embedding and clustering - Belkin, Niyogi - 2002 |

432 | Using discriminant eigenfeatures for image retrieval - Swets, Weng - 1996 |

322 | PCA versus LDA - Martinez, Kak - 2001 |
Citation Context: ...of appearance-based methods is that it is not necessary to create representations or models for face images since, for a given face image, its model is now implicitly defined in the face image itself [3]. When using appearance-based methods, we usually represent an image of size r×c pixels by a vector in a d-dimensional space, where d=rc. Although such an appearance-based representation is simple in ... |

234 | Locality preserving projections - He, Niyogi |
Citation Context: ...serving the locality structure of the image space. To this end, it models a manifold [13-15] structure by a nearest-neighbor graph, constructs a face subspace by Locality Preserving Projections (LPP) [16], and performs dimensionality reduction by a set of feature images called Laplacianfaces. NCA [18] aims at learning a Mahalanobis distance measure to be used in the k Nearest Neighbor (KNN) classifica... |

225 | Neighbourhood components analysis - Goldberger, Roweis, et al. |

210 | Face recognition using Laplacianfaces - He, Yan, et al. - 2005 |
Citation Context: ...s to the so-called small sample size (SSS) problem. A common way to resolve this problem is to use dimensionality reduction techniques. Discriminant Common Vectors (DCV) [7, 8], Laplacianfaces (LAP) [17] and Neighbourhood Components Analysis (NCA) [18] are three recently proposed methods which can effectively learn linear projection matrices for dimensionality reduction in face recognition. DCV [7, 8... |

170 | A new LDA-based face recognition system which can solve the small sample size problem - Chen, Liao, et al. - 2000 |
Citation Context: ...uch Mahalanobis distance down to learning a linear projection (or transformation) matrix, and at the same time avoids the inverse (footnote: we have proved in [9] that such null-space-based methods [10] as Generalised K-L Expansion (GKLE) [11], PCA plus Null Space (PNS) [12] and DCV are in fact equivalent, so our discussion of DCV in this paper can naturally be extended to both GKLE and PNS) opera... |

124 | The AR Face Database - Martinez, Benavente - 1998 |
Citation Context: ...nd illumination. Fig. 2 shows the eleven images of one person from this dataset. [Fig. 2: Eleven images from one person in the YALE face database] The AR [26] face dataset consists of over 3200 frontal face images of 126 subjects. Each subject has 26 different images which were grabbed in two different sessions separated by two weeks, 13 imag... |

96 | Nonlinear dimensionality reduction by locally linear embedding - Roweis, Saul - 2000 |

68 | Discriminative common vectors for face recognition - Cevikalp, Neamtu, et al. - 2005 |
Citation Context: ...n the training set which leads to the so-called small sample size (SSS) problem. A common way to resolve this problem is to use dimensionality reduction techniques. Discriminant Common Vectors (DCV) [7, 8], Laplacianfaces (LAP) [17] and Neighbourhood Components Analysis (NCA) [18] are three recently proposed methods which can effectively learn linear projection matrices for dimensionality reduction in ... |

44 | Robust coding scheme for indexing and retrieval from large face databases - Liu, Wechsler |

29 | Manifold of facial expression - Chang, Hu, et al. - 2003 |

27 | A two-stage linear discriminant analysis via QR-decomposition - Ye, Li - 2005 |
Citation Context: ...composition to matrix [Hw Hb] which is of size d by M-1. Thus the space complexity for DCV in the process of computation is O(dM). 2) For LAP, it first applies a PCA stage whose space complexity is O(dM) [27], and then LAP operates on matrices whose sizes are less than d by M for calculating its projection matrix. Thus the space complexity for LAP is also O(dM). 3) In NCA's Step 1, it needs to store A, ... |

17 | The common vector approach and its relation to principal component analysis - Gulmezoglu, Dzhafarov, et al. |
Citation Context: ...ors (DCV) Before describing DCV, we first introduce the idea of common vectors, from which DCV originated. The idea of common vectors was originally introduced for isolated word recognition problems [19, 20] in the case where the number of samples in each class is less than or equal to the dimensionality of the sample space. These approaches extract the common properties of classes in the training set by eli... |

8 | A novel approach to isolated word recognition - Gülmezoğlu, Dzhafarov, et al. - 1999 |
Citation Context: ...ors (DCV) Before describing DCV, we first introduce the idea of common vectors, from which DCV originated. The idea of common vectors was originally introduced for isolated word recognition problems [19, 20] in the case where the number of samples in each class is less than or equal to the dimensionality of the sample space. These approaches extract the common properties of classes in the training set by eli... |

5 | A generalised K-L expansion method which can deal with small sample size and high-dimensional problems - Yang, Zhang, et al. - 2003 |
Citation Context: ...a linear projection (or transformation) matrix, and at the same time avoids the inverse (footnote: we have proved in [9] that such null-space-based methods [10] as Generalised K-L Expansion (GKLE) [11], PCA plus Null Space (PNS) [12] and DCV are in fact equivalent, so our discussion of DCV in this paper can naturally be extended to both GKLE and PNS) operation of the matrix in calculating traditi... |

4 | Kernel Methods for Pattern Analysis - Shawe-Taylor, Cristianini - 2004 |
Citation Context: ...m. Favored by this revealed essence, DCV can be understood more clearly than via the three steps described in section 2.1 and can easily be extended to its nonlinear version using kernel QR decomposition [25]. In addition, again based on this revealed essence, we can verify that when the training samples are linearly independent, the extracted discriminant common vectors for different classes are differen... |

2 | Face recognition by using discriminative common vectors - Cevikalp, Wilke - 2004 |
Citation Context: ...n the training set which leads to the so-called small sample size (SSS) problem. A common way to resolve this problem is to use dimensionality reduction techniques. Discriminant Common Vectors (DCV) [7, 8], Laplacianfaces (LAP) [17] and Neighbourhood Components Analysis (NCA) [18] are three recently proposed methods which can effectively learn linear projection matrices for dimensionality reduction in ... |

2 | Solving the small sample size problem of LDA - Huang, Liu, et al. - 2002 |
Citation Context: ...rmation) matrix, and at the same time avoids the inverse (footnote: we have proved in [9] that such null-space-based methods [10] as Generalised K-L Expansion (GKLE) [11], PCA plus Null Space (PNS) [12] and DCV are in fact equivalent, so our discussion of DCV in this paper can naturally be extended to both GKLE and PNS) operation of the matrix in calculating traditional Mahalanobis distance metric... |

1 | Equivalences among different null space based feature extraction methods for the small sample size problem - Liu, Chen |