## Face Recognition Using Laplacianfaces (2005)


Venue: | IEEE Transactions on Pattern Analysis and Machine Intelligence |

Citations: | 203 - 23 self |

### BibTeX

```bibtex
@ARTICLE{He05facerecognition,
  author  = {Xiaofei He and Shuicheng Yan and Yuxiao Hu and Partha Niyogi and Hong-jiang Zhang},
  title   = {Face recognition using laplacianfaces},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year    = {2005},
  volume  = {27},
  pages   = {328--340}
}
```


### Abstract

We propose an appearance-based face recognition method called the Laplacianface approach. By using Locality Preserving Projections (LPP), the face images are mapped into a face subspace for analysis. Different from Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), which effectively see only the Euclidean structure of face space, LPP finds an embedding that preserves local information and obtains a face subspace that best detects the essential face manifold structure. The Laplacianfaces are the optimal linear approximations to the eigenfunctions of the Laplace-Beltrami operator on the face manifold. In this way, the unwanted variations resulting from changes in lighting, facial expression, and pose may be eliminated or reduced. Theoretical analysis shows that PCA, LDA, and LPP can be obtained from different graph models. We compare the proposed Laplacianface approach with Eigenface and Fisherface methods on three different face data sets. Experimental results suggest that the proposed Laplacianface approach provides a better representation and achieves lower error rates in face recognition.

Index Terms: Face recognition, principal component analysis, linear discriminant analysis, locality preserving projections, face manifold, subspace learning.
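The LPP construction summarized in the abstract (nearest-neighbor graph, graph Laplacian, linear projection) can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `lpp`, the neighborhood size `n_neighbors`, the heat-kernel width `t`, and the small regularization term are hypothetical choices, and in practice the paper first reduces the images with PCA before applying LPP.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_neighbors=5, t=1.0, n_components=2):
    """Minimal Locality Preserving Projections sketch.

    X: (n_samples, n_features) data matrix. Returns a projection
    matrix W of shape (n_features, n_components).
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    # k-nearest-neighbor graph with heat-kernel weights
    # S_ij = exp(-||x_i - x_j||^2 / t).
    S = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(sq[i])[1:n_neighbors + 1]   # skip self
        S[i, idx] = np.exp(-sq[i, idx] / t)
    S = np.maximum(S, S.T)                           # symmetrize
    D = np.diag(S.sum(axis=1))                       # degree matrix
    L = D - S                                        # graph Laplacian
    # Generalized eigenproblem X^T L X w = lambda X^T D X w;
    # the eigenvectors with smallest eigenvalues span the subspace.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])      # regularize for stability
    vals, vecs = eigh(A, B)                          # ascending eigenvalues
    return vecs[:, :n_components]
```

The columns of the returned matrix play the role of the Laplacianfaces: projecting an image vector onto them gives its low-dimensional embedding.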

### Citations

2713 | Normalized cuts and image segmentation - Shi, Malik - 2000 |

1752 | A global geometric framework for nonlinear dimensionality reduction - Tenenbaum, de Silva, et al. - 2000
Citation Context: ...o that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidden in the image space. S... |

1572 | Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection - Belhumeur, Hespanha, et al. - 1997
Citation Context: ...actice, however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector m... |

1026 | Spectral Graph Theory - Chung - 1997
Citation Context: ...w^T X(D - S)X^T w = w^T X L X^T w, where X = [x_1, x_2, ..., x_n], and D is a diagonal matrix whose entries are column (or row, since S is symmetric) sums of S: D_ii = Σ_j S_ji. L = D - S is the Laplacian matrix [6]. Matrix D provides a natural measure on the data points. The bigger the value D_ii (corresponding to y_i) is, the more "important" y_i is. Therefore, we impose the constraint y^T D y = 1 ⇒ w^T X D X^T w = 1... |
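The Laplacian matrix in this context, L = D - S with D_ii the row sums of the symmetric weight matrix S, satisfies the identity y^T L y = 1/2 Σ_ij S_ij (y_i - y_j)^2, which is why minimizing w^T X L X^T w keeps neighboring points close in the embedding. A small numerical check of that identity, with a hypothetical 4-node weight matrix:

```python
import numpy as np

# Hypothetical symmetric weight matrix S for a 4-node graph.
S = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

D = np.diag(S.sum(axis=1))   # D_ii = sum_j S_ji (row sums; S is symmetric)
L = D - S                    # graph Laplacian

y = np.array([0.5, -1.0, 2.0, 0.0])
quad = y @ L @ y
pairwise = 0.5 * sum(S[i, j] * (y[i] - y[j]) ** 2
                     for i in range(4) for j in range(4))
assert np.isclose(quad, pairwise)   # y^T L y = 1/2 sum_ij S_ij (y_i - y_j)^2
```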

988 | Face recognition using eigenfaces - Turk, Pentland - 1991
Citation Context: ...INTRODUCTION Many face recognition techniques have been developed over the past few decades. One of the most successful and well-studied techniques for face recognition is the appearance-based method [28][16]. When using appearance-based methods, we usually represent an image of size n×m pixels by a vector in an n×m dimensional space. In practice, however, these n×m dimensional spaces are too large to... |

984 | Visual learning and recognition of 3-D objects from appearance - Murase, Nayar - 1995
Citation Context: ...RODUCTION Many face recognition techniques have been developed over the past few decades. One of the most successful and well-studied techniques for face recognition is the appearance-based method [28][16]. When using appearance-based methods, we usually represent an image of size n×m pixels by a vector in an n×m dimensional space. In practice, however, these n×m dimensional spaces are too large to all... |

578 | Probabilistic visual learning for object representation - Moghaddam, Pentland - 1997
Citation Context: ...ing is done. Figure 5 shows an example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including nearest-neighbor [2], Bayesian [15], Support Vector Machine [17], etc. In this paper, we apply the nearest-neighbor classifier for its simplicity. In short, the recognition process has three steps. First, we calculate the Laplacianface... |
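The nearest-neighbor step quoted above (project gallery and probe images with the learned basis, then classify by closest match) can be sketched as follows; `recognize` and its arguments are hypothetical names for illustration, not the paper's code:

```python
import numpy as np

def recognize(W, train_faces, train_labels, probe):
    """Nearest-neighbor classification in a projected subspace.

    W: (n_pixels, k) projection matrix (e.g., the Laplacianfaces);
    train_faces: (n_train, n_pixels) gallery; probe: (n_pixels,) query.
    Returns the label of the nearest gallery image in the subspace.
    """
    train_emb = train_faces @ W                       # project gallery images
    probe_emb = probe @ W                             # project the probe image
    dists = np.linalg.norm(train_emb - probe_emb, axis=1)
    return train_labels[np.argmin(dists)]
```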

451 | Low-Dimensional Procedure for the Characterization of Human Faces - Sirovich, Kirby - 1987
Citation Context: ...n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to model... |

449 | Laplacian eigenmaps and spectral techniques for embedding and clustering - Belkin, Niyogi - 2001
Citation Context: ...nlinear submanifold hidden in the image space. Some nonlinear techniques have been proposed to discover the nonlinear structure of the manifold, e.g., Isomap [27], LLE [18][20], and Laplacian Eigenmap [3]. These nonlinear methods do yield impressive results on some benchmark artificial data sets. However, they yield maps that are defined only on the training data points and how to evaluate the maps on... |

306 | PCA versus LDA - Martinez, Kak - 2001
Citation Context: ...m-dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1], [2], [8], [11], [12], [14], [22], [26], [28], [34], [37]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method... |

268 | Think globally, fit locally: unsupervised learning of low dimensional manifolds - Saul, Roweis - 2003
Citation Context: ...f the face images lie on a nonlinear submanifold hidden in the image space. Some nonlinear techniques have been proposed to discover the nonlinear structure of the manifold, e.g., Isomap [27], LLE [18][20], and Laplacian Eigenmap [3]. These nonlinear methods do yield impressive results on some benchmark artificial data sets. However, they yield maps that are defined only on the training data points and... |

244 | Face recognition by elastic bunch graph matching - Wiskott, Fellous, et al. - 1997
Citation Context: ...somehow similar to Fisherfaces. Figure 5. The original face image and the cropped image. 7.2 Face Recognition Using Laplacianfaces. Once the Laplacianfaces are created, face recognition [2][14][28][29] becomes a pattern classification task. In this section, we investigate the performance of our proposed Laplacianfaces method for face recognition. The system performance is compared with the Eigenfac... |

226 | Locality preserving projections - He, Niyogi - 2003
Citation Context: ...be specific, the manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP) [9]. Each face image in the image space is mapped to a low-dimensional face... |

166 | Charting a manifold - Brand - 2003
Citation Context: ...r, they yield maps that are defined only on the training data points and how to evaluate the maps on novel test data points remains unclear. Therefore, these nonlinear manifold learning techniques [3][5][18][20][27][33] might not be suitable for some computer vision tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low dimensional representat... |

145 | Principal manifolds and nonlinear dimension reduction via local tangent space alignment - Zhang, Zha
Citation Context: ...mensionality of the nonlinear face manifold, or, degrees of freedom. We know that the dimensionality of the manifold is equal to the dimensionality of the local tangent space. Some previous works [33][34] show that the local tangent space can be approximated using points in a neighbor set. Therefore, one possibility is to estimate the dimensionality of the tangent space. Another possible extension of... |

128 | Kernel Eigenfaces vs. Kernel Fisherfaces: Face Recognition Using Kernel Methods - Yang - 2002
Citation Context: ...tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low-dimensional representations through kernel based techniques for face recognition [13], [33]. These methods can discover the nonlinear structure of the face images. However, they are computationally expensive. Moreover, none of them explicitly considers the structure of the manifold on which... |

121 | Video-Based Face Recognition Using Probabilistic Appearance Manifolds - Lee, Yang - 2003
Citation Context: ...nuous curve in image space since there is only one degree of freedom, viz. the angle of rotation. Thus, we can say that the set of face images is intrinsically one-dimensional. Many recent works [7], [10], [18], [19], [21], [23], [27] have shown that the face images do reside on a low-dimensional submanifold embedded in a high-dimensional ambient space (image space). Therefore, an effective subspace le... |

100 | Subspace Linear Discriminant Analysis for Face Recognition - Zhao, Chellappa, et al. - 1999
Citation Context: ...onal spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to model linear vari... |

95 | Support Vector Machines Applied to Face Recognition - Phillips - 1998
Citation Context: ...n example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including nearest-neighbor [2], Bayesian [15], Support Vector Machine [17], etc. In this paper, we apply the nearest-neighbor classifier for its simplicity. In short, the recognition process has three steps. First, we calculate the Laplacianfaces from the training set of fa... |

92 | Nonlinear Dimensionality Reduction by Locally Linear Embedding - Roweis, Saul - 2000
Citation Context: ...orm LDA, and also that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidden in th... |

78 | Using Manifold Structure for Partially Labeled Classification - Belkin, Niyogi - 2003
Citation Context: ...tially an unsupervised learning process. And in many practical cases, one finds a wealth of easily available unlabeled samples. These samples might help to discover the face manifold. For example, in [4], it is shown how unlabeled samples are used for discovering the manifold structure and hence improving the classification accuracy. Since the face images are believed to reside on a sub-manifold embe... |

77 | Global coordination of local linear models - Roweis, Saul, et al. |

50 | Boosting chain learning for object detection - Xiao, Zhu, et al. - 2003
Citation Context: ...32 × 32 pixels, with 256 gray levels per pixel. Thus, each image is represented by a 1,024-dimensional vector in image space. The details of our methods for face detection and alignment can be found in [30], [32]. No further preprocessing is done. Fig. 5 shows an example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including near... |

34 | Face Recognition Using Kernel Based Fisher Discriminant Analysis - Liu, Huang, et al. - 2002
Citation Context: ...vision tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low dimensional representations through kernel based techniques for face recognition [13][19]. These methods can discover the nonlinear structure of the face images. However, they are computationally expensive. Moreover, none of them explicitly considers the structure of the manifold on w... |

28 | Effects on Facial Expression - Chang, Bowyer, et al. - 2005
Citation Context: ...outperform LDA, and also that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidd... |

20 | Manifold pursuit: A new approach to appearance based recognition - Shashua, Levin, et al.
Citation Context: ...also that PCA is less sensitive to different training datasets. Recently, a number of research efforts have shown that the face images possibly reside on a nonlinear submanifold [7][10][18][19][21][23][27]. However, both PCA and LDA effectively see only the Euclidean structure. They fail to discover the underlying structure, if the face images lie on a nonlinear submanifold hidden in the image spac... |

18 | An Efficient LDA Algorithm for Face Recognition - Yang, Yu, et al. - 2000
Citation Context: ...ensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to model linear... |

18 | Ranking Prior Likelihood Distributions for Bayesian Shape Localization Framework - Yan, Li, et al. - 2003
Citation Context: ...ixels, with 256 gray levels per pixel. Thus, each image is represented by a 1,024-dimensional vector in image space. The details of our methods for face detection and alignment can be found in [30], [32]. No further preprocessing is done. Fig. 5 shows an example of the original face image and the cropped image. Different pattern classifiers have been applied for face recognition, including nearest-ne... |

15 | Principal component analysis over continuous subspaces and intersection of half-spaces - Levin, Shashua - 2002
Citation Context: ..., however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method d... |

14 | Decomposed eigenface for face recognition under various lighting conditions - Shakunaga, Shigenari - 2001
Citation Context: ...hese n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector method designed to m... |

12 | Linear Subspaces for Illumination Robust Face Recognition - Batur, Hayes - 2001
Citation Context: ...practice, however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvecto... |

6 | Locality Preserving Projections - He, Niyogi - 2003
Citation Context: ...be specific, the manifold structure is modeled by a nearest-neighbor graph which preserves the local structure of the image space. A face subspace is obtained by Locality Preserving Projections (LPP) [9]. Each face image in the image space is mapped to a low-dimensional face subspace, which is characterized by a set of feature images, called Laplacianfaces. The face subspace preserves local structure,... |

5 | Face Database - Univ - 2002
Citation Context: ...methods in face recognition. In this study, three face databases were tested. The first one is the PIE (pose, illumination, and expression) database from CMU [25], the second one is the Yale database [30], and the third one is the MSRA database collected at Microsoft Research Asia. In all the experiments, preprocessing to locate the faces was applied. Original images were normalized (in scale and... |

4 | Where to go with face recognition - Gross, Shi, et al. - 2001
Citation Context: ...ice, however, these n×m dimensional spaces are too large to allow robust and fast face recognition. A common way to attempt to resolve this problem is to use dimensionality reduction techniques [1][2][8][11][12][14][22][26][28][32][35]. Two of the most popular techniques for this purpose are Principal Component Analysis (PCA) [28] and Linear Discriminant Analysis (LDA) [2]. PCA is an eigenvector meth... |

4 | Yale Face Database, http://cvc.yale.edu/projects/yalefaces/yalefaces.html - Yale Univ. - 1997
Citation Context: ...LPP and Laplacian Eigenmap. In this study, three face databases were tested. The first one is the PIE (pose, illumination, and expression) database from CMU [25], the second one is the Yale database [31], and the third one is the MSRA database collected at Microsoft Research Asia. In all the experiments, preprocessing to locate the faces was applied. Original images were normalized (in scale and... |

1 | Isometric Embedding and Continuum ISOMAP - Zha, Zhang - 2003
Citation Context: ...ps that are defined only on the training data points and how to evaluate the maps on novel test data points remains unclear. Therefore, these nonlinear manifold learning techniques [3][5][18][20][27][33] might not be suitable for some computer vision tasks, such as face recognition. In the meantime, there has been some interest in the problem of developing low dimensional representations through kern... |