Face Recognition: A Literature Survey
2000
Cited by 1398 (21 self)
Abstract: ... This paper provides an up-to-date critical survey of still- and video-based face recognition research. There are two underlying motivations for us to write this survey paper: the first is to provide an up-to-date review of the existing literature, and the second is to offer some insights into the studies of machine recognition of faces. To provide a comprehensive survey, we not only categorize existing recognition techniques but also present detailed descriptions of representative methods within each category. In addition, ...
An introduction to kernel-based learning algorithms
IEEE Transactions on Neural Networks, 2001
Cited by 598 (55 self)
Abstract: This paper provides an introduction to support vector machines (SVMs), kernel Fisher discriminant analysis, and ...
In defense of one-vs-all classification
Journal of Machine Learning Research, 2004
Cited by 318 (0 self)
Abstract: Editor: John Shawe-Taylor. We consider the problem of multiclass classification. Our main thesis is that a simple "one-vs-all" scheme is as accurate as any other approach, assuming that the underlying binary classifiers are well-tuned regularized classifiers such as support vector machines. This thesis is interesting in that it disagrees with a large body of recently published work on multiclass classification. We support our position by means of a critical review of the existing literature, a substantial collection of carefully controlled experimental work, and theoretical arguments.
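The one-vs-all scheme this abstract defends can be sketched with any regularized binary classifier in the inner loop. The sketch below uses ridge-regularized least squares as an illustrative stand-in for the well-tuned SVMs the paper assumes; the data and regularization value are invented for the example.

```python
import numpy as np

def train_one_vs_all(X, y, n_classes, lam=1e-2):
    """Fit one regularized least-squares classifier per class.

    Each binary problem relabels class k as +1 and everything else as -1;
    the ridge term lam plays the role of the 'well-tuned regularized
    classifier' the paper argues is the real requirement."""
    X1 = np.hstack([X, np.ones((len(X), 1))])      # append bias column
    W = np.zeros((n_classes, X1.shape[1]))
    for k in range(n_classes):
        t = np.where(y == k, 1.0, -1.0)            # one-vs-rest targets
        A = X1.T @ X1 + lam * np.eye(X1.shape[1])
        W[k] = np.linalg.solve(A, X1.T @ t)
    return W

def predict_one_vs_all(W, X):
    """Winner-takes-all over the real-valued binary outputs."""
    X1 = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(X1 @ W.T, axis=1)

# Three well-separated Gaussian blobs as a toy multiclass problem.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.3 * rng.standard_normal((40, 2)) for c in centers])
y = np.repeat(np.arange(3), 40)
W = train_one_vs_all(X, y, n_classes=3)
accuracy = np.mean(predict_one_vs_all(W, X) == y)
```

Swapping in stronger binary classifiers only changes `train_one_vs_all`; the winner-takes-all decision rule stays the same.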
Enhanced local texture feature sets for face recognition under difficult lighting conditions
In Proc. AMFG'07, 2007
Cited by 274 (10 self)
Abstract: Recognition in uncontrolled situations is one of the most important bottlenecks for practical face recognition systems. We address this by combining the strengths of robust illumination normalization, local texture-based face representations, and distance-transform-based matching metrics. Specifically, we make three main contributions: (i) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while preserving the essential appearance details needed for recognition; (ii) we introduce Local Ternary Patterns (LTP), a generalization of the Local Binary Pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions; and (iii) we show that replacing local histogramming with a local distance-transform-based similarity metric further improves the performance of LBP/LTP-based face recognition. The resulting method gives state-of-the-art performance on three popular datasets chosen to test recognition under difficult ...
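The LTP coding step described in contribution (ii) can be sketched for a single 3x3 neighborhood: each neighbor is ternary-quantized against the center pixel with a tolerance t, and the ternary pattern is split into two binary LBP-like codes. The neighbor ordering and the values of t and the sample patch below are illustrative choices, not taken from the paper.

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local Ternary Pattern codes for the centre pixel of a 3x3 patch.

    Neighbours are ternary-quantised against the centre value c:
      +1 if p >= c + t,  -1 if p <= c - t,  0 otherwise,
    then the +1s and -1s are split into two binary 8-bit codes, which is
    what makes LTP less sensitive than LBP to noise in uniform regions.
    The clockwise-from-top-left bit ordering is a convention chosen here."""
    c = int(patch[1, 1])
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = 0   # bits where neighbour is clearly brighter
    lower = 0   # bits where neighbour is clearly darker
    for bit, (i, j) in enumerate(idx):
        p = int(patch[i, j])
        if p >= c + t:
            upper |= 1 << bit
        elif p <= c - t:
            lower |= 1 << bit
    return upper, lower

patch = np.array([[60, 50, 40],
                  [70, 50, 30],
                  [80, 50, 20]])
up, lo = ltp_codes(patch, t=5)
```

A full descriptor would compute these two codes at every pixel and histogram them per image block; neighbors within t of the center (here the 50s) contribute to neither code, which is the noise-robustness the abstract refers to.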
KPCA plus LDA: a complete kernel Fisher discriminant framework for feature extraction and recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005
Cited by 139 (7 self)
Abstract: This paper examines the theory of kernel Fisher discriminant analysis (KFD) in a Hilbert space and develops a two-phase KFD framework, i.e., kernel principal component analysis (KPCA) plus Fisher linear discriminant analysis (LDA). This framework provides novel insights into the nature of KFD. Based on this framework, the authors propose a complete kernel Fisher discriminant analysis (CKFD) algorithm. CKFD can carry out discriminant analysis in "double discriminant subspaces": because it makes full use of two kinds of discriminant information, regular and irregular, CKFD is a more powerful discriminator. The proposed algorithm was tested and evaluated on the FERET face database and the CENPARMI handwritten numeral database. The experimental results show that CKFD outperforms other KFD algorithms.
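The two-phase structure (KPCA first, then LDA on the projections) can be sketched compactly. This is only the skeleton of the framework, not the full CKFD algorithm: it keeps a single subspace rather than the paper's "double discriminant subspaces", and the RBF kernel, its width, and the toy two-ring data are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca(X, m, gamma=0.5):
    """Phase 1: kernel PCA. Returns training-set projections onto the
    top-m kernel principal components."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    J = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)                    # eigenvalues ascending
    w, V = w[::-1][:m], V[:, ::-1][:, :m]        # keep the top m
    return Kc @ V / np.sqrt(np.maximum(w, 1e-12))

def lda_direction(Z, y):
    """Phase 2: Fisher LDA in the KPCA space (two-class case)."""
    Z0, Z1 = Z[y == 0], Z[y == 1]
    Sw = np.cov(Z0.T) + np.cov(Z1.T)             # within-class scatter
    return np.linalg.solve(Sw + 1e-6 * np.eye(Z.shape[1]),
                           Z1.mean(0) - Z0.mean(0))

# Two concentric rings: linearly inseparable, radially separable.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 60)
radius = np.where(np.arange(60) < 30, 1.0, 3.0) + 0.05 * rng.standard_normal(60)
X = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
y = (np.arange(60) >= 30).astype(int)

Z = kpca(X, 5)
w = lda_direction(Z, y)
s = Z @ w                                        # 1-D discriminant scores
thr = 0.5 * (s[y == 0].mean() + s[y == 1].mean())
accuracy = np.mean((s > thr).astype(int) == y)
```

The point of the two-phase view is that the kernel machinery is confined to phase 1; phase 2 is ordinary LDA in a finite-dimensional space.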
Efficient and Robust Feature Extraction by Maximum Margin Criterion
In Advances in Neural Information Processing Systems 16, 2003
Cited by 116 (5 self)
Abstract: In pattern recognition, feature extraction techniques are widely employed to reduce the dimensionality of data and to enhance the discriminatory information. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are the two most popular linear dimensionality reduction methods. However, PCA is not very effective at extracting the most discriminant features, and LDA is not stable due to the small sample size problem. In this paper, we propose new (linear and nonlinear) feature extractors based on the maximum margin criterion (MMC). Geometrically, feature extractors based on MMC maximize the (average) margin between classes after dimensionality reduction. It is shown that MMC can represent class separability better than PCA. As a connection to LDA, we may also derive LDA from MMC by incorporating certain constraints. By using other constraints, we establish a new linear feature extractor that does not suffer from the small sample size problem, which is known to cause serious stability problems for LDA. The kernelized (nonlinear) counterpart of this linear feature extractor is also established. Our extensive experiments demonstrate that the new feature extractors are effective, stable, and efficient.
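A common linear form of the MMC extractor takes the top eigenvectors of the difference of the between- and within-class scatter matrices; because no matrix inverse is needed, a singular within-class scatter (the small sample size problem) causes no trouble. The sketch below follows that form under those assumptions; the toy data, with fewer samples per class than dimensions, is constructed precisely so that classical LDA's Sw would be singular.

```python
import numpy as np

def mmc_extractor(X, y, m):
    """Linear feature extractor from the maximum margin criterion:
    top-m eigenvectors of Sb - Sw. Unlike LDA's Sw^{-1} Sb, no inverse
    of Sw is required, so a singular Sw (small sample size) is harmless."""
    mean = X.mean(0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        prior = len(Xc) / len(X)
        diff = (Xc.mean(0) - mean)[:, None]
        Sb += prior * diff @ diff.T               # between-class scatter
        D = Xc - Xc.mean(0)
        Sw += prior * D.T @ D / len(Xc)           # within-class scatter
    w, V = np.linalg.eigh(Sb - Sw)                # eigenvalues ascending
    return V[:, ::-1][:, :m]                      # largest first

# 10-dimensional data, only 3 samples per class: Sw is singular here,
# which would break plain LDA but not MMC.
rng = np.random.default_rng(0)
X0 = 0.1 * rng.standard_normal((3, 10))
X1 = 0.1 * rng.standard_normal((3, 10))
X1[:, 0] += 5.0                                   # classes differ along axis 0
X = np.vstack([X0, X1])
y = np.array([0, 0, 0, 1, 1, 1])
W = mmc_extractor(X, y, 1)
```

With the class separation placed on the first coordinate, the recovered direction should align almost entirely with that axis.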
A robust minimax approach to classification
Journal of Machine Learning Research, 2002
Cited by 104 (7 self)
Abstract: When constructing a classifier, the probability of correct classification of future data points should be maximized. We consider a binary classification problem where the mean and covariance matrix of each class are assumed to be known. No further assumptions are made with respect to the class-conditional distributions. Misclassification probabilities are then controlled in a worst-case setting: that is, over all possible choices of class-conditional densities with the given mean and covariance matrix, we minimize the worst-case (maximum) probability of misclassification of future data points. For a linear decision boundary, this desideratum translates very directly into a (convex) second-order cone optimization problem, with complexity similar to that of a support vector machine problem. The minimax problem can be interpreted geometrically as minimizing the maximum of the Mahalanobis distances to the two classes. We address the issue of robustness with respect to estimation errors (in the means and covariances of the ...
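The quantity this framework controls is the distribution-free (Chebyshev-type) worst-case tail probability of a class falling on the wrong side of a hyperplane, which for a class at squared Mahalanobis distance d^2 from the boundary is 1/(1 + d^2). The sketch below only evaluates that bound for a fixed, hand-chosen hyperplane; it does not solve the second-order cone program that the full minimax classifier uses to choose the hyperplane, and the means and covariance are invented for illustration.

```python
import numpy as np

def worst_case_error(a, b, mu, Sigma):
    """Worst-case probability, over ALL distributions with mean mu and
    covariance Sigma, that a point lands on the wrong side of the
    hyperplane a.x = b. Assumes the class is meant to satisfy a.x >= b.
    This is the multivariate Chebyshev bound 1 / (1 + d^2), where d is
    the Mahalanobis distance from mu to the hyperplane."""
    margin = a @ mu - b
    if margin <= 0:
        return 1.0                       # the mean itself violates: no guarantee
    d2 = margin ** 2 / (a @ Sigma @ a)   # squared Mahalanobis distance
    return 1.0 / (1.0 + d2)

# Two classes with identical covariance, split at the midpoint of the means.
mu1, mu2 = np.array([2.0, 0.0]), np.array([-2.0, 0.0])
Sigma = np.eye(2)
a = mu1 - mu2                            # normal pointing toward class 1
b = a @ (mu1 + mu2) / 2.0
err1 = worst_case_error(a, b, mu1, Sigma)
err2 = worst_case_error(-a, -b, mu2, Sigma)   # class 2 on the other side
```

Here each mean sits at Mahalanobis distance 2 from the boundary, so both worst-case error bounds come out to 1/(1 + 4) = 0.2; the minimax classifier chooses (a, b) to make the larger of these two bounds as small as possible.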
Gene functional classification from heterogeneous data
In Proceedings of the Fifth Annual International Conference on Computational Biology, April 22-25, 2001
Cited by 99 (1 self)
Abstract: In our attempts to understand cellular function at the molecular level, we must be able to synthesize information from disparate types of genomic data. We consider the problem of inferring gene functional classifications from a heterogeneous data set consisting of DNA microarray expression measurements and phylogenetic profiles from whole-genome sequence comparisons. We demonstrate the application of the support vector machine (SVM) learning algorithm to this functional inference task. Our results suggest the importance of exploiting prior information about the heterogeneity of the data. In particular, we propose an SVM kernel function that is explicitly heterogeneous. We also show how to use knowledge about heterogeneity to aid in feature selection.
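One simple way to build an "explicitly heterogeneous" SVM kernel is to give each data type its own kernel (with its own bandwidth) and combine them, since a sum of valid kernels is itself a valid kernel. The sketch below illustrates only that idea; the paper's actual kernel construction, the bandwidths, and the toy expression/phylogenetic data here are all assumptions for the example.

```python
import numpy as np

def rbf(A, B, gamma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def heterogeneous_kernel(expr_A, phylo_A, expr_B, phylo_B,
                         gamma_expr=0.1, gamma_phylo=1.0):
    """Treat the two data types separately before combining: one RBF
    width per type, then a sum. The sum of two positive-definite kernels
    is positive definite, so the result can be fed to any SVM solver."""
    return rbf(expr_A, expr_B, gamma_expr) + rbf(phylo_A, phylo_B, gamma_phylo)

rng = np.random.default_rng(2)
expr = rng.standard_normal((5, 8))                   # toy expression profiles
phylo = (rng.random((5, 12)) > 0.5).astype(float)    # toy phylogenetic profiles
K = heterogeneous_kernel(expr, phylo, expr, phylo)
```

Because each data type gets its own bandwidth, the continuous expression measurements and the binary phylogenetic profiles are scaled appropriately rather than being concatenated into one feature vector with a single kernel width.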
Nonlinear Discriminant Analysis using Kernel Functions
Advances in Neural Information Processing Systems, 1999
Cited by 99 (9 self)
Abstract: Fisher's linear discriminant analysis (LDA) is a classical multivariate technique for both dimension reduction and classification. The data vectors are transformed into a low-dimensional subspace such that the class centroids are spread out as much as possible. In this subspace, LDA works as a simple prototype classifier, and the resulting decision boundaries are linear. However, in many applications the linear boundaries do not adequately separate the classes, and the ability to model more complex boundaries would be desirable. In this paper we present a nonlinear generalization of discriminant analysis that uses the method of representing dot products of pattern vectors by kernel functions. This technique allows discriminant functions to be computed efficiently in arbitrary feature spaces for which such kernel representations exist. 1 Introduction. Classical linear discriminant analysis (LDA) is a statistical method that attempts to project data vectors that belong to c different classes ...
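The kernel trick the abstract describes lets the discriminant be expressed in the span of the training kernel functions, so one solves for expansion coefficients alpha instead of a weight vector in feature space. Below is a compact two-class sketch in that style (close to the standard kernel Fisher discriminant formulation, not necessarily this paper's exact algorithm); the XOR-shaped data, RBF width, and ridge term are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfd(K, y, reg=1e-3):
    """Two-class kernel Fisher discriminant via expansion coefficients.

    The discriminant w lies in the span of the mapped training points,
    so only the Gram matrix K is needed. reg is a ridge term on the
    within-class matrix N, which is singular without it."""
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    M0, M1 = K[:, idx0].mean(1), K[:, idx1].mean(1)   # kernelised class means
    N = np.zeros_like(K)
    for idx in (idx0, idx1):
        Kj = K[:, idx]
        l = len(idx)
        N += Kj @ (np.eye(l) - np.ones((l, l)) / l) @ Kj.T
    return np.linalg.solve(N + reg * np.eye(len(K)), M1 - M0)

# XOR-style data: two clusters per class, not linearly separable.
rng = np.random.default_rng(3)
centers = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1]], dtype=float)
X = np.vstack([c + 0.2 * rng.standard_normal((10, 2)) for c in centers])
y = np.array([0] * 20 + [1] * 20)

K = rbf_kernel(X)
alpha = kfd(K, y)
s = K @ alpha                                  # projections of training points
thr = 0.5 * (s[y == 0].mean() + s[y == 1].mean())
accuracy = np.mean((s > thr).astype(int) == y)
```

Projecting a new point x only requires its kernel values against the training set, sum_i alpha_i k(x_i, x), which is what makes the feature space implicit.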
Learning Gene Functional Classifications From Multiple Data Types
Journal of Computational Biology, 2002
Cited by 98 (1 self)
Abstract: In our attempts to understand cellular function at the molecular level, we must be able to synthesize information from disparate types of genomic data. We consider the problem of inferring gene functional classifications from a heterogeneous data set consisting of DNA microarray expression measurements and phylogenetic profiles from whole-genome sequence comparisons. We demonstrate the application of the support vector machine (SVM) learning algorithm to this functional inference task. Our results suggest the importance of exploiting prior information about the heterogeneity of the data. In particular, we propose an SVM kernel function that is explicitly heterogeneous. In addition, we describe feature scaling methods for further exploiting prior knowledge of heterogeneity by giving each data type different weights.