Results 1 - 10 of 139
Enhanced local texture feature sets for face recognition under difficult lighting conditions
- In Proc. AMFG’07, 2007
"... Abstract. Recognition in uncontrolled situations is one of the most important bottlenecks for practical face recognition systems. We address this by combining the strengths of robust illumination normalization, local texture based face representations and distance transform based matching metrics. S ..."
Abstract
-
Cited by 274 (10 self)
- Add to MetaCart
Recognition in uncontrolled situations is one of the most important bottlenecks for practical face recognition systems. We address this by combining the strengths of robust illumination normalization, local texture-based face representations, and distance transform based matching metrics. Specifically, we make three main contributions: (i) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; (ii) we introduce Local Ternary Patterns (LTP), a generalization of the Local Binary Pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions; and (iii) we show that replacing local histogramming with a local distance transform based similarity metric further improves the performance of LBP/LTP based face recognition. The resulting method gives state-of-the-art performance on three popular datasets chosen to test recognition under difficult lighting conditions.
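As a hedged illustration of the LTP idea this abstract describes, here is a minimal numpy sketch: each neighbor is compared to the center pixel with a tolerance t, and the ternary code is split into two binary (LBP-style) codes, as the paper suggests. The 3x3 sampling and the threshold value are simplifying assumptions, not the authors' exact implementation.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local Ternary Patterns on a 3x3 neighborhood (simplified sketch).

    Each neighbor is compared to the center pixel: +1 if it exceeds
    center + t, -1 if it is below center - t, 0 otherwise. The ternary
    code is then split into an "upper" and a "lower" binary pattern so
    that standard LBP machinery (histograms, etc.) can be reused.
    """
    h, w = img.shape
    center = img[1:h-1, 1:w-1].astype(np.int32)
    # 8 neighbors, clockwise from the top-left corner
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    upper = np.zeros_like(center)
    lower = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1+dy:h-1+dy, 1+dx:w-1+dx].astype(np.int32)
        upper |= (nb > center + t).astype(np.int32) << bit
        lower |= (nb < center - t).astype(np.int32) << bit
    return upper, lower  # two LBP-like code maps per image
```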
Subclass discriminant analysis
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
"... Over the years, many Discriminant Analysis (DA) algorithms have been proposed for the study of high-dimensional data in a large variety of problems. Each of these algorithms is tuned to a specific type of data distribution (that which best models the problem at hand). Unfortunately, in most problem ..."
Abstract
-
Cited by 60 (10 self)
- Add to MetaCart
Over the years, many Discriminant Analysis (DA) algorithms have been proposed for the study of high-dimensional data in a large variety of problems. Each of these algorithms is tuned to a specific type of data distribution (that which best models the problem at hand). Unfortunately, in most problems the form of each class pdf is a priori unknown, and the selection of the DA algorithm that best fits our data is done by trial and error. Ideally, one would like to have a single formulation that can be used for most distribution types. This can be achieved by approximating the underlying distribution of each class with a mixture of Gaussians. In this approach, the major problem to be addressed is that of determining the optimal number of Gaussians per class, i.e., the number of subclasses. In this paper, two criteria able to find the most convenient division of each class into a set of subclasses are derived. Extensive experimental results are shown using five databases. Comparisons are given against Linear Discriminant Analysis (LDA), Direct LDA (DLDA), Heteroscedastic LDA (HLDA), Nonparametric DA (NDA), and Kernel-Based LDA (K-LDA). We show that our method is always the best or comparable to the best.
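A rough sketch of the subclass-scatter construction the abstract describes, with plain k-means standing in for the paper's model-selection criteria for choosing the number of subclasses (the clusterer and the prior weighting are assumptions of this sketch):

```python
import numpy as np
from scipy.cluster.vq import kmeans2  # plain k-means as a stand-in clusterer

def subclass_lda_directions(X, y, n_sub=2):
    """Subclass discriminant analysis sketch: partition each class into
    n_sub subclasses, accumulate within-subclass scatter, and build the
    between-subclass scatter from pairs of subclass means belonging to
    *different* classes, weighted by their priors."""
    n, d = X.shape
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    means, priors, owners = [], [], []
    for c in np.unique(y):
        Xc = X[y == c]
        _, lab = kmeans2(Xc, n_sub, minit='points')
        for s in range(n_sub):
            Xs = Xc[lab == s]
            if len(Xs) == 0:
                continue
            mu = Xs.mean(axis=0)
            Sw += (Xs - mu).T @ (Xs - mu)
            means.append(mu); priors.append(len(Xs) / n); owners.append(c)
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            if owners[i] != owners[j]:          # only pairs across classes
                diff = (means[i] - means[j])[:, None]
                Sb += priors[i] * priors[j] * diff @ diff.T
    evals, V = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return V[:, order].real                     # discriminant directions
```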
Capitalize on dimensionality increasing techniques for improving face recognition grand challenge performance
- IEEE TPAMI, 2006
"... Abstract—This paper presents a novel pattern recognition framework by capitalizing on dimensionality increasing techniques. In particular, the framework integrates Gabor image representation, a novel multiclass Kernel Fisher Analysis (KFA) method, and fractional power polynomial models for improving ..."
Abstract
-
Cited by 56 (2 self)
- Add to MetaCart
(Show Context)
This paper presents a novel pattern recognition framework by capitalizing on dimensionality increasing techniques. In particular, the framework integrates Gabor image representation, a novel multiclass Kernel Fisher Analysis (KFA) method, and fractional power polynomial models for improving pattern recognition performance. Gabor image representation, which increases dimensionality by incorporating Gabor filters with different scales and orientations, is characterized by spatial frequency, spatial locality, and orientational selectivity for coping with image variabilities such as illumination variations. The KFA method first performs nonlinear mapping from the input space to a high-dimensional feature space, and then implements the multiclass Fisher discriminant analysis in the feature space. The significance of the nonlinear mapping is that it increases the discriminating power of the KFA method, which is linear in the feature space but nonlinear in the input space. The novelty of the KFA method comes from the fact that (1) it extends the two-class kernel Fisher methods by addressing multiclass pattern classification problems and (2) it improves upon the traditional Generalized Discriminant Analysis (GDA) method by deriving a unique solution (compared to the GDA solution, which is not unique). The fractional power polynomial models further improve the performance of the proposed pattern recognition framework. Experiments on face recognition using both the FERET database and the FRGC (Face Recognition Grand Challenge) databases show the feasibility of the proposed framework. In particular, experimental results using the FERET database show that the KFA method performs better than the GDA method and that the fractional power polynomial models help both the KFA method and the GDA method improve their face recognition performance. Experimental results using the FRGC databases show that the proposed pattern recognition framework improves face recognition performance.
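The following sketch illustrates the two ingredients named in the abstract: a fractional power polynomial kernel and a multiclass kernelized Fisher analysis. It follows the standard kernelized-LDA recipe rather than the authors' exact derivation; the regularizer and the exponent are assumed values.

```python
import numpy as np

def frac_poly_kernel(X, Y, d=0.8):
    """Polynomial kernel with a fractional exponent d in (0, 1)."""
    g = X @ Y.T
    return np.sign(g) * np.abs(g) ** d

def kfa_coefficients(X, y, d=0.8, reg=1e-6):
    """Multiclass kernel Fisher analysis via the usual kernelized-LDA
    recipe: between- and within-class scatter formed on the columns of
    the centered kernel matrix, then a generalized eigenproblem."""
    n = len(X)
    K = frac_poly_kernel(X, X, d)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # double-centered kernel
    classes = np.unique(y)
    M = np.zeros((n, n)); N = np.zeros((n, n))
    mean_all = Kc.mean(axis=1)
    for c in classes:
        idx = np.where(y == c)[0]
        mean_c = Kc[:, idx].mean(axis=1)
        dm = (mean_c - mean_all)[:, None]
        M += len(idx) * dm @ dm.T                # between-class part
        Kcc = Kc[:, idx] - mean_c[:, None]
        N += Kcc @ Kcc.T                         # within-class part
    evals, A = np.linalg.eig(np.linalg.pinv(N + reg * np.eye(n)) @ M)
    order = np.argsort(-evals.real)
    return A[:, order[:len(classes) - 1]].real   # expansion coefficients
```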
Inter-modality face recognition
2006
"... Abstract. Recently, the wide deployment of practical face recognition systems gives rise to the emergence of the inter-modality face recognition problem. In this problem, the face images in the database and the query images captured on spot are acquired under quite different conditions or even using ..."
Abstract
-
Cited by 44 (3 self)
- Add to MetaCart
(Show Context)
Recently, the wide deployment of practical face recognition systems has given rise to the inter-modality face recognition problem, in which the face images in the database and the query images captured on the spot are acquired under quite different conditions or even using different equipment. Conventional approaches either treat the samples in a uniform model or introduce an intermediate conversion stage, both of which lead to severe performance degradation due to the great discrepancies between modalities. In this paper, we propose a novel algorithm called Common Discriminant Feature Extraction, specially tailored to the inter-modality problem. In the algorithm, two transforms are simultaneously learned to map the samples of both modalities into a common feature space. We formulate the learning objective by incorporating both the empirical discriminative power and the local smoothness of the feature transformation. By explicitly controlling the model complexity through the smoothness constraint, we can effectively reduce the risk of overfitting and enhance the generalization capability. Furthermore, to cope with non-Gaussian distributions and diverse variations in the sample space, we develop two nonlinear extensions of the algorithm: one based on kernelization and the other a multi-mode framework. These extensions substantially improve recognition performance in complex situations. Extensive experiments are conducted to test our algorithms in two application scenarios: optical image-infrared image recognition and photo-sketch recognition. Our algorithms show excellent performance in the experiments.
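The paper's CDFE objective is not reproduced in the abstract, so the sketch below uses plain canonical correlation analysis as a simple stand-in for learning two transforms into a common feature space; CDFE additionally includes a discriminative term and a smoothness constraint on the transforms, both omitted here.

```python
import numpy as np

def cca_transforms(X1, X2, dim=10, reg=1e-6):
    """Learn two linear maps sending paired samples from two modalities
    into a shared space (classical CCA, standing in for CDFE)."""
    X1 = X1 - X1.mean(axis=0); X2 = X2 - X2.mean(axis=0)
    n = len(X1)
    C11 = X1.T @ X1 / n + reg * np.eye(X1.shape[1])
    C22 = X2.T @ X2 / n + reg * np.eye(X2.shape[1])
    C12 = X1.T @ X2 / n
    # whiten each modality, then SVD of the whitened cross-covariance
    R1 = np.linalg.cholesky(C11); R2 = np.linalg.cholesky(C22)
    T = np.linalg.solve(R1, C12) @ np.linalg.inv(R2).T
    U, s, Vt = np.linalg.svd(T)
    W1 = np.linalg.solve(R1.T, U[:, :dim])
    W2 = np.linalg.solve(R2.T, Vt[:dim].T)
    return W1, W2  # project with X1 @ W1 and X2 @ W2
```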
Riemannian manifold learning
- IEEE Trans. Pattern Anal. Mach. Intell., 2008
"... Abstract—Recently, manifold learning has beenwidely exploited in pattern recognition, data analysis, andmachine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensi ..."
Abstract
-
Cited by 42 (0 self)
- Add to MetaCart
(Show Context)
Recently, manifold learning has been widely exploited in pattern recognition, data analysis, and machine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensional Riemannian manifold. The main idea is to formulate the dimensionality reduction problem as a classical problem in Riemannian geometry: how to construct coordinate charts for a given Riemannian manifold. We implement the Riemannian normal coordinate chart, which has been the most widely used in Riemannian geometry, for a set of unorganized data points. First, two input parameters (the neighborhood size k and the intrinsic dimension d) are estimated based on an efficient simplicial reconstruction of the underlying manifold. Then, the normal coordinates are computed to map the input high-dimensional data into a low-dimensional space. Experiments on synthetic data, as well as real-world images, demonstrate that our algorithm can learn intrinsic geometric structures of the data, preserve radial geodesic distances, and yield regular embeddings.
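A small sketch of the distance-estimation step underlying this abstract: the radial geodesic distances that RML aims to preserve, estimated from a k-nearest-neighbor graph with Dijkstra. The paper's normal coordinates additionally need directions in the tangent space, which this sketch does not compute.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def radial_geodesic_distances(X, k=8, base=0):
    """Approximate geodesic distances from a base point: connect each
    sample to its k nearest neighbors and run Dijkstra on the graph."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rows, cols, vals = [], [], []
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k+1]          # skip the point itself
        rows.extend([i] * k); cols.extend(nbrs); vals.extend(D[i, nbrs])
    G = csr_matrix((vals, (rows, cols)), shape=(n, n))
    return dijkstra(G, directed=False, indices=base)
```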
Computational and Theoretical Analysis of Null Space and Orthogonal Linear Discriminant Analysis
- Journal of Machine Learning Research 7 (2006), 1183-1204
"... Dimensionality reduction is an important pre-processing step in many applications. Linear discriminant analysis (LDA) is a classical statistical approach for supervised dimensionality reduction. It aims to maximize the ratio of the between-class distance to the within-class distance, thus maximizi ..."
Abstract
-
Cited by 26 (7 self)
- Add to MetaCart
Dimensionality reduction is an important preprocessing step in many applications. Linear discriminant analysis (LDA) is a classical statistical approach for supervised dimensionality reduction. It aims to maximize the ratio of the between-class distance to the within-class distance, thus maximizing class discrimination. It has been used widely in many applications. However, the classical LDA formulation requires the nonsingularity of the scatter matrices involved. For undersampled problems, where the data dimensionality is much larger than the sample size, all scatter matrices are singular and classical LDA fails. Many extensions, including null space LDA (NLDA) and orthogonal LDA (OLDA), have been proposed in the past to overcome this problem. NLDA aims to maximize the between-class distance in the null space of the within-class scatter matrix, while OLDA computes a set of orthogonal discriminant vectors via the simultaneous diagonalization of the scatter matrices. They have been applied successfully in various applications. In this paper, we present a computational and theoretical analysis of NLDA and OLDA.
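A minimal null space LDA sketch following the abstract's description: restrict to the null space of the within-class scatter and maximize the between-class scatter there. The dense eigendecomposition is for clarity only; for genuinely undersampled data one would first reduce to the subspace spanned by the samples.

```python
import numpy as np

def null_space_lda(X, y, tol=1e-10):
    """NLDA sketch: discriminant vectors that lie in null(Sw) and
    maximize the between-class scatter Sb restricted to that space."""
    n, d = X.shape
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        Sw += (Xc - mu).T @ (Xc - mu)
        dm = (mu - mean)[:, None]
        Sb += len(Xc) * dm @ dm.T
    # basis of null(Sw): eigenvectors with (numerically) zero eigenvalue
    evals, evecs = np.linalg.eigh(Sw)
    N = evecs[:, evals < tol]
    # maximize between-class scatter inside the null space
    evals_b, V = np.linalg.eigh(N.T @ Sb @ N)
    order = np.argsort(-evals_b)
    return N @ V[:, order]                      # discriminant vectors
```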
Subspace learning from image gradient orientations
2012
"... We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data is typically noisy and noise is substantially different from Gaussian, traditional subspace learning from pixel intensities fails very often to estimate reliably the ..."
Abstract
-
Cited by 17 (9 self)
- Add to MetaCart
We introduce the notion of subspace learning from image gradient orientations for appearance-based object recognition. As image data are typically noisy and the noise is substantially different from Gaussian, traditional subspace learning from pixel intensities very often fails to estimate the low-dimensional subspace of a given data population reliably. We show that replacing pixel intensities with gradient orientations and the ℓ2 norm with a cosine-based distance measure offers, to some extent, a remedy to this problem. Within this framework, which we coin IGO (Image Gradient Orientations) subspace learning, we first formulate and study the properties of Principal Component Analysis of image gradient orientations (IGO-PCA). We then show its connection to previously proposed robust PCA techniques, both theoretically and experimentally. Finally, we derive a number of other popular subspace learning techniques, namely Linear Discriminant Analysis (LDA), Locally Linear Embedding (LLE), and Laplacian Eigenmaps (LE). Experimental results show that our algorithms significantly outperform popular methods such as Gabor features and Local Binary Patterns and achieve state-of-the-art performance for difficult problems such as illumination- and occlusion-robust face recognition. In addition, the proposed IGO methods require the eigen-decomposition of simple covariance matrices and are as computationally efficient as their corresponding ℓ2 norm intensity-based counterparts. Matlab code for the methods presented in this paper can be found at
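A compact sketch of IGO-PCA as the abstract describes it: replace intensities with gradient orientations, embed each orientation on the unit circle via (cos, sin), and run ordinary PCA; the cosine-based distance of the paper corresponds to inner products in this embedding.

```python
import numpy as np

def igo_pca(images, n_components=10):
    """IGO-PCA sketch: gradient orientations mapped to the unit circle
    (cos, sin), then ordinary PCA on the stacked features."""
    feats = []
    for img in images:
        gy, gx = np.gradient(img.astype(np.float64))
        phi = np.arctan2(gy, gx)                 # orientation at each pixel
        feats.append(np.concatenate([np.cos(phi).ravel(),
                                     np.sin(phi).ravel()]))
    F = np.asarray(feats)
    F -= F.mean(axis=0)
    # thin SVD of the centered data matrix (cheap when images are few)
    _, _, Vt = np.linalg.svd(F, full_matrices=False)
    return Vt[:n_components]                     # principal axes
```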
Misalignment-robust face recognition
- IEEE TIP, 2010
"... In this paper, we study the problem of subspace-based face recognition under scenarios with spatial misalign-ments and/or image occlusions. For a given subspace, the embedding of a new datum and the underlying spatial mis-alignment parameters are simultaneously inferred by solv-ing a constrained 1 n ..."
Abstract
-
Cited by 17 (1 self)
- Add to MetaCart
(Show Context)
In this paper, we study the problem of subspace-based face recognition under scenarios with spatial misalignments and/or image occlusions. For a given subspace, the embedding of a new datum and the underlying spatial misalignment parameters are simultaneously inferred by solving a constrained ℓ1-norm optimization problem, which minimizes the error between the misalignment-amended image and the image reconstructed from the given subspace along with its principal complementary subspace. A byproduct of this formulation is the capability to detect the underlying image occlusions. Extensive experiments on spatial misalignment estimation, image occlusion detection, and face recognition with spatial misalignments and image occlusions all validate the effectiveness of our proposed general formulation.
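The abstract's constrained ℓ1-norm program also optimizes misalignment parameters jointly, which is beyond a short sketch; below is only the generic ℓ1 least-squares machinery such formulations rest on, solved with plain ISTA (the step size rule and iteration count are assumptions).

```python
import numpy as np

def ista_l1(A, b, lam=0.1, n_iter=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: gradient step on
    the smooth term followed by soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x
```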
Linear Laplacian discrimination for feature extraction
- In CVPR, 2007
"... Discriminant feature extraction plays a fundamental role in pattern recognition. In this paper, we propose the Linear Laplacian Discrimination (LLD) algorithm for discriminant feature extraction. LLD is an extension of Linear Discriminant Analysis (LDA). Our motivation is to address the issue that L ..."
Abstract
-
Cited by 13 (4 self)
- Add to MetaCart
(Show Context)
Discriminant feature extraction plays a fundamental role in pattern recognition. In this paper, we propose the Linear Laplacian Discrimination (LLD) algorithm for discriminant feature extraction. LLD is an extension of Linear Discriminant Analysis (LDA). Our motivation is to address the issue that LDA cannot work well in cases where sample spaces are non-Euclidean. Specifically, we define the within-class scatter and the between-class scatter using similarities based on pairwise distances in sample spaces. Thus the structural information of classes is contained in the within-class and between-class Laplacian matrices, which are free from the metrics of sample spaces. The optimal discriminant subspace can be derived by controlling the structural evolution of the Laplacian matrices. Experiments are performed on the facial database of FRGC version 2. Experimental results show that LLD is effective in extracting discriminant features.
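One plausible reading of the abstract, sketched below: weight the usual LDA scatter contributions by similarities derived from distances in the sample space. The Gaussian weighting is an assumption of this sketch; the paper derives the equivalent within-class and between-class Laplacian matrices from the sample-space metric.

```python
import numpy as np

def lld_directions(X, y, sigma=1.0, n_dims=5):
    """Similarity-weighted LDA scatter (a sketch of the LLD idea)."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sw = np.zeros((d, d)); Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        for x in Xc:
            w = np.exp(-np.sum((x - mu) ** 2) / sigma)  # similarity weight
            diff = (x - mu)[:, None]
            Sw += w * diff @ diff.T
        wb = np.exp(-np.sum((mu - mean) ** 2) / sigma)
        db = (mu - mean)[:, None]
        Sb += len(Xc) * wb * db @ db.T
    evals, V = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return V[:, order[:n_dims]].real
```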
Discriminant graph structures for facial expression recognition
- IEEE Transactions on Multimedia, 2008
"... Abstract—In this paper, a series of advances in elastic graph matching for facial expression recognition are proposed. More specifically, a new technique for the selection of the most discrim-inant facial landmarks for every facial expression (discriminant expression-specific graphs) is applied. Fur ..."
Abstract
-
Cited by 13 (3 self)
- Add to MetaCart
(Show Context)
In this paper, a series of advances in elastic graph matching for facial expression recognition are proposed. More specifically, a new technique for the selection of the most discriminant facial landmarks for every facial expression (discriminant expression-specific graphs) is applied. Furthermore, a novel kernel-based technique for discriminant feature extraction from graphs is presented. This feature extraction technique remedies some of the limitations of typical kernel Fisher discriminant analysis (KFDA), which provides a subspace of very limited dimensionality (i.e., one or two dimensions) in two-class problems. The proposed methods have been applied to the Cohn–Kanade database, on which very good performance has been achieved in a fully automatic manner. Index Terms: elastic graph matching, expandable graphs, Fisher's linear discriminant analysis, kernel techniques.
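For context on the limitation the abstract mentions, here is textbook two-class kernel Fisher discriminant analysis with an RBF kernel, which indeed yields a single discriminant direction for two classes; this is generic KFDA, not the paper's graph-based extension, and the kernel width and regularizer are assumed values.

```python
import numpy as np

def kfda_two_class(X, y, gamma=1.0, reg=1e-6):
    """Two-class KFDA: one discriminant direction in kernel space."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    idx0 = np.where(y == 0)[0]; idx1 = np.where(y == 1)[0]
    m0 = K[:, idx0].mean(axis=1); m1 = K[:, idx1].mean(axis=1)
    N = np.zeros((n, n))
    for idx, m in ((idx0, m0), (idx1, m1)):
        Kc = K[:, idx] - m[:, None]
        N += Kc @ Kc.T                           # within-class scatter
    alpha = np.linalg.solve(N + reg * np.eye(n), m1 - m0)  # single direction
    return alpha  # project a new sample x via sum_i alpha_i * k(x, x_i)
```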