Results 1 - 10 of 25
Boosting 3D-Geometric Features for Efficient Face Recognition and Gender Classification
- IEEE Transactions on Information Forensics & Security
"... HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci-entific research documents, whether they are pub-lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte p ..."
Cited by 6 (4 self)
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Learning race from face: A survey
- IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2014
"... Abstract—Faces convey a wealth of social signals, including race, expression, identity, age and gender, all of which have attracted increasing attention from multi-disciplinary research, such as psychology, neuroscience, computer science, to name a few. Gleaned from recent advances in computer visio ..."
Cited by 4 (0 self)
Abstract—Faces convey a wealth of social signals, including race, expression, identity, age, and gender, all of which have attracted increasing attention from multi-disciplinary research fields such as psychology, neuroscience, and computer science, to name a few. Drawing on recent advances in computer vision, computer graphics, and machine learning, computational intelligence-based racial face analysis has been particularly popular due to its significant potential and broad impact in real-world applications such as security and defense, surveillance, human-computer interface (HCI), and biometric-based identification, among others. These studies raise an important question: how can an implicit, non-declarative racial category be conceptually modeled and quantitatively inferred from the face? Race classification is nevertheless challenging because of its ambiguity and its dependence on context and criteria. To address this challenge, significant efforts toward race detection and categorization have recently been reported in the community. This survey provides a comprehensive and critical review of the state-of-the-art advances in face-race perception, principles, algorithms, and applications. We first discuss the formulation and motivation of the race perception problem, while highlighting the conceptual potential of racial face processing. Next, a taxonomy of feature representation models, algorithms, performance, and racial databases is presented, with systematic discussions within a unified learning scenario. Finally, in order to stimulate future research in this field, we also highlight the major opportunities and challenges, as well as potentially important cross-cutting themes and research directions, for the issue of learning race from face.
Superfaces: A Super-Resolution Model for 3D Faces
"... Abstract. Face recognition based on the analysis of 3D scans has been an active research subject over the last few years. However, the impact of the resolution of 3D scans on the recognition process has not been addressed explicitly yet being of primal importance after the introduction of a new gene ..."
Cited by 3 (2 self)
Abstract. Face recognition based on the analysis of 3D scans has been an active research subject over the last few years. However, the impact of the resolution of 3D scans on the recognition process has not yet been addressed explicitly, despite being of primary importance after the introduction of a new generation of low-cost 4D scanning devices. These devices are capable of combined depth/RGB acquisition over time, but with a low resolution compared to the 3D scanners typically used in 3D face recognition benchmarks. In this paper, we define a super-resolution model for 3D faces by which a sequence of low-resolution 3D scans can be processed to extract a higher-resolution 3D face model, namely the superface model. The proposed solution relies on the Scaled ICP procedure to align the low-resolution 3D models with each other and estimates the values of the high-resolution 3D model from the statistics of the values of the low-resolution scans at corresponding points. The approach is validated on a data set that includes, for each subject, one sequence of low-resolution 3D face scans and one ground-truth high-resolution 3D face model acquired with a high-resolution 3D scanner. In this way, the results of the super-resolution process are evaluated both qualitatively and quantitatively by measuring the error between the superface and the ground truth.
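The following is a minimal sketch of the per-point statistics step described above, assuming the low-resolution scans have already been aligned to a common frame (the Scaled ICP alignment itself is not shown); the grid resolution and the use of the median are illustrative choices, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation): assumes the low-resolution
# scans have already been aligned to a common frame (e.g., by a scaled ICP step,
# which is not shown here). Each scan is an (N_i, 3) array of x, y, z points;
# the "superface" is estimated as a dense depth map on a fine x-y grid, taking
# the median z of all aligned points that fall into each cell.
import numpy as np

def superface_from_aligned_scans(scans, grid_step=0.5):
    pts = np.vstack(scans)                      # all aligned points, shape (N, 3)
    x_min, y_min = pts[:, 0].min(), pts[:, 1].min()
    ix = np.floor((pts[:, 0] - x_min) / grid_step).astype(int)
    iy = np.floor((pts[:, 1] - y_min) / grid_step).astype(int)
    depth = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    # Median of the z-values of all scans contributing to each grid cell.
    for cell in set(zip(ix, iy)):
        mask = (ix == cell[0]) & (iy == cell[1])
        depth[cell] = np.median(pts[mask, 2])
    return depth
```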
Robust learning from normals for 3D face recognition
- In ECCV-W
, 2012
"... Abstract. We introduce novel subspace-based methods for learning from the az-imuth angle of surface normals for 3D face recognition. We show that the nor-mal azimuth angles combined with Principal Component Analysis (PCA) using a cosine-based distance measure can be used for robust face recognition ..."
Cited by 3 (1 self)
Abstract. We introduce novel subspace-based methods for learning from the azimuth angle of surface normals for 3D face recognition. We show that the normal azimuth angles, combined with Principal Component Analysis (PCA) using a cosine-based distance measure, can be used for robust face recognition from facial surfaces. The proposed algorithms are well suited to all types of 3D facial data, including data produced by range cameras (depth images), photometric stereo (PS), and shape-from-X (SfX) algorithms. We demonstrate the robustness of the proposed algorithms both in 3D face reconstruction from synthetically occluded samples and in face recognition using the FRGC v2 3D face database and the recently collected Photoface database, where the proposed method achieves state-of-the-art results. An important aspect of our method is that it can achieve good face recognition/verification performance by using raw 3D scans without any heavy preprocessing (e.g., model fitting, surface smoothing, etc.).
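As a rough illustration of the kind of pipeline described (not the paper's exact method), the sketch below maps each normal's azimuth angle to a (cos, sin) pair to handle its circular nature, learns a PCA subspace with a plain SVD, and matches probes to the gallery with cosine similarity; the data shapes and the circular embedding are assumptions.

```python
# Rough sketch under stated assumptions (not the paper's exact pipeline):
# normals is an (N, H, W, 3) array of per-pixel unit surface normals for N
# faces. The azimuth angle of each normal is mapped to (cos, sin), a PCA
# subspace is learned with a plain SVD, and probes are matched to the gallery
# with cosine similarity.
import numpy as np

def azimuth_features(normals):
    phi = np.arctan2(normals[..., 1], normals[..., 0])          # azimuth angles
    feats = np.stack([np.cos(phi), np.sin(phi)], axis=-1)
    return feats.reshape(normals.shape[0], -1)                  # (N, 2*H*W)

def fit_pca(X, n_components=50):
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]                               # basis rows

def cosine_match(probe, gallery, mean, basis):
    p = basis @ (probe - mean)                                   # project probe
    G = (gallery - mean) @ basis.T                               # project gallery
    sims = (G @ p) / (np.linalg.norm(G, axis=1) * np.linalg.norm(p) + 1e-12)
    return int(np.argmax(sims))                                  # best gallery index
```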
Distinguishing Facial Features for Ethnicity-Based 3D Face Recognition
"... Among different approaches for 3D face recognition, solutions based on local facial characteristics are very promising, mainly because they can manage facial expression variations by assigning different weights to different parts of the face. However, so far, a few works have investigated the indivi ..."
Cited by 2 (0 self)
Among the different approaches to 3D face recognition, solutions based on local facial characteristics are very promising, mainly because they can manage facial expression variations by assigning different weights to different parts of the face. However, so far, only a few works have investigated the individual relevance that local features play in 3D face recognition, and very simple solutions have been applied in practice. In this article, a local approach to 3D face recognition is combined with a feature selection model to study the relative relevance of different regions of the face for the purpose of discriminating between different subjects. The proposed solution is evaluated using facial scans from the Face Recognition Grand Challenge dataset. The results of the experiments are two-fold: they quantitatively support the assumption that different regions of the face have different relevance for face discrimination, and they show that the relevance of facial regions changes across ethnic groups.
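One simple way to estimate per-region relevance of the kind studied here is a greedy forward selection over region-level match scores; the sketch below is only illustrative, and the score format and selection rule are assumptions rather than the article's actual feature selection model.

```python
# Illustrative sketch only (not the article's model): region_scores[r, p, g] is
# the similarity between probe p and gallery subject g computed on facial
# region r, and labels[p] is the correct gallery index. Regions are added
# greedily as long as they improve rank-1 accuracy, giving a crude estimate of
# each region's relevance.
import numpy as np

def rank1_accuracy(score_pg, labels):
    return float(np.mean(np.argmax(score_pg, axis=1) == labels))

def select_regions(region_scores, labels):
    selected, best_acc = [], 0.0
    remaining = list(range(region_scores.shape[0]))
    while remaining:
        accs = [rank1_accuracy(region_scores[selected + [r]].sum(axis=0), labels)
                for r in remaining]
        if max(accs) <= best_acc:
            break                              # no remaining region improves accuracy
        r_best = remaining[int(np.argmax(accs))]
        best_acc = max(accs)
        selected.append(r_best)
        remaining.remove(r_best)
    return selected, best_acc
```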
3D Face Recognition Based on Multiple Keypoint Descriptors and Sparse Representation
, 2014
"... Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle ..."
Cited by 2 (1 self)
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to the 3D face recognition problem that makes use of multiple keypoint descriptors (MKD) and sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted at keypoints by meshSIFT. The descriptor vectors of the gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are first extracted and its identity is then determined using a multitask SRC. The proposed 3DMKDSRC approach does not require pre-alignment between two face scans and is quite robust to missing data, occlusions, and expressions. Its superiority over other leading 3D face recognition schemes has been corroborated by extensive experiments on three benchmark databases: Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at
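A much-simplified stand-in for this kind of descriptor-dictionary SRC is sketched below: the meshSIFT extraction step is not shown, each probe descriptor is sparsely coded over the gallery dictionary with plain orthogonal matching pursuit rather than the multitask formulation, and the per-descriptor voting rule is an assumption.

```python
# Simplified stand-in, not the paper's implementation: gallery_desc is a (D, M)
# dictionary whose columns are keypoint descriptors from all gallery scans, and
# gallery_labels (length M) gives the subject of each column. Each probe
# descriptor is sparsely coded over the dictionary, per-subject reconstruction
# residuals are computed, and the probe votes for the subject with the smallest
# residual.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_identify(probe_descs, gallery_desc, gallery_labels, sparsity=10):
    gallery_labels = np.asarray(gallery_labels)
    subjects = np.unique(gallery_labels)
    votes = np.zeros(len(subjects))
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    for y in probe_descs:                          # one keypoint descriptor at a time
        omp.fit(gallery_desc, y)
        coef = omp.coef_
        residuals = []
        for s in subjects:                         # residual using only subject s's atoms
            mask = gallery_labels == s
            recon = gallery_desc[:, mask] @ coef[mask]
            residuals.append(np.linalg.norm(y - recon))
        votes[int(np.argmin(residuals))] += 1
    return subjects[int(np.argmax(votes))]
```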
Cortical 3D Face Recognition Framework
"... Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. In cortical area V1 exist double-opponent colour blobs, also simple, complex and end-stopped cells which provide input for a multiscale line/edge representation, keypoints for ..."
Empirical studies of face recognition suggest that faces may be stored in memory as a few canonical representations. Cortical area V1 contains double-opponent colour blobs, as well as simple, complex, and end-stopped cells, which provide input for a multiscale line/edge representation, keypoints for dynamic routing, and saliency maps for Focus-of-Attention. Combined, these allow us to segregate faces. Events from different facial views are stored in memory and combined in order to identify the view and recognise the face, including its facial expression. In this paper we show that, with five 2D views and their cortical representations, it is possible to determine left-right and frontal-lateral-profile views and to achieve view-invariant recognition of 3D faces.
Sub-Holistic Hidden Markov Model for Face Recognition
"... Abstract In this paper, a face recognition technique " Sub-Holistic Hidden Markov Model" has ..."
Abstract
- Add to MetaCart
Abstract. In this paper, a face recognition technique "Sub-Holistic Hidden Markov Model" has ...
Cross-Modality 2D-3D Face Recognition via Multiview Smooth Discriminant Analysis Based on ELM
"... In recent years, 3D face recognition has attracted increasing attention from worldwide researchers. Rather than homogeneous face data, more and more applications require flexible input face data nowadays. In this paper, we propose a new approach for cross-modality 2D-3D face recognition (FR), which ..."
In recent years, 3D face recognition has attracted increasing attention from researchers worldwide. Rather than homogeneous face data, more and more applications nowadays require flexible input face data. In this paper, we propose a new approach for cross-modality 2D-3D face recognition (FR), called Multiview Smooth Discriminant Analysis (MSDA) based on Extreme Learning Machines (ELM). By adding a Laplacian penalty constraint to the multiview feature learning, MSDA is proposed to extract cross-modality 2D-3D face features. MSDA aims at finding a common discriminative feature space based on multiview learning, so that it can fully utilize the underlying relationship between features from different views. To speed up the learning phase of the classifier, the recently popular Extreme Learning Machine (ELM) algorithm is adopted to train single hidden layer feedforward neural networks (SLFNs). To evaluate the effectiveness of the proposed FR framework, experimental results on a benchmark face recognition dataset are presented. Simulations show that the proposed method generally outperforms several recent approaches while offering fast training.
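Only the ELM classifier stage lends itself to a compact sketch; the MSDA feature-learning step is omitted here. In the sketch below the hidden-layer weights are random and fixed and only the output weights are solved in closed form, which is what gives ELM its fast training; the sigmoid activation and data shapes are assumptions rather than details from the paper.

```python
# Minimal ELM classifier sketch (the MSDA feature-learning step is not shown):
# a single hidden layer feedforward network whose input weights are random and
# whose output weights are obtained with a pseudo-inverse in one step.
import numpy as np

class ELM:
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))   # sigmoid activations

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        self.classes_, y_idx = np.unique(y, return_inverse=True)
        T = np.eye(len(self.classes_))[y_idx]                  # one-hot targets
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T                      # closed-form output weights
        return self

    def predict(self, X):
        scores = self._hidden(np.asarray(X, dtype=float)) @ self.beta
        return self.classes_[np.argmax(scores, axis=1)]
```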