Prince, S., Warrell, J., Elder, J., Felisberti, F.: Tied factor analysis for face recognition across large pose differences. TPAMI (2008)

by S. Prince, J. Warrell, J. Elder, F. Felisberti
Results 1 - 10 of 45

Probabilistic Linear Discriminant Analysis for Inferences About Identity

by Simon J. D. Prince - ICCV, 2007
"... Many current face recognition algorithms perform badly when the lighting or pose of the probe and gallery images differ. In this paper we present a novel algorithm designed for these conditions. We describe face data as resulting from a generative model which incorporates both withinindividual and b ..."
Abstract - Cited by 121 (5 self) - Add to MetaCart
Many current face recognition algorithms perform badly when the lighting or pose of the probe and gallery images differ. In this paper we present a novel algorithm designed for these conditions. We describe face data as resulting from a generative model which incorporates both within-individual and between-individual variation. In recognition we calculate the likelihood that the differences between face images are entirely due to within-individual variability. We extend this to the non-linear case where an arbitrary face manifold can be described and noise is position-dependent. We also develop a “tied” version of the algorithm that allows explicit comparison across quite different viewing conditions. We demonstrate that our model produces state-of-the-art results for (i) frontal face recognition and (ii) face recognition under varying pose.
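
The verification rule sketched in this abstract amounts to a likelihood-ratio test under a linear-Gaussian generative model. The following is a minimal illustrative sketch, not the paper's implementation; it assumes the identity subspace F, within-individual subspace G, noise covariance Sigma and mean mu have already been learned (e.g. by EM), and all names in the snippet are my own.

```python
# Minimal sketch of PLDA-style verification (illustration only). The parameters
# mu, F (identity subspace), G (within-individual subspace) and Sigma (noise
# covariance) are assumed to have been learned beforehand, e.g. with EM.
import numpy as np
from scipy.stats import multivariate_normal

def plda_log_likelihood_ratio(x1, x2, mu, F, G, Sigma):
    """Log of p(x1, x2 | same identity) / [p(x1) p(x2)] under the model
    x = mu + F h + G w + eps, with h shared within an identity."""
    within = G @ G.T + Sigma            # image-specific (within-individual) covariance
    between = F @ F.T                   # identity (between-individual) covariance
    marg = between + within             # marginal covariance of one image

    # Joint covariance of (x1, x2) when the identity factor h is shared.
    joint = np.block([[marg, between],
                      [between, marg]])
    x12 = np.concatenate([x1, x2])
    mu12 = np.concatenate([mu, mu])

    log_same = multivariate_normal.logpdf(x12, mean=mu12, cov=joint)
    log_diff = (multivariate_normal.logpdf(x1, mean=mu, cov=marg)
                + multivariate_normal.logpdf(x2, mean=mu, cov=marg))
    return log_same - log_diff          # > 0 favours "same identity"

# Toy usage with random parameters (dimensions kept tiny so the dense
# covariances above stay cheap to evaluate).
rng = np.random.default_rng(0)
D, q = 6, 2
mu = rng.standard_normal(D)
F = rng.standard_normal((D, q))
G = rng.standard_normal((D, q))
Sigma = 0.1 * np.eye(D)
x1, x2 = rng.standard_normal(D), rng.standard_normal(D)
print(plda_log_likelihood_ratio(x1, x2, mu, F, G, Sigma))
```

For realistic image dimensions such covariances are never formed explicitly; the computation is carried out through the low-dimensional factors, but the decision rule is the same.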

Citation Context

... model from the 2D image and estimate pose and lighting explicitly [3, 2] and (iii) learn a statistical relation between faces viewed under different conditions [9, 12]. In recent work, Prince et al. [14, 15] proposed a novel method for recognition across large pose changes. They proposed a generative model to explain variation in the face data. Some of the variables in the model represented identity and...

Probabilistic models for inference about identity

by Peng Li, Yun Fu, Umar Mohammed, James Elder, Simon J. D. Prince - IEEE TPAMI, 2012
"... Abstract—Many face recognition algorithms use “distance-based ” methods: feature vectors are extracted from each face and distances in feature space are compared to determine matches. In this paper we argue for a fundamentally different approach. We consider each image as having been generated from ..."
Abstract - Cited by 52 (0 self) - Add to MetaCart
Abstract—Many face recognition algorithms use “distance-based” methods: feature vectors are extracted from each face and distances in feature space are compared to determine matches. In this paper we argue for a fundamentally different approach. We consider each image as having been generated from several underlying causes, some of which are due to identity (latent identity variables, or LIVs) and some of which are not. In recognition we evaluate the probability that two faces have the same underlying identity cause. We make these ideas concrete by developing a series of novel generative models which incorporate both within-individual and between-individual variation. We consider both the linear case where signal and noise are represented by a subspace, and the non-linear case where an arbitrary face manifold can be described and noise is position-dependent. We also develop a “tied” version of the algorithm that allows explicit comparison of faces across quite different viewing conditions. We demonstrate that our model produces results that are comparable or better than the state of the art for both frontal face recognition and face recognition under varying pose.
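
The "tied" variant mentioned here (and in the TPAMI 2008 paper indexed by this page) shares the identity variable across viewing conditions while giving each condition its own loading matrices. Below is a hedged sketch of the linear tied case only; the per-pose parameters (mu_j, F_j, G_j, Sigma_j) are assumed to have been estimated elsewhere, and the function name is mine.

```python
# Illustrative sketch of the tied cross-pose comparison: each pose j has its own
# offset and loading matrices (mu_j, F_j, G_j, Sigma_j), but the identity factor
# h is shared across poses. Estimating these parameters (EM over training images
# of the same people seen in both poses) is assumed to have been done elsewhere.
import numpy as np
from scipy.stats import multivariate_normal

def tied_log_likelihood_ratio(x_a, x_b, params_a, params_b):
    """x_a observed in pose a, x_b in pose b; params_* = (mu, F, G, Sigma)."""
    mu_a, F_a, G_a, Sig_a = params_a
    mu_b, F_b, G_b, Sig_b = params_b

    cov_a = F_a @ F_a.T + G_a @ G_a.T + Sig_a     # marginal covariance, pose a
    cov_b = F_b @ F_b.T + G_b @ G_b.T + Sig_b     # marginal covariance, pose b
    cross = F_a @ F_b.T                           # cross-covariance from the shared h

    joint = np.block([[cov_a, cross],
                      [cross.T, cov_b]])
    x = np.concatenate([x_a, x_b])
    mu = np.concatenate([mu_a, mu_b])

    log_same = multivariate_normal.logpdf(x, mean=mu, cov=joint)
    log_diff = (multivariate_normal.logpdf(x_a, mean=mu_a, cov=cov_a)
                + multivariate_normal.logpdf(x_b, mean=mu_b, cov=cov_b))
    return log_same - log_diff
```

It is used exactly like the frontal-only score sketched earlier, except that gallery and probe may come from quite different viewing conditions.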

Citation Context

...ace [14] (ii) create a 3D model from the 2D image and estimate pose and lighting explicitly [5], [4] and (iii) learn a statistical relation between faces viewed under different conditions [16], [27], [34]. 1.2 Probabilistic Face Recognition The aforementioned distance-based models provide a hard matching decision - however, it would be better to assign a posterior probability to each explanation of th...

Bypassing Synthesis: PLS for Face Recognition with Pose, Low-Resolution and Sketch

by Abhishek Sharma, David W Jacobs
"... This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recogniti ..."
Abstract - Cited by 44 (4 self) - Add to MetaCart
This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.
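
A minimal sketch of the common-subspace idea described above, using scikit-learn's PLSCanonical on toy stand-in data; the dimensions, the random data generator and the cosine-similarity matcher are assumptions for illustration, not the authors' pipeline.

```python
# Sketch of the common-subspace idea on toy stand-in data: fit PLS on paired
# training vectors from two "modalities", project test pairs into the shared
# latent space, and match by cosine similarity.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(0)
n_train, n_test, dim = 150, 20, 80
A = 0.05 * rng.standard_normal((dim, dim))        # hypothetical cross-modal coupling

X_train = rng.standard_normal((n_train, dim))                        # modality 1
Y_train = X_train @ A + 0.1 * rng.standard_normal((n_train, dim))    # modality 2

X_test = rng.standard_normal((n_test, dim))                          # gallery
Y_test = X_test @ A + 0.1 * rng.standard_normal((n_test, dim))       # probes

pls = PLSCanonical(n_components=15)
pls.fit(X_train, Y_train)

gal, probe = pls.transform(X_test, Y_test)        # both blocks -> shared latent space
gal /= np.linalg.norm(gal, axis=1, keepdims=True)
probe /= np.linalg.norm(probe, axis=1, keepdims=True)

sim = probe @ gal.T                               # cosine similarity, probe x gallery
rank1 = np.mean(sim.argmax(axis=1) == np.arange(n_test))
print(f"toy rank-1 accuracy: {rank1:.2f}")
```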

Citation Context

...oser to our work in spirit, and have provided valuable inspiration. In [14] (BLM) the authors have used Singular Value Decomposition to derive a common content space for a set of different styles and [12] uses a probabilistic model to generate coupled subspaces for different poses. We discuss [14] further in the next section to provide motivation for our use of PLS, and we also compare experimentally ...

Maximizing Intraindividual Correlations for Face Recognition Across Pose Differences

by Annan Li, Shiguang Shan, Xilin Chen, Wen Gao - IEEE CVPR
"... The variations of pose lead to significant performance decline in face recognition systems, which is a bottleneck in face recognition. A key problem is how to measure the similarity between two image vectors of unequal length that viewed from different pose. In this paper, we propose a novel approac ..."
Abstract - Cited by 14 (3 self) - Add to MetaCart
The variations of pose lead to a significant performance decline in face recognition systems, which is a bottleneck in face recognition. A key problem is how to measure the similarity between two image vectors of unequal length that are viewed from different poses. In this paper, we propose a novel approach for pose-robust face recognition, in which the similarity is measured by correlations in a media subspace between different poses at the patch level. The media subspace is constructed by Canonical Correlation Analysis, such that the intra-individual correlations are maximized. Based on the media subspace, two recognition approaches are developed. In the first, we transform the non-frontal face into a frontal one for recognition. In the second, we perform recognition in the media subspace with probabilistic modeling. The experimental results on the FERET database demonstrate the efficiency of our approach.
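
A toy sketch of the CCA-based intermediate-subspace idea: the paper works on local patches, whereas this sketch uses whole feature vectors and random stand-in data purely for illustration, and the matching rule is an assumption of mine.

```python
# Toy sketch of the CCA-based intermediate subspace: learn projections for two
# poses that maximise intra-individual correlation, then match probes to the
# gallery by correlation in that subspace.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_train, n_test, dim = 150, 25, 60
coupling = 0.05 * rng.standard_normal((dim, dim))

F_train = rng.standard_normal((n_train, dim))                             # frontal pose
P_train = F_train @ coupling + 0.1 * rng.standard_normal((n_train, dim))  # other pose

cca = CCA(n_components=10)
cca.fit(F_train, P_train)

F_test = rng.standard_normal((n_test, dim))
P_test = F_test @ coupling + 0.1 * rng.standard_normal((n_test, dim))

gal, probe = cca.transform(F_test, P_test)        # project both poses into the subspace
corr = np.corrcoef(probe, gal)[:n_test, n_test:]  # probe-vs-gallery correlations
print("toy rank-1 accuracy:",
      np.mean(corr.argmax(axis=1) == np.arange(n_test)))
```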

Citation Context

...rform the verification using a Bayesian classifier based on mixtures of Gaussians. Similarly, Lee and Kim [10] transform the non-frontal image to frontal in a linear feature space. Recently Prince et al. [15] propose a new algorithm based on learning the tied factors between different viewpoints. Based on these factors, recognition is performed with probabilistic distance metric modeling. Since local pat...

Coupled bias-variance tradeoff for cross-pose face recognition

by Annan Li, Shiguang Shan, Wen Gao - IEEE Transactions on Image Processing, 2012
"... Abstract—Subspace-based face representation can be looked as a regression problem. From this viewpoint, we first revisited the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach for cross-pose face recognition using a regr ..."
Abstract - Cited by 13 (0 self) - Add to MetaCart
Abstract—Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach for cross-pose face recognition using a regressor with a coupled bias–variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be made more stable against a pose difference. With this basic idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias–variance tradeoff can achieve considerable improvement in recognition performance. Index Terms—Bias–variance tradeoff, face recognition, LASSO regression, pose differences, ridge regression.
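
One way to make the regression view concrete is to code each face by ridge-regression coefficients on a basis, with the regularisation weight alpha setting the bias-variance balance. The sketch below only illustrates that tradeoff on toy data; the shared basis B and the cosine comparison of codes are assumptions, not the paper's coupled formulation.

```python
# Illustrative sketch only: coding a face as ridge-regression coefficients on a
# basis, where the regularisation strength alpha sets the bias-variance balance.
import numpy as np

def ridge_code(x, B, alpha):
    """Ridge-regression coefficients representing image x in basis B."""
    d = B.shape[1]
    return np.linalg.solve(B.T @ B + alpha * np.eye(d), B.T @ x)

rng = np.random.default_rng(2)
dim, n_basis = 100, 30
B = rng.standard_normal((dim, n_basis))          # shared basis (stand-in)

x_frontal = rng.standard_normal(dim)
x_profile = x_frontal + 0.3 * rng.standard_normal(dim)   # same person, other pose

# A larger alpha (more bias, less variance) can make the code more stable to the
# pose perturbation; the paper couples this tradeoff across the two poses.
for alpha in (0.1, 10.0):
    c1 = ridge_code(x_frontal, B, alpha)
    c2 = ridge_code(x_profile, B, alpha)
    cos = c1 @ c2 / (np.linalg.norm(c1) * np.linalg.norm(c2))
    print(f"alpha={alpha:5.1f}  cosine similarity of codes: {cos:.3f}")
```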

Citation Context

...g part of the complete model. The coefficients of the complete appearance model are estimated from a partial model. Face recognition is performed by matching the coefficients. Recently, Prince et al. [12] have represented faces across pose differences using factor analysis. Factors in different poses are tied to construct a hidden “identity subspace” invariant to pose variations. Recognition is perfor...

Visio-lization: Generating Novel Facial Images

by Umar Mohammed, Simon J. D. Prince, Jan Kautz
"... Figure 1: We aim to learn a model of facial images (including hair, eyes, beards etc.) and use this to generate new samples (A and B). The results do not resemble any of the training faces, but are realistic and incorporate variation in sex, age, pose, illumination, hairstyle and other factors. We a ..."
Abstract - Cited by 12 (2 self) - Add to MetaCart
Figure 1: We aim to learn a model of facial images (including hair, eyes, beards etc.) and use this to generate new samples (A and B). The results do not resemble any of the training faces, but are realistic and incorporate variation in sex, age, pose, illumination, hairstyle and other factors. We also describe methods to edit real faces (C and D) by inpainting large regions (E) or changing expression (F). Our goal is to generate novel realistic images of faces using a model trained from real examples. This model consists of two components: First we consider face images as samples from a texture with spatially varying statistics and describe this texture with a local non-parametric model. Second, we learn a parametric global model of all of the pixel values. To generate realistic faces, we combine the strengths of both approaches and condition the local non-parametric model on the global parametric model. We demonstrate that with appropriate choice of local and global models it is possible to reliably generate new realistic face images that do not correspond to any individual in the training data. We extend the model to cope with considerable intra-class variation (pose and illumination). Finally, we apply our model to editing real facial images: we demonstrate image in-painting, interactive techniques for improving synthesized images and modifying facial expressions.
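
A heavily simplified sketch of the "condition the local non-parametric model on the global parametric model" idea described above: sample global structure from a PCA model, then replace each patch with its nearest training patch. Patch blending, overlap handling and real training data are omitted; everything here is a toy stand-in, not the authors' system.

```python
# Very simplified sketch: sample global structure from a PCA model of whole
# images, then fill each patch with its nearest neighbour from a training-patch
# library, so the local non-parametric texture follows the global sample.
import numpy as np

rng = np.random.default_rng(3)
H = W = 32
P = 8                                    # patch size
n_train = 40

faces = rng.random((n_train, H * W))     # stand-in "training faces"

# Global parametric model: PCA via SVD of the training set.
mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)
k = 10
coeffs = rng.standard_normal(k) * (S[:k] / np.sqrt(n_train))
global_sample = (mean + coeffs @ Vt[:k]).reshape(H, W)

# Local non-parametric model: for each patch of the global sample, copy the
# most similar training patch (nearest neighbour in pixel space).
train_imgs = faces.reshape(n_train, H, W)
out = np.zeros((H, W))
for y in range(0, H, P):
    for x in range(0, W, P):
        target = global_sample[y:y+P, x:x+P]
        cands = train_imgs[:, y:y+P, x:x+P]
        best = np.argmin(((cands - target) ** 2).sum(axis=(1, 2)))
        out[y:y+P, x:x+P] = cands[best]
# `out` is the synthesised toy sample: global layout, locally copied texture.
```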

Morphable displacement field based image matching for face recognition across pose

by Shaoxin Li, Xin Liu, Xiujuan Chai, Haihong Zhang, Shihong Lao, Shiguang Shan - In Computer Vision–ECCV 2012, 2012
"... Abstract. Fully automatic Face Recognition Across Pose (FRAP) is one of the most desirable techniques, however, also one of the most challenging tasks in face recognition field. Matching a pair of face images in different poses can be converted into matching their pixels corresponding to the same s ..."
Abstract - Cited by 8 (2 self) - Add to MetaCart
Abstract. Fully automatic Face Recognition Across Pose (FRAP) is one of the most desirable techniques, but also one of the most challenging tasks, in the face recognition field. Matching a pair of face images in different poses can be converted into matching their pixels corresponding to the same semantic facial point. Following this idea, given two images G and P in different poses, we propose a novel method, named Morphable Displacement Field (MDF), to match G with P's virtual view under G's pose. By formulating MDF as a convex combination of a number of template displacement fields generated from a 3D face database, our model satisfies both global conformity and local consistency. We further present an approximate but effective solution of the proposed MDF model, named implicit Morphable Displacement Field (iMDF), which synthesizes the virtual view implicitly via an MDF by minimizing the matching residual. This formulation not only avoids intractable optimization of the high-dimensional displacement field but also facilitates a constrained quadratic optimization. The proposed method can work well even when only 2 facial landmarks are labeled, which makes it especially suitable for fully automatic FRAP systems. Extensive evaluations on the FERET, PIE and Multi-PIE databases show considerable improvement over state-of-the-art FRAP algorithms in both semi-automatic and fully automatic evaluation protocols.
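
A hedged sketch of the constrained fit over convex combinations of template displacement fields. It relies on a simplifying assumption of this sketch, not of the paper: the warp is linearised, so warping P by a combined field is approximated by the same convex combination of individually warped probes, which reduces the fit to a small constrained quadratic problem on the simplex.

```python
# Hedged sketch: fit convex weights over per-template warped probes by minimising
# the matching residual against the target image G, subject to w >= 0, sum(w) = 1.
import numpy as np
from scipy.optimize import minimize

def fit_convex_weights(G, warped_candidates):
    """G: target image vector; warped_candidates: (K, n_pixels) array, one row
    per template displacement field applied to the probe P (precomputed)."""
    K = warped_candidates.shape[0]

    def residual(w):
        return np.sum((G - w @ warped_candidates) ** 2)

    w0 = np.full(K, 1.0 / K)
    res = minimize(residual, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * K,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Toy usage with random stand-in data.
rng = np.random.default_rng(4)
K, n_pixels = 5, 400
cands = rng.random((K, n_pixels))
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
G = true_w @ cands
print(np.round(fit_convex_weights(G, cands), 2))
```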

Citation Context

...roposed an eigen light-field model in which faces under different poses were represented as part of a global model containing all available pose variations. The global model could be estimated from a partial model (i.e., a face under some pose) and used as a pose-invariant feature for face recognition. Chai et al. [7] adopted a linear regression model to estimate densely sampled overlapping virtual frontal patches from corresponding non-frontal patches. Then, all virtual frontal patches were combined by averaging the overlapping pixels to form the virtual frontal face image for recognition. Prince et al. [8] exploited a factor analysis model to represent faces in varying pose. Factors in different poses were tied to construct a pose-invariant “identity subspace” for final recognition. Fundamentally speaking, the grand challenge in FRAP is a severe misalignment problem caused by the complex 3D structure of the human face, i.e., the same facial point in 3D is projected to very different positions in the images of different poses in 2D. So, essentially, all FRAP methods implicitly or explicitly handle pose variation by matching pixels in 2D face images of different poses to the same semantic 3D facial poi...

Face recognition at a distance system for surveillance applications

by Frederick W. Wheeler, Richard L. Weiss, Peter H. Tu - in Proc. IEEE Conf. Biometrics: Theory, Applications and Systems (BTAS)
"... Abstract-Face recognition at a distance is concerned with the automatic recognition of non-cooperative subjects over a wide area. This remote biometric collection and identification problem can be addressed with an active vision system where people are detected and tracked with wide-field-of-view c ..."
Abstract - Cited by 5 (0 self) - Add to MetaCart
Abstract-Face recognition at a distance is concerned with the automatic recognition of non-cooperative subjects over a wide area. This remote biometric collection and identification problem can be addressed with an active vision system where people are detected and tracked with wide-field-of-view cameras and near-field-of-view pan-tilt-zoom cameras are automatically controlled to collect high-resolution facial images. We have developed a prototype active-vision face recognition at a distance system that we call the Biometric Surveillance System. In this paper we review related prior work, describe the design and operation of this system, and provide experimental performance results. The system features predictive subject targeting and an adaptive target selection mechanism based on the current actions and history of each tracked subject to help ensure that facial images are captured for all subjects in view. Experimental tests designed to simulate operation in large transportation hubs show that the system can track subjects and capture facial images at distances of 25-50 m and can recognize them using a commercial face recognition system at a distance of 15-20 m.

Citation Context

...also been developed by Marchesotti et al. [4], with persons detected in the WFOV video using a blob detector and a NFOV camera that is panned and tilted to acquire short video clips of subject faces. A face cataloger system has been developed and described by Hampapur et al. [5], [6]. For person detection, this system uses two geometrically calibrated WFOV cameras with overlapping views of a 6 m by 6 m capture area. A 2D multi-blob tracker is applied to each WFOV camera view and a 3D multi-blob tracker uses these outputs to determine 3D head locations in a real-world coordinate system. Prince [7], [8], Elder [9] et al. have developed an approach to face capture at a distance with a goal of being robust to pose and partial occlusion of subjects. They make use of a stationary WFOV camera with a 135◦ field of view and a NFOV camera with a 13◦ field of view. For robustness to occlusion, faces are detected in the WFOV camera view instead of whole bodies. Faces are detected using a combination of motion detection, background modeling and skin detection. The NFOV PTZ camera is then directed to the detected faces for higher resolution facial image capture. Bellotto et al. [10] describe an arc...

Parametric manifold of an object under different viewing directions

by Xiaozheng Zhang, Yongsheng Gao, Terry Caelli - In ECCV, 2012
"... Abstract. The appearance of a 3D object depends on both the viewing direc-tions and illumination conditions. It is proven that all n-pixel images of a con-vex object with Lambertian surface under variable lighting from infinity form a convex polyhedral cone (called illumination cone) in n-dimensiona ..."
Abstract - Cited by 5 (0 self) - Add to MetaCart
Abstract. The appearance of a 3D object depends on both the viewing directions and illumination conditions. It is proven that all n-pixel images of a convex object with a Lambertian surface under variable lighting from infinity form a convex polyhedral cone (called the illumination cone) in n-dimensional space. This paper tries to answer the other half of the question: What is the set of images of an object under all viewing directions? A novel image representation is proposed, which transforms any n-pixel image of a 3D object to a vector in a 2n-dimensional pose space. In such a pose space, we prove that the transformed images of a 3D object under all viewing directions form a parametric manifold in a 6-dimensional linear subspace. With in-depth rotations along a single axis in particular, this manifold is an ellipse. Furthermore, we show that this parametric pose manifold of a convex object can be estimated from a few images in different poses and used to predict the object's appearances under unseen viewing directions. These results immediately suggest a number of approaches to object recognition, scene detection, and 3D modelling. Experiments on both synthetic data and real images are reported, which demonstrate the validity of the proposed representation.

Citation Context

...es to simplify the pattern recognition problems under viewpoint variations. Because viewpoint and pose have the same effect in images of an object, this paper uses them interchangeably. Prince et al. [11] approximated pose variations in image space as non-linear transformations. Active appearance models [7] and eigen light fields [9] predicted novel appearances of a human face from exemplar appearance...

Robust pose invariant face recognition using coupled latent space discriminant analysis

by Abhishek Sharma, Murad Al Haj, Jonghyun Choi, Larry S. Davis, David W. Jacobs - CVIU
"... We propose a novel pose-invariant face recognition approach which we call Dis-criminant Multiple Coupled Latent Subspace framework. It finds sets of pro-jection directions for different poses such that the projected images of the same subject are maximally correlated in the latent space. Discriminan ..."
Abstract - Cited by 4 (0 self) - Add to MetaCart
We propose a novel pose-invariant face recognition approach which we call the Discriminant Multiple Coupled Latent Subspace framework. It finds sets of projection directions for different poses such that the projected images of the same subject are maximally correlated in the latent space. Discriminant analysis with artificially simulated pose errors in the latent space makes it robust to small pose errors caused by incorrect estimation of a subject's pose. We do a comparative analysis of three popular learning approaches: Partial Least Squares (PLS), Bilinear Model (BLM) and Canonical Correlation Analysis (CCA) in the proposed coupled latent subspace framework. We also show that using more than two poses simultaneously with CCA results in better performance. We report state-of-the-art results for pose-invariant face recognition on CMU PIE and FERET and comparable results on MultiPIE when using only 4 fiducial points and intensity features.
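
A toy sketch of the general recipe described above: couple two poses with CCA, add jittered copies of the latent features to mimic the artificially simulated pose errors, then learn a discriminant projection. The dimensions, noise levels and the use of scikit-learn's CCA and LDA are assumptions for illustration, not the authors' exact framework.

```python
# Hedged sketch: couple two poses with CCA, perturb latent features to simulate
# small pose errors, then apply LDA on identity labels in the latent space.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(5)
n_id, per_id, dim = 30, 4, 60
ids = np.repeat(np.arange(n_id), per_id)
identity = rng.standard_normal((n_id, dim))[ids]              # identity signal
X_a = identity + 0.2 * rng.standard_normal(identity.shape)    # pose A images
X_b = identity @ (np.eye(dim) + 0.05 * rng.standard_normal((dim, dim))) \
      + 0.2 * rng.standard_normal(identity.shape)             # pose B images

cca = CCA(n_components=15)
cca.fit(X_a, X_b)
Z_a, Z_b = cca.transform(X_a, X_b)

# Pool both poses in the latent space and add jittered copies to mimic the
# "artificially simulated pose errors" idea, then learn LDA on identity labels.
Z = np.vstack([Z_a, Z_b])
y = np.concatenate([ids, ids])
Z_aug = np.vstack([Z, Z + 0.05 * rng.standard_normal(Z.shape)])
y_aug = np.concatenate([y, y])

lda = LinearDiscriminantAnalysis()
lda.fit(Z_aug, y_aug)
print("training accuracy on latent features:", lda.score(Z, y))
```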

Citation Context

...t identities in training data and testing data are different and mutually exclusive. These assumptions are quite standard for learning-based methods and have been used by many researchers in the past [14, 15, 38, 22, 12, 33, 39, 2, 40, 13, 9]. Our simple PLS-based framework worked well for the CMU PIE dataset, which has face images in a tightly controlled acquisition scenario that ensures that the ground-truth poses are very close to the actual...
