Results 11–20 of 211
Domain Adaptation with Multiple Sources
Abstract

Cited by 56 (4 self)
This paper presents a theoretical analysis of the problem of domain adaptation with multiple sources. For each source domain, the distribution over the input points as well as a hypothesis with error at most ε are given. The problem consists of combining these hypotheses to derive a hypothesis with small error with respect to the target domain. We present several theoretical results relating to this problem. In particular, we prove that standard convex combinations of the source hypotheses may in fact perform very poorly and that, instead, combinations weighted by the source distributions benefit from favorable theoretical guarantees. Our main result shows that, remarkably, for any fixed target function, there exists a distribution-weighted combining rule that has a loss of at most ε with respect to any target mixture of the source distributions. We further generalize the setting from a single target function to multiple consistent target functions and show the existence of a combining rule with error at most 3ε. Finally, we report empirical results for a multiple-source adaptation problem with a real-world dataset.
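The distribution-weighted combining rule the guarantees above concern can be sketched as follows. This is a minimal illustration with assumed interfaces (callable per-source densities and hypotheses), not the authors' code:

```python
def combine(x, densities, hypotheses, weights):
    """Distribution-weighted combination: hypothesis k is weighted by
    weights[k] * densities[k](x), normalized over all sources."""
    z = sum(w * d(x) for w, d in zip(weights, densities))
    if z == 0.0:
        return 0.0  # no source puts mass on x; a convention only
    return sum(w * d(x) * h(x)
               for w, d, h in zip(weights, densities, hypotheses)) / z

# toy usage: two sources with constant densities and hypotheses
densities = [lambda x: 1.0, lambda x: 3.0]
hypotheses = [lambda x: 0.0, lambda x: 1.0]
print(combine(0.0, densities, hypotheses, [0.5, 0.5]))  # 0.75
```

Note how the source whose distribution puts more mass on x dominates the vote there, which is the intuition behind the paper's contrast with plain convex combinations.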
Recognizing partially occluded, expression variant faces from single training image per person with SOM and soft kNN ensemble
 IEEE Transactions on Neural Networks
, 2005
Abstract

Cited by 55 (9 self)
Abstract—Most classical template-based frontal face recognition techniques assume that multiple images per person are available for training, while in many real-world applications only one training image per person is available and the test images may be partially occluded or may vary in expression. This paper addresses those problems by extending a previous local probabilistic approach presented by Martinez, using the Self-Organizing Map (SOM) instead of a mixture of Gaussians to learn the subspace that represents each individual. Based on the localization of the training images, two strategies for learning the SOM topological space are proposed: training a single SOM map for all the samples, and training a separate SOM map for each class. A soft k-nearest-neighbor (soft kNN) ensemble method, which can effectively exploit the outputs of the SOM topological space, is also proposed to identify unlabelled subjects. Experiments show that the proposed method is highly robust to partial occlusion and expression variation. Index Terms—Face recognition, single training image per person, occlusion, facial expression, self-organizing map.
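The soft kNN voting idea can be illustrated with a small sketch. The distance-decay weighting (a softmax over negative distances) is one plausible choice, not necessarily the paper's:

```python
import numpy as np

def soft_knn_predict(query, prototypes, labels, k=3):
    """Soft kNN: the k nearest prototypes vote for their class with
    weights that decay with distance, instead of a hard majority vote."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    nearest = np.argsort(dists)[:k]
    w = np.exp(-dists[nearest])
    w /= w.sum()
    votes = {}
    for i, wi in zip(nearest, w):
        votes[labels[i]] = votes.get(labels[i], 0.0) + wi
    return max(votes, key=votes.get)
```

Because each neighbor contributes a graded weight, a few occluded or atypical SOM nodes cannot flip the decision the way they could under a hard vote.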
Robust Parameterized Component Analysis: Theory and Applications to 2D Facial Modeling
 Computer Vision and Image Understanding, 91:53–71
, 2002
Abstract

Cited by 53 (12 self)
Principal Component Analysis (PCA) has been successfully applied to construct linear models of shape, gray-level, and motion. In particular, PCA has been widely used to model the variation in the appearance of people's faces. We extend previous work on facial modeling for tracking faces in video sequences as they undergo significant changes due to facial expressions. Here we develop person-specific facial appearance models (PSFAM), which use modular PCA to model complex intra-person appearance changes. Such models require aligned visual training data; in previous work, this has involved a time-consuming and error-prone hand alignment and cropping process. Instead, we introduce parameterized component analysis to learn a subspace that is invariant to affine (or higher-order) geometric transformations. The automatic learning of a PSFAM given a training image sequence is posed as a continuous optimization problem and is solved with a mixture of stochastic and deterministic techniques, achieving sub-pixel accuracy.
A GMM parts-based face representation for improved verification through relevance adaptation
 In CVPR
, 2004
Abstract

Cited by 50 (5 self)
Motivated by the success of parts-based representations in face detection, we have attempted to address some of the problems associated with applying such a philosophy to the task of face verification. Hitherto, a major problem with this approach in face verification has been the intrinsic scarcity of training observations from individual subjects with which to estimate the required conditional distributions. The estimated distributions have to generalize enough to encompass the differing permutations of a subject's face yet still be able to discriminate between subjects. In our work the well-known Gaussian mixture model (GMM) framework is employed to model the conditional density function of the parts-based representation of the face. We demonstrate that excellent performance can be obtained from our GMM-based representation through the employment of adaptation theory, specifically relevance adaptation (RA). Our results are presented for the frontal images of the BANCA database.
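The abstract does not spell out the adaptation equations; a common form of relevance adaptation is MAP adaptation of the GMM means (Reynolds-style, as used in speaker verification). The sketch below follows that standard form, which the paper may vary:

```python
import numpy as np

def relevance_adapt_means(ubm_means, data, responsibilities, r=16.0):
    """MAP (relevance) adaptation of GMM means toward client data.
    ubm_means: (K, D) world-model means; data: (N, D) client samples;
    responsibilities: (N, K) component posteriors for each sample;
    r: relevance factor controlling how fast means move toward the data."""
    n = responsibilities.sum(axis=0)                                 # soft counts, (K,)
    ex = responsibilities.T @ data / np.maximum(n, 1e-12)[:, None]   # per-component data mean
    alpha = (n / (n + r))[:, None]                                   # adaptation coefficient
    return alpha * ex + (1.0 - alpha) * ubm_means
```

Components that see little client data (small soft count n_k) keep their world-model means, which is exactly why the approach copes with the scarcity of per-subject observations the abstract describes.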
A Region Ensemble for 3D Face Recognition
, 2008
Abstract

Cited by 48 (2 self)
In this paper, we introduce a new system for 3D face recognition based on the fusion of results from a committee of regions that have been independently matched. Experimental results demonstrate that using 28 small regions on the face allows for the highest level of 3D face recognition. Score-based fusion is performed on the individual region match scores, and experimental results show that the Borda count and consensus voting methods yield higher performance than the standard sum, product, and min fusion rules. In addition, results are reported that demonstrate the robustness of our algorithm by simulating large holes and artifacts in images. To our knowledge, no other work has been published that uses a large number of 3D face regions for high-performance face matching. Rank-one recognition rates of 97.2% and verification rates of 93.2% at a 0.1% false accept rate are reported and compared to other methods published on the Face Recognition Grand Challenge v2 data set.
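Borda-count fusion over region match scores can be sketched as follows. The assumption that a higher score means a better match is ours, not stated in the abstract:

```python
def borda_fuse(region_scores):
    """Borda-count fusion: each region ranks the gallery identities by its
    match score (higher score = better match, an assumption here); an
    identity earns (n - 1 - rank) points per region, and points are summed."""
    totals = {}
    for scores in region_scores:
        ranked = sorted(scores, key=scores.get, reverse=True)
        n = len(ranked)
        for rank, identity in enumerate(ranked):
            totals[identity] = totals.get(identity, 0) + (n - 1 - rank)
    return max(totals, key=totals.get)
```

Because only ranks matter, a single region with a wildly miscalibrated score (say, one covering a hole in the scan) cannot dominate the committee the way it could under sum or product fusion.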
Domain Adaptation: Learning Bounds and Algorithms
Abstract

Cited by 45 (7 self)
This paper addresses the general problem of domain adaptation, which arises in a variety of applications where the distribution of the available labeled sample somewhat differs from that of the test data. Building on previous work by Ben-David et al. (2007), we introduce a novel distance between distributions, the discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive new generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions, for which we also give several algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.
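For reference, the discrepancy distance between distributions P and Q, over a hypothesis set H with loss L, is standardly stated as follows (notation assumed, following the usual form of this definition rather than quoted from the paper):

```latex
\mathrm{disc}_L(P, Q) \;=\; \max_{h, h' \in H}
\left|\, \mathbb{E}_{x \sim P}\!\big[L(h'(x), h(x))\big]
      - \mathbb{E}_{x \sim Q}\!\big[L(h'(x), h(x))\big] \,\right|
```

Restricting the maximum to the hypothesis set H, rather than to all measurable events, is what makes the distance estimable from finite samples and tailored to the learning problem at hand.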
Learning From Examples in the Small Sample Case: Face Expression Recognition
, 2005
Abstract

Cited by 36 (2 self)
Example-based learning for computer vision can be difficult when a large number of examples to represent each pattern or object class is not available. In such situations, learning from a small number of samples is of practical value. To study this issue, the task of face expression recognition with a small number of training images of each expression is considered. A new technique based on linear programming for both feature selection and classifier training is introduced. A pairwise framework for feature selection, instead of using all classes simultaneously, is presented. Experimental results compare the method with three others: a simplified Bayes classifier, a support vector machine, and AdaBoost. Finally, each algorithm is analyzed and a new categorization of these algorithms is given, especially for learning from examples in the small sample case.
Curse of misalignment in face recognition: Problem and a novel misalignment learning solution
 in Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition
Abstract

Cited by 30 (3 self)
In this paper, we present the rarely addressed curse-of-misalignment problem in face recognition and propose a novel misalignment learning solution. The misalignment problem is first investigated empirically by systematically evaluating Fisherface's sensitivity to misalignment on the FERET face database while perturbing the eye coordinates, which reveals that imprecise localization of the facial landmarks sharply degrades the Fisherface system. We explicitly define this problem as the curse of misalignment to highlight its severity. We then analyze the sources of the curse of misalignment and group the possible solutions into three categories: invariant features, misalignment modeling, and alignment retuning. We also propose a set of measurements, combining the recognition rate with the alignment error distribution, to evaluate the overall performance of a face recognition approach with its robustness to misalignment taken into account. Finally, a novel misalignment learning method, named E-Fisherface, is proposed to train the recognizer to model misalignment variations. Experimental results indicate the effectiveness of the proposed E-Fisherface in tackling the curse-of-misalignment problem.
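The eye-coordinate perturbation protocol can be illustrated with a small sketch; the shift magnitude and the (x, y)-pixel coordinate convention are assumptions for illustration, not the paper's settings:

```python
import random

def perturb_eyes(left_eye, right_eye, max_shift=2):
    """Simulate imprecise landmark localization by jittering ground-truth
    eye coordinates by up to max_shift pixels along each axis."""
    def jitter(p):
        return (p[0] + random.randint(-max_shift, max_shift),
                p[1] + random.randint(-max_shift, max_shift))
    return jitter(left_eye), jitter(right_eye)
```

Re-running alignment and recognition on many such jittered coordinate pairs yields the sensitivity curve of the recognizer as a function of localization error.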
Maximum correntropy criterion for robust face recognition
 IEEE Trans. Pattern Anal. Mach. Intell
Abstract

Cited by 28 (9 self)
Abstract—In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we impose, in particular, a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition than the related state-of-the-art methods. In particular, we show that the proposed method can improve both recognition accuracy and receiver operating characteristic (ROC) curves, while the computational cost is much lower than that of the SRC algorithms. Index Terms—Information-theoretic learning, correntropy, linear least squares, half-quadratic optimization, sparse representation, M-estimator, face recognition, occlusion and corruption.
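A minimal sketch of the alternating half-quadratic scheme described above: Gaussian-kernel reweighting of the residuals followed by a weighted nonnegative least-squares solve (here via SciPy's `nnls`). The kernel width `sigma`, the fixed iteration count, and the absence of an explicit sparsity term are simplifications, not the paper's choices:

```python
import numpy as np
from scipy.optimize import nnls

def correntropy_sparse_code(A, b, sigma=1.0, n_iter=10):
    """Half-quadratic maximization of a correntropy objective: alternate
    (1) Gaussian-kernel weights on the residuals, which drive the weight
    of outlier entries toward zero, and (2) a weighted nonnegative
    least-squares solve for the representation x."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        e = A @ x - b
        w = np.exp(-e**2 / (2.0 * sigma**2))  # outliers get ~0 weight
        sw = np.sqrt(w)
        x, _ = nnls(A * sw[:, None], b * sw)  # weighted NNLS step
    return x
```

On a toy system with one grossly corrupted entry in `b`, the reweighting zeroes out that row within a couple of iterations and the clean entries determine `x`, which is the occlusion-robustness mechanism the abstract describes.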
Support vector machines in face recognition with occlusions
 in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition
Abstract

Cited by 27 (0 self)
Support Vector Machines (SVM) are one of the most useful techniques in classification problems. One clear example is face recognition. However, SVM cannot be applied when the feature vectors defining our samples have missing entries. This is clearly the case in face recognition when occlusions are present in the training and/or testing sets. When k features are missing in a sample vector of class 1, these define an affine subspace of k dimensions. The goal of the SVM is to maximize the margin between the vectors of class 1 and class 2 on those dimensions with no missing elements and, at the same time, maximize the margin between the vectors in class 2 and the affine subspace of class 1. This second term of the SVM criterion will minimize the overlap between the classification hyperplane and the subspace of solutions in class 1, because we do not know which values in this subspace a test vector can take. The hyperplane minimizing this overlap is obviously the one parallel to the missing dimensions. However, this condition is too restrictive, because its solution will generally contradict that obtained when maximizing the margin of the visible data. To resolve this problem, we define a criterion which minimizes the probability of overlap. The resulting optimization problem can be solved efficiently, and we show how the global minimum of the error term is guaranteed under mild conditions. We provide extensive experimental results, demonstrating the superiority of the proposed approach over the state of the art.