Robust Face Recognition from Multi-View Videos (IEEE Transactions on Image Processing)
Abstract—Multi-view face recognition has become an active research area in the last few years. In this paper, we present an approach for video-based face recognition in camera networks. Our goal is to handle pose variations by exploiting the redundancy in multi-view video data. However, unlike traditional approaches that explicitly estimate the pose of the face, we propose a novel feature for robust face recognition in the presence of diffuse lighting and pose variations. The proposed feature is developed using the spherical harmonic representation of the face texture-mapped onto a sphere; the texture map itself is generated by back-projecting the multi-view video data. Video plays an important role in this scenario. First, it provides an automatic and efficient way to extract features. Second, the data redundancy makes the recognition algorithm more robust. We measure the similarity between feature sets from different videos in a Reproducing Kernel Hilbert Space (RKHS). We demonstrate that the proposed approach outperforms traditional algorithms on a multi-view video database.
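Comparing feature *sets* (rather than single vectors) in an RKHS can be illustrated with a kernel mean-embedding (MMD-style) distance. This is a generic sketch, not the authors' exact metric; the feature dimension, set sizes, and kernel width are arbitrary assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """RBF (Gaussian) kernel matrix between row-vector sets A and B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma):
    """Squared distance between the kernel mean embeddings of two feature
    sets in the RKHS induced by the RBF kernel (biased estimate)."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
gamma = 1.0 / (2 * 16)                          # kernel width matched to dimension
video_a = rng.normal(0.0, 1.0, size=(50, 16))   # features from one video
video_b = rng.normal(0.0, 1.0, size=(60, 16))   # same underlying distribution
video_c = rng.normal(3.0, 1.0, size=(60, 16))   # a different distribution

# Feature sets drawn from the same distribution sit closer in the RKHS.
print(mmd2(video_a, video_b, gamma) < mmd2(video_a, video_c, gamma))
```

A set-to-set distance of this kind sidesteps frame-to-frame alignment: each video contributes an empirical distribution of features, and only the embedded means are compared.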
Learning from Synthetic Data Using a Stacked Multichannel Autoencoder
Abstract—Learning from synthetic data has many important and practical applications; one example is photo-sketch recognition. Using synthetic data is challenging because of the differences in feature distributions between synthetic and real data, a phenomenon we term the synthetic gap. In this paper, we investigate and formalize a general framework, the Stacked Multichannel Autoencoder (SMCAE), that bridges the synthetic gap and enables learning from synthetic data more efficiently. In particular, we show that SMCAE can not only transform and use synthetic data on the challenging face-sketch recognition task, but can also help simulate real images, which can be used to train classifiers for recognition. Preliminary experiments validate the effectiveness of the framework.
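The "synthetic gap" idea can be made concrete with a much simpler stand-in for the SMCAE: a linear map, fit by least squares on paired data, that pulls synthetic features toward the real-feature distribution. The data, dimensions, and distortion model below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Paired features: synthetic renderings vs. real photos of the same items.
# The "synthetic gap" is modelled here as a fixed linear distortion plus noise.
real = rng.normal(size=(200, 8))
A_true = rng.normal(size=(8, 8))
synthetic = real @ A_true + 0.05 * rng.normal(size=(200, 8))

# Bridge: least-squares linear map from synthetic to real feature space.
W, *_ = np.linalg.lstsq(synthetic, real, rcond=None)
mapped = synthetic @ W

gap_before = np.linalg.norm(synthetic - real)
gap_after = np.linalg.norm(mapped - real)
print(gap_after < gap_before)   # the learned map shrinks the synthetic gap
```

The SMCAE replaces this single linear map with stacked nonlinear encoder/decoder channels, but the objective is the same in spirit: reduce the distribution mismatch before training a recognizer on the transformed synthetic data.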
View-constrained Latent Variable Model for Multi-view Facial Expression Classification
Abstract. We propose a view-constrained latent variable model for multi-view facial expression classification. In this model, we first learn a discriminative manifold shared by multiple views of facial expressions, followed by expression classification in the shared manifold. Learning uses expression data from multiple views, whereas inference is performed using data from a single view. Our experiments on posed and spontaneously displayed facial expressions show that the proposed approach outperforms the state-of-the-art methods for multi-view facial expression classification, as well as several state-of-the-art methods for multi-view learning.
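A minimal way to see what "a manifold shared by multiple views" means is classical two-view CCA: find projections of each view that are maximally correlated, so either view alone can be mapped into the common space at inference time. This is a generic sketch, not the authors' model; the toy data and dimensions are assumptions:

```python
import numpy as np

def inv_sqrt(C):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca(X, Y, k=1, reg=1e-6):
    """Plain two-view CCA: per-view projections maximizing cross-view correlation."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    Rx, Ry = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Rx @ Cxy @ Ry)
    return Rx @ U[:, :k], Ry @ Vt[:k].T, s[:k]

rng = np.random.default_rng(2)
z = rng.normal(size=(300, 1))                       # shared latent "expression"
view1 = z @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(300, 5))
view2 = z @ rng.normal(size=(1, 6)) + 0.1 * rng.normal(size=(300, 6))

Wx, Wy, corr = cca(view1, view2)
print(corr[0] > 0.9)   # the two views share a strongly correlated direction
```

The paper's model additionally makes the shared space *discriminative* for expression classes, whereas CCA only maximizes correlation; but both allow single-view inference once the per-view mappings are learned.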
Learning Prototype Hyperplanes for Face Verification in the Wild
Abstract—In this paper, we propose a new scheme called Prototype Hyperplane Learning (PHL) for face verification in the wild using only weakly labeled training samples (i.e., we only know whether each pair of samples is from the same class or from different classes, without knowing the class label of each sample), by leveraging a large number of unlabeled samples in a generic data set. Our scheme represents each sample in the weakly labeled data set as a mid-level feature, with each entry being the decision value from the classification hyperplane (referred to as the prototype hyperplane) of one Support Vector Machine (SVM) model, in which a sparse set of support vectors is selected from the unlabeled generic data set based on the learned combination coefficients. To learn the optimal prototype hyperplanes for extracting mid-level features, we propose a Fisher's Linear Discriminant-like (FLD-like) objective function that maximizes discriminability on the weakly labeled data set subject to a constraint enforcing sparsity on the combination coefficients of each SVM model; this objective is solved with an alternating optimization method. We then use Side-Information based Linear Discriminant Analysis (SILD) for dimensionality reduction and a cosine similarity measure for final face verification. Comprehensive experiments on two data sets, Labeled Faces in the Wild (LFW) and YouTube Faces, demonstrate the effectiveness of our scheme.
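The mid-level representation described above is straightforward to sketch: each feature entry is the decision value f_j(x) = Σ_i α_ij k(x, z_i) + b_j of one kernel hyperplane, where the z_i come from an unlabeled generic set and each coefficient column is sparse. The coefficients below are random stand-ins (the paper learns them with the FLD-like objective), and all sizes are assumptions:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF kernel matrix between row-vector sets A and B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def midlevel_features(X, generic, alphas, biases, gamma=0.5):
    """Each mid-level entry is the decision value of one SVM-style hyperplane
    f_j(x) = sum_i alpha_ij * k(x, z_i) + b_j over the generic set {z_i}."""
    return rbf(X, generic, gamma) @ alphas + biases

def cosine(u, v):
    """Cosine similarity used for the final verification score."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(3)
generic = rng.normal(size=(100, 12))           # unlabeled generic data set
alphas = rng.normal(size=(100, 20))            # 20 prototype hyperplanes
alphas[rng.random((100, 20)) < 0.8] = 0.0      # sparsity on the coefficients
biases = rng.normal(size=20)

faces = rng.normal(size=(2, 12))               # low-level features of a face pair
feats = midlevel_features(faces, generic, alphas, biases)
score = cosine(feats[0], feats[1])             # verification score for the pair
```

In the full scheme the mid-level features would additionally pass through SILD before the cosine comparison; the sketch stops at the representation itself.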
Discriminative Prior Bias Learning for Pattern Classification
Abstract: Prior information has been effectively exploited mainly through probabilistic models. In this paper, focusing on the bias embedded in the classifier, we propose a novel method to discriminatively learn the prior bias from extra prior information assigned to the samples beyond the class category, e.g., the 2-D position where a local image feature is extracted. The proposed method is formulated in the maximum-margin framework to adaptively optimize the biases, improving classification performance. We also present a computationally efficient optimization approach that makes the method even faster than a standard SVM of the same size. Experimental results on patch labeling in on-board camera images demonstrate the favorable performance of the proposed method in terms of both classification accuracy and computation time.
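The core idea, a max-margin classifier whose bias depends on side information such as patch position, can be sketched with hinge-loss subgradient descent and one learned bias per position bin. This is a toy stand-in, not the paper's formulation or its efficient solver; the data, bin structure, and hyperparameters are assumptions:

```python
import numpy as np

def train_biased_svm(X, y, bins, n_bins, lam=1e-4, lr=0.1, epochs=100):
    """Hinge-loss subgradient descent for a linear classifier whose bias is
    learned per prior-information bin (e.g., patch position) instead of
    being a single global offset."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    b = np.zeros(n_bins)                    # one adaptively learned bias per bin
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w *= 1 - lr * lam               # shrinkage from the margin regularizer
            if y[i] * (X[i] @ w + b[bins[i]]) < 1:
                w += lr * y[i] * X[i]       # hinge-loss subgradient step
                b[bins[i]] += lr * y[i]
    return w, b

rng = np.random.default_rng(4)
n = 200
bins = rng.integers(0, 2, n)                # two spatial bins per image
y = rng.choice([-1.0, 1.0], n)
# Same class geometry in both bins, but bin 1 is shifted along the
# discriminative axis, so a single global bias cannot separate both bins.
X = np.c_[y + 3.0 * bins + 0.1 * rng.normal(size=n), rng.normal(size=n)]

w, b = train_biased_svm(X, y, bins, n_bins=2)
acc = np.mean(np.sign(X @ w + b[bins]) == y)
print(acc > 0.9)
```

Because only the scalar biases depend on the side information, the feature weights stay shared across bins, which is what keeps the model barely larger than a standard SVM.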
Low-Rank Bilinear Classification: Efficient Convex Optimization and Extensions
, 2013
Discriminative Shared Gaussian Processes for Multi-view and View-invariant Facial Expression Recognition (IEEE Transactions on Image Processing)
Abstract—Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multi-view and/or view-invariant facial expression recognition typically classify the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same underlying expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a Discriminative Shared Gaussian Process Latent Variable Model (DS-GPLVM) for multi-view and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression, and subsequently perform facial expression classification in the expression manifold. Classification of an observed facial expression is then carried out either in a view-invariant manner (using only a single view of the expression) or in a multi-view manner (using multiple views of the expression). The proposed model can also be used to fuse different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, LFPW, and SFEW). We show that this model outperforms the state-of-the-art methods for multi-view and view-invariant facial expression classification, as well as several state-of-the-art methods for multi-view learning and feature fusion. Index Terms—view-invariant, multi-view learning, facial expression recognition, Gaussian Processes.
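The Gaussian-process machinery that GPLVM-style models build on can be sketched with a plain RBF-kernel GP posterior mean evaluated over latent coordinates. This is only the regression backbone, not the DS-GPLVM training procedure; the latent points, target function, and hyperparameters are invented for illustration:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """RBF covariance matrix between sets of latent points."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def gp_posterior_mean(Z_train, y, Z_test, noise=1e-3, gamma=0.5):
    """Posterior mean of a GP with RBF covariance over latent points Z:
    m(Z_test) = K(Z_test, Z_train) (K(Z_train, Z_train) + noise I)^-1 y."""
    K = rbf(Z_train, Z_train, gamma) + noise * np.eye(len(Z_train))
    return rbf(Z_test, Z_train, gamma) @ np.linalg.solve(K, y)

rng = np.random.default_rng(5)
Z = rng.normal(size=(40, 2))        # coordinates in a learned latent manifold
y = np.sin(Z[:, 0])                 # a smooth function over the manifold

pred = gp_posterior_mean(Z, y, Z)   # near-interpolation at the training points
```

In a GPLVM the latent coordinates Z are themselves optimized rather than given, and the discriminative, shared variant in the paper further constrains Z to separate expression classes while being consistent across views.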