Results 1 – 6 of 6
Facial age estimation by nonlinear aging pattern subspace
Abstract

Cited by 11 (0 self)
Human age estimation from face images is an interesting yet challenging research topic that has emerged in recent years. This paper extends our previous work on facial age estimation (a linear method named AGES). To match the nonlinear nature of the human aging process, a new algorithm named KAGES is proposed, based on a nonlinear subspace trained on aging patterns, which are defined as sequences of individual face images sorted in time order. Both the training and test (age estimation) processes of KAGES rely on a probabilistic model of KPCA. In the experiments, KAGES not only outperforms all the compared algorithms but also surpasses human observers in age estimation. The results are sensitive to parameter choice, however, and future research challenges are identified.
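The aging-pattern-subspace idea is easier to see in a toy linear sketch (synthetic data and hypothetical dimensions; the actual KAGES algorithm replaces plain PCA with a probabilistic KPCA model): stack one face feature vector per age slot into an aging-pattern vector, learn a subspace over complete patterns, then estimate a query face's age by placing it at each slot and keeping the slot with the smallest reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical sizes): each aging pattern stacks one face
# feature vector per age slot; here every training pattern is complete.
n_ages, feat_dim, n_people = 5, 8, 40
patterns = rng.normal(size=(n_people, n_ages * feat_dim))

# Learn a linear subspace of aging patterns (AGES-style; the nonlinear
# KAGES variant would use a kernel feature map here instead).
mean = patterns.mean(axis=0)
_, _, Vt = np.linalg.svd(patterns - mean, full_matrices=False)
basis = Vt[:3]  # top principal directions, shape (3, n_ages * feat_dim)

def estimate_age(face):
    """Place `face` at each age slot (other slots at the mean), project
    onto the subspace, and return the slot with the lowest error."""
    errors = []
    for age in range(n_ages):
        seg = slice(age * feat_dim, (age + 1) * feat_dim)
        v = mean.copy()
        v[seg] = face
        recon = mean + basis.T @ (basis @ (v - mean))
        errors.append(np.linalg.norm(recon[seg] - face))
    return int(np.argmin(errors))

age = estimate_age(rng.normal(size=feat_dim))
print("estimated age slot:", age)
```

The grid over age slots is what lets the same subspace serve both training and estimation: the unknown age is treated as a missing coordinate to be filled in.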
CVPR 2008 Submission #244. CONFIDENTIAL REVIEW COPY. DO NOT DISTRIBUTE.
Abstract
Kernel Principal Component Analysis (KPCA) is a popular generalization of linear PCA that allows nonlinear feature extraction. In KPCA, data in the input space is mapped to a (usually) higher-dimensional feature space where the data can be linearly modeled. The feature space is typically induced implicitly by a kernel function, and linear PCA in the feature space is performed via the kernel trick. However, due to the implicitness of the feature space, some extensions of PCA, such as robust PCA, cannot be directly generalized to KPCA. This paper proposes a unified framework for treating noise, missing data, and outliers in KPCA. Our method is based on a novel cost function to perform inference in KPCA. Extensive experiments, on both synthetic and real data, show that our algorithm outperforms existing methods.
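The basic KPCA computation the abstract refers to — an implicit feature map induced by a kernel, centering in feature space, and linear PCA via the kernel trick — can be sketched in a few lines (toy data and an RBF kernel as illustrative assumptions; this is standard KPCA, not the paper's robust variant):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))  # toy input data

def rbf_kernel(A, B, gamma=0.5):
    # K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Gram matrix of the implicit feature map
K = rbf_kernel(X, X)

# Center in feature space: Kc = K - 1K - K1 + 1K1, with 1 = (1/n) ones
n = len(X)
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one

# Linear PCA in feature space = eigendecomposition of the centered Gram
eigvals, eigvecs = np.linalg.eigh(Kc)
idx = np.argsort(eigvals)[::-1][:2]                # top 2 components
alphas = eigvecs[:, idx] / np.sqrt(eigvals[idx])   # normalize coefficients

# Nonlinear principal components of the training data
Z = Kc @ alphas
print(Z.shape)  # (50, 2)
```

Note that every step touches only the n-by-n Gram matrix, never explicit feature-space coordinates — which is exactly why robust extensions that need explicit residuals in feature space do not carry over directly.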
Estimating Time Delays between Irregularly Sampled Time Series
, 2007
Abstract
eTheses repository: this unpublished thesis/dissertation is copyright of the author and/or third parties.
Handling Multimodal Information Fusion with Missing Observations Using the Neutral Point Substitution Method
Abstract
We have previously introduced, in purely theoretical terms, the notion of neutral point substitution for missing kernel data in multimodal problems. In particular, it was demonstrated that when modalities are maximally disjoint, the method is precisely equivalent to the Sum rule decision scheme. As well as forging an intriguing analogy between multikernel and decision-combination methods, this finding means that the neutral-point method should exhibit a degree of resilience to class misattribution within the individual classifiers through the relative cancelling of combined estimation errors (if sufficiently decorrelated). However, the case of completely disjoint modalities is unrepresentative of the general missing-data problem. We here set out to experimentally test the notion of neutral point substitution in a realistic experimental scenario with partially-disjoint data to establish the practical application of the method. The tested data consists of multimodal biometric measurements of individuals in which the missing-modality problem is endemic. We hence test an SVM classifier under both the modal decision fusion and neutral point substitution paradigms, and find that, while error cancellation is indeed apparent, the genuinely multimodal approach enabled by the neutral-point method is superior by a significant factor.
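A minimal sketch of the decision-fusion side of the comparison (the scalar scores below are hypothetical; in the paper the neutral point is a vector in each modality's kernel-induced feature space, which a scalar stand-in only mimics): under the Sum rule, a missing modality contributes a value that carries no class evidence, and the remaining per-modality SVM decision scores are summed.

```python
# Hypothetical per-modality SVM decision scores for one subject;
# None marks a missing modality (endemic in multimodal biometrics).
scores = {"face": 0.8, "voice": None, "fingerprint": -0.3}

# Neutral value for a signed decision score: zero evidence either way.
NEUTRAL = 0.0

# Sum rule with neutral substitution for the missing modality
fused = sum(s if s is not None else NEUTRAL for s in scores.values())
decision = "genuine" if fused > 0 else "impostor"
print(decision)
```

Because estimation errors in the individual scores can partially cancel in the sum, the combined decision can be more robust than any single modality — the error-cancellation effect the abstract mentions.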
Parallel Approach for Time Series Analysis with General Regression Neural Networks
Parallel Approach for Time Series Analysis with General Regression Neural Networks
Abstract
The accuracy of time delay estimation given pairs of irregularly sampled time series is of great relevance in astrophysics. However, the computational time is also important because large data sets must be studied. Besides introducing a new approach for time delay estimation, this paper presents a parallel approach to obtain a fast algorithm for time delay estimation. The neural network architecture that we use is the General Regression Neural Network (GRNN). For the parallel approach, we use the Message Passing Interface (MPI) on a Beowulf-type cluster and on a Cray supercomputer, and we also use the Compute Unified Device Architecture (CUDA™) language on Graphics Processing Units (GPUs). We demonstrate that, with our approach, fast algorithms can be obtained for time delay estimation on large data sets with the same accuracy as state-of-the-art methods.
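A GRNN is essentially Nadaraya–Watson kernel regression, which makes the delay-estimation idea easy to sketch (the synthetic sinusoid, bandwidth, and grid search below are illustrative assumptions, and this serial NumPy version has none of the paper's MPI/CUDA parallelism):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    """GRNN prediction: Gaussian-weighted average of training targets."""
    w = np.exp(-((x_query[:, None] - x_train[None, :]) ** 2) / (2 * sigma**2))
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)  # eps guards far queries

# Two irregularly sampled observations of the same signal, the second
# delayed by `true_delay` (synthetic stand-in for astrophysical data).
rng = np.random.default_rng(2)
t1 = np.sort(rng.uniform(0, 10, 200))
t2 = np.sort(rng.uniform(0, 10, 200))
true_delay = 0.7
signal = lambda t: np.sin(2 * np.pi * 0.3 * t)
y1, y2 = signal(t1), signal(t2 - true_delay)

# Grid-search the delay: shift the second series' time axis back by each
# candidate, predict it with a GRNN fit on the first series, and keep the
# candidate with the smallest mean squared error.
candidates = np.linspace(0.0, 2.0, 81)
errors = [np.mean((grnn_predict(t1, y1, t2 - d) - y2) ** 2)
          for d in candidates]
best = candidates[int(np.argmin(errors))]
print(f"estimated delay ≈ {best:.3f}")
```

Each candidate delay is evaluated independently of the others, which is why the search parallelizes so naturally across MPI ranks or GPU threads.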