Results 11 – 20 of 149
Simultaneous Dimensionality Reduction and Human Age Estimation via Kernel Partial Least Squares Regression
Cited by 21 (2 self)
Human age estimation has recently become an active research topic in computer vision and pattern recognition because of its many potential real-world applications. In this paper we propose to use kernel partial least squares (KPLS) regression for age estimation. The KPLS (or linear PLS) method has several advantages over previous approaches: (1) KPLS can reduce feature dimensionality and learn the aging function simultaneously in a single learning framework, instead of performing each task separately using different techniques; (2) KPLS can find a small number of latent variables, e.g., 20, to project thousands of features into a very low-dimensional subspace, which may have great impact on real-time applications; and (3) KPLS regression has an output vector that can contain multiple labels, so that several related problems, e.g., age estimation, gender classification, and ethnicity estimation, can be solved together. This is the first time that the kernel PLS method is introduced and applied to solve a regression problem in computer vision with high accuracy. Experimental results on a very large database show that KPLS is significantly better than the popular SVM method, and outperforms the state-of-the-art approaches in human age estimation.
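The simultaneous reduction-and-regression idea can be made concrete in a few lines: a NIPALS-style iteration extracts a handful of orthonormal latent scores directly from the kernel matrix, and the aging function is then a regression on those scores. Below is a minimal sketch with a linear kernel and synthetic data; the function and variable names are ours, not the paper's.

```python
import numpy as np

def kpls_scores(K, Y, n_components=3, n_iter=50):
    """NIPALS-style kernel PLS: extract orthonormal latent scores from an
    n x n kernel matrix K and an n x m response matrix Y (sketch only)."""
    K, Y = K.astype(float).copy(), Y.astype(float).copy()
    n = K.shape[0]
    T = np.zeros((n, n_components))
    for a in range(n_components):
        u = Y[:, [0]].copy()                  # initialise from first response
        for _ in range(n_iter):
            t = K @ u
            t /= np.linalg.norm(t)
            u = Y @ (Y.T @ t)
            u /= np.linalg.norm(u)
        T[:, [a]] = t
        D = np.eye(n) - t @ t.T               # deflate with the new score
        K, Y = D @ K @ D, D @ Y
    return T

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                  # 30 samples, 5 features
Y = X[:, :2] + 0.1 * rng.normal(size=(30, 2))  # two correlated labels
T = kpls_scores(X @ X.T, Y)                   # linear kernel for the sketch
Y_hat = T @ (T.T @ Y)                         # regression on latent scores
```

Because the extracted scores are orthonormal, the final regression reduces to a projection, which is what makes prediction with only a small number of latent variables (e.g., 20) cheap at test time.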
The Berlin Brain-Computer Interface: machine learning based detection of user specific brain states
 Journal of Universal Computer Science
, 2006
Cited by 21 (9 self)
Abstract: We outline the Berlin Brain-Computer Interface (BBCI), a system which enables us to translate brain signals from movements or movement intentions into control commands. The main contribution of the BBCI, which is a non-invasive EEG-based BCI system, is the use of advanced machine learning techniques that allow it to adapt to the specific brain signatures of each user with literally no training. In the BBCI, a calibration session of about 20 min is necessary to provide a data basis from which the individualized brain signatures are inferred. This is very much in contrast to conventional BCI approaches that rely on operant conditioning and need extensive subject training on the order of 50-100 hours. Our machine learning concept thus allows high-quality feedback to be achieved already after the very first session. This work reviews a broad range of investigations and experiments that have been performed within the BBCI project. In addition to these general paradigmatic BCI results, this work provides a condensed outline of the underlying machine learning and signal processing techniques that make the BBCI succeed. In the first experimental paradigm we analyze the predictability of limb movement long before the actual movement takes place, using only the movement intention measured from the pre-movement (readiness) EEG potentials. The experiments include both offline studies and an online feedback
An Optimization Perspective on Kernel Partial Least Squares Regression
 Advances in Learning Theory: Methods, Models and Applications
, 2003
Cited by 21 (4 self)
Abstract. This work provides a novel derivation based on optimization for the partial least squares (PLS) algorithm for linear regression and the kernel partial least squares (KPLS) algorithm for nonlinear regression. This derivation makes the PLS algorithm, popularly and successfully used for chemometrics applications, more accessible to machine learning researchers. The work introduces Direct KPLS, a novel way to kernelize PLS based on direct factorization of the kernel matrix. Computational results and discussion illustrate the relative merits of KPLS and Direct KPLS versus closely related kernel methods such as support vector machines and kernel ridge regression. ∗ This work was supported by NSF grant number IIS9979860. Many thanks to Roman Rosipal, Nello Cristianini, and Johan Suykens for many helpful discussions on PLS and kernel methods, Sean Ekans from Concurrent Pharmaceutical for providing molecule descriptions for the Albumin data set, Curt Breneman and N. Sukumar for generating descriptors for the Albumin data, and Tony Van Gestel for an efficient Gaussian kernel
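The optimization view developed in this chapter can be stated compactly: the first PLS weight pair maximizes the sample covariance between the projected inputs and responses, which is solved exactly by the leading singular pair of the cross-covariance matrix X^T Y. A small numerical illustration of that fact (our notation, synthetic data, not the chapter's derivation):

```python
import numpy as np

# First PLS direction as an optimization: maximize cov(Xw, Yc) over unit
# vectors w, c -- the solution is the leading singular pair of X^T Y.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
Y = X[:, [0]] + X[:, [1]] + 0.1 * rng.normal(size=(100, 1))
Xc, Yc = X - X.mean(0), Y - Y.mean(0)          # centre both blocks
U, s, Vt = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
w = U[:, 0]                                    # first PLS weight vector
t = Xc @ w                                     # first latent score
corr = np.corrcoef(t, Yc.ravel())[0, 1]        # score tracks the response
```

Subsequent components follow from the same problem after deflation, and replacing X^T Y by its kernelized counterpart yields the KPLS variants the chapter compares.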
Gabor-Based Kernel Partial-Least-Squares Discrimination Features for Face Recognition
, 2007
Cited by 21 (0 self)
Abstract. The paper presents a novel method for the extraction of facial features based on the Gabor-wavelet representation of face images and the kernel partial-least-squares discrimination (KPLSD) algorithm. The proposed feature-extraction method, called Gabor-based kernel partial-least-squares discrimination (GKPLSD), is performed in two consecutive steps. In the first step a set of forty Gabor wavelets is used to extract discriminative and robust facial features, while in the second step the kernel partial-least-squares discrimination technique is used to reduce the dimensionality of the Gabor feature vector and to further enhance its discriminatory power. For optimal performance, the KPLSD-based transformation is implemented using the recently proposed fractional-power-polynomial models. The experimental results based on the XM2VTS and ORL databases show that the GKPLSD approach outperforms feature-extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA) or generalized discriminant analysis (GDA), as well as combinations of these methods with Gabor representations of the face images. Furthermore, as the KPLSD algorithm is derived from the kernel partial-least-squares regression (KPLSR) model, it does not suffer from the small-sample-size problem, which is regularly encountered in the field of face recognition.
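The first of the two GKPLSD steps, the bank of forty Gabor wavelets, is easy to make concrete: a typical construction uses five scales times eight orientations. The sketch below builds such a generic bank and applies it to a random image; the parameter values are illustrative, not the paper's exact settings.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """One complex Gabor wavelet: a Gaussian envelope times a complex
    sinusoid oriented at angle theta with wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(2j * np.pi * xr / lam)
    return envelope * carrier

# 5 scales x 8 orientations = 40 wavelets, as in typical Gabor face pipelines
bank = [gabor_kernel(31, sigma=2.0 * 2**(s / 2), theta=o * np.pi / 8,
                     lam=4.0 * 2**(s / 2))
        for s in range(5) for o in range(8)]

image = np.random.default_rng(1).normal(size=(31, 31))
# one magnitude response per wavelet; a real pipeline convolves and
# down-samples the full response maps before the KPLSD step
features = np.array([abs(np.vdot(k, image)) for k in bank])
```

The resulting (much longer, down-sampled) Gabor feature vector is what the second, KPLSD, step then compresses and discriminates.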
Eigenproblems in Pattern Recognition
 Handbook of Geometric Computing: Applications in Pattern Recognition, Computer Vision, Neuralcomputing, and Robotics
, 2005
Cited by 20 (7 self)
The task of studying the properties of configurations of points embedded in a metric space has long been a central task in pattern recognition, but has acquired even greater importance after the recent introduction of kernel-based learning methods. These methods work by virtually embedding general
The kernel mutual information
 In IEEE ICASSP
, 2003
Cited by 18 (4 self)
We introduce a new contrast function, the kernel mutual information (KMI), to measure the degree of independence of continuous random variables. This contrast function provides an approximate upper bound on the mutual information, as measured near independence, and is based on a kernel density estimate of the mutual information between a discretised approximation of the continuous random variables. We show that Bach and Jordan's kernel generalised variance (KGV) is also an upper bound on the same kernel density estimate, but is looser. Finally, we suggest that the addition of a regularising term in the KGV causes it to approach the KMI, which motivates the introduction of this regularisation.
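The "discretised approximation" underlying the KMI is easiest to see with a plain histogram plug-in estimate of mutual information; the KMI replaces the histogram with a kernel density estimate and bounds the result. A minimal sketch of the plug-in estimate (our code, not the paper's estimator):

```python
import numpy as np

def discretised_mi(x, y, bins=8):
    """Mutual information of a discretised approximation of two continuous
    variables, via a joint histogram (plug-in estimate, in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                              # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)           # marginal of x
    py = pxy.sum(axis=0, keepdims=True)           # marginal of y
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(3)
a = rng.normal(size=5000)
b_dep = a + 0.3 * rng.normal(size=5000)           # strongly dependent
b_ind = rng.normal(size=5000)                     # independent of a
```

Near independence the estimate is close to zero (up to the small positive bias of the plug-in estimator), which is the regime in which the KMI's upper bound is tightest.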
Multidimensional Vector Regression for Accurate and LowCost Location Estimation in Pervasive Computing
 IEEE TRANS. KNOWLEDGE AND DATA ENG
, 2006
Cited by 18 (3 self)
In this paper, we present an algorithm for multidimensional vector regression on data that are highly uncertain and nonlinear, and then apply it to the problem of indoor location estimation in a wireless local area network (WLAN). Our aim is to obtain an accurate mapping between the signal space and the physical space without requiring too much human calibration effort. This location estimation problem has traditionally been tackled through probabilistic models trained on manually labeled data, which are expensive to obtain. In contrast, our algorithm adopts Kernel Canonical Correlation Analysis (KCCA) to build a nonlinear mapping between the signal-vector space and the physical location space by transforming data in both spaces into their canonical features. This allows the pairwise similarity of samples in both spaces to be maximally correlated using kernels. We use a Gaussian kernel to adapt to the noisy characteristics of signal strengths and a Matérn kernel to sense the changes in physical locations. By using real data collected in an 802.11 wireless LAN environment, we achieve accurate location estimation for pervasive computing while requiring a much smaller set of labeled training data than previous methods.
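The KCCA step can be sketched as a regularised generalised eigenproblem on the two centred kernel matrices. The paper pairs a Gaussian kernel on signals with a Matérn kernel on locations; the sketch below uses a Gaussian kernel on both spaces for brevity, and the regulariser `kappa` is our assumption.

```python
import numpy as np
from scipy.linalg import eigh

def kcca(Kx, Ky, kappa=0.1):
    """Leading canonical correlation from two centred kernel matrices via a
    regularised generalised eigenproblem (sketch; kappa is assumed)."""
    n = Kx.shape[0]
    Z = np.zeros((n, n))
    A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])
    Rx = (Kx + kappa * np.eye(n)) @ (Kx + kappa * np.eye(n))
    Ry = (Ky + kappa * np.eye(n)) @ (Ky + kappa * np.eye(n))
    vals, vecs = eigh(A, np.block([[Rx, Z], [Z, Ry]]))
    return vals[-1], vecs[:n, -1], vecs[n:, -1]  # rho, alpha, beta

def centred_gaussian_kernel(X, gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    H = np.eye(len(X)) - np.ones((len(X), len(X))) / len(X)
    return H @ K @ H                              # centre in feature space

rng = np.random.default_rng(2)
signal = rng.normal(size=(40, 2))                    # "signal space" samples
location = signal + 0.05 * rng.normal(size=(40, 2))  # correlated locations
rho, alpha, beta = kcca(centred_gaussian_kernel(signal),
                        centred_gaussian_kernel(location))
```

The dual vectors `alpha` and `beta` define the canonical features into which new signal readings and candidate locations are projected before matching.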
A Tour of Modern Image Filtering
, 2011
Cited by 17 (5 self)
Recent developments in computational imaging and restoration have heralded the arrival and convergence of several powerful methods for adaptive processing of multidimensional data. Examples include
Weakly-Paired Maximum Covariance Analysis for Multimodal Dimensionality Reduction and Transfer Learning
Cited by 14 (3 self)
Abstract. We study the problem of multimodal dimensionality reduction assuming that data samples can be missing at training time, and not all data modalities may be present at application time. Maximum covariance analysis, as a generalization of PCA, has many desirable properties, but its application to practical problems is limited by its need for perfectly paired data. We overcome this limitation by a latent variable approach that allows working with weakly paired data and is still able to efficiently process large datasets using standard numerical routines. The resulting weakly paired maximum covariance analysis often finds better representations than alternative methods, as we show in two exemplary tasks: texture discrimination and transfer learning.
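The fully-paired baseline that the paper generalises, maximum covariance analysis, is essentially one SVD of the cross-covariance matrix, which makes the pairing requirement explicit: every row of X must line up with a row of Y. A sketch in our notation:

```python
import numpy as np

def mca(X, Y, k=2):
    """Maximum covariance analysis on perfectly paired data: the leading
    singular vectors of the cross-covariance matrix give one projection
    basis per modality (the baseline generalised to weak pairing)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    U, s, Vt = np.linalg.svd(Xc.T @ Yc / (len(X) - 1), full_matrices=False)
    return U[:, :k], Vt[:k].T, s[:k]

rng = np.random.default_rng(5)
Z = rng.normal(size=(200, 2))                       # shared latent signal
X = Z @ rng.normal(size=(2, 10)) + 0.1 * rng.normal(size=(200, 10))
Y = Z @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(200, 8))
Wx, Wy, cov = mca(X, Y, k=2)                        # paired projections
```

With weakly paired data the cross-covariance `Xc.T @ Yc` cannot be formed directly; the paper's latent-variable formulation replaces the unknown sample correspondence, while still reducing to standard numerical routines like the SVD above.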
Kernel-based regression and objective nonlinear measures to assess brain functioning
, 2001
Cited by 13 (1 self)
Two different problems of reflecting brain functioning are addressed. This involves human performance monitoring during the signal detection task and depth of anaesthesia monitoring. The common aspect of both problems is to monitor brain activity through electroencephalogram recordings on the scalp. Although these two problems create only a fractional part of the tasks associated with physiological data analysis, the results and the methodology proposed have wider applicability. A theoretical and practical investigation of the different forms of kernel-based nonlinear regression models and efficient kernel-based algorithms for appropriate feature extraction is undertaken. The main focus is on solving the problem of providing reduced-variance estimates of the regression coefficients when a linear regression in some kernel-function-defined feature space is assumed. To that end, Kernel Principal Component Regression and Kernel Partial Least Squares Regression techniques are proposed. These kernel-based techniques were found to be very efficient when observed data are mapped to a high-dimensional feature space where usually algorithms as simple as their
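The first of the two reduced-variance constructions mentioned here, Kernel Principal Component Regression, amounts to KPCA on the centred kernel matrix followed by ordinary least squares on the leading scores. A compact sketch under a Gaussian kernel (our parameter choices, not the thesis's):

```python
import numpy as np

def kernel_pcr(K, y, n_components=3):
    """Kernel principal component regression: project onto the leading
    kernel principal components, then ordinary least squares on the
    scores (sketch of the reduced-variance idea)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                                 # centre the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]    # leading components
    T = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))  # KPCA scores
    beta, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
    return T @ beta + y.mean()                     # fitted responses

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 4))
y = X[:, 0]**2 + 0.1 * rng.normal(size=60)         # nonlinear target
K = np.exp(-0.5 * ((X[:, None] - X[None, :])**2).sum(-1))  # Gaussian kernel
y_hat = kernel_pcr(K, y, n_components=10)
```

Discarding the trailing, low-variance components is what reduces the variance of the regression coefficients relative to a full kernel least-squares fit, at the cost of some bias.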