Results 1–10 of 83
Nonlinear component analysis as a kernel eigenvalue problem

, 1996
Abstract

Cited by 1573 (83 self)
We describe a new method for performing a nonlinear form of Principal Component Analysis. By the use of integral operator kernel functions, we can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance, the space of all possible 5-pixel products in 16x16 images. We give the derivation of the method, along with a discussion of other techniques which can be made nonlinear with the kernel approach, and present first experimental results on nonlinear feature extraction for pattern recognition.
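The kernel-PCA recipe the abstract outlines can be sketched in a few lines: form a kernel matrix, center it in feature space, and eigendecompose. A minimal sketch; the RBF kernel and the `gamma` value are illustrative choices, not taken from the paper.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project training points onto principal components in an RBF
    feature space, without ever forming that space explicitly."""
    # Pairwise squared distances -> RBF kernel matrix K[i, j] = k(x_i, x_j)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # Center the kernel matrix (equivalent to centering the implicit feature map)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one

    # Eigendecompose; eigh returns ascending order, so flip to descending
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]

    # Scale eigenvectors so feature-space components have unit norm;
    # the training-point projections are then Kc @ alphas
    alphas = vecs[:, :n_components] / np.sqrt(np.maximum(vals[:n_components], 1e-12))
    return Kc @ alphas

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2)
```

The projected coordinates come out centered and mutually orthogonal, mirroring ordinary PCA scores.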
Probabilistic Visual Learning for Object Representation
, 1996
Abstract

Cited by 699 (15 self)
We present an unsupervised technique for visual learning which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for unimodal distributions) and a Mixture-of-Gaussians model (for multimodal distributions). These probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition and coding. Our learning technique is applied to the probabilistic visual modeling, detection, recognition, and coding of human faces and non-rigid objects such as hands.
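The unimodal (single-Gaussian) case can be sketched as follows: PCA supplies the leading eigenvectors, and the log-likelihood splits into a Mahalanobis distance inside the principal subspace plus an isotropic term for the residual distance from that subspace. Variable names are ours; the mixture case and the detection machinery are omitted.

```python
import numpy as np

def fit_eigenspace_gaussian(X, k):
    """Fit a multivariate Gaussian via an eigenspace decomposition:
    keep k leading eigenvectors, average the trailing eigenvalues
    into an isotropic residual variance."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / len(X)
    vals, vecs = np.linalg.eigh(cov)
    vals, vecs = vals[::-1], vecs[:, ::-1]           # descending order
    U, lam = vecs[:, :k], vals[:k]                   # principal subspace
    rho = vals[k:].mean() if k < len(vals) else 0.0  # avg residual variance
    return mu, U, lam, rho

def log_density(x, mu, U, lam, rho):
    """Approximate log N(x; mu, Sigma): in-subspace Mahalanobis term
    plus an isotropic term for the out-of-subspace residual."""
    d = x - mu
    y = U.T @ d                       # coordinates in the principal subspace
    resid = d @ d - y @ y             # squared distance from the subspace
    k, n = len(lam), len(mu)
    ll = -0.5 * np.sum(y**2 / lam) - 0.5 * np.sum(np.log(lam))
    if rho > 0:
        ll += -0.5 * resid / rho - 0.5 * (n - k) * np.log(rho)
    return ll - 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5)) * np.array([3.0, 2.0, 1.0, 0.1, 0.1])
mu, U, lam, rho = fit_eigenspace_gaussian(X, k=3)
ll_in = log_density(X[0], mu, U, lam, rho)           # a typical training point
ll_out = log_density(np.full(5, 10.0), mu, U, lam, rho)  # a far-away point
```

A typical training point scores a much higher log-likelihood than an outlier, which is the basis of the detection framework.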
Locally weighted learning
 ARTIFICIAL INTELLIGENCE REVIEW
, 1997
Abstract

Cited by 599 (51 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
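Locally weighted linear regression, the survey's focus, can be sketched as a weighted least-squares fit per query point. The Gaussian weighting function and the fixed bandwidth below are just one of the design choices the survey catalogues.

```python
import numpy as np

def lwr_predict(Xq, X, y, bandwidth=0.2):
    """Locally weighted linear regression: for each query, weight the
    training points with a Gaussian kernel of the given bandwidth, then
    solve a weighted least-squares fit of a local linear model."""
    Xa = np.hstack([X, np.ones((len(X), 1))])    # affine design matrix
    preds = []
    for q in np.atleast_2d(Xq):
        d2 = np.sum((X - q) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth**2))     # distance-based weights
        W = np.diag(w)
        # Weighted normal equations; lstsq guards against singular systems
        beta, *_ = np.linalg.lstsq(Xa.T @ W @ Xa, Xa.T @ W @ y, rcond=None)
        preds.append(np.append(q, 1.0) @ beta)
    return np.array(preds)

X = np.linspace(0, 3, 40)[:, None]
y = np.sin(X[:, 0])
yq = lwr_predict(np.array([[1.5]]), X, y)
```

Because the model is refit at every query ("lazy learning"), nothing is learned ahead of time; the training data itself is the model.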
The EM Algorithm for Mixtures of Factor Analyzers
, 1997
Abstract

Cited by 278 (18 self)
Factor analysis, a statistical method for modeling the covariance structure of high-dimensional data using a small number of latent variables, can be extended by allowing different local factor models in different regions of the input space. This results in a model which concurrently performs clustering and dimensionality reduction, and can be thought of as a reduced-dimension mixture of Gaussians. We present an exact Expectation-Maximization algorithm for fitting the parameters of this mixture of factor analyzers.

1 Introduction
Clustering and dimensionality reduction have long been considered two of the fundamental problems in unsupervised learning (Duda & Hart, 1973; Chapter 6). In clustering, the goal is to group data points by similarity between their features. Conversely, in dimensionality reduction, the goal is to group (or compress) features that are highly correlated. In this paper we present an EM learning algorithm for a method which combines one of the basic forms of dime...
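As a sketch of the building block, here is one EM iteration for a single factor analyzer; the paper's mixture version runs the same updates per component, weighted by posterior responsibilities, which this sketch omits.

```python
import numpy as np

def fa_em_step(X, W, psi):
    """One EM iteration for a single factor analyzer.
    Model: x = W z + noise, z ~ N(0, I_k), noise ~ N(0, diag(psi)).
    X is assumed centered."""
    n, k = len(X), W.shape[1]
    PinvW = W / psi[:, None]                     # Psi^{-1} W
    G = np.linalg.inv(np.eye(k) + W.T @ PinvW)   # posterior covariance of z
    Ez = X @ PinvW @ G                           # E[z | x_i], one row per sample
    Ezz = n * G + Ez.T @ Ez                      # sum_i E[z z^T | x_i]
    W_new = (X.T @ Ez) @ np.linalg.inv(Ezz)      # new factor loadings
    psi_new = np.mean(X**2, axis=0) - np.sum(W_new * (X.T @ Ez) / n, axis=1)
    return W_new, np.maximum(psi_new, 1e-8)      # keep noise variances positive

def loglik(X, W, psi):
    """Gaussian log-likelihood under covariance W W^T + diag(psi)."""
    n, d = X.shape
    C = W @ W.T + np.diag(psi)
    _, logdet = np.linalg.slogdet(C)
    quad = np.sum(X * np.linalg.solve(C, X.T).T)
    return -0.5 * (n * d * np.log(2 * np.pi) + n * logdet + quad)

# Synthetic data from a 2-factor model in 6 dimensions
rng = np.random.default_rng(2)
Z = rng.normal(size=(300, 2))
X = Z @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(300, 6))
X -= X.mean(axis=0)

W, psi = rng.normal(size=(6, 2)), np.ones(6)
lls = []
for _ in range(30):
    lls.append(loglik(X, W, psi))
    W, psi = fa_em_step(X, W, psi)
lls.append(loglik(X, W, psi))
```

The log-likelihood increases monotonically across iterations, the standard EM guarantee.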
Probabilistic Visual Learning for Object Detection
, 1995
Abstract

Cited by 237 (16 self)
We present an unsupervised technique for visual learning which is based on density estimation in high-dimensional spaces using an eigenspace decomposition. Two types of density estimates are derived for modeling the training data: a multivariate Gaussian (for a unimodal distribution) and a multivariate Mixture-of-Gaussians model (for multimodal distributions). These probability densities are then used to formulate a maximum-likelihood estimation framework for visual search and target detection for automatic object recognition. This learning technique is tested in experiments with modeling and subsequent detection of human faces and non-rigid objects such as hands.
Parametric Hidden Markov Models for Gesture Recognition
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 1999
Abstract

Cited by 208 (3 self)
A new method for the representation, recognition, and interpretation of parameterized gesture is presented. By parameterized gesture we mean gestures that exhibit a systematic spatial variation; one example is a point gesture where the relevant parameter is the two-dimensional direction. Our approach is to extend the standard hidden Markov model method of gesture recognition by including a global parametric variation in the output probabilities of the HMM states. Using a linear model of dependence, we formulate an expectation-maximization (EM) method for training the parametric HMM. During testing, a similar EM algorithm simultaneously maximizes the output likelihood of the PHMM for the given sequence and estimates the quantifying parameters. Using visually derived and directly measured three-dimensional hand position measurements as input, we present results that demonstrate the recognition superiority of the PHMM over standard HMM techniques, as well as greater robustness in parameter estimation with respect to noise in the input features. Last, we extend the PHMM to handle arbitrary smooth (nonlinear) dependencies. The nonlinear formulation requires the use of a generalized expectation-maximization (GEM) algorithm for both training and the simultaneous recognition of the gesture and estimation of the value of the parameter. We present results on a pointing gesture, where the nonlinear approach permits the natural spherical coordinate parameterization of pointing direction.
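For the linear-dependence case, estimating the gesture parameter reduces to least squares once the state assignments are fixed; in the paper, the EM state posteriors play that role. A toy sketch with illustrative names (`Wj`, `mu_bar` stand in for the per-state linear maps and offsets):

```python
import numpy as np

def estimate_theta(obs, states, Wj, mu_bar):
    """Estimate the gesture parameter theta for a parametric HMM with
    linear dependence: state j emits around Wj[j] @ theta + mu_bar[j].
    With the state sequence treated as known, maximizing the Gaussian
    likelihood in theta is an ordinary least-squares problem."""
    # Stack the per-frame systems: obs_t - mu_bar[s_t] = Wj[s_t] @ theta
    A = np.vstack([Wj[s] for s in states])
    b = np.concatenate([obs[t] - mu_bar[s] for t, s in enumerate(states)])
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta

# Toy setup: 3 states, 2-D observations, 2-D parameter
rng = np.random.default_rng(3)
Wj = rng.normal(size=(3, 2, 2))
mu_bar = rng.normal(size=(3, 2))
theta_true = np.array([0.7, -0.3])
states = [0, 1, 2, 1, 0, 2]
obs = np.array([Wj[s] @ theta_true + mu_bar[s] for s in states])
theta_hat = estimate_theta(obs, states, Wj, mu_bar)
```

On noiseless data the parameter is recovered exactly; the paper's EM alternates this kind of estimate with re-estimation of the state posteriors.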
Video-based face recognition using probabilistic appearance manifolds
 In Proc. IEEE Conference on Computer Vision and Pattern Recognition
, 2003
Abstract

Cited by 176 (5 self)
This paper presents a novel method to model and recognize human faces in video sequences. Each registered person is represented by a low-dimensional appearance manifold in the ambient image space. The complex nonlinear appearance manifold is expressed as a collection of subsets (named pose manifolds) and the connectivity among them. Each pose manifold is approximated by an affine plane. To construct this representation, exemplars are sampled from videos, and these exemplars are clustered with a K-means algorithm; each cluster is represented as a plane computed through principal component analysis (PCA). The connectivity between the pose manifolds encodes the transition probability between images in each pose manifold and is learned from training video sequences. A maximum a posteriori formulation is presented for face recognition in test video sequences by integrating the likelihood that the input image comes from a particular pose manifold and the transition probability to this pose manifold from the previous frame. To recognize faces with partial occlusion, we introduce a weight mask into the process. Extensive experiments demonstrate that the proposed algorithm outperforms existing frame-based face recognition methods with temporal voting schemes.
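The representation-building step, K-means clustering of exemplars followed by a PCA plane per cluster, can be sketched directly; the transition probabilities between pose manifolds and the recognition step are omitted from this sketch.

```python
import numpy as np

def build_pose_manifolds(frames, n_clusters=3, n_dims=2, n_iters=20, seed=0):
    """Approximate an appearance manifold as K-means clusters, each with
    an affine PCA plane (center + orthonormal basis)."""
    rng = np.random.default_rng(seed)
    centers = frames[rng.choice(len(frames), n_clusters, replace=False)]
    for _ in range(n_iters):                       # plain Lloyd iterations
        d2 = ((frames[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for c in range(n_clusters):
            if np.any(labels == c):
                centers[c] = frames[labels == c].mean(axis=0)
    # Final assignment, then one affine PCA plane per cluster
    d2 = ((frames[:, None, :] - centers[None]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    planes = []
    for c in range(n_clusters):
        pts = frames[labels == c]
        if len(pts) <= n_dims:
            continue                               # degenerate cluster; skip
        _, _, Vt = np.linalg.svd(pts - centers[c], full_matrices=False)
        planes.append((centers[c], Vt[:n_dims]))   # (center, orthonormal basis)
    return labels, planes

# Synthetic "frames": three well-separated blobs in 10-D
rng = np.random.default_rng(5)
blob_centers = np.zeros((3, 10))
blob_centers[1, 0], blob_centers[2, 0] = 20.0, 40.0
frames = np.vstack([c + 0.5 * rng.normal(size=(30, 10)) for c in blob_centers])
labels, planes = build_pose_manifolds(frames, n_clusters=3, n_dims=2)
```

Each plane plays the role of one pose manifold; recognition would then score a new frame by its distance to the nearest plane of each person.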
"Eigenlips" for Robust Speech Recognition
, 1994
Abstract

Cited by 140 (4 self)
In this study we improve the performance of a hybrid connectionist speech recognition system by incorporating visual information about the corresponding lip movements. Specifically, we investigate the benefits of adding visual features in the presence of additive noise and crosstalk (cocktail party effect). Our study extends our previous experiments [3] by using a new visual front end, and an alternative architecture for combining the visual and acoustic information. Furthermore, we have extended our recognizer to a multi-speaker, connected-letters recognizer. Our results show a significant improvement for the combined architecture (acoustic and visual information) over just the acoustic system in the presence of additive noise and crosstalk.
Deformotion: Deforming Motion, Shape Average and the Joint Registration and Segmentation of Images
 International Journal of Computer Vision
, 2002
Abstract

Cited by 120 (18 self)
What does it mean for a deforming object to be "moving" (see Fig. 1)? How can we separate the overall motion (a finite-dimensional group action) from the more general deformation (a diffeomorphism)? In this paper we propose a definition of motion for a deforming object and introduce a notion of "shape average" as the entity that separates the motion from the deformation. Our definition allows us to derive novel and efficient algorithms to register non-equivalent shapes using region-based methods, and to simultaneously approximate and register structures in greyscale images. We also extend the notion of shape average to that of a "moving average" in order to track moving and deforming objects through time.
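The motion/deformation split can be illustrated on labeled 2-D point sets with orthogonal Procrustes: the best rigid transform is the finite-dimensional group action, and the residual is the deformation. The paper itself works with region-based shape representations, so this is only an analogy in a simpler setting.

```python
import numpy as np

def separate_motion(P, Q):
    """Split the change from 2-D point set P to Q into a rigid 'motion'
    (rotation R + translation t) and a residual 'deformation', in the
    spirit of the paper's group-action / diffeomorphism decomposition."""
    muP, muQ = P.mean(axis=0), Q.mean(axis=0)
    A = (Q - muQ).T @ (P - muP)              # cross-covariance of the shapes
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # force a proper rotation
    R = U @ D @ Vt
    t = muQ - R @ muP
    moved = P @ R.T + t                      # the motion applied to P
    deformation = Q - moved                  # what the group action cannot explain
    return R, t, deformation

# A square that is purely rotated and translated: zero deformation expected
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
P = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]], dtype=float)
Q = P @ R_true.T + np.array([1.0, 2.0])
R, t, deformation = separate_motion(P, Q)
```

When the change really is a rigid motion, the recovered rotation matches and the deformation residual vanishes; a nonzero residual is what the paper's "shape average" machinery is built to handle.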
Automatic choice of dimensionality for PCA
, 2000
Abstract

Cited by 102 (2 self)
A central issue in principal component analysis (PCA) is choosing the number of principal components to be retained. By interpreting PCA as density estimation, we show how to use Bayesian model selection to estimate the true dimensionality of the data. The resulting estimate is simple to compute yet guaranteed to pick the correct dimensionality, given enough data. The estimate involves an integral over the Stiefel manifold of k-frames, which is difficult to compute exactly. But after choosing an appropriate parameterization and applying Laplace's method, an accurate and practical estimator is obtained. In simulations, it is convincingly better than cross-validation and other proposed algorithms, plus it runs much faster.
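The paper's Laplace-approximation evidence (with its Stiefel-manifold integral) is not reproduced here; as a point of comparison, this sketches the cross-validation baseline it is evaluated against, scoring each candidate dimensionality by held-out probabilistic-PCA log-likelihood. (Plain reconstruction error would not work: it decreases monotonically with k even on held-out data.)

```python
import numpy as np

def cv_ppca_dimensionality(X, max_k, n_folds=5, seed=0):
    """Choose PCA dimensionality by cross-validated probabilistic-PCA
    log-likelihood, using the closed-form ML covariance: top-k sample
    eigenvalues kept, trailing ones averaged into an isotropic floor."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    d = X.shape[1]
    scores = np.zeros(max_k + 1)
    for f in np.array_split(idx, n_folds):
        train = X[np.setdiff1d(idx, f)]
        mu = train.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(train - mu, rowvar=False))
        evals, evecs = evals[::-1], evecs[:, ::-1]    # descending order
        Y = (X[f] - mu) @ evecs                       # held-out coords, eigenbasis
        for k in range(max_k + 1):
            lam = evals.copy()
            lam[k:] = max(evals[k:].mean(), 1e-12)    # isotropic noise floor
            ll = -0.5 * np.sum(Y**2 / lam)
            ll -= 0.5 * len(f) * (np.sum(np.log(lam)) + d * np.log(2 * np.pi))
            scores[k] += ll
    return int(np.argmax(scores)), scores

# Data with 3 true dimensions embedded in 8, plus small isotropic noise
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 3)) @ rng.normal(size=(3, 8)) + 0.1 * rng.normal(size=(300, 8))
best_k, scores = cv_ppca_dimensionality(X, max_k=6)
```

The paper's contribution is an evidence estimate that picks the dimensionality without the repeated refitting this baseline requires.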