Results 1 – 9 of 9
REGRESSION ON MANIFOLDS: ESTIMATION OF THE EXTERIOR DERIVATIVE
Submitted to The Annals of Statistics, 2010
Abstract

Cited by 13 (3 self)
Collinearity and near-collinearity of predictors cause difficulties when doing regression. In these cases, variable selection becomes untenable because of mathematical issues concerning the existence and numerical stability of the regression coefficients, and interpretation of the coefficients is ambiguous because gradients are not defined. Using a differential geometric interpretation, in which the regression coefficients are interpreted as estimates of the exterior derivative of a function, we develop a new method to do regression in the presence of collinearities. Our regularization scheme can improve estimation error, and it can be easily modified to include lasso-type regularization. These estimators also have simple extensions to the “large p, small n” context.
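The stabilizing effect of regularization under collinearity can be illustrated with a much simpler device than the paper's exterior-derivative estimator: generic Tikhonov (ridge) regularization. This is only a sketch of the general idea — plain least squares has no unique solution when columns of X are collinear, while a small penalty restores a well-defined, stable coefficient vector:

```python
import numpy as np

def ridge(X, y, lam=1.0):
    # Tikhonov-regularized least squares: beta = (X^T X + lam I)^{-1} X^T y.
    # For lam > 0 the normal matrix is invertible even when X^T X is singular.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Perfectly collinear design: two identical columns.
x = np.arange(10.0)
X = np.c_[x, x]
beta = ridge(X, 2 * x, lam=1.0)
# The penalty splits the weight symmetrically across the duplicated columns,
# so beta is approximately [1, 1] rather than being undefined.
```

Ordinary least squares would fail here (`X.T @ X` is singular); the ridge solution exists, is numerically stable, and distributes the coefficient mass evenly over the collinear predictors.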
A dissimilarity kernel with local features for robust facial recognition
 Proc. IEEE Int. Conf. on Image Processing, 2010
Abstract

Cited by 3 (2 self)
Local binary pattern (LBP) has recently been proposed for texture analysis and local feature description and has also been applied to face recognition with promising results. However, besides the descriptors, a suitable similarity measure that can efficiently learn to distinguish facial features is also important. In this paper, a novel framework for robust face recognition is presented that considers both local and global features by using multiresolution LBP descriptors. The framework can tolerate variations in expression, lighting condition and occlusion. A weighted distance measure is used to learn the dissimilarity between sets of LBP features. We formulate the distance function as a conditionally positive semidefinite (CPD) kernel, thus making it suitable for kernel-based algorithms such as support vector machines (SVMs) whose optimal solutions are guaranteed. We show that by defining it in a Hilbert space, the proposed CPD kernel has advantages over traditional methods computing the l2 distances in Euclidean space. The experiments show that the approach is efficient and significantly outperforms the current state-of-the-art methods on the publicly available AR face database. Index Terms—Local binary pattern, local feature, dissimilarity measure, robustness, conditionally positive semidefinite kernel
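The LBP descriptor underlying this framework admits a compact sketch. The following toy implementation (not the paper's multiresolution variant; function names and the 8-neighbor ordering are illustrative assumptions) computes the basic 3×3 LBP code at each interior pixel and the normalized code histogram used as a local feature:

```python
import numpy as np

def lbp_image(img):
    # 8-neighbor offsets around each pixel; the bit ordering is one
    # common convention, chosen arbitrarily for this sketch.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for k, (di, dj) in enumerate(offs):
                # Set bit k when the neighbor is >= the center pixel.
                if img[i + di, j + dj] >= c:
                    code |= 1 << k
            out[i - 1, j - 1] = code
    return out

def lbp_histogram(img, bins=256):
    # Normalized histogram of LBP codes: the local texture descriptor.
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

On a constant image every neighbor equals the center, so every code is 255 and the histogram puts all mass in that bin; on textured regions the histogram spreads out, which is what makes it discriminative.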
On the Convergence of Maximum Variance Unfolding
Abstract

Cited by 3 (0 self)
Maximum Variance Unfolding is one of the main methods for (nonlinear) dimensionality reduction. We study its large sample limit, providing specific rates of convergence under standard assumptions. We find that it is consistent when the underlying submanifold is isometric to a convex subset, and we provide some simple examples where it fails to be consistent.
Regression Reformulations of LLE and LTSA With Locally Linear Transformation
Abstract

Cited by 2 (2 self)
Locally linear embedding (LLE) and local tangent space alignment (LTSA) are two fundamental algorithms in manifold learning. Both LLE and LTSA employ linear methods to achieve their goals but with different motivations and formulations. LLE is developed from locally linear reconstructions in both the high- and low-dimensional spaces, while LTSA is developed from combinations of tangent space projections and locally linear alignments. This paper gives regression reformulations of the LLE and LTSA algorithms in terms of locally linear transformations. These reformulations help bridge the two algorithms, so that both can be treated within a unified framework. Under this framework, the connections and differences between LLE and LTSA are explained. Guided by these connections and differences, an improved LLE algorithm is presented. Our algorithm learns the manifold in the manner of LLE but significantly improves performance, as experiments illustrate. Index Terms—Improved locally linear embedding (ILLE), LLE, local tangent space alignment (LTSA), regression reformulation.
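The "locally linear reconstruction" step shared by these formulations — solving for weights that rebuild a point from its neighbors under a sum-to-one constraint — can be sketched as follows. This is the standard LLE weight solve, not the paper's reformulation; the regularization constant is an assumed stabilizer for degenerate neighborhoods:

```python
import numpy as np

def reconstruction_weights(x, neighbors, reg=1e-3):
    # Minimize ||x - sum_j w_j * neighbors[j]||^2 subject to sum_j w_j = 1,
    # via the local Gram matrix of the centered neighbors.
    Z = neighbors - x                     # neighbors shifted so x is the origin
    G = Z @ Z.T                           # local Gram (covariance-like) matrix
    G += reg * np.trace(G) * np.eye(len(G))  # regularize when G is singular
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                    # enforce the sum-to-one constraint
```

When x is the barycenter of three affinely independent neighbors, the recovered weights are (approximately) the unique barycentric coordinates, and the weighted combination reconstructs x almost exactly.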
Extensions of Laplacian Eigenmaps for Manifold Learning
2011
Abstract

Cited by 2 (0 self)
This thesis deals with the theory and practice of manifold learning, especially as they relate to the problem of classification. We begin with a well-known algorithm, Laplacian Eigenmaps, and then proceed to extend it in two independent directions. First, we generalize this algorithm to allow for the use of partially labeled data, and establish the theoretical foundation of the resulting semi-supervised learning method. Second, we consider two ways of accelerating the most computationally intensive step of Laplacian Eigenmaps, the construction of an adjacency graph. Both of them produce high-quality approximations, and we conclude by showing that they work well together to achieve a dramatic reduction in computational time.
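A bare-bones version of the Laplacian Eigenmaps pipeline the thesis extends — kNN adjacency graph, graph Laplacian, bottom eigenvectors — can be sketched as below. This is a toy dense implementation for intuition only (the thesis is precisely about doing the graph-construction step faster), with 0/1 edge weights in place of a heat kernel:

```python
import numpy as np

def laplacian_eigenmaps(X, k=5, dim=2):
    # Pairwise squared Euclidean distances.
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = len(X)
    W = np.zeros((n, n))
    # kNN adjacency graph with simple 0/1 weights, symmetrized.
    for i in range(n):
        idx = np.argsort(D[i])[1:k + 1]   # skip self at position 0
        W[i, idx] = 1.0
    W = np.maximum(W, W.T)
    deg = W.sum(1)
    L = np.diag(deg) - W                  # unnormalized graph Laplacian
    # Generalized problem L y = lambda * Deg y via symmetric normalization.
    Dm = np.diag(1.0 / np.sqrt(deg))
    vals, vecs = np.linalg.eigh(Dm @ L @ Dm)
    # Skip the trivial constant eigenvector; next `dim` give the embedding.
    return Dm @ vecs[:, 1:dim + 1]
```

The O(n^2) distance matrix in the first step is exactly the cost the thesis's approximate adjacency-graph constructions are designed to avoid.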
LINEAR AND NONLINEAR DIMENSIONALITY REDUCTION FOR FACE RECOGNITION
Abstract

Cited by 1 (0 self)
Principal component analysis (PCA) has long been a simple, efficient technique for dimensionality reduction. Recently, however, many nonlinear methods such as local linear embedding and curvilinear component analysis have been proposed for increasingly complex nonlinear data. In this paper, we investigate and compare linear PCA and various nonlinear methods for face recognition. Results drawn from experiments on real-world face databases show that both linear and nonlinear methods yield similar performance, and differences in classification rate are too insignificant to conclude that either method is always superior. A nonlinearity measure is derived to quantify the degree of nonlinearity of a data set in the reduced subspace. It can be used to indicate the effectiveness of nonlinear or linear dimensionality reduction. Index Terms—PCA, dimensionality reduction, nonlinear manifold, face recognition
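The linear baseline in this comparison is standard enough to state exactly. A minimal PCA via SVD of the centered data (a generic sketch, not the paper's experimental code) is:

```python
import numpy as np

def pca(X, d):
    # Project centered data onto the top-d right singular vectors,
    # i.e., the leading eigenvectors of the sample covariance.
    Xc = X - X.mean(0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d].T, Vt[:d]   # (scores, principal components)
```

For data lying on a line in a higher-dimensional space, a single component reconstructs the data exactly — the situation where, as the abstract suggests, nonlinear methods offer no advantage over PCA.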
Bias Selection Using Task-Targeted Random Subspaces for Robust Application of Graph-Based Semi-Supervised Learning
Abstract
Graphs play a role in many semi-supervised learning algorithms, where unlabeled samples are used to find useful structural properties in the data. Dimensionality reduction and regularization based on preserving smoothness over a graph are common in these settings, and they perform particularly well if proximity in the original feature space closely reflects similarity in the classification problem of interest. However, many real-world problem spaces are overwhelmed by noise in the form of features that have no useful relevance to the concept being learned. This leads to a lack of robustness in these methods that limits their applicability to new domains. We present a graph-construction method that uses a collection of task-specific random subspaces to promote smoothness with respect to the problem of interest. Applying this method in a graph-based semi-supervised setting demonstrates improvements in both the effectiveness and robustness of the learning algorithms in noisy problem domains. Keywords—applications; graph Laplacian; semi-supervised
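The kind of graph-based semi-supervised learner this method feeds can be sketched with plain label propagation over a fixed affinity graph — the standard baseline, not the paper's random-subspace graph construction; the clamping variant shown is one common choice:

```python
import numpy as np

def propagate_labels(W, labels, n_iter=100):
    # W: symmetric affinity matrix; labels: class index per node, -1 = unlabeled.
    n = len(labels)
    classes = sorted(c for c in set(labels) if c >= 0)
    F = np.zeros((n, len(classes)))
    for j, c in enumerate(classes):
        F[labels == c, j] = 1.0           # one-hot scores for labeled nodes
    clamp = F.copy()
    labeled = labels >= 0
    P = W / W.sum(1, keepdims=True)       # row-stochastic propagation matrix
    for _ in range(n_iter):
        F = P @ F                         # diffuse label mass along edges
        F[labeled] = clamp[labeled]       # clamp the known labels each step
    return F.argmax(1)
```

On a chain graph with a weak edge between two clusters, labels diffuse within each cluster and barely cross the weak link — illustrating why the quality of the affinity graph (the paper's focus) dominates the final accuracy.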
Committee:
2012
Abstract
Date of final oral examination: 12/11/12. The dissertation is approved by the following members of the Final Oral Committee.