Results 1–10 of 19
Graph embedding and extension: A general framework for dimensionality reduction
IEEE Trans. Pattern Anal. Mach. Intell., 2007
"... Over the past few decades, a large family of algorithms—supervised or unsupervised; stemming from statistics or geometry theory—has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper ..."
Abstract

Cited by 271 (29 self)
Over the past few decades, a large family of algorithms, supervised or unsupervised and stemming from statistics or from geometry, has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding, or a linear/kernel/tensor extension, of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or from a penalty graph that characterizes a statistical or geometric property to be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. Using this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis (MFA), in which the intrinsic graph characterizes intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes interclass separability. We show that MFA effectively overcomes the limitations of traditional Linear Discriminant Analysis that stem from its data-distribution assumptions and its limited number of available projection directions. Real face recognition experiments show the superiority of the proposed MFA over LDA, and likewise for their corresponding kernel and tensor extensions.
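The intrinsic/penalty-graph criterion described above reduces to a generalized eigenvalue problem between two graph Laplacians. The following sketch is only an illustration of that structure, not the paper's implementation: the data, the neighbor count k, and the ridge term are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-class data in 3-D; the classes differ along the first axis.
X = np.vstack([rng.normal([0, 0, 0], 0.3, (20, 3)),
               rng.normal([2, 0, 0], 0.3, (20, 3))])
y = np.array([0] * 20 + [1] * 20)

def knn_adjacency(X, y, k, same_class):
    """0/1 adjacency linking each point to its k nearest neighbors taken
    from the same class (intrinsic graph) or the other class (penalty graph)."""
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        mask = (y == y[i]) if same_class else (y != y[i])
        mask[i] = False
        idx = np.where(mask)[0]
        for j in idx[np.argsort(D[i, idx])[:k]]:
            W[i, j] = W[j, i] = 1.0
    return W

W_int = knn_adjacency(X, y, k=5, same_class=True)    # intraclass compactness
W_pen = knn_adjacency(X, y, k=5, same_class=False)   # interclass separability
L_int = np.diag(W_int.sum(1)) - W_int                # graph Laplacians
L_pen = np.diag(W_pen.sum(1)) - W_pen

# MFA-style criterion: minimize w^T X^T L_int X w / w^T X^T L_pen X w,
# i.e. take the smallest generalized eigenvalue; a small ridge keeps B invertible.
A = X.T @ L_int @ X
B = X.T @ L_pen @ X + 1e-6 * np.eye(3)
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
w = np.real(vecs[:, np.argmin(np.real(vals))])       # projection direction
z = X @ w                                            # 1-D embedding
```

On this toy set the minimizing direction aligns with the axis separating the classes, so the 1-D projection keeps same-class points compact while pushing the classes apart.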
Diffusion Wavelets, 2004
"... We present a multiresolution construction for efficiently computing, compressing and applying large powers of operators that have high powers with low numerical rank. This allows the fast computation of functions of the operator, notably the associated Green’s function, in compressed form, and their ..."
Abstract

Cited by 148 (16 self)
We present a multiresolution construction for efficiently computing, compressing, and applying large powers of operators whose high powers have low numerical rank. This allows the fast computation of functions of the operator, notably the associated Green’s function, in compressed form, and their fast application. Classes of operators satisfying these conditions include diffusion-like operators, in any dimension, on manifolds, on graphs, and in nonhomogeneous media. In this case our construction can be viewed as a far-reaching generalization of Fast Multipole Methods, achieved through a different point of view, and of the nonstandard wavelet representation of Calderón–Zygmund and pseudodifferential operators, achieved through a different multiresolution analysis adapted to the operator. We show how the dyadic powers of an operator can be used to induce a multiresolution analysis, as in classical Littlewood–Paley and wavelet theory, and we show how to construct, with fast and stable algorithms, scaling-function and wavelet bases associated to this multiresolution analysis, together with the corresponding downsampling operators, and how to use them to compress the corresponding powers of the operator. This makes it possible to extend multiscale signal processing to general spaces, such as manifolds and graphs, in a very natural way, with corresponding fast algorithms.
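The premise that dyadic powers of a diffusion operator collapse in numerical rank is easy to check empirically. A minimal sketch, on an invented random geometric graph (the kernel width and tolerance are arbitrary choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical diffusion operator: a row-stochastic random-walk matrix
# built from a Gaussian affinity on 60 random planar points.
pts = rng.random((60, 2))
D2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
W = np.exp(-D2 / 0.02)
T = W / W.sum(1, keepdims=True)        # diffusion operator

def numerical_rank(M, eps=1e-8):
    """Number of singular values above a relative tolerance."""
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > eps * s[0]).sum())

# Dyadic powers T, T^2, T^4, T^8, ...: the numerical rank shrinks rapidly,
# which is exactly what diffusion wavelets exploit to store each power cheaply.
M = T.copy()
ranks = []
for k in range(6):
    ranks.append(numerical_rank(M))
    M = M @ M                          # next dyadic power
print("numerical ranks of T^(2^k):", ranks)
```

The shrinking ranks are what make a compressed multiresolution representation of the operator's powers feasible.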
Learning a spatially smooth subspace for face recognition
IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’07), 2007
"... Subspace learning based face recognition methods have attracted considerable interests in recently years, including ..."
Abstract

Cited by 53 (3 self)
Subspace-learning-based face recognition methods have attracted considerable interest in recent years, including
Spectral Regression: A Unified Approach for Sparse Subspace Learning
"... Recently the problem of dimensionality reduction (or, subspace learning) has received a lot of interests in many fields of information processing, including data mining, information retrieval, and pattern recognition. Some popular methods include Principal Component Analysis (PCA), Linear Discrimina ..."
Abstract

Cited by 23 (6 self)
Recently the problem of dimensionality reduction (or subspace learning) has received a lot of interest in many fields of information processing, including data mining, information retrieval, and pattern recognition. Popular methods include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). However, a disadvantage of all these approaches is that the learned projective functions are linear combinations of all the original features, so it is often difficult to interpret the results. In this paper, we propose a novel dimensionality reduction framework, called Unified Sparse Subspace Learning (USSL), for learning sparse projections. USSL casts the problem of learning the projective functions into a regression framework, which facilitates the use of different kinds of regularizers. By using an L1-norm regularizer (the lasso), the sparse projections can be computed efficiently. Experimental results on real-world classification and clustering problems demonstrate the effectiveness of our method.
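The L1 (lasso) step the abstract mentions can be sketched with plain coordinate descent. This is a generic lasso solver on invented data, not the paper's USSL formulation; the penalty weight and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy regression: the target depends on only 2 of 10 features, so an
# L1 penalty should zero out the other coefficients.
X = rng.normal(0, 1, (100, 10))
t = X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.1, 100)

def lasso_cd(X, t, lam, iters=200):
    """Coordinate-descent lasso: min ||X w - t||^2 / (2n) + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(0) / n
    for _ in range(iters):
        for j in range(d):
            r = t - X @ w + X[:, j] * w[j]        # residual without feature j
            rho = X[:, j] @ r / n
            # soft-thresholding: small correlations are set exactly to zero
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

w = lasso_cd(X, t, lam=0.1)
print("nonzero coefficients:", np.flatnonzero(np.abs(w) > 1e-6))
```

The exact zeros produced by soft-thresholding are what make the learned projection interpretable: only a few original features carry weight.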
Spectral regression for dimensionality reduction, 2007
"... Spectral methods have recently emerged as a powerful tool for dimensionality reduction and manifold learning. These methods use information contained in the eigenvectors of a data affinity (i.e., itemitem similarity) matrix to reveal low dimensional structure in high dimensional data. The most pop ..."
Abstract

Cited by 17 (6 self)
Spectral methods have recently emerged as a powerful tool for dimensionality reduction and manifold learning. These methods use information contained in the eigenvectors of a data affinity (i.e., item–item similarity) matrix to reveal low-dimensional structure in high-dimensional data. The most popular manifold learning algorithms include Locally Linear Embedding, Isomap, and Laplacian Eigenmap. However, these algorithms only provide embedding results for the training samples. Many extensions of these approaches try to solve the out-of-sample extension problem by seeking an embedding function in a reproducing kernel Hilbert space. However, a disadvantage of all these approaches is that their computations usually involve the eigendecomposition of dense matrices, which is expensive in both time and memory. In this paper, we propose a novel dimensionality reduction method, called Spectral Regression (SR). SR casts the problem of learning an embedding function into a regression framework, which avoids the eigendecomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, making it more flexible. SR can be performed in supervised, unsupervised, and semi-supervised settings. It makes efficient use of both labeled and unlabeled points to discover the intrinsic discriminant structure in the data. Experimental results on classification and semi-supervised classification demonstrate the effectiveness and efficiency of our algorithm.
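The two-step structure of SR, spectral targets first and then a cheap regularized regression, can be sketched for the supervised two-class case, where the graph eigenvector is determined by the labels and no dense eigendecomposition is needed. The data and the ridge penalty below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy supervised data: two Gaussian classes in 5-D.
X = np.vstack([rng.normal(0, 1, (30, 5)) + [3, 0, 0, 0, 0],
               rng.normal(0, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)

# Step 1: spectral target. For a class-block-constant affinity graph the
# relevant eigenvector is piecewise constant on the classes, so we can
# write the response directly from the labels.
t = np.where(y == 0, 1.0, -1.0)
t -= t.mean()                      # centre the response

# Step 2: regularized regression X w ≈ t (ridge here; other regularizers
# slot in the same way).
alpha = 0.1
Xc = X - X.mean(0)
w = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(5), Xc.T @ t)
z = Xc @ w                         # embedding function, applicable to new points
```

Because step 2 is an ordinary regularized least-squares solve, the whole procedure avoids the dense eigendecomposition that direct spectral embedding requires.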
Modeling correspondences for multi-camera tracking using nonlinear manifold learning and target dynamics
In Conference on Computer Vision and Pattern Recognition, 2006
"... Multicamera tracking systems often must maintain consistent identity labels of the targets across views to recover 3D trajectories and fully take advantage of the additional information available from the multiple sensors. Previous approaches to the “correspondence across views ” problem include ma ..."
Abstract

Cited by 16 (3 self)
Multi-camera tracking systems often must maintain consistent identity labels for targets across views in order to recover 3D trajectories and fully take advantage of the additional information available from the multiple sensors. Previous approaches to the “correspondence across views” problem include matching features, using camera calibration information, and computing homographies between views under the assumption that the world is planar. However, it can be difficult to match features across significantly different views. Furthermore, calibration information is not always available, and the planar-world hypothesis can be too restrictive. In this paper, a new approach is presented for matching correspondences based on nonlinear manifold learning and system dynamics identification. The proposed approach does not require similar views, calibration, or geometric assumptions about the 3D environment, and it is robust to noise and occlusion. Experimental results demonstrate the use of this approach to generate and predict views in cases where identity labels become ambiguous.
Robust Locally Linear Embedding, 2005
"... In the past few years, some nonlinear dimensionality reduction (NLDR) or nonlinear manifold learning methods have aroused a great deal of interest in the machine learning community. These methods are promising in that they can automatically discover the lowdimensional nonlinear manifold in a highd ..."
Abstract

Cited by 15 (0 self)
In the past few years, nonlinear dimensionality reduction (NLDR), or nonlinear manifold learning, methods have aroused a great deal of interest in the machine learning community. These methods are promising in that they can automatically discover the low-dimensional nonlinear manifold in a high-dimensional data space and then embed the data points into a low-dimensional embedding space, using tractable linear-algebraic techniques that are easy to implement and not prone to local minima. Despite their appealing properties, these NLDR methods are not robust against outliers in the data, yet so far very little has been done to address the robustness problem. In this paper, we address this problem in the context of an NLDR method called locally linear embedding (LLE). Based on robust estimation techniques, we propose an approach that makes LLE more robust, which we refer to as robust locally linear embedding (RLLE). We also present several specific methods for realizing this general RLLE approach. Experimental results on both synthetic and real-world data show that RLLE is very robust against outliers.
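The failure mode RLLE targets is visible in the standard LLE weight step: a point off the manifold cannot be reconstructed from its neighbors. This sketch, on invented line-plus-outlier data with an arbitrary k and regularizer, computes the per-point reconstruction errors that a robust-estimation scheme would turn into reliability weights.

```python
import numpy as np

rng = np.random.default_rng(3)
# Points near a 1-D line in 2-D, plus one gross outlier.
t = np.linspace(0, 1, 40)
X = np.c_[t, 0.5 * t] + rng.normal(0, 0.01, (40, 2))
X = np.vstack([X, [[0.5, 5.0]]])         # the outlier, far off the manifold
n = len(X)

def lle_weights(X, i, k=5, reg=1e-3):
    """Standard LLE step: weights that reconstruct X[i] from its k nearest
    neighbors and sum to one (regularized local Gram matrix)."""
    d = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(d)[1:k + 1]        # skip the point itself
    Z = X[nbrs] - X[i]
    G = Z @ Z.T + reg * np.trace(Z @ Z.T) * np.eye(k)
    w = np.linalg.solve(G, np.ones(k))
    return nbrs, w / w.sum()

# Reconstruction error per point: manifold points reconstruct well, the
# outlier does not, so its error stands out by orders of magnitude.
err = np.empty(n)
for i in range(n):
    nbrs, w = lle_weights(X, i)
    err[i] = np.linalg.norm(X[i] - w @ X[nbrs])
print("median error:", np.median(err), " outlier error:", err[-1])
```

Downweighting points with anomalously large reconstruction error before solving the embedding is the general shape of the robustification the abstract describes.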
Spectral regression: A regression framework for efficient regularized subspace learning, 2009
"... Spectral methods have recently emerged as a powerful tool for dimensionality reduction and manifold learning. These methods use information contained in the eigenvectors of a data affinity (i.e., itemitem similarity) matrix to reveal the low dimensional structure in the high dimensional data. The m ..."
Abstract

Cited by 10 (2 self)
Spectral methods have recently emerged as a powerful tool for dimensionality reduction and manifold learning. These methods use information contained in the eigenvectors of a data affinity (i.e., item–item similarity) matrix to reveal the low-dimensional structure in high-dimensional data. The most popular manifold learning algorithms include Locally Linear Embedding, ISOMAP, and Laplacian Eigenmap. However, these algorithms only provide embedding results for the training samples. Many extensions of these approaches try to solve the out-of-sample extension problem by seeking an embedding function in a reproducing kernel Hilbert space. However, a disadvantage of all these approaches is that their computations usually involve the eigendecomposition of dense matrices, which is expensive in both time and memory. In this thesis, we introduce a novel dimensionality reduction framework, called Spectral Regression (SR). SR casts the problem of learning an embedding function into a regression framework, which avoids the eigendecomposition of dense matrices. Moreover, with regression as a building block, different kinds of regularizers can be naturally incorporated into our framework, making it more flexible. SR can be performed in supervised, unsupervised, and semi-supervised settings. It makes efficient use of both labeled and unlabeled points to discover the intrinsic discriminant structure in the data. We have applied our algorithms to several real-world applications, e.g., face analysis, document representation, and content-based image retrieval.
A Convergent Solution to Tensor Subspace Learning
In Proc. IJCAI ’07, 2007
"... Recently, substantial efforts have been devoted to the subspace learning techniques based on tensor representation, such as 2DLDA [Ye et al., 2004], DATER [Yan et al., 2005] and Tensor Subspace Analysis (TSA) [He et al., 2005]. In this context, a vital yet unsolved problem is that the computational ..."
Abstract

Cited by 6 (1 self)
Recently, substantial effort has been devoted to subspace learning techniques based on tensor representations, such as 2DLDA [Ye et al., 2004], DATER [Yan et al., 2005], and Tensor Subspace Analysis (TSA) [He et al., 2005]. In this context, a vital yet unsolved problem is that the convergence of these iterative algorithms is not guaranteed. In this work, we present a novel solution procedure for general tensor-based subspace learning, followed by a detailed proof of the convergence of both the solution projection matrices and the objective function value. Extensive experiments on real-world databases verify the high convergence speed of the proposed procedure, as well as its superiority in classification capability over traditional solution procedures.
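The alternating scheme whose convergence these papers study can be illustrated in miniature: fixing one projection and solving an eigenproblem for the other is a coordinate-wise maximization, so the tracked objective never decreases. The matrix-valued data, sizes, and variance criterion below are all invented for the sketch; they are not the 2DLDA/DATER/TSA objectives themselves.

```python
import numpy as np

rng = np.random.default_rng(6)
# Toy matrix-valued samples with variance planted on a rank-1 pair (u*, v*).
u_true = np.ones(8) / np.sqrt(8)
v_true = np.ones(6) / np.sqrt(6)
A = [np.outer(u_true, v_true) * rng.normal(0, 3) + rng.normal(0, 0.1, (8, 6))
     for _ in range(50)]

# Alternating maximization of sum_i (u^T A_i v)^2 over unit vectors u, v;
# each half-step is an exact eigenproblem, so the objective is monotone.
u = np.ones(8) / np.sqrt(8)
v = np.ones(6) / np.sqrt(6)
history = []
for _ in range(10):
    Su = sum(np.outer(Ai @ v, Ai @ v) for Ai in A)      # fix v, update u
    u = np.linalg.eigh(Su)[1][:, -1]
    Sv = sum(np.outer(Ai.T @ u, Ai.T @ u) for Ai in A)  # fix u, update v
    v = np.linalg.eigh(Sv)[1][:, -1]
    history.append(sum(float(u @ Ai @ v) ** 2 for Ai in A))
print("objective per iteration:", [round(h, 3) for h in history])
```

Tracking the objective like this is exactly how one observes (though does not prove) the convergence behavior the paper establishes formally.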
Graph Embedding with Constraints
"... Recently graph based dimensionality reduction has received a lot of interests in many fields of information processing. Central to it is a graph structure which models the geometrical and discriminant structure of the data manifold. When label information is available, it is usually incorporated int ..."
Abstract

Cited by 6 (0 self)
Recently, graph-based dimensionality reduction has received a lot of interest in many fields of information processing. Central to these methods is a graph structure that models the geometric and discriminant structure of the data manifold. When label information is available, it is usually incorporated into the graph structure by modifying the weights between data points. In this paper, we propose a novel dimensionality reduction algorithm, called Constrained Graph Embedding, which instead treats the label information as additional constraints. Specifically, we constrain the space of solutions that we explore to contain only embedding results that are consistent with the labels. Experimental results on two real-life data sets illustrate the effectiveness of our proposed method.
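The constraint idea, restricting candidate embeddings to a label-consistent subspace, can be sketched as follows: labeled points of one class are tied to a single coordinate through a basis matrix U, and the graph eigenproblem is solved in that reduced space. The graph, sizes, and labels below are invented for illustration.

```python
import numpy as np

# Toy chain graph on 10 points; points 0-2 share class 0, points 3-5
# share class 1, and points 6-9 are unlabeled.
n = 10
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(1)) - W                      # graph Laplacian

# Constraint basis U: one shared column per class, one column per
# unlabeled point, so every y = U z is label-consistent by construction.
n_classes = 2
unlabeled = [i for i in range(n) if i not in labels]
U = np.zeros((n, n_classes + len(unlabeled)))
for i in range(n):
    U[i, labels[i] if i in labels else n_classes + unlabeled.index(i)] = 1.0

# Constrained embedding: minimize y^T L y over y = U z, a small
# generalized eigenproblem in the reduced space (B is diagonal here).
A = U.T @ L @ U
B = U.T @ U
Bi = np.diag(1.0 / np.sqrt(np.diag(B)))
vals, vecs = np.linalg.eigh(Bi @ A @ Bi)
y_emb = U @ (Bi @ vecs[:, 1])                  # skip the constant eigenvector
```

By construction, same-class labeled points land on exactly the same coordinate, which is the label-consistency the abstract describes enforcing through constraints rather than edge reweighting.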