Spectral Regression for Efficient Regularized Subspace Learning
- Proc. 11th Int'l Conf. Computer Vision (ICCV '07)
, 2007
Cited by 62 (4 self)
Subspace learning based face recognition methods have attracted considerable interest in recent years, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Locality Preserving Projection (LPP), Neighborhood Preserving Embedding (NPE) and Marginal Fisher Analysis (MFA). However, a disadvantage of all these approaches is that their computations involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we propose a novel dimensionality reduction framework, called Spectral Regression (SR), for efficient regularized subspace learning. SR casts the problem of learning the projective functions into a regression framework, which avoids eigen-decomposition of dense matrices. Moreover, with the regression-based framework, different kinds of regularizers can be naturally incorporated into the algorithm, which makes it more flexible. Computational analysis shows that SR has only linear-time complexity, a huge speed-up compared to the cubic-time complexity of the ordinary approaches. Experimental results on face recognition demonstrate the effectiveness and efficiency of the method.
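The two-step idea described in the abstract (closed-form responses first, then regularized regression instead of a dense eigensolve) can be sketched in numpy. This is a minimal illustration of the supervised, LDA-style case with a ridge regularizer; the function name `spectral_regression_lda` and its signature are illustrative, not the paper's actual API.

```python
import numpy as np

def spectral_regression_lda(X, y, alpha=0.1):
    # Step 1 (sketch): for the supervised graph, the response
    # vectors can be written down directly from the class labels
    # as centered class-indicator columns, avoiding an
    # eigendecomposition of a dense matrix.
    classes = np.unique(y)
    Y = np.stack([(y == c).astype(float) for c in classes], axis=1)
    Y -= Y.mean(axis=0)
    # Step 2: fit each projective function by regularized least
    # squares (ridge shown here); other regularizers drop in the
    # same way, which is the flexibility the abstract mentions.
    p = X.shape[1]
    W = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ Y)
    return W  # columns are the learned projective functions
```

The regression step costs roughly O(n p^2) for n samples and p features, which is where the claimed speed-up over cubic-time eigensolvers comes from.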
Trace ratio vs. ratio trace for dimensionality reduction
- Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)
, 2007
Cited by 55 (10 self)
A large family of algorithms for dimensionality reduction ends with solving a Trace Ratio problem of the form arg max_W Tr(W^T S_p W) / Tr(W^T S_l W), which is generally transformed into the corresponding Ratio Trace form arg max_W Tr[(W^T S_l W)^{-1} (W^T S_p W)] to obtain a closed-form but inexact solution. In this work, an efficient iterative procedure is presented to directly solve the Trace Ratio problem. In each step, a Trace Difference problem arg max_W Tr[W^T (S_p - λ S_l) W] is solved, with λ being the trace ratio value computed from the previous step. Convergence of the projection matrix W, as well as the global optimality of the trace ratio value λ, are proven based on point-to-set map theories. In addition, this procedure is further extended to solve trace ratio problems with the more general constraint W^T C W = I and to provide exact solutions for kernel-based subspace learning problems. Extensive experiments on faces and UCI data demonstrate the high convergence speed of the proposed solution, as well as its superiority in classification capability over the corresponding solutions to the ratio trace problem.
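The iteration in the abstract is concrete enough to sketch directly: alternate between updating λ as the current trace ratio and re-solving the trace difference problem via an eigendecomposition. A minimal numpy version, assuming the orthonormality constraint W^T W = I (the generalized constraint W^T C W = I from the paper is not handled here):

```python
import numpy as np

def trace_ratio(Sp, Sl, d, iters=100, tol=1e-10):
    # Alternate, starting from any orthonormal W:
    #   lam <- Tr(W^T Sp W) / Tr(W^T Sl W)
    #   W   <- top-d eigenvectors of (Sp - lam * Sl)
    # Sl must be positive definite for the ratio to be finite.
    n = Sp.shape[0]
    W = np.eye(n)[:, :d]
    lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Sl @ W)
    for _ in range(iters):
        # solve the trace-difference problem for the current lam;
        # eigh returns ascending eigenvalues, so take the last d
        vals, vecs = np.linalg.eigh(Sp - lam * Sl)
        W = vecs[:, ::-1][:, :d]
        new_lam = np.trace(W.T @ Sp @ W) / np.trace(W.T @ Sl @ W)
        if abs(new_lam - lam) < tol:
            return W, new_lam
        lam = new_lam
    return W, lam
```

The trace ratio value is non-decreasing across iterations, which is the monotonicity underlying the convergence proof referenced in the abstract.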
Discriminant locally linear embedding with high-order tensor data
- IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS, PART B: CYBERNETICS
, 2008
Cited by 44 (12 self)
Graph-embedding, along with its linearization and kernelization, provides a general framework that unifies most traditional dimensionality reduction algorithms. From this framework, we propose a new manifold learning technique called discriminant locally linear embedding (DLLE), in which the local geometric properties within each class are preserved according to the locally linear embedding (LLE) criterion, and the separability between different classes is enforced by maximizing margins between point pairs of different classes. To deal with the out-of-sample problem in visual recognition with vector input, the linear version of DLLE, i.e., linearization of DLLE (DLLE/L), is directly proposed through the graph-embedding framework. Moreover, we propose its multilinear version, i.e., tensorization of DLLE, for the out-of-sample problem with high-order tensor input. Based on DLLE, a procedure for gait recognition is described. We conduct comprehensive experiments on both gait and face recognition, and observe that: 1) DLLE along with its linearization and tensorization outperforms the related versions of linear discriminant analysis, and DLLE/L demonstrates greater effectiveness than the linearization of LLE; 2) algorithms based on tensor representations are generally superior to linear algorithms when dealing with intrinsically high-order data; and 3) for human gait recognition, DLLE/L generally obtains higher accuracy than state-of-the-art gait recognition algorithms on the standard University of South Florida gait database.
Locality sensitive discriminant analysis
- IJCAI
Cited by 37 (3 self)
Linear Discriminant Analysis (LDA) is a popular data-analytic tool for studying the class relationship between data points. A major disadvantage of LDA is that it fails to discover the local geometrical structure of the data manifold. In this paper, we introduce a novel linear algorithm for discriminant analysis, called Locality Sensitive Discriminant Analysis (LSDA). When there are not sufficient training samples, local structure is generally more important than global structure for discriminant analysis. By discovering the local manifold structure, LSDA finds a projection which maximizes the margin between data points from different classes in each local area. Specifically, the data points are mapped into a subspace in which nearby points with the same label are close to each other while nearby points with different labels are far apart. Experiments carried out on several standard face databases show a clear improvement over the results of LDA-based recognition.
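The local-neighborhood idea in this abstract can be illustrated by its first step: split each point's k nearest neighbours into a within-class graph (same label, to be pulled together) and a between-class graph (different label, to be pushed apart). This is a hypothetical sketch of that graph construction only, not the full LSDA eigenproblem; the function name `lsda_graphs` is illustrative.

```python
import numpy as np

def lsda_graphs(X, y, k=5):
    # Build two symmetric adjacency matrices over the k-NN graph:
    # Ww connects neighbouring points with the SAME label,
    # Wb connects neighbouring points with DIFFERENT labels.
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Ww = np.zeros((n, n))
    Wb = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]  # skip the point itself
        for j in nbrs:
            if y[i] == y[j]:
                Ww[i, j] = Ww[j, i] = 1.0
            else:
                Wb[i, j] = Wb[j, i] = 1.0
    return Ww, Wb
```

A projection that keeps Ww-connected pairs close while separating Wb-connected pairs realizes the local margin maximization the abstract describes.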
Marginal Fisher Analysis and Its Variants for Human Gait Recognition and Content-Based Image Retrieval
Cited by 35 (5 self)
Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA; then, inspired by recent advances in matrix- and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.
Index Terms: Content-based image retrieval (CBIR), dimensionality reduction, gait recognition, marginal Fisher analysis (MFA), relevance feedback.
A Least-Squares Framework for Component Analysis
, 2009
Cited by 25 (2 self)
... (SC) have been extensively used as a feature extraction step for modeling, clustering, classification, and visualization. CA techniques are appealing because many can be formulated as eigen-problems, offering great potential for learning linear and non-linear representations of data in closed-form. However, the eigen-formulation often conceals important analytic and computational drawbacks of CA techniques, such as solving generalized eigen-problems with rank deficient matrices (e.g., small sample size problem), lacking intuitive interpretation of normalization factors, and understanding commonalities and differences between CA methods. This paper proposes a unified least-squares framework to formulate many CA methods. We show how PCA, LDA, CCA, LE, SC, and their kernel and regularized extensions, correspond to a particular instance of least-squares weighted kernel reduced rank regression (LS-WKRRR). The LS-WKRRR formulation of CA methods has several benefits: (1) provides a clean connection between many CA techniques and an intuitive framework to understand normalization factors; (2) yields efficient numerical schemes to solve CA techniques; (3) overcomes the small sample size problem; (4) provides a framework to easily extend CA methods. We derive new weighted generalizations of PCA, LDA, CCA and SC, and several novel CA techniques.
Spectral Regression: A Unified Approach for Sparse Subspace Learning
Cited by 23 (6 self)
Recently, the problem of dimensionality reduction (or subspace learning) has received a lot of interest in many fields of information processing, including data mining, information retrieval, and pattern recognition. Some popular methods include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Locality Preserving Projection (LPP). However, a disadvantage of all these approaches is that the learned projective functions are linear combinations of all the original features, and thus it is often difficult to interpret the results. In this paper, we propose a novel dimensionality reduction framework, called Unified Sparse Subspace Learning (USSL), for learning sparse projections. USSL casts the problem of learning the projective functions into a regression framework, which facilitates the use of different kinds of regularizers. By using an L1-norm regularizer (lasso), the sparse projections can be efficiently computed. Experimental results on real-world classification and clustering problems demonstrate the effectiveness of the method.
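Why an L1-norm regularizer yields sparse projections can be shown with a small sketch: the proximal-gradient (ISTA) solver for the lasso applies a soft-threshold at every step, which sets small coefficients exactly to zero. This illustrates only the regression-with-lasso step, not the full USSL pipeline (which first computes responses from a graph eigenproblem); the function name is hypothetical.

```python
import numpy as np

def sparse_projection_ista(X, y, lam=0.1, iters=1000):
    # Solve  min_w  ||X w - y||^2 / (2 n)  +  lam * ||w||_1
    # by ISTA (proximal gradient). The soft-threshold below is
    # what zeroes out coefficients, making the projection sparse
    # and hence easier to interpret.
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(iters):
        g = X.T @ (X @ w - y) / n       # gradient of the smooth part
        z = w - g / L                   # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return w
```

With a larger `lam`, more coefficients are driven exactly to zero, trading fit for interpretability.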
Parzen Discriminant Analysis
Cited by 18 (2 self)
In this paper, we propose a non-parametric Discriminant Analysis method (making no assumption on the distributions of classes), called Parzen Discriminant Analysis (PDA). Through a deep investigation of non-parametric density estimation, we find that minimizing/maximizing the distances between each data sample and its nearby similar/dissimilar samples is equivalent to minimizing an upper bound of the Bayesian error rate. Based on this theoretical analysis, we define our criterion as maximizing the average local dissimilarity scatter with respect to a fixed average local similarity scatter. All local scatters are calculated in fixed-size local regions, resembling the idea of Parzen estimation. Experiments on UCI machine learning databases show that our method impressively outperforms other related neighbor-based non-parametric methods.
Human action recognition using local spatiotemporal discriminant embedding
- in Proc. CVPR
, 2008
Cited by 15 (0 self)
Human action video sequences can be considered as nonlinear dynamic shape manifolds in the space of image frames. In this paper, we address learning and classifying human actions on embedded low-dimensional manifolds. We propose a novel manifold embedding method, called Local Spatio-Temporal Discriminant Embedding (LSTDE). The discriminating capabilities of the proposed method are two-fold: (1) for local spatial discrimination, LSTDE projects data points (silhouette-based image frames of human action sequences) in a local neighborhood into the embedding space, where data points of the same action class are close while those of different classes are far apart; (2) in such a local neighborhood, each data point has an associated short video segment, which forms a local temporal subspace on the embedded manifold. LSTDE finds an optimal embedding which maximizes the principal angles between those temporal subspaces associated with data points of different classes. Benefiting from the joint spatio-temporal discriminant embedding, our method is potentially more powerful for classifying human actions with similar space-time shapes, and is able to perform recognition on a frame-by-frame or short-video-segment basis. Experimental results demonstrate that our method can accurately recognize human actions, and can improve the recognition performance over some representative manifold embedding methods, especially on highly confusing human action types.
Linear Laplacian discrimination for feature extraction
- In CVPR, 2007
Cited by 13 (4 self)
Discriminant feature extraction plays a fundamental role in pattern recognition. In this paper, we propose the Linear Laplacian Discrimination (LLD) algorithm for discriminant feature extraction. LLD is an extension of Linear Discriminant Analysis (LDA). Our motivation is to address the issue that LDA cannot work well in cases where sample spaces are non-Euclidean. Specifically, we define the within-class scatter and the between-class scatter using similarities which are based on pairwise distances in sample spaces. Thus, the structural information of classes is contained in the within-class and the between-class Laplacian matrices, which are free from the metrics of sample spaces. The optimal discriminant subspace can be derived by controlling the structural evolution of the Laplacian matrices. Experiments are performed on the FRGC version 2 facial database. Experimental results show that LLD is effective in extracting discriminant features.