Results 11–20 of 37
Face Recognition Using Adaptive Margin Fisher’s Criterion and Linear Discriminant Analysis (AMFCLDA)
 IAJIT First Online Publication
, 2011
Abstract: Selecting a low-dimensional feature subspace from thousands of features is a key step for optimal classification. Linear Discriminant Analysis (LDA) is a basic, well-recognized supervised method that is effectively employed for classification. However, two intra-class problems arise during discriminant analysis. First, in the training phase the number of samples per class is smaller than the dimensionality of the samples, which makes LDA unstable. The other is high computational cost due to redundant and irrelevant data points within each class. An Adaptive Margin Fisher’s Criterion Linear Discriminant Analysis (AMFCLDA) is proposed that addresses these issues and overcomes the limitations of the intra-class problems. The Small Sample Size problem is resolved through a modified maximum margin criterion, a customized form of LDA, together with the convex hull: the inter-class scatter is defined using LDA, while the intra-class scatter is formulated using quickhull. Similarly, computational cost is reduced by reformulating the within-class scatter matrix through the Minimum Redundancy Maximum Relevance (mRMR) algorithm while preserving discriminant information. The proposed algorithm shows encouraging performance, and a comparison is made with existing approaches.
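The maximum-margin-criterion component this abstract builds on can be sketched generically: MMC-style methods take the top eigenvectors of S_b − S_w instead of solving the generalized problem S_w^{-1} S_b. The sketch below is our own NumPy illustration under that assumption, not the authors' AMFCLDA, and the function names are ours.

```python
import numpy as np

def scatter_matrices(X, y):
    """Between-class (Sb) and within-class (Sw) scatter matrices.
    X: (n_samples, n_features); y: class labels."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def mmc_projection(X, y, k):
    """Top-k eigenvectors of Sb - Sw (maximum margin criterion).
    No inversion of Sw is needed, so the projection stays defined
    even in the small-sample-size regime where Sw is singular."""
    Sb, Sw = scatter_matrices(X, y)
    vals, vecs = np.linalg.eigh(Sb - Sw)        # symmetric eigendecomposition
    order = np.argsort(vals)[::-1][:k]          # largest eigenvalues first
    return vecs[:, order]                       # (n_features, k)
```

Because the criterion uses the difference S_b − S_w rather than S_w^{-1} S_b, no matrix inversion is required, which is why MMC-style formulations sidestep the instability the abstract describes.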
Non-Negative Graph Embedding
We introduce a general formulation, called non-negative graph embedding, for nonnegative data decomposition by integrating the characteristics of both intrinsic and penalty graphs [17]. In the past, such a decomposition was obtained mostly in an unsupervised manner, as in Nonnegative Matrix Factorization (NMF) and its variants, and hence was not necessarily powerful for classification. In this work, nonnegative data decomposition is studied in a unified way applicable to unsupervised as well as supervised/semi-supervised configurations. The final data decomposition is separated into two parts, which separately preserve the similarities measured by the intrinsic and penalty graphs, and together minimize the data reconstruction error. An iterative procedure is derived for this purpose, and the nonnegativity of the algorithm is guaranteed by the nonnegative property of the inverse of any M-matrix. Extensive experiments compared with NMF and conventional solutions for graph embedding demonstrate the algorithmic properties in sparsity, classification power, and robustness to image occlusions.
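For context, the unsupervised decompositions this paper generalizes (NMF and its variants) are driven by multiplicative updates that preserve nonnegativity by construction. A minimal Lee-Seung-style sketch, illustrative only; the paper's graph-embedding updates are a different, more elaborate iteration:

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2.
    Nonnegativity of W and H is preserved because each update multiplies
    by a ratio of nonnegative quantities."""
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

The same "multiply by a nonnegative ratio" trick underlies the nonnegativity guarantee the abstract attributes to the inverse of an M-matrix, although the graph-embedding case requires that extra machinery.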
Semi-supervised Marginal Discriminant Analysis Based on QR Decomposition
In this paper, a novel subspace learning method, semi-supervised marginal discriminant analysis (SMDA), is proposed for classification. SMDA aims at maintaining the intrinsic neighborhood relations between data points from the same class, while maximizing the margin between neighboring data points with different class labels. Unlike traditional dimensionality reduction algorithms such as linear discriminant analysis (LDA) and maximum margin criterion (MMC), which seek only the global Euclidean structure, SMDA takes the local structure of the data into account. Moreover, it is designed for semi-supervised learning, incorporating both labeled and unlabeled data points, and avoids the small sample size (SSS) problem. QR decomposition is then employed to find the optimal transformation, which makes the algorithm scalable and more efficient. Experiments on face recognition are presented to show the effectiveness of the method.
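The role QR decomposition plays in such methods can be illustrated generically: it yields an orthonormal basis for a low-dimensional span (here, the class centroids) without solving any d × d eigenproblem. A hedged sketch of that step under our own naming, not the exact SMDA transformation:

```python
import numpy as np

def centroid_basis(X, y):
    """Orthonormal basis (via reduced QR) for the span of class centroids.
    Working inside this small subspace is what makes QR-based
    discriminant methods scale to high-dimensional data."""
    # stack one centroid per column: (n_features, n_classes)
    C = np.stack([X[y == c].mean(axis=0) for c in np.unique(y)], axis=1)
    Q, _ = np.linalg.qr(C)   # reduced mode: Q is (n_features, n_classes)
    return Q
```

The cost is dominated by the QR of a tall, thin matrix, which is linear in the feature dimension, versus the cubic cost of a full eigendecomposition.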
Comments on “Efficient and Robust Feature Extraction by Maximum Margin Criterion”
Abstract—The goal of this comment is to first point out two loopholes in the paper by Li et al. (2006): 1) the so-designed efficient maximal margin criterion (MMC) algorithm for the small sample size (SSS) problem is problematic, and 2) the discussion on the equivalence with the null-space-based methods in the SSS problem does not hold. Then, we present a really efficient MMC algorithm for the SSS problem. Index Terms—Efficient algorithm, equivalence, maximal margin criterion (MMC), null space, small sample size (SSS) problem. I. ORGANIZATION AND PREPARATION. Organization: In this section, we give some notation and a brief review of the maximum margin criterion (MMC) [3] and point out the two loopholes. In Section II, we propose a really efficient MMC, and we conclude this comment in Section III. Let the training set be composed of $c$ classes $C_1, C_2, \ldots, C_c$, let the $i$-th class have $n_i$ training samples, and let $x_j^i$ denote the $j$-th $d$-dimensional sample from the $i$-th class. In total, there are $N = \sum_{i=1}^{c} n_i$ training samples. In applications such as face recognition, the small sample size (SSS) problem often arises, namely $d \gg N$. The within-class scatter matrix $S_w$ and between-class scatter matrix $S_b$ can be written as $S_w = \frac{1}{N} \sum_{i=1}^{c} \sum_{j=1}^{n_i} (x_j^i - m_i)(x_j^i - m_i)^T$ and $S_b = \frac{1}{N} \sum_{i=1}^{c} n_i (m_i - m)(m_i - m)^T$, where $m_i$ is the mean of class $i$ and $m$ is the overall mean.
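With the 1/N scaling in these definitions, the within- and between-class scatters sum to the total scatter, a standard identity that makes a quick sanity check for any implementation. A small NumPy verification (our own illustration of the definitions above):

```python
import numpy as np

def scatters(X, y):
    """Within-class (S_w) and between-class (S_b) scatter matrices
    with the 1/N scaling used in the definitions above."""
    N, d = X.shape
    m = X.mean(axis=0)
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc) / N
        Sb += len(Xc) * np.outer(mc - m, mc - m) / N
    return Sw, Sb
```

On any labeled sample set, Sw + Sb reproduces the total scatter (X − m)^T (X − m) / N exactly.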
A Scalable Supervised Algorithm for Dimensionality Reduction on Streaming Data
Algorithms on streaming data have attracted increasing attention in the past decade. Among them, dimensionality reduction algorithms are of particular interest because of the demands of real tasks. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two of the most widely used dimensionality reduction approaches. However, PCA is not optimal for general classification problems because it is unsupervised and ignores valuable label information. On the other hand, the performance of LDA is degraded by the limited number of available low-dimensional projections and by the singularity problem. Recently, the Maximum Margin Criterion (MMC) was proposed to overcome the shortcomings of PCA and LDA. Nevertheless, the original MMC algorithm does not fit the streaming data model for handling large-scale, high-dimensional data sets, so an effective, efficient, and scalable approach is needed. In this paper, we propose a supervised incremental dimensionality reduction algorithm, and an extension of it, to infer adaptive low-dimensional spaces by optimizing the Maximum Margin Criterion. Experimental results on a synthetic dataset and real datasets demonstrate the superior performance of the proposed algorithm on streaming data.
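The basic primitive any streaming discriminant method needs is an incremental update of per-class statistics, so that scatter matrices can be refreshed without revisiting old samples. A sketch of running per-class means under our own naming, not the paper's full incremental MMC:

```python
import numpy as np

class StreamingClassMeans:
    """Running per-class sample counts and means, updated one sample
    at a time; the building block for incremental scatter/MMC
    computations on streaming data (a sketch, not the paper's method)."""

    def __init__(self, dim):
        self.dim = dim
        self.count = {}
        self.mean = {}

    def update(self, x, label):
        if label not in self.count:
            self.count[label] = 0
            self.mean[label] = np.zeros(self.dim)
        self.count[label] += 1
        # standard running-mean recurrence: m <- m + (x - m) / n
        self.mean[label] += (x - self.mean[label]) / self.count[label]
```

Each update is O(dim), independent of how many samples have already streamed past, which is precisely the property a streaming model requires.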
Learning Semantic Patterns with Discriminant Localized Binary Projections
 IEEE Conference on Computer Vision and Pattern Recognition
, 2006
In this paper, we present a novel approach to learning semantic localized patterns with binary projections in a supervised manner. The pursuit of these binary projections is reformulated into a problem of feature clustering, which optimizes the separability of different classes by taking the members within each cluster as the nonzero entries of a projection vector. An efficient greedy procedure is proposed to incrementally combine the subclusters while ensuring the cardinality constraints of the projections and the increase of the objective function. Compared with other algorithms for sparse representations, our proposed algorithm, referred to as Discriminant Localized Binary Projections (dlb), has the following characteristics: 1) dlb is supervised, and hence much more effective for classification than unsupervised sparse algorithms like Nonnegative Matrix Factorization (NMF); 2) similar to NMF, dlb can derive spatially localized sparse bases; furthermore, the sparsity of dlb is controllable, and an interesting result is that the bases have explicit semantics in human perception, like eyes and mouth; and 3) classification with dlb is extremely efficient, requiring only addition operations for dimensionality reduction. Extensive experimental results show significant improvements of dlb in sparsity and face recognition accuracy in comparison with state-of-the-art algorithms for dimensionality reduction and sparse representations.
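The "only addition operations" property follows directly from the projections being 0/1 vectors: each reduced coordinate is a sum over a subset of input features. A generic sketch (names and layout are ours, not the paper's):

```python
import numpy as np

def binary_project(X, masks):
    """Reduce dimensionality with 0/1 projection vectors.

    masks: (k, n_features) boolean array, one binary projection per row.
    Each output coordinate is the sum of the selected input features,
    so the reduction needs only additions (no multiplications)."""
    return X @ masks.T.astype(X.dtype)
```

The matrix product here is only a vectorized way of writing the feature sums; a hardware or fixed-point implementation would perform the additions directly.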
Parzen Discriminant Analysis
In this paper, we propose a nonparametric discriminant analysis method (making no assumption on the class distributions), called Parzen Discriminant Analysis (PDA). Through a careful investigation of nonparametric density estimation, we find that minimizing/maximizing the distances between each data sample and its nearby similar/dissimilar samples is equivalent to minimizing an upper bound on the Bayesian error rate. Based on this theoretical analysis, we define our criterion as maximizing the average local dissimilarity scatter with respect to a fixed average local similarity scatter. All local scatters are calculated in fixed-size local regions, resembling the idea of Parzen estimation. Experiments on the UCI machine learning repository show that our method clearly outperforms other related neighbor-based nonparametric methods.
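The quantities the criterion trades off can be illustrated directly: average squared distances from each sample to same-class versus different-class samples inside a fixed-size region. A rough sketch of that idea under our own naming, not the paper's exact scatters:

```python
import numpy as np

def local_scatters(X, y, radius):
    """Average local similarity / dissimilarity scatters: squared
    distances to same-class / different-class samples inside a
    fixed-radius region, echoing the Parzen-window-style idea."""
    # pairwise squared Euclidean distances, (n, n)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    near = (d2 <= radius ** 2) & ~np.eye(len(X), dtype=bool)
    same = y[:, None] == y[None, :]
    sim = d2[near & same]        # local same-class distances
    dis = d2[near & ~same]       # local different-class distances
    return (sim.mean() if sim.size else 0.0,
            dis.mean() if dis.size else 0.0)
```

A criterion in the spirit of the abstract would then maximize the dissimilarity scatter while holding the similarity scatter fixed.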
Lorentzian Discriminant Projection and Its Applications
Abstract. This paper develops a supervised dimensionality reduction method, Lorentzian Discriminant Projection (LDP), for discriminant analysis and classification. Our method represents the structure of sample data by a manifold furnished with a Lorentzian metric tensor. Unlike classic discriminant analysis techniques, LDP uses the distances from points to their within-class neighbors and to the global geometric centroid to model a new manifold that captures the intrinsic local and global geometric structures of the data set. In this way, both the geometry of groups of classes and the global data structure can be learnt from the Lorentzian metric tensor. Thus discriminant analysis in the original sample space reduces to metric learning on a Lorentzian manifold. Experimental results on benchmark databases demonstrate the effectiveness of the proposed method.
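For reference, a Lorentzian metric differs from the Euclidean one only in the sign attached to one coordinate. The generic definition below (metric tensor diag(-1, 1, ..., 1)) is standard background, not the learned metric of the paper:

```python
import numpy as np

def lorentz_inner(x, y):
    """Lorentzian inner product with metric tensor diag(-1, 1, ..., 1):
    the first coordinate enters with a minus sign, the rest as usual."""
    return -x[0] * y[0] + float(np.dot(x[1:], y[1:]))
```

Because of the minus sign, the "squared length" lorentz_inner(x, x) can be negative, zero, or positive, which is what allows such a metric to encode asymmetric roles for different directions in the data.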
Implementation of a Feature Extraction Module using Two Dimensional Maximum Margin Criteria which removes
Illumination variation is a challenging problem in face recognition research. The same person can appear greatly different under varying lighting conditions. This paper presents a face recognition system that is invariant to illumination variations. A face recognition system that uses Linear Discriminant Analysis (LDA) as the feature extractor suffers from the Small Sample Size (SSS) problem. It consists of
Feature Extraction Based on Local Maximum Margin Criterion
Maximum Margin Criterion (MMC) based feature extraction is more efficient than LDA for calculating the discriminant vectors, since it does not need to invert the within-class scatter matrix. However, MMC ignores the discriminative information within the local structures of samples. In this paper, we develop a novel criterion to address this issue, namely the Local Maximum Margin Criterion (Local MMC). We define the total Laplacian matrix, within-class Laplacian matrix, and between-class Laplacian matrix using similarity weighting of the samples. Local MMC obtains the discriminant vectors by maximizing the difference between the between-class and within-class Laplacian matrices. Experiments on the FERET face database show the effectiveness of the proposed Local MMC based feature extraction method.
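A generic version of such a Laplacian-based margin criterion can be sketched as follows: heat-kernel weights are split into within-class and between-class graphs, and the projection maximizes tr(W^T X^T (L_b − L_w) X W). This is our own illustration under assumed choices (heat-kernel weights, bandwidth sigma), not the paper's exact formulation:

```python
import numpy as np

def laplacian(W):
    """Graph Laplacian L = D - W from a symmetric weight matrix."""
    return np.diag(W.sum(axis=1)) - W

def local_mmc_directions(X, y, k, sigma=1.0):
    """Top-k directions of a Local-MMC-style criterion: eigenvectors
    of X^T (L_b - L_w) X, where L_w / L_b are Laplacians of the
    within-class / between-class similarity graphs."""
    # heat-kernel similarity weights from pairwise squared distances
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    same = y[:, None] == y[None, :]
    Lw = laplacian(W * same)      # within-class graph Laplacian
    Lb = laplacian(W * ~same)     # between-class graph Laplacian
    M = X.T @ (Lb - Lw) @ X
    vals, vecs = np.linalg.eigh((M + M.T) / 2)   # symmetrize for safety
    return vecs[:, np.argsort(vals)[::-1][:k]]   # (n_features, k)
```

As with plain MMC, the criterion is a difference of matrices, so no within-class matrix inversion is needed.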