Face recognition by regularized discriminant analysis (2007)

by D. Q. Dai, P. C. Yuen
Venue: IEEE Trans. SMC

Results 1 - 9 of 9

Face Recognition Using Adaptive Margin Fisher’s Criterion and Linear Discriminant Analysis (AMFC-LDA) IAJIT First Online Publication

by Marryam Murtaza, Muhammad Sharif, Mudassar Raza, Jamal Hussain Shah , 2011
Cited by 3 (1 self)
Abstract: Selecting a low dimensional feature subspace from thousands of features is a key phenomenon for optimal classification. Linear Discriminant Analysis (LDA) is a basic well recognized supervised classifier that is effectively employed for classification. However, two problems arise in intra class during Discriminant Analysis. Firstly, in training phase the number of samples in intra class is smaller than the dimensionality of the sample which makes LDA unstable. The other is high computational cost due to redundant and irrelevant data points in intra class. An Adaptive Margin Fisher’s Criterion Linear Discriminant Analysis (AMFC-LDA) is proposed that addresses these issues and overcomes the limitations of intra class problems. Small Sample Size problem is resolved through modified maximum margin criterion which is a form of customized LDA and Convex hull. Inter class is defined using LDA while intra class is formulated using quick hull respectively. Similarly, computational cost is reduced by reformulating within class scatter matrix through Minimum Redundancy Maximum Relevance (mRMR) algorithm while preserving Discriminant Information. The proposed algorithm reveals encouraging performance. Finally, a comparison is made with existing approaches.

Citation Context

...method alters the regularization parameters and eradicates the null space of both S_w and S_b, which is responsible for evaporating the important features that are helpful for classification. Dai and Yuen [6] proposed a three-parameter Regularized Discriminant Analysis (RDA) which solves the SSS problem and works in the full space of S_w and S_b, so it does not lose any significant information, but it ca...

A feature selection method using improved regularized linear discriminant analysis

by Alok Sharma, Kuldip K. Paliwal, Seiya Imoto, Satoru Miyano - Machine Vision and Applications, 2014
Cited by 3 (2 self)
Abstract: Investigation of genes, using data analysis and computer-based methods, has gained widespread attention in solving the human cancer classification problem. DNA microarray gene expression datasets are readily utilized for this purpose. In this paper, we propose a feature selection method using an improved regularized linear discriminant analysis technique to select important genes crucial for the human cancer classification problem. The experiment is conducted on several DNA microarray gene expression datasets, and promising results are obtained when compared with several other existing feature selection methods.

Citation Context

...does not require a classifier during the training process to select features. The RLDA technique is one of the few pioneering techniques in the pattern classification literature and is used in cases where SSS exists. In RLDA, a small perturbation, known as the regularization parameter α, is added to the within-class scatter matrix SW to overcome the SSS problem. The matrix SW is approximated by SW + αI, and the orientation matrix is computed by eigenvalue decomposition (EVD) of (SW + αI)−1SB, where SB is the between-class scatter matrix. RLDA has been applied in the face recognition and bioinformatics areas [5,6,14]. In RLDA, it can be computationally expensive to find the optimum value of the parameter α, as a heuristic approach (e.g. a cross-validation procedure [16]) is applied. The value of the parameter can also be sensitive and noisy, especially when the number of training samples is scarce. In the human cancer classification problem, DNA microarray gene expression datasets usually have a very limited number of training samples, which can adversely affect the classification performance of the RLDA technique. In order to find the gene subset associated with human cancers, we first determine the value of α f...
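The RLDA recipe described in this snippet (regularize SW as SW + αI, then take the EVD of (SW + αI)−1SB) can be sketched with plain NumPy; the function name and the fixed α below are illustrative assumptions, not code from the cited paper:

```python
import numpy as np

def rlda_orientation(X, y, alpha=0.1):
    """Sketch of regularized LDA: eigenvectors of (S_W + alpha*I)^{-1} S_B.

    X: (n_samples, n_features) data matrix; y: class labels.
    Returns eigenvectors sorted by decreasing eigenvalue.
    """
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_W = np.zeros((d, d))
    S_B = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_W += (Xc - mc).T @ (Xc - mc)            # within-class scatter
        diff = (mc - mean_all).reshape(-1, 1)
        S_B += len(Xc) * (diff @ diff.T)          # between-class scatter
    # Regularization: S_W is singular in the small-sample-size case,
    # so solve with S_W + alpha*I instead of inverting S_W directly.
    M = np.linalg.solve(S_W + alpha * np.eye(d), S_B)
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(-eigvals.real)
    return eigvecs.real[:, order]
```

In practice α would be tuned, e.g. by the cross-validation heuristic mentioned above, rather than fixed.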

Semisupervised Dimensionality Reduction and Classification Through Virtual Label Regression

by Feiping Nie, Dong Xu, Xuelong Li, Shiming Xiang
Cited by 2 (1 self)
Abstract—Semisupervised dimensionality reduction has been attracting much attention, as it not only utilizes both labeled and unlabeled data simultaneously, but also works well in the out-of-sample situation. This paper proposes an effective approach to semisupervised dimensionality reduction through label propagation and label regression. Different from previous efforts, the new approach propagates the label information from labeled to unlabeled data with a well-designed mechanism of random walks, in which outliers are effectively detected and the obtained virtual labels of unlabeled data can be well encoded in a weighted regression model. These virtual labels are thereafter regressed with a linear model to calculate the projection matrix for dimensionality reduction. By this means, when the manifold or the clustering assumption of the data is satisfied, the labels of labeled data can be correctly propagated to the unlabeled data; thus, the proposed approach utilizes the labeled and the unlabeled data more effectively than previous work. Experiments are carried out upon several databases, and the advantage of the new approach is well demonstrated. Index Terms—Dimensionality reduction, label propagation, label regression, semisupervised learning, subspace learning.
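A common form of random-walk label propagation, a generic sketch under the standard iterative formulation (not the paper's exact outlier-aware mechanism), looks like this:

```python
import numpy as np

def propagate_labels(W, Y, alpha=0.9, n_iter=200):
    """Random-walk label propagation sketch.

    W: (n, n) symmetric affinity matrix over all points.
    Y: (n, k) label matrix, one-hot rows for labeled points,
       zero rows for unlabeled points.
    Returns a virtual label index for every point.
    """
    P = W / W.sum(axis=1, keepdims=True)       # row-stochastic transitions
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = alpha * (P @ F) + (1 - alpha) * Y  # walk, softly clamp labeled
    return F.argmax(axis=1)                    # virtual labels
```

These virtual labels would then feed the linear regression step that computes the projection matrix.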

A Novel Regularization Learning for Single-view Patterns: Multi-view Discriminative Regularization

by Zhe Wang, Songcan Chen, Hui Xue, Zhisong Pan
Cited by 1 (0 self)
Multi-View Learning (MVL) discusses how to learn from patterns with multiple information sources and has proven superior generalization to the usual Single-View Learning (SVL). However, in most real-world cases only single source patterns are available, so existing MVL cannot work. The purpose of this paper is to develop a new multi-view regularization learning for single source patterns. Concretely, for the given single source patterns, we first map them into M feature spaces by M different empirical kernels, then associate each generated feature space with our previously proposed Discriminative Regularization (DR), and finally synthesize the M DRs into one single learning process so as to get a new Multi-view Discriminative Regularization (MVDR), where each DR can be taken as one view of the proposed MVDR. The proposed method achieves: 1) complementarity for the multiple views generated from single source patterns; 2) an analytic solution for classification; 3) a direct optimization formulation for multi-class problems without one-against-all or one-against-one strategies.

Citation Context

...views given the class are conditionally independent. Here, the independence assumption is guaranteed by the patterns composed of two naturally-split attribute sets. Regularization learning [7], [8], [10], [17], [39] is viewed as one effective method for improving the generalization performance of classifiers. It has a rich history which dates back to the theory of ill-posed problems [27], [39], [40...

Article Info ABSTRACT Article history:

by unknown authors , 2012
In this paper, a new face recognition method based on the Two-Dimensional Discrete Cosine Transform with Linear Discriminant Analysis (LDA) and a K Nearest Neighbours (KNN) classifier is proposed. The method consists of three steps: i) transformation of images from the spatial to the frequency domain using the two-dimensional discrete cosine transform, ii) feature extraction using Linear Discriminant Analysis, and iii) classification using the K Nearest Neighbour classifier. Linear Discriminant Analysis searches for the directions of maximum class discrimination in addition to dimensionality reduction. The combination of the Two-Dimensional Discrete Cosine Transform and Linear Discriminant Analysis improves the capability of Linear Discriminant Analysis when few sample images are available. The K Nearest Neighbour classifier gives fast and accurate classification of face images, which makes this method useful in online applications. Evaluation was performed on two face databases: a first database of 400 face images from the AT&T face database, and a second database of thirteen students. The proposed method gives a fast and better recognition rate when compared with other classifiers. The main advantage of this method is its high-speed processing capability and low computational requirements in terms of both speed and memory utilization.
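The first step of that pipeline, a type-II 2D DCT, can be sketched with plain NumPy (the helper name is an illustrative assumption; real systems would typically call an optimized DCT routine):

```python
import numpy as np

def dct2(block):
    """Orthonormal type-II 2D DCT of a square block via a cosine basis matrix."""
    n = block.shape[0]
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # sample index
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)        # DC row uses the smaller scale factor
    return C @ block @ C.T            # transform rows, then columns
```

For a constant 4×4 block of ones, all energy lands in the DC coefficient, which is the energy-compaction property the method exploits before LDA.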

Citation Context

...cks of 8 × 8 pixels. 3. LINEAR DISCRIMINANT ANALYSIS (LDA) Linear discriminant analysis (LDA) finds the directions for maximum discrimination of classes in addition to dimensionality reduction [10] – [15]. This is achieved by maximizing the ratio of the magnitude of the between-class scattering matrix to the magnitude of the within-class scattering matrix. The within-class scattering matrix is defined as ( ...

Combined Classifier for Face Recognition using Legendre Moments

by D. Sridhar, I. V. Murali Krishna
Legendre moments are orthogonal and scale invariant, so they are suitable for representing features of face images. The proposed face recognition method consists of three steps: i) feature extraction using Legendre moments, ii) dimensionality reduction using Linear Discriminant Analysis (LDA), and iii) classification using a Probabilistic Neural Network (PNN). Linear Discriminant Analysis searches for the directions of maximum class discrimination in addition to dimensionality reduction. The combination of Legendre moments and Linear Discriminant Analysis is used to improve the capability of Linear Discriminant Analysis when few sample images are available. The Probabilistic Neural Network gives fast and accurate classification of face images. Evaluation was performed on two face databases: a first database of 400 face images from the Olivetti Research Laboratory (ORL) face database, and a second database of thirteen students. The proposed method gives a fast and better recognition rate when compared with other classifiers.

Citation Context

...6) 3. LINEAR DISCRIMINANT ANALYSIS (LDA) Linear discriminant analysis (LDA) tries to find the subspace that best discriminates different face classes in addition to dimensionality reduction [5], [8] – [14]. This is achieved by maximizing the ratio of the determinant of the between-class scattering matrix of the projected ...

Extensive Comparative Study: Robust Face Recognition

by unknown authors
Face recognition is a computer-based person identification technique based on arithmetical and numerical features obtained from face images. In a face recognition system, illumination has been a great problem. In this work, light variation is suppressed to increase the recognition rate. To improve the face recognition rate further, a novel technique is adopted, where facial images are vertically divided into two equal halves and the D2DPCA dimension reduction technique is applied to each half. Normalization is performed by two different techniques, Min-Max and Z-Score; the recognition rates achieved by them are 95.76% and 95.90% respectively. Comparing the proposed technique with the conventional technique, it is shown that the proposed technique works well for illumination-variant images and that D2DPCA is a better dimension reduction tool than 2DPCA.
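The two normalization schemes compared above are standard; as a quick reference, generic implementations (not the authors' code) look like this:

```python
import numpy as np

def min_max(x):
    """Rescale values linearly into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def z_score(x):
    """Shift and scale values to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std()
```

Min-Max preserves the shape of the distribution within a fixed range, while Z-Score centers and scales it, which often behaves better when outliers stretch the min/max.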

Implementation of Feature Extraction Module using Two Dimensional Maximum Margin Criteria which removes

by Kiran P. Gaikwad, Prasad S. Halgaonkar
Illumination variation is a challenging problem in the face recognition research area. The same person can appear greatly different under varying lighting conditions. This paper presents a face recognition system which is invariant to illumination variations. Face recognition systems which use Linear Discriminant Analysis (LDA) as the feature extractor suffer from the Small Sample Size (SSS) problem. It consists of

Citation Context

...variance matrix becomes singular, and thus, the traditional LDA algorithm fails. To address this problem, a number of approaches have been suggested, including regularized discriminant analysis [10], [11], inverse Fisher [12], [13], weighted piecewise LDA [14], pseudo-inverse LDA [15], null space method [16], direct LDA (DLDA) [17], and maximal margin criterion (MMC) [18]. Among them, the most popular...

DOI 10.1007/s11063-010-9132-2 A Novel Regularization Learning for Single-View Patterns: Multi-view Discriminative Regularization

by Zhe Wang, Songcan Chen, Hui Xue, Zhisong Pan , 2010
Abstract: Multi-View Learning (MVL) discusses how to learn from patterns with multiple information sources and has proven superior generalization to the usual Single-View Learning (SVL). However, in most real-world cases only single source patterns are available, so existing MVL cannot work. The purpose of this paper is to develop a new multi-view regularization learning for single source patterns. Concretely, for the given single source patterns, we first map them into M feature spaces by M different empirical kernels, then associate each generated feature space with our previously proposed Discriminative Regularization (DR), and finally synthesize the M DRs into one single learning process so as to get a new Multi-view Discriminative Regularization (MVDR), where each DR can be taken as one view of the proposed MVDR. The proposed method achieves: (1) complementarity for the multiple views generated from single source patterns; (2) an analytic solution for classification; (3) a direct optimization formulation for multi-class problems without one-against-all or one-against-one strategies.

Citation Context

...n that two views given the class are conditionally independent. Here, the independence assumption is guaranteed by the patterns composed of two naturally-split attribute sets. Regularization learning [7,8,10,17,39] is viewed as one effective method for improving the generalization performance of classifiers. It has a rich history which dates back to the theory of ill-posed problems [27,39,40...
