Results 1 - 9 of 9
Face Recognition Using Adaptive Margin Fisher’s Criterion and Linear Discriminant Analysis (AMFC-LDA) IAJIT First Online Publication
, 2011
"... Abstract: Selecting a low dimensional feature subspace from thousands of features is a key phenomenon for optimal classification. Linear Discriminant Analysis (LDA) is a basic well recognized supervised classifier that is effectively employed for classification. However, two problems arise in intra ..."
Abstract
-
Cited by 3 (1 self)
- Add to MetaCart
(Show Context)
Abstract: Selecting a low-dimensional feature subspace from thousands of features is key to optimal classification. Linear Discriminant Analysis (LDA) is a basic, well-recognized supervised classifier that is effectively employed for classification. However, two problems arise with the intra-class scatter during discriminant analysis. First, in the training phase the number of samples per class is smaller than the dimensionality of the samples, which makes LDA unstable. Second, redundant and irrelevant data points within each class lead to a high computational cost. An Adaptive Margin Fisher’s Criterion Linear Discriminant Analysis (AMFC-LDA) is proposed that addresses these issues and overcomes the limitations of the intra-class problems. The small sample size problem is resolved through a modified maximum margin criterion, a customized form of LDA combined with a convex hull: the inter-class scatter is defined as in LDA, while the intra-class structure is formulated using the quickhull algorithm. Computational cost is reduced by reformulating the within-class scatter matrix through the Minimum Redundancy Maximum Relevance (mRMR) algorithm while preserving discriminant information. The proposed algorithm shows encouraging performance, and a comparison is made with existing approaches.
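The adaptive margin, convex-hull intra-class modelling, and mRMR reformulation described above are specific to the paper and are not reproduced here; as a rough reference point, a minimal sketch of the plain maximum margin criterion that such methods build on (assumed names and parameters, NumPy only) might look like this:

```python
import numpy as np

def mmc_projection(X, y, n_components):
    """Sketch of a maximum margin criterion projection: take the top
    eigenvectors of (S_b - S_w). Unlike classical LDA, this stays well
    defined even when the within-class scatter S_w is singular, i.e. in
    the small sample size case mentioned in the abstract."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        S_b += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        S_w += (Xc - mc).T @ (Xc - mc)
    vals, vecs = np.linalg.eigh(S_b - S_w)     # symmetric matrix, use eigh
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    return vecs[:, order[:n_components]]       # projection matrix W

# usage (illustrative): W = mmc_projection(X_train, y_train, 20); Z = X_train @ W
```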
A feature selection method using improved regularized linear discriminant analysis,
- Machine Vision and Applications,
, 2014
"... Abstract Investigation of genes, using data analysis and computer-based methods, has gained widespread attention in solving human cancer classification problem. DNA microarray gene expression datasets are readily utilized for this purpose. In this paper, we propose a feature selection method using ..."
Abstract
-
Cited by 3 (2 self)
- Add to MetaCart
(Show Context)
Abstract: The investigation of genes using data analysis and computer-based methods has gained widespread attention for solving the human cancer classification problem, and DNA microarray gene expression datasets are readily utilized for this purpose. In this paper, we propose a feature selection method using an improved regularized linear discriminant analysis technique to select the genes that are crucial for human cancer classification. Experiments are conducted on several DNA microarray gene expression datasets, and promising results are obtained in comparison with several other existing feature selection methods.
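The paper's improved regularized LDA is not spelled out in the abstract; as one plausible, simplified reading, genes could be ranked by the weights of a shrinkage-regularized LDA (the regularizer keeps the within-class covariance estimate usable when genes far outnumber samples). A sketch with scikit-learn, using assumed parameter values:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def select_genes(X, y, n_genes=50, shrinkage=0.1):
    """Fit a shrinkage-regularized LDA, then rank genes by the magnitude
    of their discriminant weights and keep the top n_genes of them."""
    lda = LinearDiscriminantAnalysis(solver="eigen", shrinkage=shrinkage)
    lda.fit(X, y)
    importance = np.abs(lda.coef_).sum(axis=0)   # aggregate over classes
    return np.argsort(importance)[::-1][:n_genes]

# usage (illustrative): top = select_genes(X_train, y_train); X_sel = X_train[:, top]
```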
Semisupervised Dimensionality Reduction and Classification Through Virtual Label Regression
"... Abstract—Semisupervised dimensionality reduction has been attracting much attention as it not only utilizes both labeled and unlabeled data simultaneously, but also works well in the situation of out-of-sample. This paper proposes an effective approach of semisupervised dimensionality reduction thro ..."
Abstract
-
Cited by 2 (1 self)
- Add to MetaCart
Abstract—Semisupervised dimensionality reduction has been attracting much attention, as it not only utilizes labeled and unlabeled data simultaneously but also works well in the out-of-sample setting. This paper proposes an effective approach to semisupervised dimensionality reduction through label propagation and label regression. Different from previous efforts, the new approach propagates the label information from labeled to unlabeled data with a well-designed mechanism of random walks, in which outliers are effectively detected and the obtained virtual labels of the unlabeled data can be well encoded in a weighted regression model. These virtual labels are thereafter regressed with a linear model to calculate the projection matrix for dimensionality reduction. By this means, when the manifold or the clustering assumption of the data is satisfied, the labels of labeled data can be correctly propagated to the unlabeled data; thus, the proposed approach utilizes the labeled and the unlabeled data more effectively than previous work. Experiments are carried out on several databases, and the advantage of the new approach is well demonstrated. Index Terms—Dimensionality reduction, label propagation, label regression, semisupervised learning, subspace learning.
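The paper's outlier-aware random-walk mechanism and weighted regression are not reproduced here; the sketch below only illustrates the two stages named in the abstract (label propagation over a similarity graph, then linear regression of the virtual labels), with assumed function names and hyperparameters:

```python
import numpy as np

def propagate_and_regress(X, labeled_idx, y_labeled, n_classes,
                          sigma=1.0, alpha=0.99, n_iter=50, lam=1e-2):
    """1) Spread labels over a Gaussian-affinity random-walk graph to obtain
    virtual labels for all samples, 2) regress those virtual labels linearly
    on X to obtain a projection matrix for dimensionality reduction."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)                  # row-stochastic walk

    Y0 = np.zeros((n, n_classes))
    Y0[labeled_idx, y_labeled] = 1.0                      # clamp known labels
    F = Y0.copy()
    for _ in range(n_iter):                               # propagation
        F = alpha * P @ F + (1 - alpha) * Y0

    A = X.T @ X + lam * np.eye(X.shape[1])                # ridge-style regression
    return np.linalg.solve(A, X.T @ F)                    # projection matrix

# usage (illustrative): W_proj = propagate_and_regress(X, idx, y[idx], n_classes=3)
```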
A Novel Regularization Learning for Single-view Patterns: Multi-view
"... The existing Multi-View Learning (MVL) is to discuss how to learn from patterns with multiple information sources and has been proven its superior generalization to the usual Single-View Learning (SVL). However, in most real-world cases there are just single source patterns available such that the e ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Existing Multi-View Learning (MVL) addresses how to learn from patterns with multiple information sources and has proven its superior generalization over the usual Single-View Learning (SVL). However, in most real-world cases only single-source patterns are available, so existing MVL cannot work. The purpose of this paper is to develop a new multi-view regularization learning for single-source patterns. Concretely, for the given single-source patterns, we first map them into M feature spaces by M different empirical kernels, then associate each generated feature space with our previously proposed Discriminative Regularization (DR), and finally synthesize the M DRs into one single learning process so as to obtain a new Multi-view Discriminative Regularization (MVDR), where each DR can be taken as one view of the proposed MVDR. The proposed method achieves: 1) complementarity among the multiple views generated from single-source patterns; 2) an analytic solution for classification; and 3) a direct optimization formulation for multi-class problems without one-against-all or one-against-one strategies.
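The Discriminative Regularization step is the authors' own prior work and is not reconstructed here; the sketch below only shows the first step named above, generating M views from single-source patterns via empirical kernel maps (the kernel choices are illustrative, not those of the paper):

```python
import numpy as np

def empirical_kernel_views(X, kernels):
    """View m of sample x is its vector of kernel evaluations against all
    training points, phi_m(x) = [k_m(x, x_1), ..., k_m(x, x_N)]; each view
    would then get its own regularizer in an MVDR-style learner."""
    return [np.array([[k(x, z) for z in X] for x in X]) for k in kernels]

# illustrative kernel choices
rbf  = lambda x, z: np.exp(-np.linalg.norm(x - z) ** 2)
lin  = lambda x, z: float(x @ z)
poly = lambda x, z: (1.0 + float(x @ z)) ** 2

# usage: views = empirical_kernel_views(X_train, [rbf, lin, poly])
```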
2012
"... In this paper, a new Face Recognition method based on Two Dimensional Discrete Cosine Transform with Linear Discriminant Analysis (LDA) and K Nearest neighbours (KNN) classifier is proposed. This method consists of three steps, i) Transformation of images from special to frequency domain using Two d ..."
Abstract
- Add to MetaCart
(Show Context)
In this paper, a new face recognition method based on the Two-Dimensional Discrete Cosine Transform (2D-DCT) with Linear Discriminant Analysis (LDA) and a K-Nearest Neighbours (KNN) classifier is proposed. The method consists of three steps: i) transformation of images from the spatial to the frequency domain using the two-dimensional discrete cosine transform, ii) feature extraction using Linear Discriminant Analysis, and iii) classification using a K-Nearest Neighbour classifier. Linear Discriminant Analysis searches for the directions of maximum class discrimination in addition to reducing dimensionality. The combination of the two-dimensional discrete cosine transform and Linear Discriminant Analysis improves the capability of Linear Discriminant Analysis when only a few sample images are available. The K-Nearest Neighbour classifier gives fast and accurate classification of face images, which makes this method useful in online applications. Evaluation was performed on two face databases: the first consists of 400 face images from the AT&T face database, and the second of images of thirteen students. The proposed method gives a faster and better recognition rate when compared to other classifiers. Its main advantages are its high-speed processing capability and its low computational requirements in terms of both speed and memory utilization.
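The exact DCT coefficient selection and neighbourhood size are not given in the abstract; a minimal sketch of the three-step pipeline (assumed block size and k, using SciPy and scikit-learn) could look like this:

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def dct_features(images, block=16):
    """Step i: 2D DCT of each image, keeping the top-left low-frequency
    block as the feature vector (block size is an illustrative choice)."""
    return np.array([dctn(img, norm="ortho")[:block, :block].ravel()
                     for img in images])

def train(images, labels, k=1):
    """Steps ii and iii: LDA projection, then a KNN classifier on it."""
    F = dct_features(images)
    lda = LinearDiscriminantAnalysis().fit(F, labels)
    knn = KNeighborsClassifier(n_neighbors=k).fit(lda.transform(F), labels)
    return lda, knn

def predict(lda, knn, images):
    return knn.predict(lda.transform(dct_features(images)))
```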
Combined Classifier for Face Recognition using Legendre Moments
"... Legendre adalah invariants ortogonal dan skala sehingga mereka cocok untuk mewakili fitur dari gambar wajah. Metode yang diusulkan pengenalan Wajah terdiri dari tiga langkah, i) Fitur ekstraksi menggunakan momen Legendre ii) pengurangan dimensi menggunakan Analisis Discrminant Linear (LDA) dan iii) ..."
Abstract
- Add to MetaCart
(Show Context)
Legendre moments are orthogonal and scale invariant, so they are suitable for representing features of face images. The proposed face recognition method consists of three steps: i) feature extraction using Legendre moments, ii) dimensionality reduction using Linear Discriminant Analysis (LDA), and iii) classification using a Probabilistic Neural Network (PNN). Linear Discriminant Analysis searches for the directions of maximum class discrimination in addition to reducing dimensionality. The combination of Legendre moments and Linear Discriminant Analysis is used to improve the capability of Linear Discriminant Analysis when only a few sample images are available. The Probabilistic Neural Network provides fast and accurate classification of face images. Evaluation was performed on two face databases: the first consists of 400 face images from the Olivetti Research Laboratory (ORL) face database, and the second of images of thirteen students. The proposed method gives a fast and better recognition rate when compared with other classifiers.
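As a rough illustration of step i only (the LDA and PNN stages are not shown), Legendre moments of an image can be approximated with NumPy's Legendre polynomials; the order and the discrete normalization below are assumptions, not the paper's values:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moments(img, max_order=10):
    """Orthogonal Legendre moments of a grayscale image up to max_order,
    with pixel coordinates mapped to [-1, 1]; uses a simple discrete
    approximation of the continuous normalization factor."""
    H, W = img.shape
    x = np.linspace(-1, 1, W)
    y = np.linspace(-1, 1, H)
    Px = np.array([legval(x, [0] * m + [1]) for m in range(max_order + 1)])
    Py = np.array([legval(y, [0] * m + [1]) for m in range(max_order + 1)])
    moments = np.empty((max_order + 1, max_order + 1))
    for m in range(max_order + 1):
        for n in range(max_order + 1):
            norm = (2 * m + 1) * (2 * n + 1) / (H * W)
            moments[m, n] = norm * (Py[m] @ img @ Px[n])
    return moments.ravel()   # feature vector passed on to LDA, then a PNN
```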
Extensive Comparative Study: Robust Face Recognition
"... Face recognition is a computer based person identification technique based on arithmetical and numerical features obtained from face images. In a face recognition system, illumination has been a great problem. In this work, light variation is tried to dominate to increase recognition rate. To improv ..."
Abstract
- Add to MetaCart
Face recognition is a computer-based person identification technique based on arithmetical and numerical features obtained from face images. In a face recognition system, illumination has been a major problem, and in this work lighting variation is suppressed to increase the recognition rate. To improve the face recognition rate further, a novel technique is adopted in which facial images are vertically divided into two equal halves and the D2DPCA dimension reduction technique is applied to each half. Normalization is performed with two different techniques, Min-Max and Z-Score; the recognition rates achieved with them are 95.76% and 95.90% respectively. Comparing the proposed technique with the conventional one, it is shown that the proposed technique works well for illumination-variant images and that D2DPCA is a better dimension reduction tool than 2DPCA.
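The D2DPCA projection itself is not reconstructed here; the sketch below only illustrates the preprocessing choices named in the abstract, the two normalizations and the vertical split into halves (function names are illustrative):

```python
import numpy as np

def min_max(img):
    """Min-Max normalization: rescale pixel intensities to [0, 1]."""
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def z_score(img):
    """Z-Score normalization: zero mean, unit variance."""
    return (img - img.mean()) / (img.std() + 1e-12)

def vertical_halves(img):
    """Split a face image into left and right halves; each half would then
    be reduced separately with a D2DPCA-style projection as described above."""
    mid = img.shape[1] // 2
    return img[:, :mid], img[:, mid:]
```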
Implementation of Feature Extraction Module using Two Dimensional Maximum Margin Criteria which removes
"... Illumination variation is a challenging problem in face recognition research area. Same person can appear greatly different under varying lighting conditions. This paper consists of Face Recognition System which is invariant to illumination variations. Face recognition system which uses Linear Discr ..."
Abstract
- Add to MetaCart
(Show Context)
Illumination variation is a challenging problem in the face recognition research area: the same person can appear greatly different under varying lighting conditions. This paper presents a face recognition system that is invariant to illumination variations. A face recognition system that uses Linear Discriminant Analysis (LDA) as a feature extractor suffers from the Small Sample Size (SSS) problem. It consists of
A Novel Regularization Learning for Single-View, DOI 10.1007/s11063-010-9132-2
, 2010
"... Abstract The existing Multi-View Learning (MVL) is to discuss how to learn from patterns with multiple information sources and has been proven its superior generalization to the usual Single-View Learning (SVL). However, in most real-world cases there are just single source patterns available such t ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract: Existing Multi-View Learning (MVL) addresses how to learn from patterns with multiple information sources and has proven its superior generalization over the usual Single-View Learning (SVL). However, in most real-world cases only single-source patterns are available, so existing MVL cannot work. The purpose of this paper is to develop a new multi-view regularization learning for single-source patterns. Concretely, for the given single-source patterns, we first map them into M feature spaces by M different empirical kernels, then associate each generated feature space with our previously proposed Discriminative Regularization (DR), and finally synthesize the M DRs into one single learning process so as to obtain a new Multi-view Discriminative Regularization (MVDR), where each DR can be taken as one view of the proposed MVDR. The proposed method achieves: (1) complementarity among the multiple views generated from single-source patterns; (2) an analytic solution for classification; and (3) a direct optimization formulation for multi-class problems without one-against-all or one-against-one strategies.