Results 1–10 of 14
Fisher Discriminant Analysis With Kernels, 1999
Abstract

Cited by 458 (16 self)
A nonlinear classification technique based on Fisher's discriminant is proposed. The main ingredient is the kernel trick, which allows the efficient computation of the Fisher discriminant in feature space. The linear classification in feature space corresponds to a (powerful) nonlinear decision function in input space. Large-scale simulations demonstrate the competitiveness of our approach.
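The kernel trick named in this abstract can be sketched concretely. Below is a minimal numpy-only two-class kernel Fisher discriminant; the function names, the RBF kernel choice, the regularizer, and the toy ring data are our own illustration under stated assumptions, not the paper's code:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def kernel_fisher_train(X, y, gamma=0.5, reg=1e-3):
    """Coefficients alpha of the discriminant f(x) = sum_j alpha_j k(x_j, x)."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    M1 = K[:, y == 0].mean(axis=1)   # mean kernel column of class 0
    M2 = K[:, y == 1].mean(axis=1)
    # within-class scatter in feature space, written in terms of kernel columns
    N = np.zeros((n, n))
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    # regularized solution; class 0 projects to larger values than class 1
    return np.linalg.solve(N + reg * np.eye(n), M1 - M2)

def kernel_fisher_project(alpha, X_train, X_new, gamma=0.5):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# toy nonlinear problem: class 0 is a blob inside a ring of class 1 points,
# not linearly separable in input space
rng = np.random.default_rng(0)
X0 = rng.normal(scale=0.3, size=(30, 2))
ang = rng.uniform(0.0, 2.0 * np.pi, 30)
X1 = np.c_[2.0 * np.cos(ang), 2.0 * np.sin(ang)] + rng.normal(scale=0.1, size=(30, 2))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(30, dtype=int), np.ones(30, dtype=int)]

alpha = kernel_fisher_train(X, y)
scores = kernel_fisher_project(alpha, X, X)
thr = 0.5 * (scores[y == 0].mean() + scores[y == 1].mean())
pred = (scores < thr).astype(int)   # low score => class 1 (the ring)
```

Everything happens through the kernel matrix K, never through explicit feature-space coordinates, which is the "efficient computation in feature space" the abstract refers to.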
Invariant Feature Extraction and Classification in Kernel Spaces
Abstract

Cited by 52 (7 self)
We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinear variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using Support Vector kernel functions.
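For fixed scatter matrices, maximizing a Rayleigh coefficient of the kind this abstract generalizes reduces to a generalized eigenproblem. A small numpy sketch follows; the matrices A and B are made-up example values standing in for the "signal" and "noise" scatter, not quantities from the paper:

```python
import numpy as np

# A and B are made-up example scatter matrices (A: signal scatter,
# B: within-class/noise scatter); both symmetric, B positive definite
A = np.array([[4.0, 1.0],
              [1.0, 2.0]])
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# whiten by B^{-1/2}, turning the generalized problem into an ordinary one
vals, vecs = np.linalg.eigh(B)
B_inv_half = vecs @ np.diag(vals ** -0.5) @ vecs.T
ev, eV = np.linalg.eigh(B_inv_half @ A @ B_inv_half)

w = B_inv_half @ eV[:, -1]                    # maximizer of w^T A w / (w^T B w)
rayleigh = float((w @ A @ w) / (w @ B @ w))   # equals the top eigenvalue ev[-1]
```

The kernelized versions in the paper replace A and B with scatter operators expressed through Support Vector kernel functions, but the optimization has this same Rayleigh-quotient shape.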
Antifaces: A novel, fast method for image detection
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
Abstract

Cited by 41 (6 self)
This paper offers a novel detection method, which works well even in the case of a complicated image collection, for instance, a frontal face under a large class of linear transformations. It is also successfully applied to detect 3D objects under different views. Call the collection of images which should be detected a multitemplate. The detection problem is solved by sequentially applying very simple filters (or detectors), which are designed to yield small results on the multitemplate (hence, "antifaces") and large results on "random" natural images. This is achieved by making use of a simple probabilistic assumption on the distribution of natural images, which is borne out well in practice. Only images which passed the threshold test imposed by the first detector are examined by the second detector, etc. The detectors are designed to act independently, so that their false alarms are uncorrelated; this results in a false alarm rate which decreases exponentially in the number of detectors. This, in turn, leads to a very fast detection algorithm. Typically, (1 + δ)N operations are required to classify an N-pixel image, where δ < 0.5. Also, the algorithm requires no training loop. The algorithm's performance compares favorably to the well-known eigenface and support vector machine based algorithms, but is substantially faster. Index Terms: Image detection, smoothness, distribution of natural images, rejectors.
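The exponential false-alarm decay and near-constant per-image cost claimed above follow from the independence of the detectors. A small numeric sketch (the values p and k are made up, and the simulation models only the pass/reject outcomes, not the paper's actual filters):

```python
import numpy as np

p, k = 0.2, 6                         # per-detector false-alarm rate, cascade depth
combined_false_alarm = p ** k         # independence => exponential decrease

# detector i runs only if the image passed detectors 1..i-1, so the expected
# number of detector applications on a non-target image is a geometric sum
expected_applications = (1.0 - p ** k) / (1.0 - p)

# Monte-Carlo check with simulated independent pass/reject outcomes
rng = np.random.default_rng(1)
passes = rng.random((100_000, k)) < p     # True = image slips past a detector
survived = float(passes.all(axis=1).mean())
```

With these example numbers the cascade's false-alarm rate is 0.2^6 = 6.4e-5, while a typical negative image is handled by fewer than 1.3 detectors on average, which is the source of the "(1 + δ)N operations" behavior.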
Multiclass Discriminant Kernel Learning via Convex Programming
Abstract

Cited by 34 (0 self)
Regularized kernel discriminant analysis (RKDA) performs linear discriminant analysis in the feature space via the kernel trick. Its performance depends on the selection of kernels. In this paper, we consider the problem of multiple kernel learning (MKL) for RKDA, in which the optimal kernel matrix is obtained as a linear combination of pre-specified kernel matrices. We show that the kernel learning problem in RKDA can be formulated as a convex program. First, we show that this problem can be formulated as a semidefinite program (SDP). Based on the equivalence relationship between RKDA and least-squares problems in the binary-class case, we propose a convex quadratically constrained quadratic programming (QCQP) formulation for kernel learning in RKDA. A semi-infinite linear programming (SILP) formulation is derived to further improve the efficiency. We extend these formulations to the multiclass case based on a key result established in this paper: the multiclass RKDA kernel learning problem can be decomposed into a set of binary-class kernel learning problems which are constrained to share a common kernel. Based on this decomposition property, SDP formulations are proposed for the multiclass case, and it also leads naturally to QCQP and SILP formulations. As the performance of RKDA depends on the regularization parameter, we show that this parameter can also be optimized in a joint framework with the kernel. Extensive experiments have been conducted and analyzed, and connections to other algorithms are discussed.
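The search space of this MKL problem (a convex combination of pre-specified kernels) can be made concrete. The sketch below replaces the paper's convex programming (SDP/QCQP/SILP) with a naive grid search over the simplex purely for illustration; it does use the binary-class RKDA/least-squares equivalence the abstract mentions, but the grid search, data, and kernel choices are our own:

```python
import numpy as np

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def rkda_ls_error(K, y, lam=1e-2):
    # in the binary-class case, RKDA is equivalent to a regularized
    # least-squares fit of the +/-1 labels (the equivalence used in the paper)
    a = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return float(((K @ a - y) ** 2).mean())

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1.0, 0.5, (20, 2)), rng.normal(1.0, 0.5, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]
kernels = [rbf(X, g) for g in (0.1, 1.0, 10.0)]   # pre-specified base kernels

best = None
for t in np.linspace(0.0, 1.0, 11):
    for s in np.linspace(0.0, 1.0 - t, 11):
        theta = np.array([t, s, 1.0 - t - s])          # point on the simplex
        K = sum(w * Kb for w, Kb in zip(theta, kernels))
        err = rkda_ls_error(K, y)
        if best is None or err < best[0]:
            best = (err, theta)
```

The paper's contribution is precisely that this search need not be a grid: the optimal theta can be found by solving a single convex program.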
Fast and accurate text classification via multiple linear discriminant projections
In VLDB, 2002
Abstract

Cited by 30 (0 self)
Support vector machines (SVMs) have shown superb performance for text classification tasks. They are accurate, robust, and quick to apply to test instances. Their only potential drawback is their training time and memory requirement. For n training instances held in memory, the best-known SVM implementations take time proportional to n^a, where a is typically between 1.8 and 2.1. SVMs have been trained on data sets with several thousand instances, but Web directories today contain millions of instances that are valuable for mapping billions of Web pages into Yahoo!-like directories. We present SIMPL, a nearly linear-time classification algorithm that mimics the strengths of SVMs while avoiding the training bottleneck. It uses Fisher's linear discriminant, a classical tool from statistical pattern recognition, to project training instances to a carefully selected low-dimensional subspace before inducing a decision tree on the projected instances. SIMPL uses efficient sequential scans and sorts and is comparable in speed and memory scalability to widely used naive Bayes (NB) classifiers, but it beats NB accuracy decisively. It not only approaches and sometimes exceeds SVM accuracy, but also beats the running time of a popular SVM implementation by orders of magnitude. While describing SIMPL, we make a detailed experimental comparison of SVM-generated discriminants with Fisher's discriminants, and we also report on an analysis of the cache performance of a popular SVM implementation. Our analysis shows that SIMPL has the potential to be the method of choice for practitioners who want the accuracy of SVMs and the simplicity and speed of naive Bayes classifiers.
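The first stage of the approach described above, projecting onto Fisher's linear discriminant, can be sketched in a few lines of numpy. This is an assumed simplification: a single Fisher direction and one threshold, rather than SIMPL's multiple projections followed by a decision tree, and the Gaussian toy data is ours:

```python
import numpy as np

def fisher_direction(X, y, eps=1e-6):
    """Fisher's linear discriminant direction w = Sw^-1 (m1 - m0)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    return np.linalg.solve(Sw + eps * np.eye(X.shape[1]), m1 - m0)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(1.5, 1.0, (200, 5))])
y = np.r_[np.zeros(200, dtype=int), np.ones(200, dtype=int)]

w = fisher_direction(X, y)
z = X @ w                                   # 1-D projected instances
thr = 0.5 * (z[y == 0].mean() + z[y == 1].mean())
acc = float(((z > thr).astype(int) == y).mean())
```

Each projection needs only means and a scatter matrix, both computable in sequential scans over the data, which is why the full algorithm can match naive Bayes in speed and memory scalability.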
Multiclass Feature Selection with Kernel Gram-matrix-based Criteria, 2012
Abstract

Cited by 3 (0 self)
Feature selection has been an important issue during the last decades: determining the most relevant features for a given classification problem. Numerous methods have emerged that use Support Vector Machines in the selection process. Such approaches are powerful but often complex and costly. In this paper, we propose new feature selection methods based on two criteria designed for the optimization of SVMs: Kernel Target Alignment and Kernel Class Separability. We demonstrate how these two measures, when fully expressed, can yield efficient and simple methods, easily applicable to multiclass problems and iteratively computable with minimal memory requirements. An extensive experimental study is conducted on both artificial and real-world data sets to compare the proposed methods to state-of-the-art feature selection algorithms. The results demonstrate the relevance of the proposed methods both in terms of performance and computational cost.
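Kernel Target Alignment, one of the two criteria this abstract names, is the normalized Frobenius inner product A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F). The sketch below ranks two single-feature kernels by alignment; the data and the per-feature linear-kernel setup are our own minimal illustration, not the paper's full selection procedure:

```python
import numpy as np

def alignment(K, y):
    """Kernel Target Alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F)."""
    Y = np.outer(y, y)
    return float((K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y)))

rng = np.random.default_rng(4)
n = 100
y = np.r_[-np.ones(n // 2), np.ones(n // 2)]
informative = y + rng.normal(0.0, 0.3, n)   # feature correlated with the labels
noise = rng.normal(0.0, 1.0, n)             # pure-noise feature

K_inf = np.outer(informative, informative)  # linear kernel of a single feature
K_noise = np.outer(noise, noise)
ranked = sorted([("informative", alignment(K_inf, y)),
                 ("noise", alignment(K_noise, y))],
                key=lambda t: -t[1])
```

The informative feature's kernel is nearly proportional to yy^T, so its alignment is close to 1, while the noise feature's alignment stays near 0; ranking features by this score is the kind of simple, low-memory criterion the abstract advertises.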
Data Reduction Methods for Classification with Support Vector Machines
Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Unidad Zacatenco, Departamento de Computación, 2013
Recursive gene selection based on maximum margin criterion: a comparison with SVM-RFE
BMC Bioinformatics (BioMed Central research article), 2006