Results 1 - 10 of 689,210
PCA versus LDA - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001 - Cited by 465 (16 self)
"... In the context of the appearance-based paradigm for object recognition, it is generally believed that algorithms based on LDA (Linear Discriminant Analysis) are superior to those based on PCA (Principal Components Analysis). In this communication we show that this is not always the case. We present ..."
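To make the comparison concrete, here is a minimal sketch that fits both projections on the same data and scores a nearest-neighbour classifier on each. The digits dataset, the choice of nine components, and the 1-NN classifier are illustrative assumptions, not the paper's experimental setup.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, reducer in [("PCA", PCA(n_components=9)),
                      ("LDA", LinearDiscriminantAnalysis(n_components=9))]:
    Z_tr = reducer.fit_transform(X_tr, y_tr)   # PCA ignores y; LDA uses it
    Z_te = reducer.transform(X_te)
    acc = KNeighborsClassifier(n_neighbors=1).fit(Z_tr, y_tr).score(Z_te, y_te)
    print(f"{name}: 1-NN accuracy = {acc:.3f}")
```

Which method wins depends on the data regime (e.g., how many training samples per class are available), which is exactly the kind of caveat the paper raises.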
Locally weighted learning - Artificial Intelligence Review, 1997 - Cited by 594 (53 self)
"... This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias ..."
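Since the survey's central method fits compactly, here is a minimal sketch of locally weighted linear regression: each prediction solves a weighted least-squares problem whose weights come from a kernel on distance to the query. The Gaussian kernel, bandwidth tau, and small ridge term are illustrative assumptions; the survey covers many alternatives for each of these choices.

```python
import numpy as np

def lwr_predict(X, y, x_query, tau=0.5):
    """Predict y at x_query by fitting a linear model weighted around it."""
    A = np.hstack([np.ones((len(X), 1)), X])      # design matrix with bias
    a_q = np.concatenate([[1.0], x_query])
    d2 = np.sum((X - x_query) ** 2, axis=1)       # squared distances to query
    w = np.exp(-d2 / (2 * tau ** 2))              # Gaussian kernel weights
    # Weighted least squares: solve (A^T W A) beta = A^T W y,
    # with a tiny ridge term to regularize the estimate.
    AW = A * w[:, None]
    beta = np.linalg.solve(AW.T @ A + 1e-8 * np.eye(A.shape[1]), AW.T @ y)
    return a_q @ beta

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
print(lwr_predict(X, y, np.array([1.0])))         # roughly sin(1.0)
```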
Sparse Bayesian Learning and the Relevance Vector Machine, 2001 - Cited by 958 (5 self)
"... This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' ..."
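A minimal sketch of the sparse Bayesian regression updates behind the relevance vector machine, under the usual evidence-approximation scheme: each weight gets its own prior precision alpha_i, and re-estimation drives most precisions to very large values, effectively pruning the corresponding basis functions. The Gaussian kernel basis, iteration count, and numerical caps here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-5, 5, 100)
t = np.sinc(X) + 0.05 * rng.standard_normal(100)

# Design matrix of Gaussian basis functions centred on the training inputs.
Phi = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
N, M = Phi.shape
alpha = np.ones(M)    # one prior precision per weight
beta = 100.0          # noise precision

for _ in range(200):
    # Posterior over weights given the current hyperparameters.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ t
    # Evidence-maximisation re-estimates of alpha and beta.
    gamma = np.clip(1.0 - alpha * np.diag(Sigma), 1e-12, None)
    alpha = np.minimum(gamma / (mu ** 2 + 1e-12), 1e12)
    beta = (N - gamma.sum()) / np.sum((t - Phi @ mu) ** 2)

survivors = (alpha < 1e6).sum()
print(f"{survivors} of {M} basis functions survive as relevance vectors")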
Using Linear Algebra for Intelligent Information Retrieval - SIAM Review, 1995 - Cited by 672 (18 self)
"... Currently, most approaches to retrieving textual materials from scientific databases depend on a lexical match between words in users' requests and those in or assigned to documents in a database. Because of the tremendous diversity in the words people use to describe the same document, lexical ..."
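The approach the paper surveys, latent semantic indexing, addresses this lexical-mismatch problem with a truncated SVD of the term-document matrix, so a query can match documents that never contain its exact words. A minimal sketch, where the toy corpus and the rank k = 2 are illustrative assumptions:

```python
import numpy as np

docs = ["human machine interface", "user interface system",
        "graph of trees", "trees and graph theory"]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document matrix: rows are terms, columns are documents.
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
Uk, sk, Vk = U[:, :k], s[:k], Vt[:k].T        # rank-k factors

def query_scores(query):
    q = np.array([query.split().count(w) for w in vocab], float)
    q_hat = (q @ Uk) / sk                      # fold query into LSI space
    d_hat = Vk * sk                            # documents in LSI space
    return d_hat @ q_hat / (np.linalg.norm(d_hat, axis=1)
                            * np.linalg.norm(q_hat) + 1e-12)

# "graph of trees" also scores, despite not containing the word "theory".
print(query_scores("theory"))
```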
Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection, 1997 - Cited by 2263 (18 self)
"... We develop a face recognition algorithm which is insensitive to gross variation in lighting direction and facial expression. Taking a pattern classification approach, we consider each pixel in an image as a coordinate in a high-dimensional space. We take advantage of the observation that the images of a particular face, under varying illumination but fixed pose, lie in a 3-D linear subspace of the high dimensional image space -- if the face is a Lambertian surface without shadowing. However, since faces are not truly Lambertian surfaces and do indeed produce self-shadowing, images will deviate ..."
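The paper's Fisherface construction first reduces dimensionality with PCA, so that the within-class scatter matrix is nonsingular, and then applies Fisher's linear discriminant for a class-specific projection. A minimal sketch of that pipeline; the Olivetti faces dataset and the 1-NN classifier are illustrative assumptions, not the paper's evaluation protocol.

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, stratify=faces.target, random_state=0)

n_classes = len(np.unique(y_tr))
fisherfaces = make_pipeline(
    PCA(n_components=len(X_tr) - n_classes),  # keeps within-class scatter nonsingular
    LinearDiscriminantAnalysis(),             # at most n_classes - 1 dimensions
    KNeighborsClassifier(n_neighbors=1))
print("accuracy:", fisherfaces.fit(X_tr, y_tr).score(X_te, y_te))
```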
Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005 - Cited by 533 (7 self)
"... Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we ... to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminant analysis) and four different data sets (handwritten digits ..."
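A minimal sketch of mRMR-style greedy selection: at each step, pick the feature whose mutual information with the class, minus its mean mutual information with the already-selected features, is largest. The histogram-based MI estimator assumes discrete (or discretised) features, and the toy data below are an illustrative assumption.

```python
import numpy as np

def mutual_information(a, b):
    """MI between two discrete vectors, estimated from their joint histogram."""
    joint = np.histogram2d(a, b, bins=(len(set(a)), len(set(b))))[0]
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def mrmr(X, y, k):
    relevance = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    selected = [int(np.argmax(relevance))]       # start from max-relevance
    while len(selected) < k:
        def score(j):                            # relevance minus redundancy
            redundancy = np.mean([mutual_information(X[:, j], X[:, s])
                                  for s in selected])
            return relevance[j] - redundancy
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        selected.append(max(remaining, key=score))
    return selected

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
strong = y ^ (rng.random(1000) < 0.10)   # strongly informative feature
copy = strong.copy()                      # exact duplicate: pure redundancy
weak = y ^ (rng.random(1000) < 0.25)     # weaker but complementary feature
X = np.column_stack([strong, copy, weak])
print(mrmr(X, y, 2))   # -> [0, 2]: skips the duplicate, keeps the weak one
```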
Limma: linear models for microarray data - Bioinformatics and Computational Biology Solutions using R and Bioconductor, 2005 - Cited by 759 (13 self)
"... This free open-source software implements academic research by the authors and co-workers. If you use it, please support the project by citing the appropriate journal articles listed in Section 2.1 ..."
Using Discriminant Eigenfeatures for Image Retrieval, 1996 - Cited by 504 (15 self)
"... This paper describes the automatic selection of features from an image training set using the theories of multi-dimensional linear discriminant analysis and the associated optimal linear projection. We demonstrate the effectiveness of these Most Discriminating Features for view-based class retrieval ..."
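A minimal sketch of retrieval with discriminant features: learn a linear discriminant projection from labelled training images, then rank database images by distance to the query in the projected space. Using plain LDA on the scikit-learn digits data is a simplification of the paper's PCA-then-LDA construction of Most Discriminating Features.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)
database, db_labels = X[100:], y[100:]
query = X[0]                                    # a held-out digit image

lda = LinearDiscriminantAnalysis(n_components=9).fit(database, db_labels)
db_feat = lda.transform(database)               # discriminant features
q_feat = lda.transform(query.reshape(1, -1))

# Rank the database by Euclidean distance in the discriminant space.
ranking = np.argsort(np.linalg.norm(db_feat - q_feat, axis=1))
print("query label:", y[0], "top-5 retrieved labels:", db_labels[ranking[:5]])
```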
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, 2008
Large Margin Classification Using the Perceptron Algorithm - Machine Learning, 1998 - Cited by 518 (2 self)
"... We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large margins. Compared to Vapnik's algorithm, however, ours is much simpler to implement, and much more efficient in terms of computation time. We also show that our algorithm can be efficiently used in very high dimensional spaces using kernel functions. We performed some experiments using our ..."
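A minimal sketch of the voted perceptron the abstract describes: run the ordinary perceptron, keep every intermediate weight vector along with the number of rounds it survived, and classify by a survival-weighted vote over those vectors. The toy data and epoch count are illustrative assumptions, and the kernelised variant mentioned in the abstract is omitted.

```python
import numpy as np

def voted_perceptron_train(X, y, epochs=10):
    """y in {-1, +1}. Returns a list of (weight_vector, survival_count)."""
    w, c, history = np.zeros(X.shape[1]), 0, []
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i) <= 0:        # mistake: retire the old vector
                history.append((w.copy(), c))
                w, c = w + y_i * x_i, 1
            else:
                c += 1                       # vector survives another round
    history.append((w, c))
    return history

def voted_predict(history, x):
    vote = sum(c * np.sign(w @ x) for w, c in history)
    return 1 if vote >= 0 else -1

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.where(X @ np.array([2.0, -1.0]) > 0, 1, -1)   # linearly separable labels

model = voted_perceptron_train(X, y)
correct = sum(voted_predict(model, x) == t for x, t in zip(X, y))
print(f"training accuracy: {correct / len(y):.2f}")
```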