CiteSeerX

Results 1 - 10 of 502,458

Manifold based local classifiers: Linear and nonlinear approaches

by Hakan Cevikalp, Diane Larlus, Marian Neamtu, Bill Triggs, Frederic Jurie - in Pattern Recognition (in review), 2007
"... In case of insufficient data samples in high-dimensional classification problems, sparse scatters of samples tend to have many ‘holes’: regions that have few or no nearby training samples from the class. When such regions lie close to inter-class boundaries, the nearest neighbors of a query may lie in the wrong class, thus leading to errors in the Nearest Neighbor classification rule. The K-local hyperplane distance nearest neighbor (HKNN) algorithm tackles this problem by approximating each class with a smooth nonlinear manifold, which is considered to be locally linear. The method ..."
Cited by 6 (1 self)
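To make the snippet concrete, here is a minimal numpy sketch of the local-hyperplane distance that HKNN is built on; the function names, the K=5 default, and the reg conditioning term are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def hknn_distance(query, class_samples, K=5, reg=1e-3):
        """Distance from a query to the affine hull (local hyperplane)
        of its K nearest neighbors within one class."""
        # K nearest neighbors of the query inside this class
        d = np.linalg.norm(class_samples - query, axis=1)
        nn = class_samples[np.argsort(d)[:K]]              # (K, dim)
        # Points on the hyperplane: nn[0] + V @ a, columns of V span it
        V = (nn[1:] - nn[0]).T                             # (dim, K-1)
        # Regularized least squares for the projection coefficients
        A = V.T @ V + reg * np.eye(K - 1)
        a = np.linalg.solve(A, V.T @ (query - nn[0]))
        proj = nn[0] + V @ a
        return np.linalg.norm(query - proj)

    def hknn_classify(query, class_dict, K=5):
        """Assign the query to the class whose local hyperplane is nearest."""
        return min(class_dict, key=lambda c: hknn_distance(query, class_dict[c], K))

Filling the "holes" with a local hyperplane rather than relying on raw neighbors is what lets the nearest-neighbor rule survive sparse high-dimensional training sets.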

Bayesian Network Classifiers

by Nir Friedman, Dan Geiger, Moises Goldszmidt, 1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly ..."
Cited by 788 (23 self)
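The naive Bayes baseline that the snippet takes as its starting point is small enough to write out in full; a minimal Gaussian-density version (an illustrative sketch, with the 1e-9 variance floor an arbitrary smoothing choice):

    import numpy as np

    class GaussianNaiveBayes:
        """Naive Bayes with Gaussian class-conditional densities, i.e. the
        strong assumption that features are independent given the class."""
        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.mu_, self.var_, self.prior_ = {}, {}, {}
            for c in self.classes_:
                Xc = X[y == c]
                self.mu_[c] = Xc.mean(axis=0)
                self.var_[c] = Xc.var(axis=0) + 1e-9   # variance smoothing
                self.prior_[c] = len(Xc) / len(X)
            return self

        def predict(self, X):
            scores = []
            for c in self.classes_:
                # log P(c) + sum_i log N(x_i | mu_ci, var_ci)
                ll = -0.5 * np.sum(np.log(2 * np.pi * self.var_[c])
                                   + (X - self.mu_[c]) ** 2 / self.var_[c], axis=1)
                scores.append(np.log(self.prior_[c]) + ll)
            return self.classes_[np.argmax(np.stack(scores), axis=0)]

The Bayesian-network classifiers the paper evaluates relax exactly the independence assumption hard-coded in the per-feature log-likelihood sum above.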

Nonlinear Approximation

by Ronald A. DeVore - Acta Numerica, 1998
Cited by 970 (40 self)
Abstract not found

A training algorithm for optimal margin classifiers

by Bernhard E. Boser, et al. - Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC ..."
Cited by 1848 (44 self)
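The "linear combination of supporting patterns" is easy to observe with a modern maximal-margin implementation; a quick sketch using scikit-learn's SVC (assuming the library is available; the toy data below is made up):

    import numpy as np
    from sklearn.svm import SVC

    # Two separable Gaussian blobs; the margin-maximizing solution depends
    # only on the "supporting patterns" closest to the decision boundary.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)

    clf = SVC(kernel="linear", C=10.0).fit(X, y)

    # f(x) = sum_i alpha_i y_i <x_i, x> + b, with alpha_i nonzero only
    # for the support vectors.
    print("support vectors:", len(clf.support_vectors_), "of", len(X))
    print("dual coefficients (alpha_i * y_i):", clf.dual_coef_.round(3))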

Locally weighted learning

by Christopher G. Atkeson, Andrew W. Moore, Stefan Schaal - Artificial Intelligence Review, 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Cited by 594 (53 self)
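Locally weighted linear regression, the survey's focus, fits in a few lines: fit a separate weighted least-squares model around each query point. In the sketch below, the Gaussian weighting function, bandwidth, and ridge term are one illustrative combination out of the many options the survey compares.

    import numpy as np

    def locally_weighted_predict(Xq, X, y, bandwidth=1.0, ridge=1e-6):
        """For each query, fit a linear model with Gaussian distance
        weights centered on the query and return its prediction."""
        Xa = np.hstack([X, np.ones((len(X), 1))])     # affine design matrix
        preds = []
        for q in np.atleast_2d(Xq):
            # Weight each training point by its distance to the query
            w = np.exp(-np.sum((X - q) ** 2, axis=1) / (2 * bandwidth ** 2))
            # Weighted, ridge-regularized normal equations
            A = Xa.T @ (w[:, None] * Xa) + ridge * np.eye(Xa.shape[1])
            beta = np.linalg.solve(A, Xa.T @ (w * y))
            preds.append(np.append(q, 1.0) @ beta)
        return np.array(preds)

Because nothing is fit until a query arrives, this is "lazy" learning: the training set itself is the model.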

Locality-constrained linear coding for image classification

by Jinjun Wang, Jianchao Yang, Kai Yu, Fengjun Lv, Thomas Huang, Yihong Gong - in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010
"... The traditional SPM approach based on bag-of-features (BoF) requires nonlinear classifiers to achieve good image classification performance. This paper presents a simple but effective coding scheme called Locality-constrained Linear Coding (LLC) in place of the VQ coding in traditional SPM. LLC util ..."
Cited by 437 (20 self)
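The coding step the snippet describes reduces, in its commonly used approximated form, to a small constrained least-squares problem over each descriptor's nearest codebook bases; llc_code, k=5, and reg below are illustrative assumptions rather than the paper's exact solver.

    import numpy as np

    def llc_code(x, B, k=5, reg=1e-4):
        """Locality-constrained code for one descriptor x against codebook B
        (rows are bases): nonzero weights only on the k nearest bases,
        constrained to sum to one."""
        d = np.linalg.norm(B - x, axis=1)
        idx = np.argsort(d)[:k]              # locality: keep k nearest bases
        Bk = B[idx] - x                      # shift bases to the descriptor
        G = Bk @ Bk.T                        # local Gram matrix
        C = G + reg * np.trace(G) * np.eye(k)
        w = np.linalg.solve(C, np.ones(k))
        w /= w.sum()                         # enforce the sum-to-one constraint
        code = np.zeros(len(B))
        code[idx] = w
        return code

Because each code is sparse and local, a cheap linear classifier on the pooled codes can stand in for the nonlinear kernels that plain VQ coding requires.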

Linear spatial pyramid matching using sparse coding for image classification

by Jianchao Yang, Kai Yu, Yihong Gong, Thomas Huang - in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009
"... Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n^2 ∼ n^3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scale up the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably ..."
Cited by 488 (19 self)
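The "multi-scale spatial max pooling" half of the pipeline turns per-descriptor sparse codes into one fixed-length image vector for a linear classifier; a sketch of that pooling step, where spm_max_pool and the (1, 2, 4) pyramid levels are illustrative assumptions:

    import numpy as np

    def spm_max_pool(codes, xy, levels=(1, 2, 4)):
        """Max-pool sparse code magnitudes over a spatial pyramid.
        codes: (n, M) per-descriptor codes; xy: (n, 2) locations in [0, 1)."""
        feats = []
        for L in levels:
            cell = np.floor(xy * L).astype(int)       # grid cell of each point
            cell_id = cell[:, 0] * L + cell[:, 1]
            for c in range(L * L):
                mask = cell_id == c
                pooled = (np.abs(codes[mask]).max(axis=0)
                          if mask.any() else np.zeros(codes.shape[1]))
                feats.append(pooled)
        return np.concatenate(feats)

The concatenated vector can then feed the linear SPM kernel mentioned in the abstract, replacing the average-pooled VQ histograms of traditional SPM.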

Nonlinear component analysis as a kernel eigenvalue problem

by Bernhard Schölkopf, Alexander Smola, Klaus-Robert Müller, 1996
"... We describe a new method for performing a nonlinear form of Principal Component Analysis. By the use of integral operator kernel functions, we can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance the space of all possible 5-pixel products in 16×16 images. We give the derivation of the method, along with a discussion of other techniques which can be made nonlinear with the kernel approach; and present first experimental results on nonlinear feature extraction for pattern recognition."
Cited by 1554 (85 self)
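The "kernel eigenvalue problem" of the title comes down to centering a kernel matrix and diagonalizing it; a minimal numpy sketch, where the RBF kernel and gamma are assumptions of this example:

    import numpy as np

    def kernel_pca(X, n_components=2, gamma=1.0):
        """Project training points onto the leading nonlinear principal
        components defined by an RBF kernel."""
        # Kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
        sq = np.sum(X ** 2, axis=1)
        K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        # Double-center K: PCA needs zero-mean features in feature space
        n = len(X)
        J = np.eye(n) - np.ones((n, n)) / n
        Kc = J @ K @ J
        # eigh returns ascending eigenvalues; take the largest ones
        vals, vecs = np.linalg.eigh(Kc)
        vals = vals[::-1][:n_components]
        vecs = vecs[:, ::-1][:, :n_components]
        # Projections of the training data onto the components
        return vecs * np.sqrt(np.maximum(vals, 0))

The nonlinear feature map itself is never computed: every step touches only the n × n kernel matrix, which is the point of the kernel trick.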

Benchmarking Least Squares Support Vector Machine Classifiers

by Tony Van Gestel, Johan A. K. Suykens, Bart Baesens, Stijn Viaene, Jan Vanthienen, Guido Dedene, Bart De Moor, Joos Vandewalle - Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of equations in the dual space. While the SVM classifier has a large margin interpretation, the LS-SVM formulation is related in this paper to a ridge regression approach for classification with binary targets and to Fisher's linear discriminant analysis in the feature space. Multiclass categorization ..."
Cited by 446 (46 self)
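The "linear set of equations in the dual space" can be written down directly; a sketch of one common LS-SVM formulation with ±1 targets, where the linear default kernel and the exact block layout are assumptions of this sketch:

    import numpy as np

    def lssvm_train(X, y, gamma=1.0, kernel=lambda U, V: U @ V.T):
        """Train an LS-SVM classifier by solving one linear system:
            [ 0   1^T         ] [ b     ]   [ 0 ]
            [ 1   K + I/gamma ] [ alpha ] = [ y ]
        y holds +/-1 labels; returns a predict function."""
        n = len(X)
        K = kernel(X, X)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], y.astype(float)]))
        b, alpha = sol[0], sol[1:]
        return lambda Xt: np.sign(kernel(Xt, X) @ alpha + b)

Replacing the SVM's inequality constraints with equalities trades sparseness of the solution for a problem any dense linear solver can handle, which is what makes the ridge-regression and Fisher-discriminant connections in the paper natural.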