Results 1 - 10 of 1,799
A tutorial on support vector machines for pattern recognition
- Data Mining and Knowledge Discovery, 1998
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SV ..."
Cited by 3393 (12 self)
large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization
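As a concrete companion to this entry, here is a minimal soft-margin SVM with a Gaussian RBF kernel, the setting the tutorial works through; scikit-learn and the toy data are editorial assumptions, not the paper's own code:

```python
# Illustrative sketch (not from the paper): soft-margin SVM with a Gaussian
# RBF kernel on toy 2-D data, using scikit-learn.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two overlapping Gaussian blobs: a non-separable problem.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# C trades margin width against slack; gamma is the RBF width parameter.
clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)
print("support vectors:", len(clf.support_vectors_))
print("training accuracy:", clf.score(X, y))
```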
Ideal spatial adaptation by wavelet shrinkage
- Biometrika, 1994
"... With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle o ers dramatic ad ..."
Cited by 1269 (5 self)
With ideal spatial adaptation, an oracle furnishes information about how best to adapt a spatially variable estimator, whether piecewise constant, piecewise polynomial, variable knot spline, or variable bandwidth kernel, to the unknown function. Estimation with the aid of an oracle offers dramatic
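For illustration, a rough sketch of wavelet shrinkage with the universal threshold sigma * sqrt(2 log n) (soft thresholding of the detail coefficients); PyWavelets, the test signal, and the wavelet choice are assumptions, not the paper's code:

```python
# Illustrative sketch of wavelet shrinkage: transform, soft-threshold the
# detail coefficients at the universal threshold, invert.
import numpy as np
import pywt

n = 1024
t = np.linspace(0, 1, n)
# A piecewise test signal with a jump at t = 0.5.
signal = np.piecewise(t, [t < 0.5, t >= 0.5], [lambda s: np.sin(8 * np.pi * s), 1.0])
noisy = signal + 0.2 * np.random.default_rng(1).normal(size=n)

coeffs = pywt.wavedec(noisy, "db4")
# Estimate the noise level from the finest-scale details (median absolute deviation).
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
lam = sigma * np.sqrt(2 * np.log(n))  # universal threshold sigma*sqrt(2 log n)
coeffs[1:] = [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db4")[:n]
print("RMSE:", np.sqrt(np.mean((denoised - signal) ** 2)))
```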
Benchmarking Least Squares Support Vector Machine Classifiers
- Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 476 (46 self)
(RBF) kernels. Both the SVM and LS-SVM classifier with RBF kernel in combination with standard cross-validation procedures for hyperparameter selection achieve comparable test set performances. These SVM and LS-SVM performances are consistently very good when compared to a variety of methods described
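To show what "a linear set of equations" means here, a small sketch that trains an LS-SVM classifier with an RBF kernel by solving one linear system; the data, gamma, and kernel width are illustrative assumptions:

```python
# Illustrative LS-SVM classifier: the SVM's QP is replaced by one linear
# system arising from the least-squares cost and equality constraints.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 0.8, (30, 2)), rng.normal(+1, 0.8, (30, 2))])
y = np.array([-1.0] * 30 + [+1.0] * 30)

def rbf(A, B, s=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

gamma = 10.0                                  # regularization constant
n = len(y)
Omega = (y[:, None] * y[None, :]) * rbf(X, X)
# KKT conditions reduce to: [0, y^T; y, Omega + I/gamma] [b; alpha] = [0; 1].
A = np.zeros((n + 1, n + 1))
A[0, 1:] = y
A[1:, 0] = y
A[1:, 1:] = Omega + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], np.ones(n)]))
b, alpha = sol[0], sol[1:]

# Decision function: sign(sum_k alpha_k y_k K(x, x_k) + b).
pred = np.sign(rbf(X, X) @ (alpha * y) + b)
print("training accuracy:", (pred == y).mean())
```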
Kernel principal component analysis
- Advances in Kernel Methods - Support Vector Learning, 1999
"... A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance the space of all ..."
Cited by 274 (7 self)
A new method for performing a nonlinear form of Principal Component Analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map; for instance the space
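A compact sketch of the computation the abstract describes: eigendecompose the centered kernel matrix and project onto the leading components; the RBF kernel and toy data are assumptions:

```python
# Illustrative kernel PCA: diagonalize the centered kernel matrix instead of
# the covariance matrix, then project onto the leading eigenvectors.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))

K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
n = len(X)
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one    # centering in feature space

vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]        # sort descending
# Normalize so each feature-space eigenvector has unit norm: v / sqrt(lambda).
k = 2
alphas = vecs[:, :k] / np.sqrt(vals[:k])
Z = Kc @ alphas                               # nonlinear principal components
print("projected shape:", Z.shape)
```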
Sampling signals with finite rate of innovation
- IEEE Transactions on Signal Processing, 2002
"... Abstract—Consider classes of signals that have a finite number of degrees of freedom per unit of time and call this number the rate of innovation. Examples of signals with a finite rate of innovation include streams of Diracs (e.g., the Poisson process), nonuniform splines, and piecewise polynomials ..."
Cited by 350 (67 self)
polynomials. Even though these signals are not bandlimited, we show that they can be sampled uniformly at (or above) the rate of innovation using an appropriate kernel and then be perfectly reconstructed. Thus, we prove sampling theorems for classes of signals and kernels that generalize the classic
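As an illustration of the reconstruction idea, a sketch that recovers a stream of K Diracs from 2K+1 Fourier coefficients via an annihilating filter; all parameters are assumptions for the demo, not the paper's code:

```python
# Illustrative sketch: K Diracs have 2K degrees of freedom, so 2K+1 Fourier
# coefficients suffice; locations come from the roots of an annihilating filter.
import numpy as np

K = 3
rng = np.random.default_rng(4)
t_true = np.sort(rng.uniform(0, 1, K))       # Dirac locations in [0, 1)
a_true = rng.uniform(1, 2, K)                # Dirac amplitudes

m = np.arange(-K, K + 1)
Xm = np.exp(-2j * np.pi * m[:, None] * t_true[None, :]) @ a_true

# The filter h annihilates Xm: sum_j h[j] X[m-j] = 0. Find h in the nullspace
# of a small Toeplitz system, then read locations off its polynomial roots.
A = np.array([[Xm[i + K - j] for j in range(K + 1)] for i in range(K + 1)])
h = np.linalg.svd(A)[2][-1]
t_hat = np.sort(np.mod(-np.angle(np.roots(h)) / (2 * np.pi), 1.0))

# Amplitudes follow from a linear (Vandermonde) system at the recovered times.
V = np.exp(-2j * np.pi * m[:, None] * t_hat[None, :])
a_hat = np.linalg.lstsq(V, Xm, rcond=None)[0].real
print("locations :", np.round(t_hat, 4), "vs", np.round(t_true, 4))
print("amplitudes:", np.round(a_hat, 3), "vs", np.round(a_true, 3))
```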
On problems without polynomial kernels
- Lecture Notes in Computer Science, 2007
"... Kernelization is a strong and widely-applied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parame ..."
Cited by 143 (17 self)
Kernelization is a strong and widely-applied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size
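To make the definition concrete, a classic example of a polynomial kernel, the Buss rules for Vertex Cover, sketched in Python; the helper name and instance are hypothetical:

```python
# Illustrative kernelization for Vertex Cover: high-degree vertices are forced
# into the cover, and more than k^2 remaining edges means a NO instance, so the
# reduced instance has size bounded by a function of the parameter k alone.
def buss_kernel(edge_list, k):
    """Return (kernel_edges, remaining_k, forced_vertices), or None if no size-k cover exists."""
    edges = {frozenset(e) for e in edge_list}
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:                 # v must be in every cover of size <= k
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:   # > k^2 edges left: reject
        return None
    return edges, k, forced

# Vertex 1 has degree 4 > k = 2, so it is forced; a tiny kernel remains.
print(buss_kernel([(1, 2), (1, 3), (1, 4), (1, 5), (2, 3)], k=2))
```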
On the Influence of the Kernel on the Consistency of Support Vector Machines
- Journal of Machine Learning Research, 2001
"... In this article we study the generalization abilities of several classifiers of support vector machine (SVM) type using a certain class of kernels that we call universal. It is shown that the soft margin algorithms with universal kernels are consistent for a large class of classification problems ..."
Cited by 212 (21 self)
consistency for the soft margin SVMs. Finally, we prove that even for simple, noise-free classification problems, SVMs with polynomial kernels can behave arbitrarily badly.
Discrete orthogonal polynomial ensembles and the Plancherel measure, 2001
"... We consider discrete orthogonal polynomial ensembles which are discrete analogues of the orthogonal polynomial ensembles in random matrix theory. These ensembles occur in certain problems in combinatorial probability and can be thought of as probability measures on partitions. The Meixner ensemble i ..."
Cited by 189 (10 self)
is related to a two-dimensional directed growth model, and the Charlier ensemble is related to the lengths of weakly increasing subsequences in random words. The Krawtchouk ensemble occurs in connection with zig-zag paths in random domino tilings of the Aztec diamond, and also in a certain simplified
On graph kernels: Hardness results and efficient alternatives
- Conference on Learning Theory, 2003
"... As most ‘real-world’ data is structured, research in kernel methods has begun investigating kernels for various kinds of structured data. One of the most widely used tools for modeling structured data are graphs. An interesting and important challenge is thus to investigate kernels on instances tha ..."
Cited by 184 (6 self)
that are represented by graphs. So far, only very specific graphs such as trees and strings have been considered. This paper investigates kernels on labeled directed graphs with general structure. It is shown that computing a strictly positive definite graph kernel is at least as hard as solving the graph isomorphism
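As a sketch of the kind of efficient alternative proposed, a walk-counting kernel on the direct product of two graphs with geometric down-weighting; vertex labels are omitted for brevity, and the adjacency matrices and decay factor are illustrative:

```python
# Illustrative walk kernel: common walks of the two graphs correspond to walks
# in their direct (tensor) product graph, counted with a geometric weight.
import numpy as np

def product_walk_kernel(A1, A2, lam=0.1):
    """Sum over all walk lengths l of lam^l * (number of common walks of length l)."""
    Ax = np.kron(A1, A2)                 # adjacency of the direct product graph
    n = Ax.shape[0]
    # sum_{l>=0} lam^l Ax^l = (I - lam*Ax)^{-1}, valid when lam < 1/spectral_radius.
    W = np.linalg.inv(np.eye(n) - lam * Ax)
    return W.sum()

# Two small unlabeled graphs: a triangle and a path on three vertices.
tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(product_walk_kernel(tri, tri), product_walk_kernel(tri, path))
```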
The NP-completeness column: an ongoing guide
- Journal of Algorithms, 1987
"... This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freem ..."
Cited by 239 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H