Results 1–10 of 2,157,136

Online Learning with Kernels, 2003
Cited by 2787 (123 self)
"... Kernel-based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting, where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large-margin idea. There has been little u ..."

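The kernel trick referenced in this abstract replaces inner products of explicit feature maps with kernel evaluations. A minimal sketch, using the Gaussian (RBF) kernel as an illustrative choice (the toy data and the `gamma` value are assumptions of this sketch, not details from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||X[i] - Y[j]||^2).

    This equals an inner product in an infinite-dimensional feature
    space, computed without ever constructing that space.
    """
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

# Toy data: the n x n Gram matrix is all a kernel method ever needs.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
K = rbf_kernel(X, X)
```

Any kernel method that touches the data only through inner products can run from K alone, which is what makes the trick compatible with the online setting the abstract discusses.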
Fisher Discriminant Analysis With Kernels, 1999
Cited by 493 (18 self)
"... A nonlinear classification technique based on Fisher's discriminant is proposed. The main ingredient is the kernel trick, which allows the efficient computation of the Fisher discriminant in feature space. The linear classification in feature space corresponds to a (powerful) nonlinear decision f ..."

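The kernelized Fisher discriminant the abstract describes operates on the Gram matrix alone. The following is an illustrative sketch of the standard two-class formulation; the ridge term `reg`, the linear kernel, and the toy data are assumptions for the demo, not details taken from the paper:

```python
import numpy as np

def kernel_fisher(K, y, reg=1e-3):
    """Two-class kernel Fisher discriminant.

    Returns expansion coefficients alpha such that the projection
    K @ alpha separates the classes in feature space. `reg` is a
    ridge term added for numerical stability (an assumption of
    this sketch).
    """
    n = K.shape[0]
    means = []
    N = np.zeros((n, n))                  # kernelized within-class scatter
    for c in (0, 1):
        idx = np.flatnonzero(y == c)
        Kc = K[:, idx]                    # Gram columns for class c
        means.append(Kc.mean(axis=1))     # kernelized class mean
        nc = idx.size
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    return np.linalg.solve(N + reg * np.eye(n), means[1] - means[0])

# Toy demo: two Gaussian classes, linear kernel for simplicity.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.3, (15, 2)), rng.normal(1.0, 0.3, (15, 2))])
y = np.array([0] * 15 + [1] * 15)
K = X @ X.T
alpha = kernel_fisher(K, y)
scores = K @ alpha                        # 1-D discriminant projections
```

Swapping `K` for any other positive semi-definite kernel matrix yields the nonlinear decision function the abstract refers to, with no change to the solver.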
Exploiting the kernel trick to . . . , 2004
"... Motivation: The correlation among fragment ions in a tandem mass spectrum is crucial in reducing stochastic mismatches for peptide identification by database searching. Until now, an efficient scoring algorithm that considers the correlative information in a tunable and comprehensive manner has been lacking. Results: This paper provides a promising approach to utilizing the correlative information for improving peptide identification accuracy. The kernel trick, rooted in statistical learning theory, is exploited to address this issue with low computational effort. The common scoring ..."

Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism
ACM Transactions on Computer Systems, 1992
Cited by 472 (21 self)
"... Threads are the vehicle for concurrency in many approaches to parallel programming. Threads separate the notion of a sequential execution stream from the other aspects of traditional UNIX-like processes, such as address spaces and I/O descriptors. The objective of this separation is to make the expr ..." "... been fully satisfactory. This paper addresses this dilemma. First, we argue that the performance of kernel threads is inherently worse than that of user-level threads, rather than this being an artifact of existing implementations; we thus argue that managing parallelism at the user level ..."

A tutorial on support vector machines for pattern recognition
Data Mining and Knowledge Discovery, 1998
Cited by 3307 (12 self)
"... The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a nontrivial example in detail. We describe a mechanical analogy, and discuss when SV ..." "... large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization ..."

Max-margin Markov networks, 2003
Cited by 587 (15 self)
"... In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ..."

Level-Spacing Distributions and the Airy Kernel
Communications in Mathematical Physics, 1994
Cited by 422 (24 self)
"... Scaling level-spacing distribution functions in the "bulk of the spectrum" in random matrix models of N × N Hermitian matrices and then going to the limit N → ∞ leads to the Fredholm determinant of the sine kernel sin π(x − y)/π(x − y). Similarly, a scaling limit at the "edge o ..."

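For reference, the objects named in this abstract can be written out. The following is a standard statement of the bulk and edge kernels; the gap-probability notation $E(0;s)$ is conventional rather than taken from the abstract:

\[
K_{\mathrm{sine}}(x,y) = \frac{\sin \pi(x-y)}{\pi(x-y)}, \qquad
E(0;s) = \det\bigl(I - K_{\mathrm{sine}}\bigr)\Big|_{L^2(0,s)},
\]

where $E(0;s)$ is the probability of finding no eigenvalues in an interval of scaled length $s$. The edge-scaling analogue replaces the sine kernel with the Airy kernel

\[
K_{\mathrm{Airy}}(x,y) = \frac{\mathrm{Ai}(x)\,\mathrm{Ai}'(y) - \mathrm{Ai}'(x)\,\mathrm{Ai}(y)}{x-y}.
\]
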
Nonlinear Projection Trick in kernel methods: An alternative to the Kernel Trick
"... In kernel methods such as kernel PCA and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high- (virtually infinite-) dimensional kernel space. In this paper, based on the fact that the effective dimensionality of a kernel space is less than the nu ..."

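The observation that the effective dimensionality of the kernel space is bounded by the number of training samples can be illustrated with an eigendecomposition of the Gram matrix, as in kernel PCA. A hedged sketch, where the RBF kernel, the toy data, and the eigenvalue cutoff are illustrative choices rather than the paper's construction:

```python
import numpy as np

def kernel_pca_coords(K):
    """Explicit coordinates for the training points in kernel space.

    K is an n x n Gram matrix; the resulting representation has at
    most n nonzero dimensions, regardless of the (possibly infinite)
    dimensionality of the underlying feature space.
    """
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)                  # ascending eigenvalues
    w, V = w[::-1], V[:, ::-1]
    pos = w > 1e-10                            # drop numerically zero modes
    return V[:, pos] * np.sqrt(w[pos])         # n x r coordinates, r <= n

X = np.random.default_rng(0).normal(size=(5, 3))
K = np.exp(-((X[:, None, :] - X[None, :, :])**2).sum(-1))  # RBF Gram matrix
Z = kernel_pca_coords(K)
# Z @ Z.T reproduces the centered Gram matrix exactly, with r <= 5 columns.
```

The finite coordinate matrix `Z` carries the same inner-product information as the kernel space, which is the sense in which a direct projection can substitute for the kernel trick.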
Soft margin SVM / The kernel trick / Kernel-based SVMs / Formal definition of kernel function / Examples of kernels / Kernel methods, 2012
"... A brief introduction to constrained optimization; dual problem formulation; support vectors ..."

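The outline above covers standard material; the soft-margin primal and the kernelized dual that the "dual problem formulation" heading refers to are:

\[
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + C \sum_i \xi_i
\quad\text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i,\ \ \xi_i \ge 0,
\]

\[
\max_{\alpha}\ \sum_i \alpha_i - \tfrac{1}{2}\sum_{i,j} \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
\quad\text{s.t.}\quad 0 \le \alpha_i \le C,\ \ \sum_i \alpha_i y_i = 0.
\]

The support vectors are precisely the training points with $\alpha_i > 0$, and the kernel $K$ enters only through the dual, which is where the kernel trick applies.
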
An Information Theoretic Perspective to Kernel K-Means
"... In this paper, we provide an information theoretic perspective to kernel K-means. We show that kernel K-means corresponds to maximizing an integrated squared error divergence measure between Parzen-window-estimated cluster probability density functions. Equivalently, this corresponds to a Bayes-like ..."

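Kernel K-means itself needs only the Gram matrix, because the feature-space distance to a cluster mean expands into kernel evaluations. A minimal Lloyd-iteration sketch; the toy data, linear kernel, and initialization are illustrative assumptions, and this is the plain algorithm, not the information-theoretic derivation the paper develops:

```python
import numpy as np

def kernel_kmeans(K, k, init, iters=50):
    """Lloyd-style kernel K-means driven entirely by the Gram matrix K.

    The squared feature-space distance to a cluster mean expands into
    kernel evaluations, so explicit feature vectors are never formed:
    ||phi(x_i) - m_c||^2 = K_ii - 2*mean_j K_ij + mean_{j,l} K_jl,
    with j, l ranging over the members of cluster c.
    """
    n = K.shape[0]
    labels = init.copy()
    for _ in range(iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:
                continue              # empty cluster: leave distance at inf
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break                     # assignments stable: converged
        labels = new
    return labels

# Two well-separated blobs; with a linear kernel this reduces to ordinary
# K-means. The lopsided initial labeling is deliberate so that the first
# reassignment already separates the blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)), rng.normal(5.0, 0.1, (10, 2))])
init = (np.arange(20) >= 12).astype(int)
labels = kernel_kmeans(X @ X.T, 2, init)
```

Replacing `X @ X.T` with any positive semi-definite Gram matrix clusters the data in the corresponding feature space without touching explicit features.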