Results 1 - 10 of 845
Convolution Kernels on Discrete Structures, 1999
"... We introduce a new method of constructing kernels on sets whose elements are discrete structures like strings, trees and graphs. The method can be applied iteratively to build a kernel on an infinite set from kernels involving generators of the set. The family of kernels generated generalizes the fa ..."
Cited by 506 (0 self)
the theory of infinitely divisible positive definite functions. Fundamentals of this theory and the theory of reproducing kernel Hilbert spaces are reviewed and applied in establishing the validity of the method.
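As a rough, self-contained illustration of the convolution-kernel idea this entry describes (a minimal sketch, not Haussler's general construction; the function names are mine), the k-spectrum kernel below counts shared length-k substrings and is positive definite because it is an explicit feature-map inner product:

```python
from collections import Counter

def spectrum_features(s: str, k: int) -> Counter:
    """Count all length-k substrings of s (the 'k-spectrum')."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def spectrum_kernel(s: str, t: str, k: int = 3) -> int:
    """k-spectrum kernel: inner product of substring-count vectors.

    Positive definite by construction, since it is an explicit
    feature-map inner product -- one simple member of the convolution
    kernel family for strings.
    """
    fs, ft = spectrum_features(s, k), spectrum_features(t, k)
    if len(fs) > len(ft):           # iterate over the smaller spectrum
        fs, ft = ft, fs
    return sum(c * ft[sub] for sub, c in fs.items())

print(spectrum_kernel("kernel methods", "kernel trick", k=3))  # 5 shared 3-grams
```

Kernels for trees and graphs in the same family are obtained analogously, by summing matches over decompositions of the structures into parts.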
Online Learning with Kernels, 2003
"... Kernel based algorithms such as support vector machines have achieved considerable success in various problems in the batch setting where all of the training data is available in advance. Support vector machines combine the so-called kernel trick with the large margin idea. There has been little u ..."
Cited by 2831 (123 self)
use of these methods in an online setting suitable for real-time applications. In this paper we consider online learning in a Reproducing Kernel Hilbert Space. By considering classical stochastic gradient descent within a feature space, and the use of some straightforward tricks, we develop simple
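A minimal sketch of the paper's central idea, stochastic gradient descent carried out directly in an RKHS, assuming squared loss and a Gaussian kernel; the step size, regularization constant, and class names below are illustrative choices of mine, not the paper's:

```python
import numpy as np

def gauss(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

class OnlineKernelRegressor:
    """SGD on squared loss in an RKHS: f(x) = sum_i a_i k(x_i, x).

    Each step appends one expansion term for the new example and
    shrinks all old coefficients by (1 - eta*lam), which is the
    effect of the ||f||^2 regularizer on the gradient step.
    """
    def __init__(self, eta=0.2, lam=0.01, kernel=gauss):
        self.eta, self.lam, self.kernel = eta, lam, kernel
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * self.kernel(c, x)
                   for c, a in zip(self.centers, self.alphas))

    def step(self, x, y):
        err = self.predict(x) - y
        self.alphas = [(1 - self.eta * self.lam) * a for a in self.alphas]
        self.centers.append(x)
        self.alphas.append(-self.eta * err)

rng = np.random.default_rng(0)
model = OnlineKernelRegressor()
for _ in range(200):
    x = rng.uniform(-3, 3, size=1)
    model.step(x, np.sin(x[0]) + 0.1 * rng.normal())
print(round(model.predict(np.array([1.0])), 3))  # roughly sin(1) = 0.841
```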
Kernel independent component analysis - Journal of Machine Learning Research, 2002
"... We present a class of algorithms for independent component analysis (ICA) which use contrast functions based on canonical correlations in a reproducing kernel Hilbert space. On the one hand, we show that our contrast functions are related to mutual information and have desirable mathematical propert ..."
Cited by 464 (24 self)
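A sketch of the kind of contrast the entry describes, assuming two one-dimensional samples, Gaussian kernels, and the standard substitution u = (K1 + eps I)a, v = (K2 + eps I)b that turns regularized kernel CCA into a singular-value problem. The paper's algorithm minimizes such a contrast over demixing matrices and uses all canonical correlations, not just the first; constants here are illustrative:

```python
import numpy as np

def gram_rbf(x, sigma=1.0):
    """Doubly-centered Gaussian Gram matrix of a 1-D sample."""
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sigma ** 2))
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca_contrast(x, y, kappa=1e-2):
    """First regularized kernel canonical correlation rho, returned as
    the mutual-information-like contrast -1/2 log(1 - rho^2).

    After substituting u = (K1+eps I)a, v = (K2+eps I)b, rho is the top
    singular value of (K1+eps I)^{-1} K1 K2 (K2+eps I)^{-1}.
    """
    n = len(x)
    K1, K2 = gram_rbf(x), gram_rbf(y)
    eps = n * kappa
    A = np.linalg.solve(K1 + eps * np.eye(n), K1)
    B = np.linalg.solve(K2 + eps * np.eye(n), K2)
    rho = np.linalg.svd(A @ B, compute_uv=False)[0]  # always < 1 here
    return -0.5 * np.log(1 - rho ** 2)

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 300)
# The dependent pair should yield the larger contrast.
print(kcca_contrast(s, s ** 2) > kcca_contrast(s, rng.uniform(-1, 1, 300)))
```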
Manifold regularization: A geometric framework for learning from labeled and unlabeled examples - Journal of Machine Learning Research, 2006
"... We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning al ..."
Cited by 578 (16 self)
algorithms and standard methods including Support Vector Machines and Regularized Least Squares can be obtained as special cases. We utilize properties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide theoretical basis for the algorithms. As a result (in contrast to purely
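A sketch of the Laplacian Regularized Least Squares special case, assuming a Gaussian kernel, a Gaussian-weighted adjacency graph, and illustrative regularization constants: the representer theorem referenced above makes the solution a kernel expansion over labeled and unlabeled points with closed-form coefficients.

```python
import numpy as np

def rbf_gram(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def laprls(X_lab, y_lab, X_unlab, gA=1e-4, gI=1e-2, sigma=0.5):
    """Laplacian RLS closed form over labeled AND unlabeled points:
        alpha = (J K + gA*l*I + (gI*l/n^2) L K)^{-1} y_padded,
    with J selecting labeled rows and L a graph Laplacian.
    """
    X = np.vstack([X_lab, X_unlab])
    l, n = len(X_lab), len(X)
    K = rbf_gram(X, X, sigma)
    W = rbf_gram(X, X, sigma)            # Gaussian adjacency (one simple choice)
    L = np.diag(W.sum(1)) - W            # unnormalized graph Laplacian
    J = np.zeros((n, n)); J[np.arange(l), np.arange(l)] = 1.0
    y = np.concatenate([y_lab, np.zeros(n - l)])
    M = J @ K + gA * l * np.eye(n) + (gI * l / n ** 2) * (L @ K)
    return X, np.linalg.solve(M, y)

def predict(X_train, alpha, X_new, sigma=0.5):
    return rbf_gram(X_new, X_train, sigma) @ alpha

rng = np.random.default_rng(0)
X_unlab = rng.normal(size=(100, 2)) * 0.3 + np.repeat([[0, 0], [2, 2]], 50, axis=0)
X_lab = np.array([[0.0, 0.0], [2.0, 2.0]]); y_lab = np.array([1.0, -1.0])
X_tr, a = laprls(X_lab, y_lab, X_unlab)
print(np.sign(predict(X_tr, a, X_lab)))  # expected: [ 1. -1.]
```

Setting gI = 0 recovers plain kernel regularized least squares, which is the sense in which standard methods arise as special cases.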
A Reproducing Kernel Hilbert Space Framework for Information-Theoretic Learning
"... Abstract—This paper provides a functional analysis perspective of information-theoretic learning (ITL) by defining bottom-up a reproducing kernel Hilbert space (RKHS) uniquely determined by the symmetric nonnegative definite kernel function known as the cross-information potential (CIP). The CIP as ..."
Cited by 13 (8 self)
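The CIP of two densities p and q is the integral of their product; with Gaussian Parzen windows it has a simple closed-form sample estimator, which is the symmetric nonnegative definite kernel the entry refers to. A minimal sketch, with the window width an illustrative choice:

```python
import numpy as np

def cross_information_potential(x, y, sigma=0.5):
    """Sample estimator of the CIP, V(p, q) = integral p(u) q(u) du.

    With Gaussian Parzen windows of width sigma on both samples, the
    convolution of two Gaussians gives
        V_hat = 1/(N M) * sum_i sum_j G_{sqrt(2)*sigma}(x_i - y_j).
    """
    s2 = 2 * sigma ** 2                       # variance of the convolved kernel
    d2 = (x[:, None] - y[None, :]) ** 2
    g = np.exp(-d2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return g.mean()

rng = np.random.default_rng(0)
a, b = rng.normal(0, 1, 500), rng.normal(0, 1, 500)
c = rng.normal(5, 1, 500)
# Overlapping densities have a larger CIP than well-separated ones.
print(cross_information_potential(a, b) > cross_information_potential(a, c))
```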
Support Vector Machines, Reproducing Kernel Hilbert Spaces and the Randomized GACV, 1998
"... this paper we very briefly review some of these results. RKHS can be chosen tailored to the problem at hand in many ways, and we review a few of them, including radial basis function and smoothing spline ANOVA spaces. Girosi (1997), Smola and Scholkopf (1997), Scholkopf et al (1997) and others have ..."
Cited by 189 (11 self)
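The paper's randomized GACV targets tuning the SVM regularization parameter; a simpler relative in the same RKHS setting is the classical GCV criterion for squared-loss kernel ridge regression. The sketch below is that squared-loss cousin, not the randomized GACV itself; the search grid and kernel width are illustrative:

```python
import numpy as np

def gcv_kernel_ridge(K, y, lambdas):
    """Classical GCV score for kernel ridge regression:
        GCV(lam) = n * ||(I - A)y||^2 / tr(I - A)^2,
    with influence matrix A = K (K + n*lam*I)^{-1}.
    Returns the lambda minimizing the score, plus all scores.
    """
    n = len(y)
    scores = []
    for lam in lambdas:
        A = K @ np.linalg.inv(K + n * lam * np.eye(n))
        resid = y - A @ y
        scores.append(n * (resid @ resid) / np.trace(np.eye(n) - A) ** 2)
    return lambdas[int(np.argmin(scores))], scores

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + 0.3 * rng.normal(size=80)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))  # RBF Gram matrix
best, _ = gcv_kernel_ridge(K, y, np.logspace(-8, 0, 30))
print(f"GCV-selected lambda: {best:.2e}")
```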
Kernel partial least squares regression in reproducing kernel Hilbert space - Journal of Machine Learning Research, 2001
"... A family of regularized least squares regression models in a Reproducing Kernel Hilbert Space is extended by the kernel partial least squares (PLS) regression model. Similar to principal components regression (PCR), PLS is a method based on the projection of input (explanatory) variables to the late ..."
Cited by 154 (10 self)
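A sketch of the kernelized NIPALS extraction the entry refers to, assuming a single response variable (in which case the inner iteration converges in one pass) and a Gaussian kernel; the helper names and in-sample fit are illustrative:

```python
import numpy as np

def kernel_pls_scores(K, y, n_components=4):
    """Kernelized NIPALS: extract orthonormal PLS score vectors t_1..t_m,
    deflating both the centered Gram matrix and the response after each
    component so successive scores are mutually orthogonal.
    """
    n = len(y)
    H = np.eye(n) - np.ones((n, n)) / n
    K = H @ K @ H                       # center in feature space
    y = y - y.mean()
    T = []
    for _ in range(n_components):
        u = y / np.linalg.norm(y)
        t = K @ u
        t /= np.linalg.norm(t)
        T.append(t)
        P = np.eye(n) - np.outer(t, t)  # project out the new score direction
        K, y = P @ K @ P, P @ y
    return np.column_stack(T)

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 100)
y = np.sin(x) + 0.1 * rng.normal(size=100)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 2)
T = kernel_pls_scores(K, y)
yc = y - y.mean()
fit = T @ (T.T @ yc)                    # regress y on the orthonormal scores
print(round(float(1 - ((yc - fit) ** 2).sum() / (yc ** 2).sum()), 3))  # in-sample R^2
```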
Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces - Journal of Machine Learning Research, 2004
"... We propose a novel method of dimensionality reduction for supervised learning problems. Given a regression or classification problem in which we wish to predict a response variable Y from an explanatory variable X, we treat the problem of dimensionality reduction as that of finding a low-dimensional ..."
Cited by 162 (34 self)
using covariance operators on reproducing kernel Hilbert spaces. This characterization allows us to derive a contrast function for estimation of the effective subspace. Unlike many conventional methods for dimensionality reduction in supervised learning, the proposed method requires neither assumptions
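A sketch of evaluating one common empirical form of this covariance-operator contrast for a candidate projection matrix B; the exact trace form varies across this line of work, and the regularization and kernel width below are illustrative choices of mine:

```python
import numpy as np

def centered_gram(X, sigma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    n = len(X)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kdr_contrast(X, Y, B, eps=1e-3, sigma=1.0):
    """Empirical contrast Tr[K_Y^c (K_Z^c + n*eps*I)^{-1}] for the
    candidate subspace Z = X @ B; smaller means Z explains Y better,
    so the effective subspace is estimated by minimizing over B.
    """
    n = len(X)
    Kz = centered_gram(X @ B, sigma)
    Ky = centered_gram(Y, sigma)
    return np.trace(np.linalg.solve(Kz + n * eps * np.eye(n), Ky))

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 3))
Y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(150, 1))  # Y depends on x1 only
good = np.array([[1.0], [0.0], [0.0]])
bad = np.array([[0.0], [1.0], [0.0]])
print(kdr_contrast(X, Y, good) < kdr_contrast(X, Y, bad))  # expect True
```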
REPRODUCING KERNEL HILBERT SPACES, 2004
"... We reformulate the original component-by-component algorithm for rank-1 lattices in a matrix-vector notation so as to highlight its structural properties. For function spaces similar to a weighted Korobov space, we derive a technique which has construction cost O(sn log(n)), in contrast with the ori ..."
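For contrast with the fast matrix-vector version the entry describes, here is the plain O(sn^2) component-by-component loop it accelerates, written for the weighted Korobov space with smoothness alpha = 2, where the kernel sum reduces to the Bernoulli polynomial B2 on fractional parts; the weights and n below are illustrative:

```python
import numpy as np
from math import gcd

def cbc_lattice(n, s, gammas):
    """Component-by-component construction of a rank-1 lattice generating
    vector z for the weighted Korobov space with alpha = 2, where
    omega(x) = 2*pi^2 * (x^2 - x + 1/6) on fractional parts, and
        e^2(z) = -1 + (1/n) sum_k prod_j (1 + gamma_j * omega({k z_j / n})).
    Plain O(s*n^2) loop; the paper's reformulation reaches O(s*n*log n).
    """
    omega = lambda x: 2 * np.pi ** 2 * (x * x - x + 1.0 / 6.0)
    k = np.arange(n)
    candidates = [z for z in range(1, n) if gcd(z, n) == 1]
    prod = np.ones(n)                  # running product over chosen dimensions
    zs = []
    for d in range(s):
        best_z, best_e2 = None, np.inf
        for z in candidates:           # greedily pick this dimension's component
            x = (k * z % n) / n
            e2 = -1.0 + np.mean(prod * (1.0 + gammas[d] * omega(x)))
            if e2 < best_e2:
                best_z, best_e2 = z, e2
        zs.append(best_z)
        prod *= 1.0 + gammas[d] * omega((k * best_z % n) / n)
    return zs, best_e2

z, e2 = cbc_lattice(n=101, s=4, gammas=[0.9 ** j for j in range(1, 5)])
print(z, e2)
```

The inner minimization over z is where the fast version gains: it evaluates all candidates at once as a structured matrix-vector product.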
AND REPRODUCING KERNEL HILBERT SPACES
"... I have been blessed with many wonderful people around me in this journey towards my doctoral degree. First and foremost, I would like to thank my adviser, Dr. Vern I. Paulsen, for all his contribution of time, ideas, insightful comments, and funding in this pursuit of mine. It was an honor and pleas ..."
and pleasure to work under his guidance. I just hope that some day I can be as good a mathematician as him. Next, I would like to thank my thesis committee members, Dr. David P. Blecher, Dr. Bernhard G. Bodmann, and Dr. David Sherman, for careful reading of my thesis, helpful suggestions, and ideas for future