Results 1–10 of 67,224
Benchmarking Least Squares Support Vector Machine Classifiers
NEURAL PROCESSING LETTERS, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LSSVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 450 (46 self)
problems are represented by a set of binary classifiers using different output coding schemes. While regularization is used to control the effective number of parameters of the LSSVM classifier, the sparseness property of SVMs is lost due to the choice of the 2-norm. Sparseness can be imposed in a second
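The linear set of equations behind an LS-SVM classifier can be made concrete with a short sketch. The following pure-Python illustration (the kernel width, the regularization constant gamma, and the toy data are illustrative choices, not taken from the paper) solves the standard LS-SVM dual system [[0, yᵀ], [y, Ω + I/γ]][b; α] = [0; 1] with Ω_ij = y_i y_j K(x_i, x_j):

```python
import math

def rbf(x, z, sigma=1.0):
    # Gaussian (RBF) kernel between two points given as tuples
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-d2 / (2.0 * sigma ** 2))

def solve_dense(A, b):
    # Gaussian elimination with partial pivoting for small dense systems
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    # LS-SVM training: one (n+1) x (n+1) linear system instead of a QP.
    #   [ 0    y^T             ] [b    ]   [0]
    #   [ y    Omega + I/gamma ] [alpha] = [1]
    # with Omega_ij = y_i * y_j * K(x_i, x_j).
    n = len(X)
    A = [[0.0] + [float(v) for v in y]]
    for i in range(n):
        row = [float(y[i])]
        for j in range(n):
            row.append(y[i] * y[j] * rbf(X[i], X[j], sigma)
                       + (1.0 / gamma if i == j else 0.0))
        A.append(row)
    sol = solve_dense(A, [0.0] + [1.0] * n)
    return sol[0], sol[1:]          # bias b, coefficients alpha

def lssvm_predict(X, y, b, alpha, x, sigma=1.0):
    # Decision function: sign( sum_i alpha_i * y_i * K(x_i, x) + b )
    s = b + sum(a * yi * rbf(xi, x, sigma) for a, yi, xi in zip(alpha, y, X))
    return 1 if s >= 0.0 else -1
```

Note the trade-off flagged in the snippet: every α_i is generically nonzero here, so the LS-SVM solution is not sparse, unlike a standard SVM.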
Bayesian Inference in Trigonometric Support Vector Classifier
"... In the report, we propose a novel classifier, known as trigonometric support vector classifier, to integrate popular Bayesian techniques with support vector classifier. We describe a Bayesian framework in a function-space view with a Gaussian process prior probability over the functions. The tri ..."
. The trigonometric likelihood function with the desirable characteristics of normalization in likelihood and differentiability is used in likelihood evaluation. In the framework, maximum a posteriori estimate of the functions results in an extended support vector classifier problem. Bayesian methods are used
ON THE CONVERGENCE OF CONJUGATE TRIGONOMETRIC POLYNOMIALS
"... Abstract. Sufficient conditions are found, under which for f ∈ C([−π, π]) or f ∈ L([−π, π]) the convergence of a sequence of trigonometric polynomials in the norms of these spaces implies the convergence of their conjugates in the same norms. ..."
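As background (standard Fourier-analysis material, not quoted from the paper): the conjugate of a trigonometric polynomial is obtained by the substitution cos kx → sin kx, sin kx → −cos kx in its coefficients,

```latex
T_n(x) = \frac{a_0}{2} + \sum_{k=1}^{n}\left(a_k \cos kx + b_k \sin kx\right),
\qquad
\widetilde{T}_n(x) = \sum_{k=1}^{n}\left(a_k \sin kx - b_k \cos kx\right).
```

The paper asks when convergence of $T_n$ in the $C$ or $L$ norm forces convergence of $\widetilde{T}_n$ in the same norm.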
Minimal Kernel Classifiers
JOURNAL OF MACHINE LEARNING RESEARCH, 2002
"... A finite concave minimization algorithm is proposed for constructing kernel classifiers that use a minimal number of data points both in generating and characterizing a classifier. The algorithm ..."
Cited by 23 (7 self)
Approximation By Trigonometric Polynomials, 1999
"... Introduction In many practical situations, one needs to construct a model for an input/output process. For example, one is interested in the price of a stock five years from now. The rating industry description for the stock typically lists such indicators as the increase in the price over the last ..."
Stability Results for Scattered Data Interpolation by Trigonometric Polynomials
SIAM J. Sci. Comput., 2007
"... A fast and reliable algorithm for the optimal interpolation of scattered data on the torus Td by multivariate trigonometric polynomials is presented. The algorithm is based on a variant of the conjugate gradient method in combination with the fast Fourier transforms for nonequispaced nodes. The main ..."
Cited by 31 (14 self)
. The main result is that under mild assumptions the total complexity for solving the interpolation problem at M arbitrary nodes is of order O(M log M). This result is obtained by the use of localised trigonometric kernels where the localisation is chosen in accordance with the spatial dimension d. Numerical
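The snippet's ingredients — a trigonometric basis, scattered nodes, and a conjugate-gradient solve — can be sketched directly. This toy version builds a dense basis matrix and solves the normal equations AᵀAc = Aᵀy by CG, rather than using the paper's O(M log M) nonequispaced-FFT machinery; node and degree choices are illustrative:

```python
import math

def trig_basis(x, n):
    # Basis row [1, cos x, sin x, cos 2x, sin 2x, ..., cos nx, sin nx]
    row = [1.0]
    for k in range(1, n + 1):
        row.append(math.cos(k * x))
        row.append(math.sin(k * x))
    return row

def cg(matvec, b, iters=100, tol=1e-20):
    # Conjugate gradient for a symmetric positive definite operator
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        step = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + step * pi for xi, pi in zip(x, p)]
        r = [ri - step * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def trig_interpolate(nodes, values, n):
    # Least-squares fit of a degree-n trig polynomial at scattered nodes,
    # via the normal equations A^T A c = A^T y solved by CG.
    A = [trig_basis(x, n) for x in nodes]
    m, d = len(A), 2 * n + 1
    def matvec(v):
        Av = [sum(A[i][j] * v[j] for j in range(d)) for i in range(m)]
        return [sum(A[i][j] * Av[i] for i in range(m)) for j in range(d)]
    rhs = [sum(A[i][j] * values[i] for i in range(m)) for j in range(d)]
    return cg(matvec, rhs)

def trig_eval(coeffs, x):
    n = (len(coeffs) - 1) // 2
    return sum(c * phi for c, phi in zip(coeffs, trig_basis(x, n)))
```

With 2n + 1 distinct nodes in [0, 2π) the system is square and nonsingular, so the fit interpolates exactly; the dense matvec here costs O(Md) per CG step, which is the cost the paper's NFFT-based variant reduces.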
Multiple Kernel Learning Using Nearest Neighbor Classifiers
"... We study the problem of multiple kernel learning (MKL) in a classification setting. We first examine the kernel alignment metric and show that maximizing the alignment of a kernel with the target kernel Y Yᵀ corresponds to a constrained minimization of the margin loss of a weighted NearestNeighb ..."
binations of classifier and loss functions for multiple kernel learning. We make a thorough empirical study of the combinations. The NN classifier is particularly suitable to perform MKL on large datasets, with a training speedup of O(n²) over MKL algorithms that use SVMs.
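The alignment criterion named in the snippet, A(K, YYᵀ) = ⟨K, YYᵀ⟩_F / (‖K‖_F · ‖YYᵀ‖_F), is easy to state in code. A minimal sketch (function names are mine, not from the paper) for a kernel matrix and a ±1 label vector:

```python
def frob_inner(K1, K2):
    # Frobenius inner product <K1, K2>_F = sum_ij K1_ij * K2_ij
    return sum(a * b for r1, r2 in zip(K1, K2) for a, b in zip(r1, r2))

def kernel_alignment(K, y):
    # Alignment of K with the ideal target kernel Y Y^T, where T_ij = y_i * y_j.
    # K = Y Y^T itself gives alignment exactly 1.
    T = [[yi * yj for yj in y] for yi in y]
    num = frob_inner(K, T)
    den = (frob_inner(K, K) * frob_inner(T, T)) ** 0.5
    return num / den
```

Maximizing this quantity over a convex combination of base kernels is the usual MKL-by-alignment setup the abstract refers to.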
Binning in Gaussian Kernel Regularization
"... Gaussian kernel regularization is widely used in the machine learning literature and has proved successful in many empirical experiments. The periodic version of Gaussian kernel regularization has been shown to be minimax rate optimal in estimating functions in any finite order Sobolev space. How ..."
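As context for the snippet: in its simplest regression form, Gaussian kernel regularization is kernel ridge regression, minimizing Σᵢ(f(xᵢ) − yᵢ)² + λ‖f‖², whose representer-theorem solution solves (K + λI)α = y. A minimal pure-Python sketch (bandwidth, λ, and solver choice are illustrative, not the paper's setup):

```python
import math

def gauss_k(x, z, sigma=0.5):
    # One-dimensional Gaussian kernel
    return math.exp(-((x - z) ** 2) / (2.0 * sigma ** 2))

def kernel_ridge_fit(xs, ys, lam=1e-3, sigma=0.5):
    # Solve (K + lam * I) alpha = y by conjugate gradient (the matrix is SPD).
    n = len(xs)
    K = [[gauss_k(xs[i], xs[j], sigma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    r = list(ys)
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(200):
        Kp = [sum(K[i][j] * p[j] for j in range(n)) for i in range(n)]
        step = rs / sum(pi * kp for pi, kp in zip(p, Kp))
        alpha = [a + step * pi for a, pi in zip(alpha, p)]
        r = [ri - step * kp for ri, kp in zip(r, Kp)]
        rs_new = sum(v * v for v in r)
        if rs_new < 1e-20:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return alpha

def kernel_ridge_predict(xs, alpha, x, sigma=0.5):
    # f(x) = sum_i alpha_i * k(x, x_i)
    return sum(a * gauss_k(xi, x, sigma) for a, xi in zip(alpha, xs))
```

The periodic variant analyzed in the paper replaces this kernel with its periodization; the minimax-rate claim in the snippet concerns that periodic version.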