Results 1–3 of 3
New Support Vector Algorithms, 2000
Abstract

Cited by 322 (45 self)
… this article with the regression case. To explain this, we will introduce a suitable definition of a margin that is maximized in both cases …
Leave-One-Out Support Vector Machines, 1999
Abstract

Cited by 217 (4 self)
We present a new learning algorithm for pattern recognition inspired by a recent upper bound on leave-one-out error [Jaakkola and Haussler, 1999] proved for Support Vector Machines (SVMs) [Vapnik, 1995; 1998]. The new approach directly minimizes the expression given by the bound in an attempt to minimize leave-one-out error. This gives a convex optimization problem which constructs a sparse linear classifier in feature space using the kernel technique. As such, the algorithm possesses many of the same properties as SVMs. The main novelty of the algorithm is that, apart from the choice of kernel, it is parameterless: the selection of the number of training errors is inherent in the algorithm and not chosen by an extra free parameter as in SVMs. First experiments using the method on benchmark datasets from the UCI repository show results similar to SVMs which have been tuned to have the best choice of parameter.
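The quantity this paper bounds and minimizes is the leave-one-out error: the fraction of training points misclassified when each is held out and predicted by a model trained on the rest. A minimal sketch of that quantity itself (not the paper's bound-minimization algorithm — the classifier here is a toy 1-nearest-neighbour rule on made-up data):

```python
# Plain illustration of leave-one-out error, NOT the paper's SVM bound:
# hold out each point in turn, predict it from the remaining points,
# and count the fraction of mistakes.
def loo_error(X, y, predict):
    """Leave-one-out error of `predict` on labelled data (X, y)."""
    mistakes = 0
    for i in range(len(X)):
        train = [(x, t) for j, (x, t) in enumerate(zip(X, y)) if j != i]
        if predict(train, X[i]) != y[i]:
            mistakes += 1
    return mistakes / len(X)

def nn_predict(train, x):
    """Toy 1-nearest-neighbour rule under squared Euclidean distance."""
    return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

# Hypothetical toy data: two clusters, labels -1 / +1.
X = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.1), (0.5, 0.5)]
y = [-1, -1, +1, +1, -1]
print(loo_error(X, y, nn_predict))  # → 0.0
```

Leave-one-out error is an almost unbiased estimate of generalization error, which is why an upper bound on it is a natural training objective.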
Kronecker factorization for speeding up kernel machines. In SIAM International Conference on Data Mining (SDM), 2005
Abstract

Cited by 5 (2 self)
In kernel machines, such as kernel principal component analysis (KPCA), Gaussian Processes (GPs), and Support Vector Machines (SVMs), the computational complexity of finding a solution is O(n^3), where n is the number of training instances. To reduce this expensive computational complexity, we propose using Kronecker factorization, which approximates a positive definite kernel matrix by the Kronecker product of two smaller positive definite matrices. This approximation can speed up the calculation of the kernel-matrix inverse or eigendecomposition involved in kernel machines. When the two factorized matrices have about the same dimensions, the computational complexity is improved from O(n^3) to O(n^2). Furthermore, if n is very large, Kronecker factorization can be recursively applied to further reduce the computational complexity. We propose two methods to carry out Kronecker factorization and apply them to speed up KPCA and GPs. In addition, we propose an effective approximate method for Gaussian process classification by integrating the surrogate maximization algorithm and the Kronecker factorization. Experiments show that our methods can drastically reduce the computation time of kernel machines without any significant degradation in their effectiveness.
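The speedup rests on a standard identity: if a kernel matrix factors (or is approximated) as K = A ⊗ B, then K⁻¹ = A⁻¹ ⊗ B⁻¹, so the n×n inverse costs only two small inverses. A minimal NumPy sketch of that identity — not the paper's method for *finding* the factors A and B, which it does not specify here:

```python
import numpy as np

# Sketch of the key identity behind Kronecker factorization:
# (A ⊗ B)^-1 = A^-1 ⊗ B^-1, so an n x n kernel matrix that factors as
# K = A ⊗ B (n = p*q) can be inverted in O(p^3 + q^3) instead of O(n^3).
rng = np.random.default_rng(0)

def random_spd(m):
    """Random symmetric positive definite matrix of size m x m."""
    X = rng.standard_normal((m, m))
    return X @ X.T + m * np.eye(m)

p, q = 10, 12                      # factor sizes (hypothetical); n = p*q = 120
A, B = random_spd(p), random_spd(q)
K = np.kron(A, B)                  # 120 x 120 kernel-like matrix

# Invert via the two small factors instead of the full matrix.
K_inv_fast = np.kron(np.linalg.inv(A), np.linalg.inv(B))
K_inv_direct = np.linalg.inv(K)

print(np.allclose(K_inv_fast, K_inv_direct))  # → True
```

The same structure carries over to eigendecomposition (eigenvalues of A ⊗ B are products of the factors' eigenvalues), which is what the KPCA and GP applications exploit; the recursive variant mentioned in the abstract would factor A and B again in the same way.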