Results 1 - 4 of 4
New Support Vector Algorithms
, 2000
Abstract

Cited by 322 (45 self)
this article with the regression case. To explain this, we will introduce a suitable definition of a margin that is maximized in both cases
Shrinking the Tube: A New Support Vector Regression Algorithm
, 1999
Abstract

Cited by 49 (6 self)
A new algorithm for Support Vector regression is described. For a priori chosen ν, it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction ν of the data points lie outside. Moreover, it is shown how to use parametric tube shapes with non-constant radius. The algorithm is analysed theoretically and experimentally.
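The ν-property claimed in this abstract — at most a fraction ν of the data ends up outside the fitted tube — is easy to check numerically once predictions and a tube radius are given. A minimal sketch (function name and data are illustrative, not the paper's code):

```python
import numpy as np

def fraction_outside_tube(y, f, eps):
    """Fraction of points whose absolute residual exceeds the tube radius eps."""
    return float(np.mean(np.abs(y - f) > eps))

# Toy targets and predictions; eps would normally come from the trained model.
y = np.array([0.0, 0.1, -0.2, 1.2, 0.5])
f = np.zeros_like(y)
nu = 0.5
print(fraction_outside_tube(y, f, eps=0.3) <= nu)  # True: 2 of 5 points lie outside
```

In the actual algorithm the radius is not fixed in advance: ν is chosen, and the optimization trades tube width against the errors of the points left outside.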
Support Vector Regression with Automatic Accuracy Control
 Proceedings of ICANN'98, Perspectives in Neural Computing, pages 111-116
, 1998
Abstract

Cited by 34 (4 self)
A new algorithm for Support Vector regression is proposed. For a priori chosen ν, it automatically adjusts a flexible tube of minimal radius to the data such that at most a fraction ν of the data points lie outside. The algorithm is analysed theoretically and experimentally.

1 Introduction

Support Vector (SV) machines comprise a new class of learning algorithms, motivated by results of statistical learning theory [4]. Originally developed for pattern recognition, they represent the decision boundary in terms of a typically small subset [2] of all training examples, called the Support Vectors. In order for this property to carry over to the case of SV Regression, Vapnik devised the so-called ε-insensitive loss function [4], |y − f(x)|_ε = max{0, |y − f(x)| − ε}, which does not penalize errors below some ε > 0, chosen a priori. His algorithm, which we will henceforth call ε-SVR, seeks to estimate functions f(x) = (w · x) + b, w, x ∈ R^N, b ∈ R, (1) based on data (x...
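The ε-insensitive loss in the snippet above is simple to state in code; a minimal sketch (the function name is illustrative):

```python
def eps_insensitive_loss(y, fx, eps):
    """|y - f(x)|_eps = max(0, |y - f(x)| - eps):
    residuals inside the eps-tube incur no penalty at all."""
    return max(0.0, abs(y - fx) - eps)

print(eps_insensitive_loss(1.0, 0.25, eps=0.25))  # 0.5: residual 0.75 exceeds the tube by 0.5
print(eps_insensitive_loss(1.0, 0.9, eps=0.25))   # 0.0: residual 0.1 lies inside the tube
```

The flat region of width 2ε is what makes the resulting expansion sparse: points fitted strictly inside the tube contribute nothing to the loss and do not become Support Vectors.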
Linear Programs for Automatic Accuracy Control in Regression
 IN NINTH INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS, CONFERENCE PUBLICATIONS NO. 470
, 1999
Abstract

Cited by 31 (4 self)
We have recently proposed a new approach to control the number of basis functions and the accuracy in Support Vector Machines. The latter is transferred to a linear programming setting, which inherently enforces sparseness of the solution. The algorithm computes a nonlinear estimate in terms of kernel functions and an ε ≥ 0 with the property that at most a fraction ν of the training set has an error exceeding ε. The algorithm is robust to local perturbations of these points' target values. We give an explicit formulation of the optimization equations needed to solve the linear program and point out which modifications of the standard optimization setting are necessary to take advantage of the particular structure of the equations in the regression case.
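The abstract above casts ε-insensitive regression with an adaptive ε as a linear program. A toy 1-D sketch of that idea using `scipy.optimize.linprog` (the objective weights, the L1 penalty on w, and all names here are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Model f(x) = w*x + b.  Minimize  |w| + C * (nu * eps + mean slack)
# subject to  |y_i - f(x_i)| <= eps + slack_i,  with eps and slacks >= 0,
# so eps itself is a variable of the linear program.
x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = 2.0 * x                        # noiseless toy data
N = len(x)
C, nu = 10.0, 0.3

# Variables: [w+, w-, b+, b-, eps, xi_1..N, xi*_1..N], all >= 0.
nvar = 5 + 2 * N
c = np.zeros(nvar)
c[0] = c[1] = 1.0                  # L1 penalty on w = w+ - w-
c[4] = C * nu                      # cost of widening the tube
c[5:] = C / N                      # cost of the slack variables

A = np.zeros((2 * N, nvar))
b_ub = np.zeros(2 * N)
for i in range(N):
    # y_i - (w x_i + b) <= eps + xi_i
    A[i, 0], A[i, 1] = -x[i], x[i]
    A[i, 2], A[i, 3] = -1.0, 1.0
    A[i, 4] = -1.0
    A[i, 5 + i] = -1.0
    b_ub[i] = -y[i]
    # (w x_i + b) - y_i <= eps + xi*_i
    j = N + i
    A[j, 0], A[j, 1] = x[i], -x[i]
    A[j, 2], A[j, 3] = 1.0, -1.0
    A[j, 4] = -1.0
    A[j, 5 + N + i] = -1.0
    b_ub[j] = y[i]

res = linprog(c, A_ub=A, b_ub=b_ub, bounds=[(0, None)] * nvar)
w = res.x[0] - res.x[1]
b = res.x[2] - res.x[3]
print(round(w, 4), round(b, 4))   # recovers roughly w = 2, b = 0
```

Splitting w and b into positive and negative parts keeps every variable nonnegative, the standard trick for handling free variables and L1 objectives in an LP; the papers above use kernel expansions rather than a single linear coefficient, but the structure of the constraints is the same.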