## Support vector machines: Training and applications (1997)

Venue: A.I. MEMO 1602, MIT A. I. LAB

Citations: 177 (3 self)

### BibTeX

```bibtex
@ARTICLE{Osuna97supportvector,
  author  = {Edgar E. Osuna and Robert Freund and Federico Girosi},
  title   = {Support vector machines: Training and applications},
  journal = {A.I. MEMO 1602, MIT A. I. LAB},
  year    = {1997}
}
```

### Abstract

The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Laboratories [3, 6, 8, 24]. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perceptron classifiers. The main idea behind the technique is to separate the classes with a surface that maximizes the margin between them. An interesting property of this approach is that it is an approximate implementation of the Structural Risk Minimization (SRM) induction principle [23]. The derivation of Support Vector Machines, their relationship with SRM, and their geometrical insight are discussed in this paper. Since Structural Risk Minimization is an inductive principle that aims at minimizing a bound on the generalization error of a model, rather than minimizing the Mean Square Error over the data set (as Empirical Risk Minimization methods do), training an SVM to obtain the maximum margin classifier requires a different objective function. This objective function is then optimized by solving a large-scale quadratic programming problem with linear and box constraints. The problem is considered challenging because the quadratic form is completely dense, so the memory requirements grow with the square of the number of training examples.
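The dense quadratic form mentioned above is what makes naive SVM training memory-hungry. As a rough illustration only (not the decomposition method this memo develops), the sketch below trains a tiny linear soft-margin SVM by exact coordinate ascent on the dual. The bias is folded into an augmented constant feature, which removes the equality constraint and leaves just the box constraints 0 ≤ α_i ≤ C; the dataset, `C`, and iteration count are arbitrary toy choices:

```python
import numpy as np

def train_svm_dual(X, y, C=10.0, n_iters=200):
    """Toy dual coordinate-ascent trainer for a linear soft-margin SVM.

    Appending a constant feature folds the bias into the weight vector,
    so the dual has only box constraints 0 <= alpha_i <= C and each
    coordinate can be maximized exactly in closed form.
    """
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # augment with bias feature
    n = Xa.shape[0]
    K = Xa @ Xa.T              # dense n-by-n Gram matrix: the memory bottleneck
    alpha = np.zeros(n)
    for _ in range(n_iters):
        for i in range(n):
            # exact maximization of the dual over alpha_i, others held fixed:
            # dW/dalpha_i = 1 - y_i * f_i, second derivative -K[i, i]
            f_i = (alpha * y) @ K[:, i]
            alpha[i] = np.clip(alpha[i] + (1.0 - y[i] * f_i) / K[i, i], 0.0, C)
    w = (alpha * y) @ Xa       # recover primal weights from the dual variables
    return w[:-1], w[-1], alpha

# toy linearly separable data (hypothetical, for illustration only)
X = np.array([[2.0, 2.0], [2.0, 3.0], [0.0, 0.0], [-1.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b, alpha = train_svm_dual(X, y)
print(np.sign(X @ w + b))      # predicted labels for the toy points
```

Note that `K` is fully dense: for n training points it needs O(n²) storage, which is exactly the scaling problem the abstract points to.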