Results 1–10 of 293
A tutorial on support vector machines for pattern recognition
Data Mining and Knowledge Discovery, 1998
Cited by 2497 (11 self)
Abstract:
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and nonseparable data, working through a nontrivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector Machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
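The kernel mapping technique this abstract highlights can be illustrated with a minimal sketch: a homogeneous polynomial kernel computed in input space equals an inner product under an explicit feature map, which is why an SVM can be nonlinear in the data while training remains linear in the kernel values. The function and variable names below are ours, not the paper's.

```python
import math

# K(x, z) = (x . z)^2 is a homogeneous polynomial kernel; it equals an
# inner product in a higher-dimensional feature space reached by phi.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def poly_kernel(x, z, degree=2):
    """Homogeneous polynomial kernel (x . z)^degree."""
    return dot(x, z) ** degree

def phi(x):
    """Explicit feature map for the degree-2 kernel on 2-D inputs:
    (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

x, z = (1.0, 2.0), (3.0, 0.5)
# Kernel in input space agrees with the inner product in feature space:
assert abs(poly_kernel(x, z) - dot(phi(x), phi(z))) < 1e-9
```

The same identity is what lets the training algorithm replace every inner product with a kernel evaluation and never construct phi explicitly.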
Statistical pattern recognition: A review
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
Cited by 752 (23 self)
Abstract:
The primary goal of pattern recognition is supervised or unsupervised classification. Among the various frameworks in which pattern recognition has been traditionally formulated, the statistical approach has been most intensively studied and used in practice. More recently, neural network techniques and methods imported from statistical learning theory have been receiving increasing attention. The design of a recognition system requires careful attention to the following issues: definition of pattern classes, sensing environment, pattern representation, feature extraction and selection, cluster analysis, classifier design and learning, selection of training and test samples, and performance evaluation. In spite of almost 50 years of research and development in this field, the general problem of recognizing complex patterns with arbitrary orientation, location, and scale remains unsolved. New and emerging applications, such as data mining, web searching, retrieval of multimedia data, face recognition, and cursive handwriting recognition, require robust and efficient pattern recognition techniques. The objective of this review paper is to summarize and compare some of the well-known methods used in various stages of a pattern recognition system and identify research topics and applications which are at the forefront of this exciting and challenging field.
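Several of the design stages the review enumerates (pattern representation, classifier design and learning, training/test sample selection, performance evaluation) can be made concrete with a toy pipeline. The nearest-class-mean classifier and the data below are ours, purely illustrative.

```python
# Toy pattern recognition pipeline: learn per-class mean vectors from
# labeled samples, predict by nearest mean, evaluate accuracy.

def class_means(samples, labels):
    """Classifier learning: compute the mean vector of each class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(means, x):
    """Classify x by the nearest class mean (squared Euclidean)."""
    def dist2(m):
        return sum((a - b) ** 2 for a, b in zip(m, x))
    return min(means, key=lambda y: dist2(means[y]))

train_x = [(0.0, 0.1), (0.2, 0.0), (1.0, 0.9), (0.9, 1.1)]
train_y = ["a", "a", "b", "b"]
means = class_means(train_x, train_y)

# Performance evaluation (here on the training set, for brevity):
accuracy = sum(predict(means, x) == y
               for x, y in zip(train_x, train_y)) / len(train_x)
```

In a real system one would evaluate on held-out samples, as the review stresses; the sketch only shows how the stages connect.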
A tutorial on support vector regression
2004
Cited by 540 (2 self)
Abstract:
In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from an SV perspective.
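The SV machinery for function estimation summarized here rests on Vapnik's epsilon-insensitive loss: deviations inside an epsilon-tube around the target cost nothing, and larger deviations grow linearly. A one-function sketch (our naming, illustrative values):

```python
# Epsilon-insensitive loss: zero inside the tube, linear outside.

def eps_insensitive(y_true, y_pred, eps=0.1):
    return max(0.0, abs(y_true - y_pred) - eps)

eps_insensitive(1.0, 1.05)   # inside the tube -> 0.0
eps_insensitive(1.0, 1.5)    # outside the tube -> 0.4 (approximately)
```

Minimizing this loss plus a norm penalty is what produces the sparse set of support vectors in SV regression.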
Making Large-Scale Support Vector Machine Learning Practical
1998
Cited by 511 (1 self)
Abstract:
Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large learning tasks with many training examples, off-the-shelf optimization techniques for general quadratic programs quickly become intractable in their memory and time requirements. SVMlight is an implementation of an SVM learner which addresses the problem of large tasks. This chapter presents algorithmic and computational results developed for SVMlight V2.0, which make large-scale SVM training more practical. The results give guidelines for the application of SVMs to large domains.
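The "quadratic optimization problem with bound constraints and one linear equality constraint" mentioned here is the standard SVM dual: maximize W(a) = sum_i a_i - (1/2) sum_ij a_i a_j y_i y_j K(x_i, x_j) subject to 0 <= a_i <= C and sum_i a_i y_i = 0. A sketch that evaluates the objective and checks feasibility (our naming, illustrative data, not SVMlight's API):

```python
# Evaluate the SVM dual objective and its constraints for a candidate
# multiplier vector alpha, labels y in {-1, +1}, and kernel matrix K.

def dual_objective(alpha, y, K):
    n = len(alpha)
    quad = sum(alpha[i] * alpha[j] * y[i] * y[j] * K[i][j]
               for i in range(n) for j in range(n))
    return sum(alpha) - 0.5 * quad

def feasible(alpha, y, C, tol=1e-9):
    """Box constraints 0 <= a_i <= C and equality sum_i a_i y_i = 0."""
    return (all(-tol <= a <= C + tol for a in alpha)
            and abs(sum(a * yi for a, yi in zip(alpha, y))) < tol)

alpha = [0.5, 0.5]
y = [1, -1]
K = [[1.0, 0.2], [0.2, 1.0]]
assert feasible(alpha, y, C=1.0)
```

Off-the-shelf QP solvers handle this directly for small n; the chapter's point is that the dense n-by-n matrix K makes them intractable for large n.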
Sequential minimal optimization: A fast algorithm for training support vector machines
Advances in Kernel Methods: Support Vector Learning, 1999
Cited by 328 (3 self)
Abstract:
This paper proposes a new algorithm for training support vector machines: Sequential Minimal Optimization, or SMO. Training a support vector machine requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while the standard chunking SVM algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.
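The "smallest possible QP problems" SMO solves involve just two multipliers at a time, because the equality constraint ties any change in one multiplier to a compensating change in another. The second multiplier gets a closed-form Newton-style update and is then clipped to the feasible segment [L, H] implied by the box and equality constraints. A sketch of that core step, following the standard presentation (the function name and argument layout are ours):

```python
# One analytic SMO update for the second multiplier.
# E1, E2 are the prediction errors at the two chosen points;
# eta = K11 + K22 - 2*K12 is the (positive) curvature along the
# constraint line; [L, H] is the feasible segment for alpha2.

def smo_step_alpha2(alpha2, y2, E1, E2, eta, L, H):
    a2 = alpha2 + y2 * (E1 - E2) / eta   # unconstrained optimum
    return max(L, min(H, a2))            # clip to the feasible segment

# The update lands on the boundary when the unconstrained optimum
# falls outside [L, H]:
smo_step_alpha2(0.5, 1, 2.0, 0.0, 1.0, 0.0, 1.0)   # -> 1.0 (clipped)
```

The first multiplier is then recovered from the equality constraint, so no numerical QP solver ever runs in the inner loop.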
Learning Overcomplete Representations
2000
Cited by 286 (11 self)
Abstract:
In an overcomplete basis, the number of basis vectors is greater than the dimensionality of the input, and the representation of an input is not a unique combination of basis vectors. Overcomplete representations have been advocated because they have greater robustness in the presence of noise, can be sparser, and can have greater flexibility in matching structure in the data. Overcomplete codes have also been proposed as a model of some of the response properties of neurons in primary visual cortex. Previous work has focused on finding the best representation of a signal using a fixed overcomplete basis (or dictionary). We present an algorithm for learning an overcomplete basis by viewing it as a probabilistic model of the observed data. We show that overcomplete bases can yield a better approximation of the underlying statistical distribution of the data and can thus lead to greater coding efficiency. This can be viewed as a generalization of the technique of independent component analysis and provides a method for Bayesian reconstruction of signals in the presence of noise and for blind source separation when there are more sources than mixtures.
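The non-uniqueness the abstract mentions is easy to see concretely: with more basis vectors than input dimensions, distinct coefficient vectors can reconstruct the same signal exactly, and sparse-coding methods choose among them. A toy 2-D example (the basis and numbers are ours, purely illustrative):

```python
# Three basis vectors in a 2-D input space: an overcomplete basis.
basis = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def reconstruct(coeffs):
    """Linear combination of the basis vectors with the given coeffs."""
    return tuple(sum(c * b[i] for c, b in zip(coeffs, basis))
                 for i in range(2))

# Two different codes for the same signal (1, 1): one uses two basis
# vectors, the other uses a single one and is sparser.
assert reconstruct((1.0, 1.0, 0.0)) == reconstruct((0.0, 0.0, 1.0))
```

A sparseness prior, as in the paper's probabilistic model, would prefer the second code.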
SVMTorch: Support Vector Machines for Large-Scale Regression Problems
Journal of Machine Learning Research, 2001
Cited by 270 (10 self)
Abstract:
Support Vector Machines (SVMs) for regression problems are trained by solving a quadratic optimization problem which needs on the order of l^2 memory and time resources to solve, where l is the number of training examples. In this paper, we propose a decomposition algorithm, SVMTorch, which is similar to SVMLight proposed by Joachims (1999) for classification problems, but adapted to regression problems. With this algorithm, one can now efficiently solve large-scale regression problems (more than 20000 examples). Comparisons with Nodelib, another publicly available SVM algorithm for large-scale regression problems from Flake and Lawrence (2000), yielded significant time improvements. Finally, based on a recent paper from Lin (2000), we show that a convergence proof exists for our algorithm.
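The l^2 memory figure quoted here is the cost of storing the full kernel matrix. A quick back-of-the-envelope check for the 20000-example regime the paper targets shows why decomposition into small working-set subproblems is necessary (our arithmetic; float64 storage assumed):

```python
# Memory for a dense l-by-l kernel matrix at 8 bytes per float64 entry.
l = 20_000
bytes_full_kernel = l * l * 8          # 3.2e9 bytes
gigabytes = bytes_full_kernel / 1e9    # roughly 3.2 GB
```

A decomposition algorithm instead keeps only the kernel rows for the current working set (plus a cache), so memory grows linearly rather than quadratically in l.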
Support vector machines for spam categorization
IEEE Transactions on Neural Networks, 1999
Cited by 260 (3 self)
Abstract:
We study the use of support vector machines (SVMs) in classifying email as spam or non-spam by comparing them to three other classification algorithms: Ripper, Rocchio, and boosting decision trees. These four algorithms were tested on two different data sets: one data set where the number of features was constrained to the 1000 best features and another data set where the dimensionality was over 7000. SVMs performed best when using binary features. For both data sets, boosting trees and SVMs had acceptable test performance in terms of accuracy and speed. However, SVMs had significantly less training time.
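The binary features on which the paper reports the best SVM performance record only whether a word occurs in a message, not how often. A sketch of that representation (the vocabulary and message below are ours, purely illustrative):

```python
# Map a message to a binary bag-of-words vector over a fixed vocabulary.

def binary_features(message, vocabulary):
    words = set(message.lower().split())
    return [1 if w in words else 0 for w in vocabulary]

vocab = ["free", "money", "meeting", "report"]
binary_features("FREE money money money", vocab)   # -> [1, 1, 0, 0]
```

Note that repeating "money" does not change the vector; that insensitivity to term frequency is exactly what distinguishes binary features from count-based ones.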
Support Vector Machines for Classification and Regression
University of Southampton, Technical Report, 1998
Cited by 235 (5 self)
Abstract:
The problem of empirical data modelling is germane to many engineering applications. In empirical data modelling a process of induction is used to build up a model of the system, from which it is hoped to deduce responses of the system that have yet to be observed. Ultimately the quantity and quality of the observations govern the performance of this empirical model. By its observational nature the data obtained is finite and sampled; typically this sampling is non-uniform and, due to the high-dimensional nature of the problem, the data form only a sparse distribution in the input space. Consequently the problem is nearly always ill-posed (Poggio et al., 1985) in the sense of Hadamard (Hadamard, 1923). Traditional neural network approaches have suffered difficulties with generalisation, producing models that can overfit the data. This is a consequence of the optimisation algorithms used for parameter selection and the statistical measures used to select the 'best' model. The foundations of Support Vector Machines (SVMs) were developed by Vapnik (1995), and they are gaining popularity due to many attractive features and promising empirical performance. The formulation embodies the Structural Risk Minimisation (SRM) principle, which has been shown to be superior (Gunn et al., 1997) to the traditional Empirical Risk Minimisation (ERM) principle employed by conventional neural networks. SRM minimises an upper bound on the expected risk, whereas ERM minimises the error on the training data. It is this difference which equips SVMs with a greater ability to generalise, which is the goal in statistical learning. SVMs were developed to solve the classification problem, but recently they have been extended to the domain of regression problems (Vapnik et al., 1997). In the literature the terminology for SVMs can be slightly confusing: the term SVM is typically used to describe classification with support vector methods, and support vector regression is used to describe regression with support vector methods. In this report the term SVM will refer to both classification and regression methods, and the terms Support Vector Classification (SVC) and Support Vector Regression (SVR) will be used for specification. This section continues with a brief introduction to the structural risk ...
Less is more: Active learning with support vector machines
2000
Cited by 223 (1 self)
Abstract:
We describe a simple active learning heuristic which greatly enhances the generalization behavior of support vector machines (SVMs) on several practical document classification tasks. We observe a number of benefits, the most surprising of which is that an SVM trained on a well-chosen subset of the available corpus frequently performs better than one trained on all available data. The heuristic for choosing this subset is simple to compute, and makes no use of information about the test set. Given that the training time of SVMs depends heavily on the training set size, our heuristic not only offers better performance with fewer data, it frequently does so in less time than the naive approach of training on all available data.
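The abstract does not spell the heuristic out, but margin-based active learning for SVMs typically queries the unlabeled pool points closest to the current decision boundary, i.e. those with the smallest |f(x)|. A sketch under that assumption, over precomputed decision values (our naming, illustrative numbers):

```python
# Select the k pool points nearest the SVM hyperplane for labeling.

def select_queries(decision_values, k):
    """Return indices of the k points with smallest |f(x)|."""
    ranked = sorted(range(len(decision_values)),
                    key=lambda i: abs(decision_values[i]))
    return ranked[:k]

# Points with decision values near zero are most informative:
select_queries([2.5, -0.1, 0.7, -1.8, 0.05], k=2)   # -> [4, 1]
```

After labeling the selected points, the SVM is retrained and the cycle repeats, so each round spends labeling effort where the classifier is least certain.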