Results 1-10 of 25
LIBSVM: a Library for Support Vector Machines
2001
Abstract

Cited by 3408 (62 self)
LIBSVM is a library for support vector machines (SVMs). Its goal is to help users easily use SVM as a tool. In this document, we present all of its implementation details.
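As a minimal sketch of the kind of usage LIBSVM targets: scikit-learn's `SVC` is backed by LIBSVM, so a toy classification run looks like the following (the dataset and parameter values here are arbitrary illustrations, not from the paper).

```python
# Illustrative only: scikit-learn's SVC wraps LIBSVM; toy data and settings.
from sklearn.svm import SVC

# Tiny linearly separable two-class problem.
X = [[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]]
y = [0, 0, 1, 1]

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # RBF kernel, common defaults
clf.fit(X, y)
print(clf.predict([[0.1, 0.0], [1.0, 0.9]]))   # one label per query point
```

The same model could equivalently be trained through LIBSVM's own command-line tools or bindings; this is just the most compact route.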
Support Vector Machines: Hype or Hallelujah?
SIGKDD Explorations, 2003
Abstract

Cited by 80 (0 self)
Support Vector Machines (SVMs) and related kernel methods have become increasingly popular tools for data mining tasks such as classification, regression, and novelty detection. The goal of this tutorial is to provide an intuitive explanation of SVMs from a geometric perspective. The classification problem is used to investigate the basic concepts behind SVMs and to examine their strengths and weaknesses from a data mining perspective. While this overview is not comprehensive, it does provide resources for those interested in further exploring SVMs.
A robust minimax approach to classification
Journal of Machine Learning Research, 2002
Abstract

Cited by 61 (7 self)
When constructing a classifier, the probability of correct classification of future data points should be maximized. We consider a binary classification problem where the mean and covariance matrix of each class are assumed to be known. No further assumptions are made with respect to the class-conditional distributions. Misclassification probabilities are then controlled in a worst-case setting: that is, under all possible choices of class-conditional densities with given mean and covariance matrix, we minimize the worst-case (maximum) probability of misclassification of future data points. For a linear decision boundary, this desideratum is translated in a very direct way into a (convex) second-order cone optimization problem, with complexity similar to a support vector machine problem. The minimax problem can be interpreted geometrically as minimizing the maximum of the Mahalanobis distances to the two classes. We address the issue of robustness with respect to estimation errors in the means and covariances of the ...
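A hedged numerical sketch of the worst-case quantity described above: for a fixed linear direction w, the bound uses κ(w) = wᵀ(μ₂ − μ₁) / (√(wᵀΣ₁w) + √(wᵀΣ₂w)), and the worst-case misclassification probability over all distributions with those moments is 1/(1+κ²). The means and covariances below are made up for illustration; the paper optimizes over w, which we do not do here.

```python
# Worst-case misclassification bound for a fixed linear direction w,
# given only class means and covariances (toy values, not from the paper).
import numpy as np

mu1 = np.array([0.0, 0.0])
mu2 = np.array([3.0, 0.0])
S1 = np.eye(2)
S2 = np.eye(2)

def worst_case_error(w, mu1, S1, mu2, S2):
    # kappa measures moment-based separation along w.
    kappa = w @ (mu2 - mu1) / (np.sqrt(w @ S1 @ w) + np.sqrt(w @ S2 @ w))
    return 1.0 / (1.0 + kappa ** 2)

w = mu2 - mu1  # one candidate direction; the minimax problem searches over w
print(worst_case_error(w, mu1, S1, mu2, S2))
```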
Duality and Geometry in SVM Classifiers
In Proc. 17th International Conf. on Machine Learning, 2000
Abstract

Cited by 59 (4 self)
We develop an intuitive geometric interpretation of the standard support vector machine (SVM) for classification of both linearly separable and inseparable data, and provide a rigorous derivation of the concepts behind the geometry. For the separable case, finding the maximum margin between the two sets is equivalent to finding the closest points in the smallest convex sets that contain each class (the convex hulls). We then extend this argument to the inseparable case by using a reduced convex hull, reduced away from outliers. We prove that solving the reduced convex hull formulation is exactly equivalent to solving the standard inseparable SVM for appropriate choices of parameters. Additional advantages of the new formulation are that the effect of the choice of parameters becomes geometrically clear and that the formulation may be solved by fast nearest-point algorithms. By changing norms, these arguments hold for both the standard 2-norm and 1-norm SVM. ...
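The separable-case claim can be illustrated by brute force on tiny 2-D data: the margin direction is given by the closest pair of points in the two convex hulls. Here each hull is a line segment, so we simply grid over convex combinations (a coarse toy check of the geometry, not the fast nearest-point algorithms the paper refers to).

```python
# Brute-force closest points between two convex hulls (each a segment here).
import numpy as np

A = np.array([[0.0, 0.0], [0.0, 1.0]])   # class 1 points
B = np.array([[2.0, 0.0], [2.0, 1.0]])   # class 2 points

ts = np.linspace(0.0, 1.0, 101)
best = min(
    (np.linalg.norm((t * A[0] + (1 - t) * A[1]) - (s * B[0] + (1 - s) * B[1])), t, s)
    for t in ts for s in ts
)
print(best[0])  # distance between the hulls; equals the margin width, 2.0
```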
Training ν-Support Vector Classifiers: Theory and Algorithms
Abstract

Cited by 33 (10 self)
The ν-support vector machine (ν-SVM) for classification proposed by Schölkopf et al. has the advantage of using a parameter ν to control the number of support vectors. In this paper, we investigate the relation between ν-SVM and C-SVM in detail. We show that in general they are two different problems with the same optimal solution set. Hence we may expect that many numerical aspects of solving them are similar. However, compared to regular C-SVM, the ν-SVM formulation is more complicated, so until now there have been no effective methods for solving large-scale ν-SVM. We propose a decomposition method for ν-SVM that is competitive with existing methods for C-SVM. We also discuss the behavior of ν-SVM through some numerical experiments.
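A small empirical sketch of the role of ν, using scikit-learn's `NuSVC` (a ν-SVM implementation): by Schölkopf et al.'s result, ν is a lower bound on the fraction of support vectors. The toy data below are arbitrary.

```python
# Check that the support-vector fraction is at least nu (toy Gaussian blobs).
import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

fracs = {}
for nu in (0.1, 0.5):
    clf = NuSVC(nu=nu, kernel="rbf", gamma="scale").fit(X, y)
    fracs[nu] = clf.n_support_.sum() / len(X)  # fraction of support vectors
print(fracs)  # each fraction should be >= its nu
```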
A study on SMO-type decomposition methods for support vector machines
IEEE Transactions on Neural Networks, 2006
Abstract

Cited by 20 (4 self)
Decomposition methods are currently one of the major methods for training support vector machines. They vary mainly according to different working set selections. Existing implementations and analyses usually consider some specific selection rules. In this article, we study Sequential Minimal Optimization (SMO)-type decomposition methods under a general and flexible way of choosing the two-element working set. Main results include: 1) a simple asymptotic convergence proof, 2) a general explanation of the shrinking and caching techniques, and 3) the linear convergence of the method. Extensions to some SVM variants are also discussed.
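One widely used two-element selection rule is the "maximal violating pair": pick the most violating index from the set where an α can move up along y, and the most violating one from the set where it can move down. The sketch below is our own illustration of that rule for the standard C-SVM dual, not the paper's general framework; variable names are ours.

```python
# Maximal-violating-pair working-set selection for the C-SVM dual (sketch).
def select_working_set(grad, y, alpha, C):
    # I_up: indices whose alpha can still increase along y;
    # I_low: indices whose alpha can still decrease along y.
    up = [(-y[t] * grad[t], t) for t in range(len(y))
          if (y[t] == 1 and alpha[t] < C) or (y[t] == -1 and alpha[t] > 0)]
    low = [(-y[t] * grad[t], t) for t in range(len(y))
           if (y[t] == 1 and alpha[t] > 0) or (y[t] == -1 and alpha[t] < C)]
    return max(up)[1], min(low)[1]   # the pair (i, j) violating KKT the most

# At alpha = 0 the C-SVM dual gradient is -1 everywhere.
y = [1, -1, 1, -1]
print(select_working_set([-1.0] * 4, y, [0.0] * 4, 1.0))
```

A full SMO iteration would then solve the two-variable subproblem on (i, j) analytically and update the gradient.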
Information, Divergence and Risk for Binary Experiments
Journal of Machine Learning Research, 2009
Abstract

Cited by 17 (6 self)
We unify f-divergences, Bregman divergences, surrogate regret bounds, proper scoring rules, cost curves, ROC curves and statistical information. We do this by systematically studying integral and variational representations of these various objects, and in so doing identify their primitives, all of which are related to cost-sensitive binary classification. As well as developing relationships between generative and discriminative views of learning, the new machinery leads to tight and more general surrogate regret bounds and generalised Pinsker inequalities relating f-divergences to variational divergence. The new viewpoint also illuminates existing algorithms: it provides a new derivation of Support Vector Machines in terms of divergences and relates Maximum Mean Discrepancy to Fisher Linear Discriminants.
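For concreteness, the classical Pinsker inequality that the paper generalises states KL(p‖q) ≥ V²/2 in nats, where V = Σᵢ|pᵢ − qᵢ| is the variational divergence. A quick numerical check on arbitrary discrete distributions:

```python
# Classical Pinsker inequality check: KL >= V^2 / 2 (nats), V = sum |p - q|.
import numpy as np

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])

kl = float(np.sum(p * np.log(p / q)))   # KL divergence in nats
V = float(np.sum(np.abs(p - q)))        # variational divergence
print(kl, V ** 2 / 2)                   # kl should dominate V**2 / 2
```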
A tutorial on ν-Support Vector Machines
Applied Stochastic Models in Business and Industry, 2005
Abstract

Cited by 13 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines (SVMs), and kernel feature spaces. We place particular emphasis on a description of the so-called ν-SVM, including details of the algorithm and its implementation, theoretical results, and practical applications.
The Huller: a simple and efficient online SVM
In Machine Learning: ECML 2005, Lecture Notes in Artificial Intelligence, LNAI 3720, 2005
Abstract

Cited by 13 (4 self)
We propose a novel online kernel classifier algorithm that converges to the Hard Margin SVM solution. The same update rule is used to both add and remove support vectors from the current classifier. Experiments suggest that this algorithm matches the SVM accuracies after a single pass over the training examples. This algorithm is attractive when one seeks a competitive classifier with large datasets and limited computing resources.
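The Huller's specific add/remove update is given in the paper; as a simpler relative, an online kernel perceptron illustrates the general shape of such algorithms, maintaining a set of stored examples and updating on streamed mistakes (our sketch, not the Huller itself).

```python
# Minimal online *kernel perceptron* -- not the Huller, just the family shape.
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

support, coeffs = [], []   # stored examples and their signed coefficients

def predict(x):
    # Empty model defaults to +1 via the `or 1.0` fallback.
    return np.sign(sum(c * rbf(s, x) for s, c in zip(support, coeffs)) or 1.0)

stream = [(np.array([0.0, 0.0]), -1), (np.array([2.0, 2.0]), 1),
          (np.array([0.1, 0.2]), -1), (np.array([1.9, 2.1]), 1)]
for x, label in stream:
    if predict(x) != label:     # mistake-driven: add a new support vector
        support.append(x)
        coeffs.append(float(label))
print(len(support))
```

Unlike this sketch, the Huller also removes support vectors with the same update rule, which keeps the model compact on large streams.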
Training ν-Support Vector Regression: Theory and Algorithms
Abstract

Cited by 8 (1 self)
We discuss the relation between ǫ-Support Vector Regression (ǫ-SVR) and ν-Support Vector Regression (ν-SVR). In particular, we focus on properties which differ from those of C-Support Vector Classification (C-SVC) and ν-Support Vector Classification (ν-SVC). We then discuss some issues which do not occur in the case of classification: the possible range of ǫ and the scaling of target values. A practical decomposition method for ν-SVR is implemented, and computational experiments are conducted. We show some interesting numerical observations specific to regression.
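A minimal usage sketch with scikit-learn's `NuSVR` (a ν-SVR implementation): ν replaces the fixed tube width ǫ, which is then determined by the optimization. Data and parameters below are an arbitrary noiseless toy line, so a close fit is expected but not guaranteed.

```python
# nu-SVR on a noiseless toy line y = 2x + 1 (illustrative settings only).
import numpy as np
from sklearn.svm import NuSVR

X = np.linspace(0, 1, 30).reshape(-1, 1)
y = 2.0 * X.ravel() + 1.0

reg = NuSVR(nu=0.5, C=10.0, kernel="linear").fit(X, y)
print(reg.predict([[0.5]]))   # should be near 2 * 0.5 + 1 = 2.0
```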