Results 1 - 10 of 1,612
A dual coordinate descent method for large-scale linear SVM. In ICML, 2008.
"... In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) are one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1- and L2-l ..."
Cited by 207 (20 self)
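The coordinate-wise update this abstract summarizes can be sketched for the L1-loss (hinge) case: maintain w as the weighted sum of examples so each dual update costs only one dot product. This is a minimal illustration, not the authors' reference implementation; the function name, toy data, and epoch count are assumptions.

```python
import numpy as np

def dual_cd_l1svm(X, y, C=1.0, epochs=50, seed=0):
    """Sketch of dual coordinate descent for the L1-loss linear SVM.
    Maintains w = sum_i alpha_i * y_i * x_i, so each coordinate update
    touches only one example."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = (X ** 2).sum(axis=1)           # diagonal of Q (y_i^2 = 1)
    for _ in range(epochs):
        for i in rng.permutation(n):
            if Qii[i] == 0.0:
                continue
            G = y[i] * (X[i] @ w) - 1.0  # partial gradient of the dual objective
            new = min(max(alpha[i] - G / Qii[i], 0.0), C)
            w += (new - alpha[i]) * y[i] * X[i]
            alpha[i] = new
    return w

# Toy linearly separable problem (invented for illustration)
X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = dual_cd_l1svm(X, y)
```

The projected one-variable Newton step (clip to [0, C]) is exact coordinate minimization of the dual, which is what makes the method converge quickly on sparse data.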
Feature selection for linear SVM with provable guarantees. Journal of Machine Learning Research, 2015.
"... We give two provably accurate feature-selection techniques for the linear SVM. The algorithms run in deterministic and randomized time, respectively. Our algorithms can be used in an unsupervised or supervised setting. The supervised approach is based on sampling features from support vector ..."
Cited by 2 (0 self)
Pegasos: Primal Estimated sub-gradient solver for SVM
"... We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ɛ²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total ..."
Cited by 542 (20 self)
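The per-iteration update the abstract describes — one random example, step size η_t = 1/(λt), shrink from the regularizer plus a hinge sub-gradient step when the margin is violated — can be sketched as follows. The function name, hyperparameters, and toy data are assumptions made for illustration; this is not the authors' code.

```python
import numpy as np

def pegasos(X, y, lam=0.1, T=2000, seed=0):
    """Sketch of Pegasos: stochastic sub-gradient descent on
    lam/2 * ||w||^2 + mean hinge loss, with step size eta_t = 1/(lam * t)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)
        margin = y[i] * (X[i] @ w)
        eta = 1.0 / (lam * t)
        w = (1.0 - eta * lam) * w        # shrinkage from the L2 regularizer
        if margin < 1.0:                 # hinge sub-gradient is active
            w += eta * y[i] * X[i]
    return w

# Toy linearly separable problem (invented for illustration)
X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = pegasos(X, y)
```

Because each iteration touches a single example, the cost per step is independent of the number of training points, which is the source of the Õ(1/ɛ) runtime claimed above.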
Making Large-Scale SVM Learning Practical, 1998.
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Cited by 1861 (17 self)
Componentwise Triple Jump Acceleration for Training Linear SVM
"... The triple jump extrapolation method is an effective approximation of Aitken’s acceleration for accelerating the convergence of many machine learning algorithms that can be formulated as fixed-point iteration. In the remainder of this abstract, we briefly review the general idea of the triple jump method and then describe how to apply it to accelerate stochastic gradient descent (SGD) for training linear support vector machines (SVM). Let w ∈ R^N be an N-dimensional weight vector of a model. A machine learning problem can be considered as fixed-point iteration ..."
Add to MetaCart
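The underlying Aitken Δ² step applied componentwise to three successive fixed-point iterates can be sketched as below. This is a generic illustration of the acceleration idea under the fixed-point formulation the abstract mentions, not the paper's triple jump method itself; the toy map and function name are assumptions.

```python
import numpy as np

def aitken_step(w0, w1, w2, eps=1e-12):
    """Componentwise Aitken delta-squared extrapolation of three iterates
    w0, w1, w2 of a fixed-point iteration w <- f(w)."""
    d1 = w1 - w0
    d2 = w2 - w1
    denom = d2 - d1
    safe = np.abs(denom) > eps           # avoid division by a vanishing denominator
    out = w2.copy()
    out[safe] = w2[safe] - d2[safe] ** 2 / denom[safe]
    return out

# Toy linear fixed-point map f(w) = 0.5*w + 1 with fixed point w* = 2
f = lambda w: 0.5 * w + 1.0
w0 = np.zeros(3)
w1 = f(w0)
w2 = f(w1)
w_acc = aitken_step(w0, w1, w2)          # exact for a linear map
```

For a linear (or locally linear) map the extrapolated point lands on the fixed point directly, which is why such schemes can sharply cut the iteration count of slowly converging trainers.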
A fast method for training linear SVM in the primal. In Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I, 2008.
"... We propose a new algorithm for training a linear Support Vector Machine in the primal. The algorithm mixes ideas from nonsmooth optimization, subgradient methods, and cutting-plane methods. This yields a fast algorithm that compares well to state-of-the-art algorithms. It is proved to re ..."
Cited by 1 (1 self)
Coordinate Descent Method for Large-scale L2-loss Linear SVM
"... Linear support vector machines (SVM) are useful for classifying large-scale sparse data. Problems with sparse features are common in applications such as document classification and natural language processing. In this paper, we propose a novel coordinate descent algorithm for training linear SVM wit ..."
Cited by 46 (12 self)
Training and Testing Low-degree Polynomial Data Mappings via Linear SVM
Journal of Machine Learning Research.
"... Kernel techniques have long been used in SVM to handle linearly inseparable problems by transforming data to a high-dimensional space, but training and testing large data sets is often time consuming. In contrast, we can efficiently train and test much larger data sets using linear SVM without kerne ..."
Cited by 30 (8 self)
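The idea of replacing a polynomial kernel with an explicit low-degree mapping can be sketched as below: expand each example into its degree-2 monomials, after which any linear classifier on the mapped features behaves like a degree-2 polynomial model. The helper name and sample data are invented for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement

def degree2_map(X):
    """Explicit degree-2 feature map: the original features x_i plus all
    products x_i * x_j with i <= j. A linear model on these features
    corresponds to a degree-2 polynomial decision function."""
    n, d = X.shape
    pairs = list(combinations_with_replacement(range(d), 2))
    quad = np.stack([X[:, i] * X[:, j] for i, j in pairs], axis=1)
    return np.hstack([X, quad])

# Two 2-dimensional examples map to 2 + 3 = 5 features each
X = np.array([[1.0, 2.0], [0.5, -1.0]])
Phi = degree2_map(X)
```

The mapped dimensionality is d + d(d+1)/2, so for sparse data with modest d this stays tractable, and a fast linear solver can then be applied directly to Phi.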
Training Linear SVMs in Linear Time, 2006.
"... Linear Support Vector Machines (SVMs) have become one of the most prominent machine learning techniques for high-dimensional sparse data commonly encountered in applications like text classification, word-sense disambiguation, and drug design. These applications involve a large number of examples n ..."
Cited by 549 (6 self)