Results 1 - 10 of 1,612

A dual coordinate descent method for large-scale linear SVM

by Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S. Sathiya Keerthi, S. Sundararajan - In ICML, 2008
"... Abstract In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1-and L2-l ..."
Abstract - Cited by 207 (20 self) - Add to MetaCart
Abstract In many applications, data appear with a huge number of instances as well as features. Linear Support Vector Machines (SVM) is one of the most popular tools to deal with such large-scale sparse data. This paper presents a novel dual coordinate descent method for linear SVM with L1-and L2
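The coordinate update behind this method is compact enough to sketch. Below is a minimal dense-NumPy illustration of the L1-loss case, assuming labels in {-1, +1}; the function name and the dense matrix are our own choices, and the paper itself works on sparse data and adds shrinking heuristics.

```python
import numpy as np

def dual_cd_l1_svm(X, y, C=1.0, epochs=10, seed=0):
    """Minimal sketch of dual coordinate descent for L1-loss linear SVM:
    min_a 0.5*||sum_i a_i y_i x_i||^2 - sum_i a_i  s.t.  0 <= a_i <= C.
    Maintaining w = sum_i a_i y_i x_i keeps each update cheap."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    q = np.einsum("ij,ij->i", X, X)          # diagonal Q_ii = x_i . x_i
    for _ in range(epochs):
        for i in rng.permutation(n):         # random coordinate order
            if q[i] == 0.0:
                continue
            grad = y[i] * w.dot(X[i]) - 1.0  # dual gradient w.r.t. alpha_i
            new_a = min(max(alpha[i] - grad / q[i], 0.0), C)  # clip to [0, C]
            w += (new_a - alpha[i]) * y[i] * X[i]             # update w in place
            alpha[i] = new_a
    return w
```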

Feature selection for linear SVM with provable guarantees

by Saurabh Paul, Malik Magdon-Ismail, Petros Drineas - Journal of Machine Learning Research, 2015
"... Abstract We give two provably accurate featureselection techniques for the linear SVM. The algorithms run in deterministic and randomized time respectively. Our algorithms can be used in an unsupervised or supervised setting. The supervised approach is based on sampling features from support vector ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
Abstract We give two provably accurate featureselection techniques for the linear SVM. The algorithms run in deterministic and randomized time respectively. Our algorithms can be used in an unsupervised or supervised setting. The supervised approach is based on sampling features from support

Pegasos: Primal Estimated sub-gradient solver for SVM

by Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, Andrew Cotter
"... We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a singl ..."
Abstract - Cited by 542 (20 self) - Add to MetaCart
single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ɛ2) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total
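The single-example iteration the abstract refers to is easy to write down. Here is a hedged sketch of the Pegasos update with the optional projection onto the ball of radius 1/sqrt(λ); names and the dense representation are ours.

```python
import numpy as np

def pegasos(X, y, lam=0.01, T=100_000, seed=0):
    """Sketch of Pegasos: at step t, take a stochastic sub-gradient step on
    lam/2*||w||^2 + hinge loss of one random example, with step size
    1/(lam*t), then optionally project w onto the ball of radius 1/sqrt(lam)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)                  # one random training example
        eta = 1.0 / (lam * t)                # decaying step size
        w *= 1.0 - eta * lam                 # shrink from the regularizer
        if y[i] * w.dot(X[i]) < 1.0:         # hinge loss active on example i
            w += eta * y[i] * X[i]           # loss part of the sub-gradient
        radius = 1.0 / np.sqrt(lam)
        norm = np.linalg.norm(w)
        if norm > radius:                    # optional projection step
            w *= radius / norm
    return w
```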

Making Large-Scale SVM Learning Practical

by Thorsten Joachims, 1998
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Abstract - Cited by 1861 (17 self) - Add to MetaCart
Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large
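For reference, the quadratic program the snippet describes is the standard SVM dual; written out (a textbook formulation, not quoted from the paper), the bound constraints are the box 0 ≤ α_i ≤ C and the single linear equality is the last condition:

```latex
\max_{\alpha}\; \sum_{i=1}^{n} \alpha_i
  - \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n}
    \alpha_i \alpha_j \, y_i y_j \, K(x_i, x_j)
\quad \text{s.t.} \quad
0 \le \alpha_i \le C \;\; (i = 1, \dots, n), \qquad
\sum_{i=1}^{n} y_i \alpha_i = 0 .
```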

Componentwise Triple Jump Acceleration for Training Linear SVM

by Han-Shen Huang, Yu-Ming Chang, Chun-Nan Hsu
"... The triple jump extrapolation method is an effective approximation of Aitken’s acceleration for accelerating the convergence of many machine learning algorithms that can be formulated as fixedpoint iteration. In the remainder of this abstract, we briefly review the general idea of the triple jump me ..."
Abstract - Add to MetaCart
method and then describe how to apply it to accelerate stochastic gradient descent (SGD) for training linear support vector machines (SVM). 1 Triple Jump Extrapolation Let w ∈ R N be a N-dimensional weight vector of a model. A machine learning problem can be considered as fixed-point iteration
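Reading the snippet as componentwise Aitken extrapolation of a fixed-point map w ← M(w), one step might look like the sketch below; this is our reconstruction from the abstract, not the paper's code, and the rate clipping and small-denominator guard are our own safeguards.

```python
import numpy as np

def triple_jump_step(M, w0, eps=1e-12):
    """One componentwise triple-jump (Aitken-style) step for w <- M(w):
    two ordinary mapping steps estimate the per-component contraction
    rate, then extrapolate each component toward its fixed point."""
    w1 = M(w0)                                  # first mapping step
    w2 = M(w1)                                  # second mapping step
    d1, d2 = w1 - w0, w2 - w1
    safe_d1 = np.where(np.abs(d1) > eps, d1, np.inf)  # converged comps get rate 0
    rate = np.clip(d2 / safe_d1, -0.99, 0.99)   # estimated contraction rate
    return w2 + rate / (1.0 - rate) * d2        # componentwise Aitken jump
```

Components that contract slowly (rate near 1) receive the largest jumps, which is what makes the extrapolation an accelerator.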

A fast method for training linear SVM in the primal

by Trinh-Minh-Tri Do, Thierry Artières - In Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I, 2008
"... Abstract. We propose a new algorithm for training a linear Support Vector Machine in the primal. The algorithm mixes ideas from non smooth optimization, subgradient methods, and cutting planes methods. This yields a fast algorithm that compares well to state of the art algorithms. It is proved to re ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Abstract. We propose a new algorithm for training a linear Support Vector Machine in the primal. The algorithm mixes ideas from non smooth optimization, subgradient methods, and cutting planes methods. This yields a fast algorithm that compares well to state of the art algorithms. It is proved

Coordinate Descent Method for Large-scale L2-loss Linear SVM

by Kai-Wei Chang, Cho-Jui Hsieh, Chih-Jen Lin
"... Linear support vector machines (SVM) are useful for classifying largescale sparse data. Problems with sparse features are common in applications such as document classification and natural language processing. In this paper, we propose a novel coordinate descent algorithm for training linear SVM wit ..."
Abstract - Cited by 46 (12 self) - Add to MetaCart
Linear support vector machines (SVM) are useful for classifying largescale sparse data. Problems with sparse features are common in applications such as document classification and natural language processing. In this paper, we propose a novel coordinate descent algorithm for training linear SVM
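Since the L2 loss is differentiable, one plausible one-coordinate update is a Newton step on the primal objective; the sketch below is only an illustration under that reading (dense data, no line search), whereas the paper works with sparse features and safeguards the step.

```python
import numpy as np

def primal_cd_l2_svm(X, y, C=1.0, epochs=10):
    """Sketch of coordinate descent on the L2-loss primal
    f(w) = 0.5*||w||^2 + C * sum_i max(0, 1 - y_i * w.x_i)^2,
    taking one Newton step per coordinate and keeping margins updated."""
    n, d = X.shape
    w = np.zeros(d)
    margin = np.zeros(n)                     # margin_i = y_i * w . x_i
    for _ in range(epochs):
        for j in range(d):
            act = margin < 1.0               # examples with nonzero loss
            xj = X[:, j]
            g = w[j] + 2.0 * C * np.sum((margin[act] - 1.0) * y[act] * xj[act])
            h = 1.0 + 2.0 * C * np.sum(xj[act] ** 2)  # generalized 2nd derivative
            z = -g / h                       # Newton step along coordinate j
            w[j] += z
            margin += z * y * xj             # keep margins in sync with w
    return w
```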

Linear SVM training using separability and interior point methods

by Kristian Woodsend, Jacek Gondzio, 2008
"... methods ..."
Abstract - Add to MetaCart
Abstract not found

Training and Testing Low-degree Polynomial Data Mappings via Linear SVM

by Yin-Wen Chang, Cho-Jui Hsieh, Kai-Wei Chang, Michael Ringgaard, Chih-Jen Lin - Journal of Machine Learning Research
"... Kernel techniques have long been used in SVM to handle linearly inseparable problems by transforming data to a high dimensional space, but training and testing large data sets is often time consuming. In contrast, we can efficiently train and test much larger data sets using linear SVM without kerne ..."
Abstract - Cited by 30 (8 self) - Add to MetaCart
Kernel techniques have long been used in SVM to handle linearly inseparable problems by transforming data to a high dimensional space, but training and testing large data sets is often time consuming. In contrast, we can efficiently train and test much larger data sets using linear SVM without
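The idea is to apply the low-degree mapping explicitly and then call any linear-SVM trainer on the expanded features. A hedged degree-2 sketch (scaling constants, such as the sqrt(2) cross-term weights that match the exact polynomial kernel, are omitted):

```python
import numpy as np
from itertools import combinations_with_replacement

def degree2_features(X):
    """Explicitly map each row x to (x_j, x_j * x_k for j <= k), so a
    linear SVM trained on the result approximates a degree-2 polynomial
    kernel machine while keeping linear-time prediction."""
    n, d = X.shape
    pairs = list(combinations_with_replacement(range(d), 2))
    Z = np.empty((n, d + len(pairs)))
    Z[:, :d] = X                             # degree-1 features
    for col, (j, k) in enumerate(pairs):
        Z[:, d + col] = X[:, j] * X[:, k]    # degree-2 products
    return Z
```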

Training Linear SVMs in Linear Time

by Thorsten Joachims, 2006
"... Linear Support Vector Machines (SVMs) have become one of the most prominent machine learning techniques for high-dimensional sparse data commonly encountered in applications like text classification, word-sense disambiguation, and drug design. These applications involve a large number of examples n ..."
Abstract - Cited by 549 (6 self) - Add to MetaCart
Linear Support Vector Machines (SVMs) have become one of the most prominent machine learning techniques for high-dimensional sparse data commonly encountered in applications like text classification, word-sense disambiguation, and drug design. These applications involve a large number of examples n