CiteSeerX

Results 1 - 10 of 120,767

Making Large-Scale SVM Learning Practical

by Thorsten Joachims , 1998
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Abstract - Cited by 1861 (17 self)
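
The "quadratic optimization problem" here is the standard SVM dual: maximize sum_i alpha_i - (1/2) sum_ij alpha_i alpha_j y_i y_j <x_i, x_j> subject to the bound constraints 0 <= alpha_i <= C and the single linear equality sum_i y_i alpha_i = 0. A minimal sketch of that QP in Python, solved with scipy's general-purpose SLSQP routine on toy data (the dataset, C value, and tolerances are illustrative; this is the naive full QP, not the paper's decomposition method, which exists precisely because the code below does not scale):

    import numpy as np
    from scipy.optimize import minimize

    # Toy 2-D data: two linearly separable classes (illustrative only).
    X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -1.0], [-1.5, -2.5]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    C = 1.0                               # upper bound of the box constraints
    K = (X @ X.T) * np.outer(y, y)        # Gram matrix scaled by the labels

    def neg_dual(alpha):
        # Negated dual objective: scipy minimizes, the SVM dual maximizes.
        return 0.5 * alpha @ K @ alpha - alpha.sum()

    res = minimize(
        neg_dual,
        x0=np.zeros(len(y)),
        bounds=[(0.0, C)] * len(y),                          # bound constraints
        constraints={"type": "eq", "fun": lambda a: a @ y},  # one linear equality
        method="SLSQP",
    )
    alpha = res.x
    w = (alpha * y) @ X               # primal weights recovered from the dual
    sv = alpha > 1e-6                 # support vectors: nonzero multipliers
    b = np.mean(y[sv] - X[sv] @ w)    # bias computed from the support vectors
    print("alpha:", alpha.round(3), "w:", w.round(3), "b:", round(b, 3))

SVM-Light's contribution is to solve this same problem through a sequence of small working-set sub-problems, so the full n-by-n Gram matrix never has to be formed.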

Pegasos: Primal Estimated sub-gradient solver for SVM

by Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, Andrew Cotter
"... We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ɛ is Õ(1/ɛ), where each iteration operates on a singl ..."
Abstract - Cited by 542 (20 self)
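
A minimal sketch of the update the abstract describes: at step t, draw a single example, use step size 1/(lambda*t), and take a subgradient step on the regularized hinge loss (the optional projection step of the full algorithm is omitted, and the data and lambda below are illustrative):

    import numpy as np

    def pegasos(X, y, lam=0.1, T=10000, seed=0):
        """Stochastic sub-gradient descent on the regularized hinge loss."""
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for t in range(1, T + 1):
            i = rng.integers(len(y))      # each iteration touches one example
            eta = 1.0 / (lam * t)         # the 1/(lambda t) step size
            if y[i] * (w @ X[i]) < 1:     # margin violated: hinge subgradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                         # margin satisfied: regularizer only
                w = (1 - eta * lam) * w
        return w

    # Toy usage on linearly separable data (illustrative).
    X = np.array([[2.0, 2.0], [2.5, 1.5], [-2.0, -1.0], [-1.5, -2.5]])
    y = np.array([1.0, 1.0, -1.0, -1.0])
    print(pegasos(X, y))

Because each iteration costs a single inner product, the Õ(1/ɛ) iteration bound translates into a run time that does not grow with the number of training examples.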

Training Linear SVMs in Linear Time

by Thorsten Joachims , 2006
"... Linear Support Vector Machines (SVMs) have become one of the most prominent machine learning techniques for high-dimensional sparse data commonly encountered in applications like text classification, word-sense disambiguation, and drug design. These applications involve a large number of examples n ..."
Abstract - Cited by 549 (6 self)
... is based on an alternative, but equivalent, formulation of the SVM optimization problem. Empirically, the Cutting-Plane Algorithm is several orders of magnitude faster than decomposition methods like SVM-Light for large datasets.
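
The equivalent formulation in question is the "one-slack" structural form: minimize (1/2)||w||^2 + C*xi subject to, for every subset c in {0,1}^n, (1/n) * w . sum_i c_i y_i x_i >= (1/n) * sum_i c_i - xi. There are exponentially many constraints, but a cutting-plane loop only ever instantiates the most violated one per round. A rough sketch, with scipy's SLSQP standing in for the paper's specialized inner QP solver (function names and tolerances are illustrative):

    import numpy as np
    from scipy.optimize import minimize

    def cutting_plane_svm(X, y, C=1.0, eps=1e-3, max_rounds=50):
        n, d = X.shape
        cuts = []                            # working set of (g, delta) cuts
        w, xi = np.zeros(d), 0.0
        for _ in range(max_rounds):
            # Most violated constraint: include every example with margin < 1.
            c = (y * (X @ w) < 1).astype(float)
            g = (c[:, None] * y[:, None] * X).sum(axis=0) / n
            delta = c.sum() / n
            if delta - w @ g <= xi + eps:    # no cut violated by more than eps
                return w
            cuts.append((g, delta))
            # Re-solve the small QP over the current working set of cuts.
            def obj(z):                      # z = (w, xi)
                return 0.5 * z[:d] @ z[:d] + C * z[d]
            cons = [{"type": "ineq",
                     "fun": (lambda z, g=g, dl=dl: z[:d] @ g - dl + z[d])}
                    for g, dl in cuts]
            cons.append({"type": "ineq", "fun": lambda z: z[d]})  # xi >= 0
            res = minimize(obj, np.zeros(d + 1), constraints=cons, method="SLSQP")
            w, xi = res.x[:d], res.x[d]
        return w

Each round adds a single aggregate cut, so the working set stays small no matter how many examples there are; that is where the reported speedup over per-example decomposition comes from.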

Global Optimization with Polynomials and the Problem of Moments

by Jean B. Lasserre - SIAM JOURNAL ON OPTIMIZATION , 2001
"... We consider the problem of finding the unconstrained global minimum of a real-valued polynomial p(x) : R R, as well as the global minimum of p(x), in a compact set K defined by polynomial inequalities. It is shown that this problem reduces to solving an (often finite) sequence of convex linear ma ..."
Abstract - Cited by 577 (48 self)
... matrix inequality (LMI) problems. A notion of Karush-Kuhn-Tucker polynomials is introduced in a global optimality condition. Some illustrative examples are provided.
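
A small concrete instance of the reduction: the largest gamma with p(x) - gamma a sum of squares is an LMI feasibility problem, because p(x) - gamma = z^T Q z with z = (1, x, x^2) and Q positive semidefinite. A sketch in Python with cvxpy (the polynomial p(x) = x^4 - 3x^2 + 2 is illustrative, and an SDP-capable solver such as the bundled SCS is assumed):

    import cvxpy as cp

    # Coefficients of p(x) = 2 + 0*x - 3*x^2 + 0*x^3 + 1*x^4, indexed by degree.
    c = {0: 2.0, 1: 0.0, 2: -3.0, 3: 0.0, 4: 1.0}

    Q = cp.Variable((3, 3), symmetric=True)
    gamma = cp.Variable()
    constraints = [
        Q >> 0,                          # Q positive semidefinite (the LMI)
        Q[0, 0] == c[0] - gamma,         # match the constant term
        2 * Q[0, 1] == c[1],             # match x
        2 * Q[0, 2] + Q[1, 1] == c[2],   # match x^2
        2 * Q[1, 2] == c[3],             # match x^3
        Q[2, 2] == c[4],                 # match x^4
    ]
    cp.Problem(cp.Maximize(gamma), constraints).solve()
    print("SOS lower bound:", gamma.value)   # about -0.25

For this univariate example the bound is exact (the true minimum of p is -0.25, at x^2 = 3/2); in general the paper's sequence of such LMI problems converges to the global minimum.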

Making Large-Scale Support Vector Machine Learning Practical

by Thorsten Joachims , 1998
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Abstract - Cited by 628 (1 self)

Optimal approximation by piecewise smooth functions and associated variational problems

by David Mumford - Commun. Pure Applied Mathematics , 1989
"... (Article begins on next page) The Harvard community has made this article openly available. Please share how this access benefits you. Your story matters. Citation Mumford, David Bryant, and Jayant Shah. 1989. Optimal approximations by piecewise smooth functions and associated variational problems. ..."
Abstract - Cited by 1294 (14 self)
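
The variational problem of the title is minimization of what is now called the Mumford-Shah functional; since the snippet above preserves only the citation, the functional is restated here from general knowledge as a pointer:

    E(f, \Gamma) = \int_{\Omega} (f - g)^2 \, dx
                 + \mu \int_{\Omega \setminus \Gamma} \lVert \nabla f \rVert^2 \, dx
                 + \nu \, |\Gamma|,

where g is the observed image on the domain Omega, f the piecewise smooth approximation, Gamma the edge set along which f may jump, and |Gamma| its total length.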

No Free Lunch Theorems for Optimization

by David H. Wolpert, et al. , 1997
"... A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch ” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performan ..."
Abstract - Cited by 961 (10 self)
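
The theorem is easy to verify exhaustively on a tiny search space: averaged over all objective functions f: X → Y, any two non-repeating search orders perform identically. A toy check (the domain, codomain, and performance measure are chosen here purely for illustration):

    from itertools import product

    X = [0, 1, 2]                        # tiny search space
    codomain = [0, 1]

    def evals_to_find_max(order, f):
        """Number of evaluations until a global maximum of f is first seen."""
        best = max(f.values())
        for steps, x in enumerate(order, start=1):
            if f[x] == best:
                return steps
        return len(order)

    # Enumerate all |Y|^|X| = 8 objective functions on this space.
    functions = [dict(zip(X, vals)) for vals in product(codomain, repeat=len(X))]
    for order in ([0, 1, 2], [2, 0, 1], [1, 2, 0]):
        mean = sum(evals_to_find_max(order, f) for f in functions) / len(functions)
        print(order, mean)               # the same mean for every search order

Every order prints the same average (1.5 here): whatever one order gains on some functions, it gives back on others, which is exactly the offsetting the abstract describes.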

Training Support Vector Machines: an Application to Face Detection

by Edgar Osuna, Robert Freund, Federico Girosi , 1997
"... We investigate the application of Support Vector Machines (SVMs) in computer vision. SVM is a learning technique developed by V. Vapnik and his team (AT&T Bell Labs.) that can be seen as a new method for training polynomial, neural network, or Radial Basis Functions classifiers. The decision sur ..."
Abstract - Cited by 727 (1 self)
... global optimality, and can be used to train SVMs over very large data sets. The main idea behind the decomposition is the iterative solution of sub-problems and the evaluation of optimality conditions, which are used both to generate improved iterative values and to establish the stopping ...
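
A sketch of the optimality-condition check that drives such a decomposition. With dual variables alpha and current outputs f(x_i), the KKT conditions partition the examples by whether alpha_i sits at 0, at C, or strictly between; violators are candidates for the next sub-problem (the function name and tolerance are illustrative):

    import numpy as np

    def kkt_violations(alpha, y, f_x, C, tol=1e-3):
        """Indices of examples violating the SVM dual KKT conditions.

        Optimality requires, with margin m_i = y_i * f(x_i):
          alpha_i == 0       =>  m_i >= 1
          0 < alpha_i < C    =>  m_i == 1
          alpha_i == C       =>  m_i <= 1
        """
        m = y * f_x
        at_zero  = (alpha <= tol) & (m < 1 - tol)
        interior = (alpha > tol) & (alpha < C - tol) & (np.abs(m - 1) > tol)
        at_bound = (alpha >= C - tol) & (m > 1 + tol)
        return np.flatnonzero(at_zero | interior | at_bound)

The decomposition loop then solves the small QP over a working set drawn from these indices, holding the rest of alpha fixed, and stops once no violators remain.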

Object Detection with Discriminatively Trained Part Based Models

by Pedro F. Felzenszwalb, Ross B. Girshick, David McAllester, Deva Ramanan
"... We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their ..."
Abstract - Cited by 1422 (49 self)
... is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semi-convex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples ...
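
A skeletal version of that alternation, where the latent choice is simply which of a few candidate feature vectors represents each positive example (a stand-in for the part placements of the actual detector; all names and the Pegasos-style inner solver are illustrative):

    import numpy as np

    def train_latent_svm(pos_candidates, X_neg, rounds=5, lam=0.1, T=5000, seed=0):
        """Alternate: (1) fix latent values for positives, (2) solve a convex SVM.

        pos_candidates: one array per positive example, each row a candidate
        feature vector; the latent variable is which row gets used.
        """
        rng = np.random.default_rng(seed)
        w = np.zeros(X_neg.shape[1])
        for _ in range(rounds):
            # Step 1: for each positive, pick the highest-scoring latent value.
            X_pos = np.array([c[np.argmax(c @ w)] for c in pos_candidates])
            X = np.vstack([X_pos, X_neg])
            y = np.concatenate([np.ones(len(X_pos)), -np.ones(len(X_neg))])
            # Step 2: latent values fixed, so this is a standard convex SVM;
            # solve it with a stochastic subgradient loop.
            for t in range(1, T + 1):
                i = rng.integers(len(y))
                eta = 1.0 / (lam * t)
                step = eta * y[i] * X[i] if y[i] * (w @ X[i]) < 1 else 0.0
                w = (1 - eta * lam) * w + step
        return w

Fixing the latent values for the positives is what restores convexity in step 2, which is exactly the semi-convexity property the snippet mentions.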

Some optimal inapproximability results

by Johan Håstad , 2002
"... We prove optimal, up to an arbitrary ffl? 0, inapproximability results for Max-Ek-Sat for k * 3, maximizing the number of satisfied linear equations in an over-determined system of linear equations modulo a prime p and Set Splitting. As a consequence of these results we get improved lower bounds for ..."
Abstract - Cited by 751 (11 self)
... for the efficient approximability of many optimization problems studied previously, in particular Max-E2-Sat, Max-Cut, Max-di-Cut, and Vertex Cover. Warning: essentially this paper has been published in JACM and is subject to copyright restrictions; in particular, it is for personal use only.
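
For context, the two headline bounds (stated from general knowledge, not quoted from the snippet): for every ε > 0 it is NP-hard to approximate Max-E3-Lin-2 within 1/2 + ε and Max-E3-Sat within 7/8 + ε, which match the ratios a uniformly random assignment already achieves; this is the sense in which the inapproximability results are optimal.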