Results 1 - 10 of 251

An iterative algorithm learning the maximal margin classifier

by Vojtech Franc, Vaclav Hlavac, 2003
"... A simple learning algorithm for maximal margin classifiers (also support vector machines with quadratic cost function) is proposed. We build our iterative algorithm on top of the Schlesinger–Kozinec algorithm (S–K-algorithm) from 1981, which finds a maximal margin hyperplane with a given precision ..."
Cited by 7 (0 self)
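For context, the problem such iterative schemes approximate is the standard hard-margin one; the following is a textbook formulation (not quoted from the paper) for separable data $(x_i, y_i)$ with $y_i \in \{-1,+1\}$:

    \min_{w,\,b}\ \tfrac{1}{2}\,\|w\|^2
    \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ \ge\ 1, \qquad i = 1,\dots,m .

Equivalently, for separable data the same hyperplane is obtained from the nearest points of the two class convex hulls,

    \min_{u,\,v}\ \|u - v\|
    \quad \text{s.t.} \quad u \in \operatorname{conv}\{x_i : y_i = +1\}, \quad
                            v \in \operatorname{conv}\{x_i : y_i = -1\},

with $w$ parallel to $u^* - v^*$ and margin $\|u^* - v^*\|/2$; algorithms in the Kozinec family iterate on this geometric form.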

An Application of a Random Sampling Technique to Primal-Form Maximal-Margin Classifiers

by Jose Balcazar, Yang Dai, Osamu Watanabe
"... Random sampling techniques have been developed in for geometric/combinatorial optimization problems; see, e.g., [Cla88, Cla95, AS93, GW99]. In this note, we apply one of these techniques for obtaining (hopefully) efficient support vector machine training algorithm. In particular, we propose one way ..."
Abstract - Add to MetaCart
Random sampling techniques have been developed in for geometric/combinatorial optimization problems; see, e.g., [Cla88, Cla95, AS93, GW99]. In this note, we apply one of these techniques for obtaining (hopefully) efficient support vector machine training algorithm. In particular, we propose one way to find "outliers" by using the sampling technique.

Large Margin Classification Using the Perceptron Algorithm

by Yoav Freund, Robert E. Schapire - Machine Learning, 37(3):277-296, 1999
"... We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large margin ..."
Cited by 521 (2 self)
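The leave-one-out combination described in the abstract leads to the "voted perceptron". A minimal sketch of that idea, assuming numpy arrays, labels in {-1, +1}, and illustrative function names that are not taken from the paper:

    # Sketch of the voted-perceptron idea: keep every intermediate weight vector
    # together with how long it survived, and predict by weighted majority vote.
    # A bias term can be handled by appending a constant 1 feature to each x.
    import numpy as np

    def train_voted_perceptron(X, y, epochs=10):
        """Return a list of (weight_vector, survival_count) pairs."""
        w = np.zeros(X.shape[1])
        c = 0                      # how long the current w has survived
        voters = []                # retired (w, c) pairs
        for _ in range(epochs):
            for x_i, y_i in zip(X, y):
                if y_i * np.dot(w, x_i) <= 0:    # mistake: retire w, start a new one
                    voters.append((w.copy(), c))
                    w = w + y_i * x_i
                    c = 1
                else:
                    c += 1
        voters.append((w.copy(), c))
        return voters

    def predict_voted(voters, x):
        # Weighted majority vote of all intermediate perceptrons.
        score = sum(c * np.sign(np.dot(w, x)) for w, c in voters)
        return 1 if score >= 0 else -1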

SVM: Terminology

by Andrew Kusiak
"... The maximal margin classifier is similar to the perceptron: • It also assumes that the data points are linearly separable • It aims at finding the separating hyperplane with the maximal geometric margin (not just anyone- typical of a perceptron) x 2 ..."
Abstract - Add to MetaCart
The maximal margin classifier is similar to the perceptron: • It also assumes that the data points are linearly separable • It aims at finding the separating hyperplane with the maximal geometric margin (not just anyone- typical of a perceptron) x 2
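A minimal illustration of the distinction drawn above, assuming scikit-learn is available; the toy data and the large C value are illustrative, and a large C approximates the hard-margin (maximal geometric margin) classifier, whose half-margin is 1/||w|| under the canonical scaling y_i(w·x_i + b) ≥ 1:

    import numpy as np
    from sklearn.svm import SVC

    # Two linearly separable toy classes (illustrative data only).
    X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5],
                  [3.0, 3.0], [4.0, 4.0], [5.0, 3.5]])
    y = np.array([-1, -1, -1, 1, 1, 1])

    clf = SVC(kernel="linear", C=1e6).fit(X, y)   # near-hard-margin linear SVM
    w, b = clf.coef_[0], clf.intercept_[0]
    geometric_margin = 1.0 / np.linalg.norm(w)    # half the width of the margin band
    print(geometric_margin, clf.support_vectors_)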

A training algorithm for optimal margin classifiers

by Bernhard E. Boser, et al. - Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted ..."
Cited by 1865 (43 self)
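The breadth across Perceptrons, polynomials, and Radial Basis Functions comes from substituting a kernel into the dual problem. As a reminder, in standard notation (not the paper's own), the hard-margin dual is

    \max_{\alpha}\ \sum_{i=1}^{m} \alpha_i
      \;-\; \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m}
            \alpha_i \alpha_j\, y_i y_j\, K(x_i, x_j)
    \quad \text{s.t.} \quad \alpha_i \ge 0, \qquad \sum_{i=1}^{m} \alpha_i y_i = 0,

with decision function $f(x) = \operatorname{sign}\!\big(\sum_i \alpha_i y_i K(x_i, x) + b\big)$; choosing $K$ as a linear, polynomial, or Gaussian (RBF) kernel recovers the corresponding family of classifiers.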

Bayes Optimal Hyperplanes → Maximal Margin Hyperplanes

by Simon Tong, Daphne Koller - IJCAI'99 Workshop on Support Vector Machines (robotics.stanford.edu/~koller), 1999
"... Maximal margin classifiers are a core technology in modern machine learning. They have strong theoretical justifications and have shown empirical successes. We provide an alternative justification for maximal margin hyperplane classifiers by relating them to Bayes optimal classifiers that use ..."
Cited by 3 (0 self)

Max-margin Markov networks

by Ben Taskar, Carlos Guestrin, Daphne Koller, 2003
"... In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ..."
Cited by 604 (15 self)