Results 1 - 10 of 15,369

The Nature of Statistical Learning Theory

by Vladimir N. Vapnik, 1999
"... Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based on the deve ..."
Abstract - Cited by 13236 (32 self) - Add to MetaCart
Statistical learning theory was introduced in the late 1960’s. Until the 1990’s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990’s new types of learning algorithms (called support vector machines) based
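For reference, the soft-margin support vector machine that grew out of this theory is usually posed as the following optimization problem (the standard textbook formulation, not a quotation from the book); the linear classifiers in the results below all solve some variant of it:

\[
\min_{w,\,b,\,\xi}\;\; \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i
\quad \text{s.t.}\quad y_i\!\left(w^\top x_i + b\right) \ge 1 - \xi_i,\;\; \xi_i \ge 0,\;\; i = 1,\dots,n.
\]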

Rule extraction from linear support vector machines

by Glenn Fung, Sathyakama S, R. Bharat Rao - In KDD, 2005
"... We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formula ..."
Abstract - Cited by 26 (0 self) - Add to MetaCart
We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm
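The snippet cuts off before the construction, but the core subproblem of any such extraction is checking that an axis-parallel rule (l ≤ x ≤ u) lies entirely inside one halfspace of the classifier. A minimal sketch of that check, assuming NumPy; the names and the worst-corner argument are ours, not the paper's:

    import numpy as np

    def rule_inside_positive_halfspace(w, b, lower, upper):
        """Check whether every point of the box {x : lower <= x <= upper}
        satisfies w @ x + b >= 0, i.e. the rule never contradicts the
        linear classifier's positive prediction.

        The minimum of w @ x over the box is attained at a corner:
        take lower[i] where w[i] > 0 and upper[i] where w[i] <= 0.
        """
        worst_corner = np.where(w > 0, lower, upper)
        return w @ worst_corner + b >= 0

    # Example: classifier x1 + x2 - 1 >= 0 restricted to the unit square.
    w, b = np.array([1.0, 1.0]), -1.0
    print(rule_inside_positive_halfspace(w, b, np.array([0.6, 0.6]), np.array([1.0, 1.0])))  # True
    print(rule_inside_positive_halfspace(w, b, np.array([0.0, 0.0]), np.array([1.0, 1.0])))  # False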

Incremental and Decremental Learning for Linear Support Vector Machines

by Enrique Romero, Ignacio Barrio, Lluís Belanche
"... Abstract. We present a method to find the exact maximal margin hyperplane for linear Support Vector Machines when a new (existing) component is added (removed) to (from) the inner product. The maximal margin hyperplane with the new inner product is obtained in terms of that for the old inner product ..."
Abstract - Cited by 1 (0 self) - Add to MetaCart
Abstract. We present a method to find the exact maximal margin hyperplane for linear Support Vector Machines when a new (existing) component is added (removed) to (from) the inner product. The maximal margin hyperplane with the new inner product is obtained in terms of that for the old inner

A Bahadur Representation of the Linear Support Vector Machine

by Ja-yong Koo, Yoonkyung Lee, Yuwon Kim, Changyi Park
"... Editor: John Shawe-Taylor The support vector machine has been successful in a variety of applications. Also on the theoretical front, statistical properties of the support vector machine have been studied quite extensively with a particular attention to its Bayes risk consistency under some conditio ..."
Abstract - Cited by 5 (0 self) - Add to MetaCart
conditions. In this paper, we study somewhat basic statistical properties of the support vector machine yet to be investigated, namely the asymptotic behavior of the coefficients of the linear support vector machine. A Bahadur type representation of the coefficients is established under appropriate
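For orientation, a Bahadur-type representation says the coefficient estimator is asymptotically linear in an i.i.d. average; schematically (our generic rendering, not the paper's exact statement):

\[
\hat{\beta}_n - \beta^* \;=\; \frac{1}{n}\sum_{i=1}^{n} H^{-1}\,\psi\!\left(x_i, y_i; \beta^*\right) \;+\; o_P\!\left(n^{-1/2}\right),
\]

where \(H\) is a nonsingular Hessian-type matrix and \(\psi\) is a mean-zero score function associated with the hinge loss.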

Consensus-based distributed linear support vector machines

by Pedro A. Forero, Alfonso Cano, Georgios B. Giannakis - In ACM/IEEE International Conference on Information Processing in Sensor Networks, 2010
"... This paper develops algorithms to train linear support vector machines (SVMs) when training data are distributed across different nodes and their communication to a centralized node is prohibited due to, for example, communication overhead or privacy reasons. To accomplish this goal, the centralized ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
This paper develops algorithms to train linear support vector machines (SVMs) when training data are distributed across different nodes and their communication to a centralized node is prohibited due to, for example, communication overhead or privacy reasons. To accomplish this goal
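The abstract stops before the algorithm, so purely as a hedged illustration of the consensus idea (the paper's actual derivation may differ), each node can alternate neighbor averaging with a local subgradient step on its private data:

    import numpy as np

    def consensus_svm(node_data, A, lam=0.1, rounds=200, lr=0.05):
        """Toy decentralized linear SVM (our illustration, not the paper's
        algorithm). Each node keeps a private data split and alternates
        (i) averaging its weight vector with its neighbors' (consensus) and
        (ii) a local subgradient step on its regularized hinge loss.

        node_data: list of (X_k, y_k) with labels in {-1, +1}.
        A: row-stochastic mixing matrix; A[k, j] > 0 only if k and j communicate.
        """
        d = node_data[0][0].shape[1]
        W = np.zeros((len(node_data), d))            # one weight vector per node
        for _ in range(rounds):
            W = A @ W                                # consensus averaging
            for k, (X, y) in enumerate(node_data):
                margins = y * (X @ W[k])
                viol = margins < 1                   # hinge-loss violators
                sub = -(y[viol, None] * X[viol]).sum(axis=0) / len(y)
                W[k] -= lr * (lam * W[k] + sub)      # local subgradient step
        return W                                     # rows drift toward a common w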

Deep learning using linear support vector machines

by Yichuan Tang - In ICML, 2013
"... Recently, fully-connected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide vari-ety of tasks such as speech recognition, im-age classification, natural language process-ing, and bioinformatics. For classification tasks, most of these “deep learnin ..."
Abstract - Cited by 11 (1 self) - Add to MetaCart
learning ” models employ the softmax activation function for prediction and minimize cross-entropy loss. In this paper, we demonstrate a small but consistent advantage of replacing the soft-max layer with a linear support vector ma-chine. Learning minimizes a margin-based loss instead of the cross
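The replacement described here amounts to training the top layer with a one-vs-rest margin loss instead of softmax plus cross-entropy; a minimal sketch of the squared-hinge (L2-SVM) variant, in NumPy with names of our choosing:

    import numpy as np

    def l2_svm_loss(scores, labels, n_classes):
        """Margin-based loss used in place of softmax + cross-entropy.

        scores: (batch, n_classes) activations of the top linear layer.
        labels: (batch,) integer class ids. Targets are +1 for the true
        class and -1 otherwise; the penalty per class is max(0, 1 - t*s)^2.
        """
        t = -np.ones((len(labels), n_classes))
        t[np.arange(len(labels)), labels] = 1.0
        hinge = np.maximum(0.0, 1.0 - t * scores)
        return np.mean(np.sum(hinge ** 2, axis=1))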

Feature selection using linear support vector machines

by Janez Brank, Marko Grobelnik, Nataša Milić-Frayling, Dunja Mladenić, 2002
"... ..."
Abstract - Cited by 17 (0 self) - Add to MetaCart
Abstract not found
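No abstract is indexed, but the title suggests the familiar recipe of ranking features by the magnitude of a trained linear SVM's weights; a sketch of that recipe (our illustration, not necessarily the paper's exact procedure):

    import numpy as np

    def select_features_by_svm_weight(w, keep):
        """Rank features by |w_i| of a trained linear SVM's normal vector
        and return the indices of the top `keep` features."""
        order = np.argsort(-np.abs(w))
        return order[:keep]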

Multiclass Latent Locally Linear Support Vector Machines

by Marco Fornoni, Francesco Orabona
"... Kernelized Support Vector Machines (SVM) have gained the status of off-the-shelf clas-sifiers, able to deliver state of the art performance on almost any problem. Still, their practical use is constrained by their computational and memory complexity, which grows super-linearly with the number of tra ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
Kernelized Support Vector Machines (SVM) have gained the status of off-the-shelf clas-sifiers, able to deliver state of the art performance on almost any problem. Still, their practical use is constrained by their computational and memory complexity, which grows super-linearly with the number

Nomograms for Visualizing Linear Support Vector Machines

by Aleks Jakulin, Ivan Bratko
"... Support vector machines are often considered to be black box learning algorithms. We show that for linear kernels it is possible to open this box and visually depict the content of the SVM classifier in high-dimensional space in the interactive format of a nomogram. We provide a crosscalibration met ..."
Abstract - Add to MetaCart
Support vector machines are often considered to be black box learning algorithms. We show that for linear kernels it is possible to open this box and visually depict the content of the SVM classifier in high-dimensional space in the interactive format of a nomogram. We provide a crosscalibration
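A nomogram is possible here because a linear SVM's score decomposes additively into per-feature contributions w_i * x_i; a minimal sketch of that decomposition (our illustration, not the paper's method):

    import numpy as np

    def nomogram_points(w, x, feature_names):
        """Per-feature contributions to a linear SVM score w @ x + b.

        Each feature's 'points' on the nomogram correspond to w_i * x_i;
        summing them (plus the bias) recovers the decision value.
        """
        contrib = w * x
        for name, c in sorted(zip(feature_names, contrib), key=lambda p: -abs(p[1])):
            print(f"{name:>12}: {c:+.3f}")
        return contrib.sum()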

Stochastic Subgradient Approach for Solving Linear Support Vector Machines -- An Overview

by Jan Rupnik, 2008
"... This paper is an overview of a recent approach for solving linear support vector machines (SVMs), the PEGASOS algorithm. The algorithm is based on a technique called the stochastic subgradient descent and employs it for solving the optimization problem posed by the soft margin SVM- a very popular cl ..."
Abstract - Cited by 4 (0 self) - Add to MetaCart
This paper is an overview of a recent approach for solving linear support vector machines (SVMs), the PEGASOS algorithm. The algorithm is based on a technique called the stochastic subgradient descent and employs it for solving the optimization problem posed by the soft margin SVM- a very popular
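PEGASOS is compact enough to sketch from the abstract's description: a single-example subgradient step per iteration with step size 1/(λt), plus an optional projection onto a norm ball (variable names ours):

    import numpy as np

    def pegasos(X, y, lam=0.1, T=10_000, seed=0):
        """PEGASOS: primal estimated sub-gradient solver for the soft-margin
        linear SVM  min_w (lam/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i w.x_i).
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = np.zeros(d)
        for t in range(1, T + 1):
            i = rng.integers(n)
            eta = 1.0 / (lam * t)
            violated = y[i] * (X[i] @ w) < 1.0    # check margin with current w
            w *= 1.0 - eta * lam                  # shrink (regularizer step)
            if violated:
                w += eta * y[i] * X[i]            # hinge-loss subgradient step
            norm = np.linalg.norm(w)
            if norm > 1.0 / np.sqrt(lam):         # optional projection
                w *= 1.0 / (np.sqrt(lam) * norm)
        return w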