Results 1 - 10 of 15,369
The Nature of Statistical Learning Theory, 1999
Cited by 13236 (32 self)
"... Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990s new types of learning algorithms (called support vector machines) based on the deve ..."
Rule extraction from linear support vector machines - In KDD, 2005
Cited by 26 (0 self)
"... We describe an algorithm for converting linear support vector machines and any other hyperplane-based linear classifier into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formula ..."
Incremental and Decremental Learning for Linear Support Vector Machines
Cited by 1 (0 self)
"... We present a method to find the exact maximal-margin hyperplane for linear Support Vector Machines when a new (or existing) component is added to (or removed from) the inner product. The maximal-margin hyperplane with the new inner product is obtained in terms of that for the old inner product ..."
A Bahadur Representation of the Linear Support Vector Machine
Cited by 5 (0 self)
"... The support vector machine has been successful in a variety of applications. On the theoretical front, too, its statistical properties have been studied quite extensively, with particular attention to its Bayes risk consistency under some conditions. In this paper, we study basic statistical properties of the support vector machine yet to be investigated, namely the asymptotic behavior of the coefficients of the linear support vector machine. A Bahadur-type representation of the coefficients is established under appropriate ..."
Consensus-based distributed linear support vector machines - In ACM/IEEE International Conference on Information Processing in Sensor Networks, 2010
Cited by 2 (0 self)
"... This paper develops algorithms to train linear support vector machines (SVMs) when training data are distributed across different nodes and their communication to a centralized node is prohibited due to, for example, communication overhead or privacy reasons. To accomplish this goal, the centralized ..."
Deep learning using linear support vector machines - In ICML, 2013
Cited by 11 (1 self)
"... Recently, fully-connected and convolutional neural networks have been trained to achieve state-of-the-art performance on a wide variety of tasks such as speech recognition, image classification, natural language processing, and bioinformatics. For classification tasks, most of these “deep learning” models employ the softmax activation function for prediction and minimize cross-entropy loss. In this paper, we demonstrate a small but consistent advantage of replacing the softmax layer with a linear support vector machine. Learning minimizes a margin-based loss instead of the cross ..."
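The swap this abstract describes, replacing a softmax cross-entropy objective with a margin-based SVM loss computed on the same class scores, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; the squared hinge used here is one common choice for a multiclass SVM loss:

```python
import numpy as np

def softmax_xent(scores, y):
    """Standard softmax cross-entropy over class scores (rows = examples)."""
    z = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(y)), y].mean()

def l2_svm_loss(scores, y, margin=1.0):
    """Multiclass squared-hinge (L2-SVM) loss on the same class scores:
    penalize any wrong class whose score comes within `margin` of the
    true class's score."""
    idx = np.arange(len(y))
    correct = scores[idx, y][:, None]
    margins = np.maximum(0.0, scores - correct + margin)
    margins[idx, y] = 0.0            # no penalty for the true class itself
    return (margins ** 2).sum(axis=1).mean()
```

With well-separated scores the SVM loss is exactly zero, while cross-entropy keeps pushing probabilities toward one; this difference in gradient behavior is the mechanism behind the margin-based training the abstract refers to.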
Multiclass Latent Locally Linear Support Vector Machines
Cited by 3 (1 self)
"... Kernelized Support Vector Machines (SVMs) have gained the status of off-the-shelf classifiers, able to deliver state-of-the-art performance on almost any problem. Still, their practical use is constrained by their computational and memory complexity, which grows super-linearly with the number of tra ..."
Nomograms for Visualizing Linear Support Vector Machines
"... Support vector machines are often considered to be black-box learning algorithms. We show that for linear kernels it is possible to open this box and visually depict the content of the SVM classifier in high-dimensional space in the interactive format of a nomogram. We provide a cross-calibration met ..."
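The property a nomogram exploits is that a linear SVM's decision value is additive over features, f(x) = b + Σ_j w_j x_j, so each feature's contribution can be read off independently. A minimal sketch of that decomposition (a hypothetical helper for illustration, not the paper's cross-calibration method):

```python
import numpy as np

def feature_contributions(w, b, x):
    """Decompose a linear SVM decision value into per-feature additive
    contributions -- the quantities a nomogram plots as separate axes."""
    contribs = w * x                  # contribution of each feature to f(x)
    decision = contribs.sum() + b     # f(x) = b + sum_j w_j * x_j
    return contribs, decision
```

Each entry of `contribs` is what one axis of the nomogram would display for this example; summing them with the bias recovers the full decision value.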
Stochastic Subgradient Approach for Solving Linear Support Vector Machines -- An Overview, 2008
Cited by 4 (0 self)
"... This paper is an overview of a recent approach for solving linear support vector machines (SVMs), the PEGASOS algorithm. The algorithm is based on a technique called stochastic subgradient descent and employs it to solve the optimization problem posed by the soft-margin SVM, a very popular cl ..."
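The core PEGASOS update is simple enough to sketch: at step t, sample one training example, take a subgradient step on the regularized hinge-loss objective (λ/2)‖w‖² + hinge with step size 1/(λt), and optionally project w onto the ball of radius 1/√λ. A minimal NumPy version, written as an illustrative sketch rather than the reference implementation:

```python
import numpy as np

def pegasos(X, y, lam=0.1, n_iters=1000, seed=0):
    """Pegasos: stochastic subgradient descent for the soft-margin
    linear SVM objective  (lam/2)*||w||^2 + mean hinge loss.
    X: (n, d) features; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, n_iters + 1):
        i = rng.integers(n)               # sample one example uniformly
        eta = 1.0 / (lam * t)             # decaying step size 1/(lam * t)
        if y[i] * X[i].dot(w) < 1:        # margin violated: full subgradient
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                             # margin satisfied: regularizer only
            w = (1 - eta * lam) * w
        # optional projection onto the ball of radius 1/sqrt(lam)
        norm = np.linalg.norm(w)
        if norm > 1.0 / np.sqrt(lam):
            w *= (1.0 / np.sqrt(lam)) / norm
    return w
```

Because each step touches a single example, the cost per iteration is independent of the training-set size, which is the property that makes the method attractive for large-scale linear SVMs.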