On L1-norm multiclass support vector machines: Methodology and theory
 Journal of the American Statistical Association
Abstract

Cited by 14 (3 self)
Binary Support Vector Machines (SVMs) have proven effective in classification. However, problems remain with respect to feature selection in multiclass classification. This article proposes a novel multiclass SVM, which performs classification and feature selection simultaneously via L1-norm penalized sparse representations. The proposed methodology, together with our developed regularization solution path, permits feature selection within the framework of classification. The operational characteristics of the proposed methodology are examined via both simulated and benchmark examples, and are compared to some competitors in terms of the accuracy of prediction and feature selection. The numerical results suggest that the proposed methodology is highly competitive.
Highly accurate classification of Watson-Crick base-pairs on termini of single DNA molecules
 Biophys. J.
Abstract

Cited by 13 (5 self)
ABSTRACT We introduce a computational method for classification of individual DNA molecules measured by an α-hemolysin channel detector. We show classification with better than 99% accuracy for DNA hairpin molecules that differ only in their terminal Watson-Crick base-pairs. Signal classification was done in silico to establish performance metrics (i.e., where train and test data were of known type, via single-species data files). It was then performed in solution to assay real mixtures of DNA hairpins. Hidden Markov Models (HMMs) were used with Expectation/Maximization for denoising and for associating a feature vector with the ionic current blockade of the DNA molecule. Support Vector Machines (SVMs) were used as discriminators, and were the focus of offline training. A multiclass SVM architecture was designed to place less discriminatory load on weaker discriminators, and novel SVM kernels were used to boost discrimination strength. The tuning of HMMs and SVMs enabled biophysical analysis of the captured molecule states and state transitions; structure revealed in the biophysical analysis was used for better feature selection.
Support vector machines and the multiple hypothesis test problem
 IEEE Trans. Signal Processing
, 2001
Abstract

Cited by 12 (0 self)
Abstract—Two enhancements are proposed to the application and theory of support vector machines. The first is a method of multicategory classification based on the binary classification version of the support vector machine (SVM). The method, which is called the M-ary SVM, represents each category in binary format, and to each bit of that representation is assigned a conventional SVM. This approach requires only log2(M) SVMs, where M is the number of classes. We give an example of classification on an octa-phase-shift-keying (8-PSK) pattern space to illustrate the main concepts. The second enhancement is that of adding equality constraints to the conventional binary classification SVM. This allows pinning the classification boundary to points that are known a priori to lie on the boundary. Applications of this method often arise in problems having some type of symmetry. We present one such example where the M-ary SVM is used to classify symbols of a two-user, multiuser-detection pattern space. Index Terms—Boundary constraint, equality-constrained SVM, M-ary classification, M-ary SVM, multicategory classification, pattern recognition, support vector machine.
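The bit-encoding idea described above (assuming the garbled method name is indeed "M-ary") can be sketched in a few lines: write each of the M class indices in binary, train one conventional binary SVM per bit, and decode predictions by reassembling the bits. Only ceil(log2(M)) classifiers are needed instead of M.

```python
# Sketch of the M-ary SVM bit-encoding scheme: one binary SVM per bit of the
# class index (illustrative; synthetic data, generic RBF SVMs).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification

M = 4                                   # number of classes
n_bits = int(np.ceil(np.log2(M)))       # -> 2 SVMs instead of 4
X, y = make_classification(n_samples=400, n_features=8, n_informative=6,
                           n_redundant=0, n_classes=M, random_state=0)

svms = []
for b in range(n_bits):
    bit_labels = (y >> b) & 1           # the b-th bit of each class index
    svms.append(SVC(kernel="rbf").fit(X, bit_labels))

# Decode: predict each bit, then reassemble the bits into a class index.
bits = np.stack([s.predict(X) for s in svms], axis=0).astype(int)
y_hat = sum(bits[b] << b for b in range(n_bits))
print("training accuracy:", (y_hat == y).mean())
```

Note that a single misclassified bit changes the decoded class entirely, which is why the paper cares about how the discriminatory load is distributed across bits.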
Maximum margin training of generative kernels
, 2004
Abstract

Cited by 10 (4 self)
Generative kernels, a generalised form of Fisher kernels, are a powerful form of kernel that allow the kernel parameters to be tuned to a specific task. The standard approach to training these kernels is to use maximum likelihood estimation. This paper describes a novel approach based on maximum-margin training of both the kernel parameters and a Support Vector Machine (SVM) classifier. It combines standard SVM training with a gradient-descent based kernel parameter optimisation scheme. This allows the kernel parameters to be explicitly trained for the data set and the SVM score-space. Initial results on an artificial task and the Deterding data show that such an approach can reduce classification error rates.
The Margin Vector, Admissible Loss and Multiclass Margin-based Classifiers
Abstract

Cited by 8 (1 self)
We propose a new framework to construct margin-based classifiers, in which the binary and multicategory classification problems are solved by the same principle; namely, margin-based classification via regularized empirical risk minimization.
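The shared principle can be stated as one generic objective (a sketch in standard notation, not the authors' own):

```latex
\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \;
  \frac{1}{n} \sum_{i=1}^{n} L\bigl(y_i, f(x_i)\bigr) \;+\; \lambda\, J(f)
```

where $L$ is an admissible margin-based loss — in the binary case a function of the margin $y f(x)$, e.g. the hinge loss $[1 - y f(x)]_+$ — $J(f)$ penalizes model complexity, and $\lambda \ge 0$ trades empirical risk against regularization. Binary and multicategory classifiers then differ only in how the margin and the loss are defined.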
Bayesian multicategory support vector machines
 In Uncertainty in Artificial Intelligence, 2006
Abstract

Cited by 8 (1 self)
We show that the multiclass support vector machine (MSVM) proposed by Lee et al. (2004) can be viewed as a MAP estimation procedure under an appropriate probabilistic interpretation of the classifier. We also show that this interpretation can be extended to a hierarchical Bayesian architecture and to a fully Bayesian inference procedure for multiclass classification based on data augmentation. We present empirical results showing that the advantages of the Bayesian formalism are obtained without a loss in classification accuracy.
Robust truncated hinge loss support vector machines
 Journal of the American Statistical Association 102, 974–983
, 2007
Abstract

Cited by 7 (1 self)
The support vector machine (SVM) has been widely applied to classification problems in both machine learning and statistics. Despite its popularity, however, the SVM has some drawbacks in certain situations. In particular, the SVM classifier can be very sensitive to outliers in the training sample. Moreover, the number of support vectors (SVs) can be very large in many applications. To circumvent these drawbacks, we propose the robust truncated hinge loss SVM (RSVM), which uses a truncated hinge loss. The RSVM is shown to be more robust to outliers and to deliver more accurate classifiers using a smaller set of SVs than the standard SVM. Our theoretical results show that the RSVM is Fisher-consistent, even when there is no dominating class, a scenario that is particularly challenging for multicategory classification. Similar results are obtained for a class of margin-based classifiers.
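One common form of truncation (hedged sketch; the paper's exact definition may differ in detail) writes the truncated hinge as a difference of two hinges, T_s(u) = H_1(u) − H_s(u) with H_s(u) = max(s − u, 0). For margins u below the truncation point s the loss is capped at the constant 1 − s, so a single gross outlier cannot dominate the empirical risk the way it can under the unbounded standard hinge.

```python
# Truncated hinge loss as a difference of hinges: bounded for large
# negative margins, identical to the standard hinge for u >= s.
def hinge(u, s=1.0):
    """H_s(u) = max(s - u, 0): the hinge with knee at s."""
    return max(s - u, 0.0)

def truncated_hinge(u, s=-1.0):
    """T_s(u) = H_1(u) - H_s(u); capped at 1 - s for u < s (here: 2.0)."""
    return hinge(u, 1.0) - hinge(u, s)

for u in (-3.0, -1.0, 0.0, 0.5, 2.0):
    print(f"u={u:5.1f}  hinge={hinge(u):4.1f}  truncated={truncated_hinge(u):4.1f}")
```

At u = −3 the standard hinge charges 4.0 while the truncated loss stays at its cap of 2.0, which is exactly the outlier-robustness the abstract describes.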
Large margin multicategory discriminant models and scale-sensitive Ψ-dimensions
, 2006
A new learning method for piecewise linear regression
 ICANN
, 2002
Abstract

Cited by 7 (0 self)
Abstract. A new connectionist model for the solution of piecewise linear regression problems is introduced; it is able to reconstruct both continuous and non-continuous real-valued mappings starting from a finite set of possibly noisy samples. The approximating function can assume a different linear behavior in each region of an unknown polyhedral partition of the input domain. The proposed learning technique combines local estimation, clustering in weight space, multicategory classification and linear regression in order to achieve the desired result. Through this approach, piecewise affine solutions for general nonlinear regression problems can also be found.
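A stripped-down sketch of the underlying problem (not the paper's connectionist method, which additionally clusters in weight space and uses multicategory classification): alternate between assigning each sample to the linear piece that fits it best and refitting each piece by least squares, on noisy data generated from two known linear pieces.

```python
# Minimal alternating-fit sketch for piecewise linear regression on a
# synthetic two-piece target: y = 2x+1 for x<0, y = -x+1 for x>=0, plus noise.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.where(x < 0, 2 * x + 1, -x + 1) + rng.normal(0, 0.05, 200)

A = np.column_stack([x, np.ones_like(x)])       # design matrix [x, 1]
assign = (x > np.median(x)).astype(int)         # crude initial split
for _ in range(20):
    # Refit each piece on its currently assigned samples ...
    coefs = [np.linalg.lstsq(A[assign == k], y[assign == k], rcond=None)[0]
             for k in (0, 1)]
    # ... then reassign every sample to the piece with the smaller residual.
    resid = np.stack([np.abs(A @ c - y) for c in coefs])
    assign = resid.argmin(axis=0)

print("recovered (slope, intercept) pairs:", [np.round(c, 2) for c in coefs])
```

On this example the loop recovers slopes close to 2 and −1; the paper's clustering-in-weight-space step serves a similar role to the reassignment above, but also yields the polyhedral partition of the input domain via a classifier.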
A framework for kernelbased multicategory classification
, 2005
Abstract

Cited by 6 (1 self)
A geometric framework for understanding multicategory classification is introduced, through which many existing 'all-together' algorithms can be understood. The structure allows the derivation of a parsimonious optimisation function, which is a direct extension of the binary ...