Results 11–20 of 357
Distance weighted discrimination
 J. Am. Statist. Assoc
, 2007
"... Abstract High Dimension Low Sample Size statistical analysis is becoming increasingly important in a wide range of applied contexts. In such situations, it is seen that the popular Support Vector Machine suffers from "data piling" at the margin, which can diminish generalizability. This l ..."
Abstract

Cited by 36 (9 self)
High-dimension, low-sample-size statistical analysis is becoming increasingly important in a wide range of applied contexts. In such situations, the popular Support Vector Machine suffers from "data piling" at the margin, which can diminish generalizability. This leads naturally to the development of Distance Weighted Discrimination, which is based on second-order cone programming, a modern computationally intensive optimization method.
Model induction with support vector machines: introduction and applications
 Journal of Computing in Civil Engineering, ASCE
, 2001
"... ..."
(Show Context)
Linear spectral mixture models and support vector machines for remote sensing
 IEEE Trans. Geosci. Remote Sens
, 1999
"... Abstract—Mixture modeling is becoming an increasingly important tool in the remote sensing community as researchers attempt to resolve subpixel, area information. This paper compares a wellestablished technique, linear spectral mixture models (LSMM), with a much newer idea based on data selection, ..."
Abstract

Cited by 27 (3 self)
Mixture modeling is becoming an increasingly important tool in the remote sensing community as researchers attempt to resolve subpixel, area information. This paper compares a well-established technique, linear spectral mixture models (LSMM), with a much newer idea based on data selection, support vector machines (SVM). It is shown that the constrained least squares LSMM is equivalent to the linear SVM, which relies on proving that the LSMM algorithm possesses the "maximum margin" property. This in turn shows that the LSMM algorithm can be derived from the same optimality conditions as the linear SVM, which provides important insights about the role of the bias term and rank deficiency in the pure pixel matrix within the LSMM algorithm. It also highlights one of the main advantages of the linear SVM algorithm: it performs automatic "pure pixel" selection from a much larger database. In addition, ...
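The LSMM idea in this abstract, modeling each pixel as a constrained linear combination of endmember spectra, reduces in the two-endmember case with a sum-to-one constraint to a one-dimensional least-squares fit. A minimal sketch; the toy spectra and endmember names are invented for illustration, and the paper's equivalence proof to the linear SVM is not reproduced here:

```python
# Two-endmember linear spectral unmixing sketch: the pixel spectrum is
# modeled as a * e1 + (1 - a) * e2, so the sum-to-one constraint leaves
# a single unknown fraction a with a closed-form least-squares solution.

def unmix_two(pixel, e1, e2):
    """Least-squares fraction of endmember e1 under the sum-to-one constraint."""
    d = [a - b for a, b in zip(e1, e2)]        # e1 - e2
    r = [p - b for p, b in zip(pixel, e2)]     # pixel - e2
    a = sum(di * ri for di, ri in zip(d, r)) / sum(di * di for di in d)
    return min(1.0, max(0.0, a))               # clip to the physical range [0, 1]

# Toy spectra (invented): a pixel that is 70% vegetation, 30% soil.
veg, soil = [0.1, 0.8, 0.3], [0.4, 0.2, 0.5]
pixel = [0.7 * v + 0.3 * s for v, s in zip(veg, soil)]
fraction = unmix_two(pixel, veg, soil)
```

For an exact mixture the recovered fraction is exact; real pixels add noise, and the full LSMM handles more than two endmembers via constrained least squares.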
Person Identification in Webcam Images: An Application of Semi-Supervised Learning
 ICML 2005 Workshop on Learning with Partially Classified Training Data
, 2005
"... An application of semisupervised learning is made to the problem of person identification in low quality webcam images. Using a set of images of ten people collected over a period of four months, the person identification task is posed as a graphbased semisupervised learning problem, where only a ..."
Abstract

Cited by 25 (1 self)
An application of semi-supervised learning is made to the problem of person identification in low-quality webcam images. Using a set of images of ten people collected over a period of four months, the person identification task is posed as a graph-based semi-supervised learning problem, where only a few training images are labeled. The importance of domain knowledge in graph construction is discussed, and experiments are presented that clearly show the advantage of semi-supervised learning over standard supervised learning. The data used in the study are available to the research community to encourage further investigation of this problem.
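The graph-based formulation described above can be illustrated with a small label-propagation sketch: labeled nodes are clamped and unlabeled nodes repeatedly take the weighted average of their neighbors' scores. The chain graph and weights below are invented toy data, not the paper's image graph, whose construction uses domain knowledge such as time and color features:

```python
# Minimal label propagation on a weighted graph: a common approach to
# graph-based semi-supervised learning (a sketch, not the paper's method).

def label_propagation(weights, labels, iterations=100):
    """weights: dict {node: {neighbor: weight}}; labels: dict {node: 0 or 1}
    for the few labeled nodes. Returns a class-1 score in [0, 1] per node."""
    scores = {n: float(labels.get(n, 0.5)) for n in weights}
    for _ in range(iterations):
        new_scores = {}
        for n, nbrs in weights.items():
            if n in labels:                       # clamp labeled nodes
                new_scores[n] = float(labels[n])
            else:                                 # weighted neighbor average
                total = sum(nbrs.values())
                new_scores[n] = sum(w * scores[m] for m, w in nbrs.items()) / total
        scores = new_scores
    return scores

# Toy chain graph: node 0 labeled class 0, node 4 labeled class 1.
chain = {0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0, 3: 1.0},
         3: {2: 1.0, 4: 1.0}, 4: {3: 1.0}}
scores = label_propagation(chain, {0: 0, 4: 1})
```

Unlabeled nodes end up scored by graph proximity to the labeled ones, which is the property the paper exploits when only a few webcam frames are labeled.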
Multimodal Identity Verification using Support Vector Machines (SVM)
 in Proc. of the Intl. Conf. on Information Fusion, FUSION
, 2000
"... The contribution of this paper is twofold: (1) to formulate a decision fusion problem encountered in the design of a multimodal identity verification system as a particular classification problem, (2) to propose to solve this problem by a Support Vector Machine (SVM). The multimodal identity verif ..."
Abstract

Cited by 24 (0 self)
The contribution of this paper is twofold: (1) to formulate a decision fusion problem encountered in the design of a multi-modal identity verification system as a particular classification problem, and (2) to propose solving this problem with a Support Vector Machine (SVM). The multi-modal identity verification system under consideration is built of d modalities in parallel, each one delivering as output a scalar number, called a score, stating how well the claimed identity is verified. A fusion module receiving the d scores as input has to take a binary decision: accept or reject the identity. This fusion problem has been solved using Support Vector Machines. The performance of this fusion module has been evaluated and compared with other proposed methods on a multimodal database containing both vocal and visual modalities.
Keywords: Decision Fusion, Support Vector Machine, Multi-Modal Identity Verification.
1 Introduction
Automatic identification/verification is rapidly becoming an importa...
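The fusion setup described above, a binary classifier over d modality scores, can be sketched as follows. A plain perceptron stands in here for the paper's SVM, and the score vectors are invented toy data:

```python
# Decision-fusion sketch: each verification attempt yields d modality
# scores, and a trained linear classifier maps the score vector to
# accept (+1) or reject (-1). The paper trains an SVM; a perceptron is
# used here only to keep the sketch dependency-free.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    d = len(samples[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):        # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def decide(w, b, scores):
    """Fusion decision: +1 = accept identity, -1 = reject."""
    return 1 if sum(wi * s for wi, s in zip(w, scores)) + b > 0 else -1

# Toy data (d = 2 modalities): genuine attempts score high on both.
genuine  = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7]]
impostor = [[0.2, 0.3], [0.1, 0.2], [0.3, 0.1]]
w, b = train_perceptron(genuine + impostor, [1, 1, 1, -1, -1, -1])
```

The design point is that fusion is learned from examples rather than hand-weighted; swapping in a margin-maximizing SVM, as the paper does, changes the training rule but not this interface.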
A probabilistic framework for SVM regression and error bar estimation
 Machine Learning
, 2002
"... In this paper, we elaborate on the wellknown relationship between Gaussian Processes (GP) and Support Vector Machines (SVM) under some convex assumptions for the loss functions. This paper concentrates on the derivation of the evidence and error bar approximation for regression problems. An error b ..."
Abstract

Cited by 22 (1 self)
In this paper, we elaborate on the well-known relationship between Gaussian Processes (GP) and Support Vector Machines (SVM) under some convexity assumptions on the loss functions. This paper concentrates on the derivation of the evidence and error bar approximations for regression problems. An error bar formula is derived based on the ε-insensitive loss function.
Development of Two-Stage SVM-RFE gene selection . . .
 IEEE TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS
"... Extracting a subset of informative genes from microarray expression data is a critical data preparation step in cancer classification and other biological function analyses. Though many algorithms have been developed, the Support Vector Machine–Recursive Feature Elimination (SVMRFE) algorithm is o ..."
Abstract

Cited by 21 (0 self)
Extracting a subset of informative genes from microarray expression data is a critical data preparation step in cancer classification and other biological function analyses. Though many algorithms have been developed, the Support Vector Machine–Recursive Feature Elimination (SVM-RFE) algorithm is one of the best gene feature selection algorithms. It assumes that a smaller "filter-out" factor in the SVM-RFE, which results in a smaller number of gene features eliminated in each recursion, should lead to extraction of a better gene subset. Because the SVM-RFE is highly sensitive to the "filter-out" factor, our simulations have shown that this assumption is not always correct and that the SVM-RFE is an unstable algorithm. To select a set of key gene features for reliable prediction of cancer types or subtypes and other applications, a new two-stage SVM-RFE algorithm has been developed. It is designed to effectively eliminate most of the irrelevant, redundant, and noisy genes while keeping information loss small at the first stage. A fine selection for the final gene subset is then performed at the second stage. The two-stage SVM-RFE overcomes the instability problem of the SVM-RFE to achieve better algorithm utility. We have demonstrated that the two-stage SVM-RFE is significantly more accurate and more reliable than the SVM-RFE and three correlation-based methods, based on our analysis of three publicly available microarray expression datasets. Furthermore, the two-stage SVM-RFE is computationally efficient because its time complexity is O(d · log₂ d), where d is the size of the original gene set. Supplementary material is available at ...
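The recursive elimination loop that the "filter-out" factor controls can be sketched as below. A simple class-mean-difference score stands in for the per-gene SVM weight magnitudes that the real SVM-RFE uses, and the expression matrix is toy data:

```python
# Recursive-feature-elimination sketch: each recursion ranks the
# surviving genes and drops a fraction (the "filter-out" factor) of the
# lowest-ranked ones. The ranking criterion here (absolute difference of
# class means) is a stand-in for SVM weight magnitudes.

def rfe(expr, labels, keep=2, filter_out=0.5):
    """expr: list of samples, each a list of gene values; labels: 0/1 per
    sample. Returns the indices of the surviving genes, sorted."""
    genes = list(range(len(expr[0])))
    while len(genes) > keep:
        def score(g):  # |mean over class 1 - mean over class 0| for gene g
            a = [s[g] for s, y in zip(expr, labels) if y == 1]
            b = [s[g] for s, y in zip(expr, labels) if y == 0]
            return abs(sum(a) / len(a) - sum(b) / len(b))
        genes.sort(key=score)                     # worst genes first
        n_drop = max(1, int(len(genes) * filter_out))
        n_drop = min(n_drop, len(genes) - keep)   # never drop past `keep`
        genes = genes[n_drop:]
    return sorted(genes)

# Toy data: genes 0 and 3 separate the classes; genes 1 and 2 are noise.
expr = [[5, 1, 2, 9], [6, 2, 1, 8], [1, 1, 2, 2], [0, 2, 1, 1]]
labels = [1, 1, 0, 0]
selected = rfe(expr, labels)
```

The paper's two-stage variant runs a loop like this coarsely first (large filter-out factor) and then finely on the survivors, which is what stabilizes the result.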
Robust Support Vector Machine with Bullet Hole Image Classification
 IEEE Transactions on Systems, Man, And Cybernetics—Part C: Applications And Reviews
"... Abstract—This paper proposes a robust support vector machine for pattern classification, which aims at solving the overfitting problem when outliers exist in the training data set. During the robust training phase, the distance between each data point and the center of class is used to calculate th ..."
Abstract

Cited by 18 (0 self)
This paper proposes a robust support vector machine for pattern classification, which aims at solving the overfitting problem when outliers exist in the training data set. During the robust training phase, the distance between each data point and the center of its class is used to calculate an adaptive margin. Incorporating this averaging technique into standard support vector machine (SVM) training makes the decision function less distorted by outliers and controls the amount of regularization automatically. Experiments on the bullet hole classification problem show that the number of support vectors is reduced and the generalization performance is improved significantly compared to that of standard SVM training.
Index Terms: bullet hole classification, robust support vector machine.
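The adaptive-margin idea, weighting each training point by its distance to its class center so that outliers constrain the decision boundary less, can be sketched as follows. The specific weighting rule below is an illustrative choice, not the paper's exact formula, and the points are toy data:

```python
# Adaptive-margin weights sketch: points far from their class center get
# a smaller weight, so an outlier contributes less to the margin
# constraints of a robust SVM. Weighting rule is illustrative only.

def class_center(points):
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

def margin_weights(points, center):
    """Weight in (0, 1]: largest near the center, shrinking with distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
             for p in points]
    dmax = max(dists) or 1.0
    return [1.0 - 0.9 * (dist / dmax) for dist in dists]

# Toy class: three points in a tight cluster plus one outlier.
cls = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.2], [3.0, 3.0]]
w = margin_weights(cls, class_center(cls))
```

Plugging such weights into the per-point slack penalties of a standard SVM is one common way to realize the "less distorted by outliers" behavior the abstract describes.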
Dialog act classification from prosodic features using support vector machines
 In Proc. Speech Prosody
, 2002
"... ..."
Multi-document Summarization Using Support Vector Regression
"... Most multidocument summarization systems follow the extractive framework based on various features. While more and more sophisticated features are designed, the reasonable combination of features becomes a challenge. Usually the features are combined by a linear function whose weights are tuned man ..."
Abstract

Cited by 16 (2 self)
Most multi-document summarization systems follow the extractive framework based on various features. While more and more sophisticated features are designed, the reasonable combination of features becomes a challenge. Usually the features are combined by a linear function whose weights are tuned manually. In this task, a Support Vector Regression (SVR) model is used to automatically combine the features and score the sentences. Two important problems are inevitably involved. The first is how to acquire the training data; several automatic generation methods are introduced based on the standard reference summaries generated by humans. Another indispensable problem in SVR application is feature selection, where various features are picked out and combined into different feature sets to be tested. With the aid of the DUC 2005 and 2006 data sets, comprehensive experiments are conducted with consideration of various SVR kernels and feature sets. The trained SVR model is then used in the main task of DUC 2007 to produce the extractive summaries.
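The scoring step described above, learning feature weights from reference summaries instead of tuning them by hand, can be sketched with a plain least-squares regressor standing in for the SVR. The feature names, feature values, and targets below are invented toy data:

```python
# Sentence-scoring sketch: each sentence is a feature vector, and a
# regression model trained on reference-summary-derived targets replaces
# manually tuned linear weights. Plain SGD least squares stands in for
# the paper's SVR.

def fit_linear(features, targets, lr=0.1, epochs=500):
    """Stochastic gradient descent on squared error, no bias term."""
    d = len(features[0])
    w = [0.0] * d
    for _ in range(epochs):
        for x, t in zip(features, targets):
            err = sum(wi * xi for wi, xi in zip(w, x)) - t
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def score_sentence(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

# Toy features per sentence: [term-frequency score, position score];
# targets play the role of automatically generated training scores.
train_x = [[0.9, 1.0], [0.5, 0.5], [0.1, 0.2]]
train_y = [0.95, 0.5, 0.15]
w = fit_linear(train_x, train_y)
```

At summarization time the learned `w` ranks candidate sentences, and the top-ranked ones form the extractive summary, mirroring the pipeline the abstract describes.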