### Table 2. A comparison with two well-known classification techniques: support vector machine (SVM) and neural network.

"... In PAGE 23: ...As we can see in Table 2, SVM gives the highest rate of correctly classified instances, followed by our method and the neural network. However, the difference is minor.... ..."

### Table 1. Classification accuracy and standard deviation for MLP and SVM. (Column headings: Multilayer Perceptron, Support Vector Machine, Moment)

### Table 2: Negative log-likelihood on test set (NLL) and the error rate on test set (ERR) for optimal Bayes classifier (Optimal), Bayes classifier (Bayes), kernel logistic regression (Klogr), probabilistic output of classical support vector classifier (SVC), standard Gaussian processes for classification (GPC) and Bayesian trigonometric support vector classifier (BTSVC) on the two dimensional simulated data set.

2003

"... In PAGE 12: ... The data set is composed of 1000 training samples and 20002 test samples. The negative log-likelihood on the test set and the test error rate are recorded in Table 2, together with the results of other probabilistic approaches that include the optimal Bayes classifier (Duda et al., 2001) using the true generative model, the Bayes classifier using the generative model estimated from training data, kernel logistic regression (Keerthi et al.... ..."
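The snippet above reports the negative log-likelihood (NLL) on a held-out test set as a quality measure for probabilistic classifiers. A minimal sketch of that metric (my own illustration, not the paper's code): NLL averages -log p(y_i | x_i) over the test points, so lower values mean better-calibrated predicted probabilities.

```python
import numpy as np

def negative_log_likelihood(probs, labels):
    """probs[i] = predicted probability of the positive class for sample i;
    labels[i] in {0, 1} is the true class."""
    probs = np.clip(probs, 1e-12, 1 - 1e-12)       # avoid log(0)
    p_true = np.where(labels == 1, probs, 1 - probs)  # probability assigned to the true class
    return -np.mean(np.log(p_true))

# Confident, mostly correct predictions give an NLL close to 0.
print(negative_log_likelihood(np.array([0.9, 0.8, 0.1]),
                              np.array([1, 1, 0])))
```

This is the same quantity the table compares across Optimal, Bayes, Klogr, SVC, GPC, and BTSVC; the probabilities would come from each model's predictive distribution.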

Cited by 5


### Table 2. Classification of BRCA1- and BRCA2-associated breast cancers using support vector machines and leave-one-out classification

"... In PAGE 6: ...breast tumor samples accurately identified them as positive or negative for BRCA1 mutations or positive or negative for BRCA2 mutations (Table 2). Eleven of 12 samples with a BRCA1 mutation were correctly identified in the BRCA1 classification.... ..."
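The table above is built with leave-one-out (LOO) classification: each sample is held out in turn, the classifier is trained on the rest, and the held-out sample is predicted. A minimal sketch of the LOO loop (my illustration; for brevity a 1-nearest-neighbour rule stands in for the support vector machine the study actually trains at each fold):

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour stand-in classifier."""
    n = len(y)
    correct = 0
    for i in range(n):
        # Training set = every sample except the held-out one
        train = [j for j in range(n) if j != i]
        dists = [np.linalg.norm(X[i] - X[j]) for j in train]
        pred = y[train[int(np.argmin(dists))]]  # label of nearest training point
        correct += int(pred == y[i])
    return correct / n

# Toy data: two tight clusters, one per class.
X = np.array([[0.0], [0.1], [1.0], [1.1]])
y = np.array([0, 0, 1, 1])
print(loo_accuracy(X, y))  # → 1.0: each point's nearest other point shares its label
```

With only 12 BRCA1-mutated samples, LOO makes the most of scarce data, which is presumably why the study uses it.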

### Table 3: As in Table 2, but now for support vector machines. (Column headings: Data set, Bagging, Bragging, Nice, Trimmed)

"... In PAGE 6: ...1 Characteristics of the data sets can be found in Table 1. To demonstrate the ability of trimmed bagging to improve the predictive performance of any base classifier, stable or unstable, we consider the following base classifiers: (a) decision trees (Table 2), as an example of an unstable classifier; (b) support vector machines (SVM) (Table 3), linear discriminant analysis (Table 4), and logistic regression (Table 5), as examples of stable classifiers. All these base classifiers are well known and routinely used.... In PAGE 7: ... The better performance of bagging becomes questionable when using stable classifiers. Results in Table 3 indeed confirm that bagging does not work with a support vector machine. In only one case does bagging provide a significant increase in the predictive performance of SVMs.... ..."
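A sketch of the trimmed-bagging idea the snippet evaluates (my reading of it, not the authors' code): fit base classifiers on bootstrap samples, rank them by out-of-bag error, and aggregate only the best fraction. A decision stump stands in for the base learner here; the paper's Table 3 uses SVMs instead.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y):
    """Best threshold classifier on 1-D data (threshold, sign)."""
    best = (np.inf, 0.0, 1)
    for t in X:
        for sign in (1, -1):
            pred = np.where(X > t, 1, 0) if sign == 1 else np.where(X <= t, 1, 0)
            err = np.mean(pred != y)
            if err < best[0]:
                best = (err, t, sign)
    return best[1], best[2]

def predict(stump, X):
    t, sign = stump
    return np.where(X > t, 1, 0) if sign == 1 else np.where(X <= t, 1, 0)

def trimmed_bagging(X, y, n_estimators=20, keep=0.5):
    stumps, oob_errs = [], []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(y), len(y))       # bootstrap sample
        oob = np.setdiff1d(np.arange(len(y)), idx)  # out-of-bag points
        s = fit_stump(X[idx], y[idx])
        err = np.mean(predict(s, X[oob]) != y[oob]) if len(oob) else 0.0
        stumps.append(s)
        oob_errs.append(err)
    # Trim: keep only the classifiers with the lowest out-of-bag error.
    order = np.argsort(oob_errs)[: int(keep * n_estimators)]
    votes = np.mean([predict(stumps[i], X) for i in order], axis=0)
    return (votes > 0.5).astype(int)

X = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(np.mean(trimmed_bagging(X, y) == y))  # training accuracy of the trimmed ensemble
```

The trimming step is what distinguishes this from plain bagging: degenerate bootstrap fits are discarded before voting, which is how it can help even stable classifiers like SVMs.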

### TABLE II: THE RESULTS OF CLASSIFYING THE house-car DATA SET USING SUPPORT VECTOR MACHINES.

2004

Cited by 10

### Table 3. Predictions by Barnard Chemical Information/ Support Vector Machine Model and IC50 for the Common Drugs in the Data Set

2005

"... In PAGE 5: ... Figure 2 shows structures of common drugs in the data set. The predictions by the BCI/SVM model (in cross-validation) and IC50 values for the inhibition of BFC metabolism from the literature [10] are listed in Table 3. The HTS results obtained in our lab were in agreement with these IC50 values.... ..."

### Table 6: Results from support vector machine

"... In PAGE 6: ...java B.java Figure 4: Comparison of predicted and observed ranking. In Table 6(d) we obtained a precision of 0.6671 for the test in version 2.... In PAGE 6: ... recall = correctly predicted failures / all failures. As an example, the recall of the test in version 2.0 shown in Table 6(d) indicates that over two-thirds of the failure-prone components are actually identified as failure-prone. Again, a random guess would have had a probability of 0.... In PAGE 6: ... For example, take the test in version 2.0 of Table 6(b) with a recall of about 0.... In PAGE 6: ...files. For example, take the test in version 2.0 of Table 6(b) with a recall of about 0.1 and Table 6(d) with a recall of about 0.7.... In PAGE 8: ...e.g., in Table 6(d), the precision for the top 5% of version 2.1 is substantially higher than the overall precision (90% vs.... In PAGE 8: ...Results from version 2.0 and 2.1 are similar with respect to classification. Take for example Table 6(d), the recall and precision obtained from testing in version 2.... ..."
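The snippet defines recall as correctly predicted failures over all failures; precision is the analogous ratio over all predicted failures, and the f-measure commonly reported alongside them is their harmonic mean. A minimal sketch with made-up component names (the `.java` files here are hypothetical, not the paper's data):

```python
def precision_recall_f1(predicted, actual):
    """predicted / actual are sets of components flagged / truly failure-prone."""
    tp = len(predicted & actual)                     # correctly predicted failures
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

predicted = {"A.java", "B.java", "C.java"}  # components flagged failure-prone
actual = {"B.java", "C.java", "D.java"}     # components that actually failed
print(precision_recall_f1(predicted, actual))  # → precision = recall = f1 = 2/3
```

A random guess over the same number of flagged components would score far lower on both measures, which is the baseline the snippet alludes to.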

### Table 1: Results on test set: closed challenge parsing using support vector machines. Technical Report CSLR-2003-01, Center for Spoken Language Research, University of Colorado at Boulder.

"... In PAGE 4: ... This affected fewer than twenty labels on the development data, and added only about 0.1 to the overall f-measure. 4 Results The results on the test section of the CoNLL 2004 data are presented in Table 1 below. The overall result, an f-score of 60.66, is considerably below results reported for systems using a parser on a comparable data set.... ..."