### Table 3. Average (across all classes) of sensitivity, specificity and the MCC for all predictors on the non-plant data. (Table header: Sorting Results, Non-plant Data; columns: Detection Network, Sorter Kernel, Sensitivity, Specificity, MCC, Accuracy.)

"... In PAGE 13: ...777 84.7% Note: See Table 3 for details. and recurrent architectures.... ..."
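For reference, the sensitivity, specificity, and Matthews correlation coefficient (MCC) reported in tables like this one are all computable from a binary confusion matrix; a minimal sketch (the counts are made up for illustration, not taken from the paper):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and MCC from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, mcc

sens, spec, mcc = binary_metrics(tp=90, fp=10, tn=80, fn=20)
```

Unlike accuracy, the MCC stays informative on imbalanced classes, which is presumably why it is averaged across classes here.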

### Table 2: Parameter values of proposed kernels and Support Vector Machines

2004

"... In PAGE 6: ... Support Vector Machine (SVM) was selected as the kernel-based classifier for training and classification. Table 2 shows some of the parameter values that we used in the comparison. We set thresholds of 2.7055 (FSSK1) and 3.8415 (FSSK2) for the proposed methods; these values represent the 10% and 5% levels of significance in the χ² distribution with one degree of freedom, as used in the χ² significance test.... ..."

Cited by 4
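The FSSK thresholds quoted above match the χ² critical values with one degree of freedom: for df = 1 the χ² distribution is the square of a standard normal, so the values can be sanity-checked with the standard library alone (this is a check, not the authors' code):

```python
from statistics import NormalDist

# For df = 1, the chi-square critical value equals the squared
# standard-normal quantile: 10% level -> z at 0.95, 5% -> z at 0.975.
z = NormalDist()
chi2_10pct = z.inv_cdf(0.95) ** 2   # ~ 2.7055, the FSSK1 threshold
chi2_5pct = z.inv_cdf(0.975) ** 2   # ~ 3.8415, the FSSK2 threshold
```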

### Table 1: Test Error Rates on the USPS Handwritten Digit Database.

"... In PAGE 12: ... It simply tries to separate the training data by a hyperplane with large margin. Table 1 illustrates two advantages of using nonlinear kernels. First, performance of a linear classifier trained on nonlinear principal components is better than for the same number of linear components; second, the performance for nonlinear components can be further improved by using more components than is possible in the linear case.... ..."

### Table 1: Classification accuracy (%) on image data comparing our method (LDM) vs. Euclidean (EDM), probabilistic global metric (PGDM) and support vector machine (SVM).

2006

"... In PAGE 5: ... We refer to this algorithm as Probabilistic Global Distance Metric Learning, or PGDM for short. Experimental Results for Image Classification Classification Accuracy The classification accuracy using Euclidean distance, the probabilistic global distance metric (PGDM), and the local distance metric (LDM) is shown in Table 1. Clearly, LDM outperforms the other two algorithms in terms of classification accuracy.... In PAGE 5: ... We estimate the top eigenvectors based on the mixture of labeled and unlabeled images, and these eigenvectors are used to learn the local distance metric. The classification accuracy and the retrieval accuracy of local distance metric learning with unlabeled data are presented in Table 1 and Figure 3. We observe that both the classification and retrieval accuracy improve noticeably when unlabeled data is available.... ..."

Cited by 3
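The contrast between the Euclidean baseline (EDM) and a learned metric (PGDM/LDM) amounts to replacing the identity matrix in the distance with a learned metric matrix M; a minimal sketch, where the diagonal M is a made-up example rather than one learned by the paper's method:

```python
import math

def metric_distance(x, y, m_diag):
    """Distance under a diagonal metric matrix M: sqrt((x-y)^T M (x-y))."""
    return math.sqrt(sum(m * (a - b) ** 2 for m, a, b in zip(m_diag, x, y)))

x, y = (1.0, 2.0), (4.0, 6.0)
euclid = metric_distance(x, y, (1.0, 1.0))    # M = identity recovers Euclidean distance
learned = metric_distance(x, y, (4.0, 0.25))  # hypothetical learned weights
```

Metric learning chooses M so that same-class points end up close and different-class points far apart, which is what drives the accuracy gains reported in Table 1.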

### Table 1. Mean recognition rates for the three different texture image descriptors using Gaussian-kernel Support Vector Machines as classifiers.

"... In PAGE 10: ...3 Results Summarization A summary of our experimental results is provided in Tables 1 and 2. Table 1 compares, for each dataset, the mean recognition rates obtained by the three texture image descriptors using different scales (s = 2, 3) and different orientations (o = 4, 5, 6, 7, 8). In this set of experiments, we used Gaussian-kernel Support Vector Machines (SVMs) as texture classification mechanisms.... ..."
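The Gaussian (RBF) kernel these SVMs rely on is simply k(x, y) = exp(−‖x − y‖² / (2σ²)); a minimal sketch, with σ chosen arbitrarily rather than tuned as in the experiments:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel: exp(-||x - y||^2 / (2 * sigma^2))."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

k_same = gaussian_kernel((0.0, 0.0), (0.0, 0.0))  # identical points -> 1.0
k_far = gaussian_kernel((0.0, 0.0), (3.0, 4.0))   # distant points -> near 0
```

The kernel decays smoothly with distance, so σ controls how local the resulting decision boundary is.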

### Table 1: Comparison of correct classification rates on the plain COIL 100 dataset with results from (Roobaert & Van Hulle 1999): NNC is a nearest neighbor classifier on the direct images, Columbia is the eigenspace+spline recognition model by Nayar et al. (1996), and SVM is a polynomial kernel support vector machine. Our results are given for the template-VTUs setting and optimized-VTUs with one VTU per object.

2003

"... In PAGE 16: ... Roobaert & Van Hulle (1999) performed an extensive comparison of a support vector machine-based approach and the Columbia object recognition system using eigenspaces and splines (Nayar, Nene, & Murase 1996) on the plain COIL 100 data, varying object and training view numbers. Their results are given in Table 1, together with the results of the sparse WTM network using either the template-VTU setup without optimization, or the optimized VTUs with one VTU per object for a fair comparison. The results show that the hierarchical network outperforms the other two approaches for all settings.... ..."

Cited by 28


### Table 2: Linear programming training and testing set correctness for linear and nonlinear support vector machines

1999

"... In PAGE 9: ... Table 1: SOR training and testing set correctness for linear and quadratic kernels. Table 2 shows training and testing set correctness, using the linear programming formulation (17) with various kernels, under tenfold cross validation for the above mentioned datasets. We implemented a linear kernel, a quadratic kernel, a symmetric sixth degree polynomial, and a symmetric sinusoidal kernel based on the following formulations, which in general are indefinite kernels: Kernel 5.... ..."

Cited by 15
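An "indefinite" kernel, as mentioned in the excerpt, is one whose Gram matrix can have negative eigenvalues. The paper's exact sinusoidal kernel is not shown here, so as a hypothetical stand-in take k(x, y) = sin(x·y): already on two points its 2×2 Gram matrix has a negative determinant, i.e. one positive and one negative eigenvalue:

```python
import math

def sin_kernel(x, y):
    """A stand-in sinusoidal kernel; in general not positive semidefinite."""
    return math.sin(x * y)

pts = [1.0, 2.0]
gram = [[sin_kernel(a, b) for b in pts] for a in pts]

# For a symmetric 2x2 matrix, det < 0 implies eigenvalues of opposite
# sign -> the kernel is indefinite on these points.
det = gram[0][0] * gram[1][1] - gram[0][1] * gram[1][0]
```

Standard SVM duals assume a positive semidefinite kernel; the linear programming formulation referenced above is one way to keep training well-posed when that assumption fails.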

### Table 2: Total accuracy and kappa of the support vector machine and decision tree classification of pixel image and segmented data

in Classifying Segmented Multitemporal SAR Data from Agricultural Areas Using Support Vector Machines

"... In PAGE 4: ... This approach was used successfully in several studies for classifying optical and SAR data (ii, vi, xxv). RESULTS & DISCUSSION The accuracy assessment shows the positive effect of image segmentation on the classification accuracy of the SAR data (Table 2). Using an adequate aggregation scale, the classification accuracy is increased.... In PAGE 4: ... A larger sample set can slightly improve the classification accuracy. The accuracy assessment shows that in the case of segmented data, support vector machines lead to better results than simple decision trees (Table 2). The best accuracy of a decision tree is 75.... ..."
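The kappa statistic reported alongside total accuracy in Table 2 corrects accuracy for agreement expected by chance; a minimal sketch of Cohen's kappa from a confusion matrix (the matrix is illustrative, not the paper's data):

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows: true, cols: predicted)."""
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (observed - expected) / (1.0 - expected)

cm = [[40, 10],
      [5, 45]]
k = kappa(cm)  # observed accuracy 0.85, chance agreement 0.5
```

Kappa of 0 means no better than chance and 1 means perfect agreement, which is why it is the standard companion to total accuracy in remote-sensing classification studies like this one.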