### Table 4: Prediction accuracies (LogR: logistic regression model, SVM: support vector machines, NN: neural networks), 10-fold cross-validation

in Credit Rating Analysis With Support Vector Machines and Neural Networks: A Market Comparative Study

"... In PAGE 10: ... When performing the cross-validation procedures for the neural networks, 10% of the data was used as a validation set. Table 4 summarizes the prediction accuracies of the four models using both cross-validation procedures. For comparison purposes, the prediction accuracies of a regression model that achieved relatively good performance in the literature, the logistic regression model, are also reported in Table 4. The following observations are summarized: support vector machines achieved the best performance ... ..."
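The 10-fold cross-validation comparison described in this snippet can be sketched with scikit-learn. The data below is synthetic (`make_classification`), standing in for the study's credit-rating data, and only two of the compared models are shown:

```python
# Sketch of the 10-fold cross-validation comparison described above.
# Synthetic data stands in for the study's credit-rating data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

models = {
    "LogR": LogisticRegression(max_iter=1000),  # the regression baseline
    "SVM": SVC(kernel="rbf"),                   # best performer in the study
}
# Mean accuracy over the 10 folds for each model.
accuracies = {name: cross_val_score(m, X, y, cv=10).mean()
              for name, m in models.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.3f}")
```

The validation-set split for the neural networks mentioned in the snippet is a detail of the original study and is not reproduced here.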

### Table 1: Biorthogonal Filters for the Discrete Wavelet Transform (columns: L, Filter Coefficients, PSNR, He)

1995

"... In PAGE 9: ... However, it is more efficiently implemented by installing a multiplier with a factor 2 in the synthesis stage. The PSNR and error entropies listed in Table 1 are average values over ten test images of size 512 by 512. Three transform iterations were performed on: Lena, Clown, Baboon, Sailboat, Barbara, Miramar, Channel, Landsat, LAX, and Forest.... In PAGE 9: ... Three transform iterations were performed on: Lena, Clown, Baboon, Sailboat, Barbara, Miramar, Channel, Landsat, LAX, and Forest. 5 DISCUSSION Taking a look at Table 1 confirms Daubechies' popular 9/7 filter as a preferable candidate for image coding applications where low distortion (high PSNR) is of prime importance. However, within the limits of our primitive quantization scheme, the Haar filter and the binomial 2/6 filter appear quite competitive.... In PAGE 10: ... Shapiro [26], for example, achieved PSNR values of more than 35 dB at a rate of only one bit per pixel using, however, a significantly more sophisticated (zero-tree) coding scheme. If combined lossy/lossless data transmission is a design objective, then Table 1 reveals that the zeroth-order error entropy can be assumed to be of the order of two bits. Hence, after performing the last lossy inverse transform step, an amount of information approximately equal to one quarter of the original data remains to be sent to obtain an exact replica of the original image.... ..."
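The PSNR figures compared in this snippet come from the standard peak-signal-to-noise-ratio computation; a minimal sketch, assuming 8-bit images (peak value 255) and using a random image in place of the paper's test set:

```python
# Minimal PSNR computation, as used to compare wavelet filters in Table 1.
# Assumes 8-bit images (peak value 255); the image below is random noise,
# not one of the paper's test images.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB between two images of equal shape."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
# Simulate reconstruction error with small additive Gaussian noise.
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```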

Cited by 6

### Table 2: Test error rates on the USPS handwritten digit database for linear Support Vector machines trained

1998

"... In PAGE 10: ... e.g. local translation invariance, be it by generating "virtual" translated examples or by choosing a suitable kernel, could further improve the results. Table 2 nicely illustrates two advantages of using nonlinear kernels: first, performance for nonlinear principal components is better than for the same number of linear components; second, the performance for nonlinear components can be further improved by using more components than possible in the linear case. We conclude this section with a comment on the approach taken.... ..."
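The kernel-PCA-features-into-a-linear-SVM pipeline discussed in this snippet can be illustrated on toy data. The concentric-circles data below is a stand-in for the USPS digits, and the kernel and parameter choices are illustrative, not the paper's:

```python
# Illustration of nonlinear (kernel) principal components as features for a
# linear SVM, versus linear components. Toy data, not the USPS digits.
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Two concentric circles: not linearly separable in the input space.
X, y = make_circles(n_samples=400, noise=0.1, factor=0.3, random_state=0)

linear_only = make_pipeline(KernelPCA(n_components=2, kernel="linear"),
                            LinearSVC())
rbf_kpca = make_pipeline(KernelPCA(n_components=2, kernel="rbf", gamma=2.0),
                         LinearSVC())
print("linear components:", linear_only.fit(X, y).score(X, y))
print("RBF components:   ", rbf_kpca.fit(X, y).score(X, y))
```

On this data the RBF components capture the radial structure, so the linear classifier on top of them does markedly better than on the same number of linear components.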

Cited by 567

### Table 3 shows the performance of models 3-11. In general, support vector machines outperform artificial neural networks, which, in turn, outperform nearest-neighbor classifiers. The best performing SVM model was the one using the 2D three-level wavelet histogram features with the histogram edit distance in eqn (11).

"... In PAGE 7: ... We selected the support vector machine because it has gained considerable popularity recently and has become state-of-the-art [26]. Table 3. Dichotomy model performance results: ..."

### Table 5. Average (across all classes) of sensitivity, specificity and MCC for the combined predictors on the non-plant data. Combined sorter results, non-plant data (columns: Networks, Sorter, Sensitivity, Specificity, MCC)

"... In PAGE 15: ... Note: See Table 5 for details. ... ..."
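The per-class metrics averaged in this table follow standard definitions; a small sketch computing sensitivity, specificity, and the Matthews correlation coefficient (MCC) from an assumed binary confusion matrix:

```python
# Sensitivity, specificity and MCC from a binary confusion matrix.
# The counts below are made up for illustration.
import math

def binary_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sensitivity, specificity, mcc

sens, spec, mcc = binary_metrics(tp=80, tn=90, fp=10, fn=20)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} MCC={mcc:.3f}")
```

For the multi-class setting of the table, these would be computed per class (one-vs-rest) and then averaged, as the caption indicates.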

### Table 1: Discrete Wavelet Transform

### Table 1. Average retrieval rate (%) in the top 15 matches using pyramid wavelet transform (DWT) and wavelet frames (DWF).

"... In PAGE 3: ... The employed wavelet transforms are the traditional wavelet pyramids (DWT) and the non-subsampled discrete wavelet frames (DWF) using the 8-tap Daubechies orthogonal wavelets. Table 1 summarizes the comparison in performance as average percentages of retrieving relevant images in the top 15 matches. The first observation is that the statistical approach (GGD & KLD) always outperforms all other tested traditional methods.... ..."
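The "GGD & KLD" approach mentioned here models each wavelet subband by a generalized Gaussian density and ranks images by Kullback-Leibler distance between the fitted densities, for which a closed form exists (due to Do and Vetterli). A sketch, with alpha the scale and beta the shape parameter of each subband model:

```python
# Closed-form Kullback-Leibler distance between two generalized Gaussian
# densities (Do & Vetterli), the "GGD & KLD" similarity mentioned above.
# a1, a2 are scale (alpha) and b1, b2 shape (beta) parameters.
from math import gamma, log

def ggd_kld(a1, b1, a2, b2):
    return (log((b1 * a2 * gamma(1.0 / b2)) / (b2 * a1 * gamma(1.0 / b1)))
            + (a1 / a2) ** b2 * gamma((b2 + 1.0) / b1) / gamma(1.0 / b1)
            - 1.0 / b1)

# The distance of a density to itself is zero; it grows as parameters diverge.
print(ggd_kld(1.0, 2.0, 1.0, 2.0))   # ~0.0
print(ggd_kld(1.0, 2.0, 2.0, 1.5))
```

In retrieval, the per-subband distances are summed across subbands and the database images with the smallest total distance are returned as the top matches.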

### Table 3: As in Table 2, but now for support vector machines. (Columns: Data set, Bagging, Bragging, Nice, Trimmed)

"... In PAGE 6: ... Characteristics of the data sets can be found in Table 1. To demonstrate the ability of trimmed bagging to improve the predictive performance of any base classifier, stable or unstable, we consider the following base classifiers: (a) decision trees (Table 2), as an example of an unstable classifier; (b) support vector machines (SVM) (Table 3), linear discriminant analysis (Table 4), and logistic regression (Table 5) as examples of stable classifiers. All these base classifiers are well known and routinely used.... In PAGE 7: ... The better performance of bagging becomes questionable when using stable classifiers. Results in Table 3 indeed confirm that bagging does not work with a support vector machine. In only one case does bagging provide a significant increase in the predictive performance of SVMs.... ..."
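The stable-classifier comparison in this snippet can be sketched by bagging an SVM and comparing it with the single classifier; trimmed bagging itself is not implemented here, and the data is synthetic:

```python
# Sketch of the comparison in the snippet: plain bagging of a stable base
# classifier (an SVM) typically changes accuracy little versus the single
# classifier. Synthetic data; trimmed bagging is not implemented here.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=1)

single = cross_val_score(SVC(), X, y, cv=5).mean()
bagged = cross_val_score(BaggingClassifier(SVC(), n_estimators=10,
                                           random_state=1), X, y, cv=5).mean()
print(f"single SVM: {single:.3f}  bagged SVM: {bagged:.3f}")
```

With an unstable base classifier such as a decision tree, the same comparison would typically show a clear gain from bagging, which is the contrast the paper draws between its Tables 2 and 3.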