### Table 1. Performance of SVM-based classifiers on handwritten digit recognition (columns: Author, Database, Training Size, Test Size, Error Rate).

2004

"... In PAGE 2: ... By reviewing the literature, we can find several variations of SVMs as well as results on several different databases. Table 1 summarizes some works found in the literature. Perhaps the most used benchmark to evaluate SVMs is MNIST, which is a modified version of the NIST database and was originally set up by the AT&T group [14].... In PAGE 2: ... symmetries of the problem (i.e., feature extraction) to reach better results. This explains the different results reported in Table 1 for MNIST. Liu et al. [16] show a comparative study on handwritten digit recognition using different classifiers and databases.... ..."

Cited by 2
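The entry above benchmarks SVM classifiers on digit databases such as MNIST. As a minimal illustrative sketch (not any cited author's setup), an RBF-kernel SVM can be trained on scikit-learn's small 8x8 digits dataset; the hyperparameters below are assumptions chosen for demonstration:

```python
# Hedged sketch: an RBF-SVM digit classifier in the spirit of the
# MNIST benchmarks summarized in Table 1. Uses scikit-learn's small
# built-in digits dataset (8x8 images), not MNIST itself.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# gamma and C are illustrative values, not tuned from any cited paper
clf = SVC(kernel="rbf", gamma=0.001, C=10.0)
clf.fit(X_train, y_train)
error_rate = 1.0 - clf.score(X_test, y_test)
print(f"test error rate: {error_rate:.3f}")
```

As the excerpt notes, the spread of reported error rates on MNIST comes largely from feature extraction choices rather than the SVM itself.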

### Table 1: The recognition rate of a handwritten word recognizer benefits from the inclusion of preprocessing operations such as smoothing, skew correction, and slant correction, as described in this paper using horizontal and vertical runs.

2000

"... In PAGE 1: ... It has particular advantages in skew-angle correction and character segmentation. Our experiments with a preliminary version of a word recognizer based on horizontal and vertical runs show that operations such as skew-angle correction can significantly improve recognition rates (Table 1). Some researchers [2, 4] have demonstrated recognition methods that do not use slant correction.... ..."

Cited by 1
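The cited paper estimates skew and slant from horizontal and vertical runs. A common alternative, sketched here purely for illustration (it is not the runs-based method of the paper), estimates slant from second-order image moments and undoes it with a shear:

```python
# Hedged sketch of moment-based slant correction for a binarized
# character/word image. This is a standard deskewing trick, not the
# horizontal/vertical-runs method of the cited paper.
import numpy as np

def deskew(img):
    """Shear the image so slanted strokes stand approximately upright."""
    ys, xs = np.nonzero(img)
    if xs.size == 0:
        return img
    y_c, x_c = ys.mean(), xs.mean()
    var_y = ((ys - y_c) ** 2).mean()
    cov_xy = ((xs - x_c) * (ys - y_c)).mean()
    # Slant coefficient: how much x drifts per unit of y
    alpha = cov_xy / var_y if var_y > 0 else 0.0
    # Apply the inverse shear x' = x - alpha * (y - y_c), nearest pixel
    out = np.zeros_like(img)
    for y, x in zip(ys, xs):
        x_new = int(round(x - alpha * (y - y_c)))
        if 0 <= x_new < img.shape[1]:
            out[y, x_new] = img[y, x]
    return out

# Tiny demo: a 45-degree diagonal stroke becomes a vertical one
slanted = np.zeros((5, 5), dtype=np.uint8)
for y in range(5):
    slanted[y, y] = 1
upright = deskew(slanted)
```

After the shear, the diagonal stroke collapses onto a single column, which is the effect slant correction aims for before segmentation.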

### Table 1. Digit recognition accuracy (%) of the proposed system and the missing data recognizer using the initial mask

"... In PAGE 3: ... The top-down processing stage will be used to identify reliable regions above 1 kHz. Table 1 summarizes the performance of the proposed system when using the ideal binary mask below 1 kHz ("Integrated Mask"). Performance is measured in terms of word-level recognition accuracy at various SNRs.... ..."

### Table 2 Hierarchical Regression Results

2002

"... In PAGE 8: ...1. Hierarchical Regression Results We report the results of the hierarchical regressions of the diffusion parameters in Column I of Table 2. The estimated effects of the various explanatory variables on the three BDM parameters are mostly consistent with the expected effects hypothesized in the previous section.... In PAGE 9: ... and 0.2%, respectively. The finding is of potential importance in assessing the attractiveness of emerging markets such as China, where such macroenvironmental characteristics are undergoing rapid change. As noted in Table 2, our study is the first to present such empirical insights on penetration potential in the marketing literature. With respect to the estimated effects of the various variables on the coefficients of external and internal influence (which determine the speed of diffusion), we find a strong negative result for illiteracy level, as expected.... In PAGE 11: ... Finally, since consumer and business products may have different diffusion patterns, we estimated the model by restricting the analysis to only consumer products (VCR players, camcorders, microwaves, and CD players) and dropping cellular phones and fax machines, which are used by both consumers and businesses. The estimates are reported in Column II of Table 2. There were hardly any differences in the estimates of penetration level.... ..."

Cited by 5
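The "BDM parameters" in this entry are those of the Bass diffusion model: a penetration ceiling m, an external-influence coefficient p, and an internal-influence coefficient q. A minimal discrete-time sketch of the model (with purely illustrative parameter values, not the paper's estimates) shows how these three numbers determine the adoption curve:

```python
# Hedged sketch of the Bass diffusion model (BDM) whose parameters
# the hierarchical regressions in Table 2 explain. Values of p, q, m
# below are illustrative, not taken from the cited study.
def bass_adoptions(p, q, m, periods):
    """Discrete-time Bass model: new adopters in each period."""
    cumulative = 0.0
    new = []
    for _ in range(periods):
        # External influence p plus internal (imitation) influence q,
        # applied to the remaining untapped market (m - cumulative)
        n_t = (p + q * cumulative / m) * (m - cumulative)
        new.append(n_t)
        cumulative += n_t
    return new

adoptions = bass_adoptions(p=0.03, q=0.38, m=1000.0, periods=20)
peak_period = max(range(20), key=lambda t: adoptions[t])
```

A larger q relative to p shifts the adoption peak later and makes it sharper, which is why the excerpt's finding on illiteracy matters for the speed of diffusion.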

### TABLE V Comparison of various approaches for the handwritten digit recognition problem. The three criteria of comparison are the memory requirement measured as the number of free parameters, learning speed measured as the number of learning epochs, and generalization accuracy measured as success percentage on the writer-dependent (WD) and writer-independent (WI) test sets unseen during training or cross-validation. Values are averages and standard deviations of 10 independent runs. SP is the simple perceptron that constitutes the base for comparison. MLP (i) is the multi-layer perceptron with i hidden units. VOTE-P (n) is voting over n perceptrons trained on partitions and VOTE-B (n) is similar except that bootstrapping is used. MOE-O (n) is the cooperative mixture of experts with n experts and MOE-M (n) is the competitive version.

### Table 4. Handwritten Digit Recognition Test Results Database Samples for HMM

2000

Cited by 2

### TABLE I TOP 20 MATCHES FOR TWO HANDWRITTEN DIGITS.

### Table 1: Results of the classification of handwritten digits for several sizes and offsets of the receptive fields

"... In PAGE 5: ... It turned out that the computational cost to calculate the tanh values can be neglected. The corresponding results are shown in Table 1 for several sizes and offsets of the receptive fields. Table 1 shows that the computational cost of the preprocessing procedure can be reduced by about a factor of 4 or 5 compared to the global PCA approach, at a comparable computational cost of the classification itself, without an increase of the recognition error.... ..."
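The excerpt contrasts global PCA over the whole image with PCA applied per receptive field. A small sketch of the local variant follows; the block size, offset, component count, and toy data are all illustrative assumptions, not the paper's settings:

```python
# Hedged sketch of local-PCA preprocessing: project each receptive
# field (image block) onto a few principal components, instead of
# running one global PCA over the full image vector.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((200, 16, 16))  # toy "digit" images

def local_pca_features(imgs, size=8, offset=8, n_comp=4):
    feats = []
    h, w = imgs.shape[1:]
    for y in range(0, h - size + 1, offset):
        for x in range(0, w - size + 1, offset):
            block = imgs[:, y:y + size, x:x + size].reshape(len(imgs), -1)
            block = block - block.mean(axis=0)
            # PCA of this receptive field via SVD of the centered data
            _, _, vt = np.linalg.svd(block, full_matrices=False)
            feats.append(block @ vt[:n_comp].T)
    return np.concatenate(feats, axis=1)

features = local_pca_features(images)
# Here: 4 non-overlapping 8x8 blocks x 4 components = 16 features
```

The cost saving the excerpt describes comes from each SVD operating on a small block dimension (here 64) instead of the full image dimension.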

### Table 1: Misclassification error percentage (top) and standard deviation (bottom) for the best convex combination on different handwritten digit recognition tasks, using different distance metrics/transformations. See text for description.

2005

"... In PAGE 7: ... For convex minimization, as the starting kernel in the algorithm in Figure 1 we always used the average of the n kernels, and as the maximum number of iterations T = 100. Table 1 shows the results obtained using 3 graph construction methods. The first method is Euclidean, where the distance between two images is the Euclidean distance.... ..."

Cited by 7

### Table 1: Misclassification error percentage (top) and standard deviation (bottom) for the best convex combination of kernels on different handwritten digit recognition tasks, using different distances. See text for description.

2005

"... In PAGE 6: ... this submatrix. The regularization parameter was set to 10⁻⁵ in all algorithms. For convex minimization, as the starting kernel in the algorithm in Figure 1 we always used the average of the n kernels and as the maximum number of iterations T = 100. Table 1 shows the results obtained using three distances as combined with k-NN (k ∈ {1, ..., 10}). The first distance is the Euclidean distance between images.... ..."

Cited by 7
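Both entries above start their convex-combination search from the uniform average of the candidate kernels. A short sketch of that starting point follows; the toy Gaussian kernels and bandwidths are assumptions for illustration, not the distance-based kernels of the cited papers:

```python
# Hedged sketch: the uniform-average starting point for a convex
# combination of n kernel matrices, as described in the excerpts.
# The base kernels here are toy Gaussian kernels at three widths.
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((30, 5))  # toy data: 30 points in 5 dimensions

def gaussian_kernel(X, sigma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

kernels = [gaussian_kernel(X, s) for s in (0.5, 1.0, 2.0)]
# Convex weights: nonnegative and summing to 1; uniform to start
weights = np.full(len(kernels), 1.0 / len(kernels))
K = sum(w * Km for w, Km in zip(weights, kernels))
```

Because each base kernel is symmetric positive semidefinite and the weights are convex, the combined matrix K is itself a valid kernel at every step of such an optimization.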