### Table 10 reports the results for the Linear, Quadratic and Logistic models using the same

1998

"... In PAGE 30: ... Finally, the sales-to-total-assets ratio shows that companies with FFS operate less efficiently, since they generate fewer sales for the same total assets. We offer the AutoNet model from Table 10 as our best preliminary model for detection of FFS. Its ability to predict 63 percent of the holdout sample is statistically significant at < 1%.... ..."

Cited by 6

### Table 1: Comparison of algorithms with linear and quadratic approximations.

"... In PAGE 10: ... Each optimization used the same initial point. This led to the results shown in Table 1. Clearly, very similar optimization results are obtained for these two designs in terms of the coding gain.... ..."

### Table 3 Comparisons of using linear and quadratic objective functions

"... In PAGE 11: ... Table 3 compares the results of state assignment using different objective functions in the placement phase. The approach using the linear objective function costs 6% more cubes than the one using the quadratic objective function, which confirms our observation in Section 4.... ..."

### Table 1. Average results over 25 runs of SVM with linear and quadratic kernel, on ovarian and prostate data

2004

"... In PAGE 4: ... In our experiments we use soft-margin SVM classifiers with C=100, and two types of kernel functions, linear and quadratic. Table 1 contains the results obtained by applying an SVM classifier with linear and quadratic kernel to the ovarian and prostate cancer datasets. Experiments... ..."

Cited by 4
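The setup described in this excerpt (soft-margin SVMs, C = 100, linear vs. quadratic kernels) can be sketched as follows. The data below is synthetic and high-dimensional (many features, few samples, loosely mimicking proteomic profiles); it is not the ovarian or prostate dataset from the paper, and the cross-validation protocol is an assumption.

```python
# Hedged sketch: soft-margin SVMs with C = 100 and linear vs. quadratic
# kernels, as described in the excerpt. The dataset is synthetic, not the
# ovarian/prostate cancer data used in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# High-dimensional, small-sample data (stand-in for proteomic profiles)
X, y = make_classification(n_samples=100, n_features=200,
                           n_informative=10, random_state=0)

results = {}
for name, kernel_args in [
    ("linear", {"kernel": "linear"}),
    ("quadratic", {"kernel": "poly", "degree": 2}),
]:
    clf = SVC(C=100, **kernel_args)
    # Mean accuracy over 5 cross-validation folds
    results[name] = cross_val_score(clf, X, y, cv=5).mean()

print(results)
```

A quadratic kernel here is the degree-2 polynomial kernel; on very high-dimensional data the linear kernel is often competitive, which is consistent with the kind of comparison the table reports.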

### Table 1. Recognition accuracies of the system on synthetic and scanned data. The SVM results are shown for linear and quadratic kernels.

2002

"... In PAGE 3: ... We consider an average of 20000 samples for the experiment. Results can be seen in Table 1. As can be seen from the top section of Table 1, for the homogeneous data, recognition is very good for all combinations.... In PAGE 3: ... Results can be seen in Table 1. As can be seen from the top section of Table 1, for the homogeneous data, recognition is very good for all combinations. Rare misclassifications were present due to the distortions created by the skew correction.... In PAGE 3: ... Rare misclassifications were present due to the distortions created by the skew correction. KNN-based classification results were computed for many values of K; K = 5 is shown in Table 1. The SVM-based experiment was carried out for linear (SVML) and quadratic (SVMQ) kernels.... In PAGE 4: ... This resulted in an average of 30000 labelled samples. Results are shown in Table 1. SVM-based classifiers are found to perform better than KNN.... In PAGE 4: ... Some of the degraded Telugu characters are shown in Figure 2. The results of degradation are given in Table 1. It can be observed that the degradation is more serious for Telugu than Hindi.... ..."
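The KNN (K = 5) versus SVML/SVMQ comparison in this excerpt can be illustrated with a small sketch. The data here is scikit-learn's built-in digits set, standing in for the paper's scanned Hindi/Telugu character images; the classifier settings beyond K = 5 and the two kernels are assumptions.

```python
# Hedged sketch: K-nearest-neighbour (K = 5, as in the excerpt) versus SVMs
# with linear (SVML) and quadratic (SVMQ) kernels. Uses scikit-learn's digits
# dataset, not the character images from the paper.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

accuracy = {}
for name, clf in [
    ("KNN (K=5)", KNeighborsClassifier(n_neighbors=5)),
    ("SVML", SVC(kernel="linear")),
    ("SVMQ", SVC(kernel="poly", degree=2)),
]:
    # Fit on the training split, report held-out accuracy
    accuracy[name] = clf.fit(X_tr, y_tr).score(X_te, y_te)

print(accuracy)
```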

### Table 7.1: Average results over 25 runs of SVM with linear and quadratic kernel, on ovarian and prostate data.

### Table 1. Number of Iterations to Decrease Residual by 10⁻⁶, 1D Daubechies AFIF Linear Quadratic J

1997

"... In PAGE 18: ...The results of the numerical study are summarized in Table 1 and depicted graphically in Figure 4. A first interesting conclusion from the results of Table 1 is that the multilevel preconditioner for both the AFIF and Daubechies scaling functions yields nearly identical results.... In PAGE 18: ...The results of the numerical study are summarized in Table 1 and depicted graphically in Figure 4. A first interesting conclusion from the results of Table 1 is that the multilevel preconditioner for both the AFIF and Daubechies scaling functions yields nearly identical results. Both methods require just under 30 iterations to reduce the residual to a value of 10⁻⁶ of its starting value.... ..."

Cited by 2
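The stopping criterion used throughout this excerpt, iterating until the residual drops to 10⁻⁶ of its starting value, can be sketched with a plain conjugate-gradient loop. The SPD model matrix below (a 1D Laplacian) is an assumption for illustration; the paper's wavelet preconditioners are not reproduced.

```python
# Hedged sketch: counting iterations until the residual is reduced by the
# factor 10^-6 used in the excerpt. Plain (unpreconditioned) CG on a small
# SPD system; not the preconditioned solvers from the paper.
import numpy as np

def cg_iteration_count(A, b, tol=1e-6, max_iter=1000):
    """Iterations for CG to reach ||r_k|| <= tol * ||r_0||."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    r0 = np.linalg.norm(r)
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * r0:
            return k
        # Standard Fletcher-Reeves style update of the search direction
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return max_iter

# 1D Laplacian: SPD, tridiagonal model problem
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
iters = cg_iteration_count(A, b)
print(iters)
```

Swapping in a good preconditioner is exactly what changes the iteration counts the table compares.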

### Table 4: Iteration count and CPU time for the deflation scheme (constant, linear, quadratic)

1997

"... In PAGE 23: ... All CPU times (in seconds) are for an SGI Onyx with sufficient RAM to ensure that disk swapping is not required. Table 4 shows the number of iterations and CPU time required to reduce the pressure residual for the first time step by five orders of magnitude for the deflation scheme using piecewise (discontinuous) constant, bilinear, and biquadratic coarse grid spaces. Our parallel production code [16] has been based upon the piecewise constant prolongation operator.... In PAGE 23: ... The higher-order coarse grid spaces were studied in [18] in an attempt to improve this scheme. Although the results of Table 4 show a two-fold reduction in CPU time for the first time step, these extensions yielded only a thirty percent reduction in subsequent steps, and would be even less effective in ℝ³ due to the rapid increase in the dimension of the coarse grid problem. These two considerations motivated the present study.... ..."

Cited by 24
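The core of a deflation scheme with a piecewise-constant coarse space, the variant this excerpt says the parallel production code uses, can be sketched in a few lines. The SPD model matrix and block sizes below are assumptions for illustration; the paper's pressure systems are not reproduced.

```python
# Hedged sketch: deflation with a piecewise-constant prolongation operator,
# in the spirit of the scheme described in the excerpt. Each coarse degree
# of freedom is constant over a block of fine-grid unknowns.
import numpy as np

n, nc = 60, 6                          # fine / coarse dimensions
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD model matrix

# Prolongation Z (n x nc): column j is 1 on block j, 0 elsewhere
Z = np.kron(np.eye(nc), np.ones((n // nc, 1)))

E = Z.T @ A @ Z                        # small (nc x nc) coarse-grid operator
# Deflation projector: P = I - A Z E^{-1} Z^T
P = np.eye(n) - A @ Z @ np.linalg.solve(E, Z.T)

# P annihilates the coarse subspace in A's range (P @ A @ Z == 0), so a
# Krylov solver applied to the deflated operator only handles the remainder.
print(E.shape)
```

The excerpt's observation that higher-order (bilinear/biquadratic) coarse spaces help less in 3D corresponds to `nc`, and hence the cost of forming and solving with `E`, growing rapidly with dimension.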