### Table 1. Features and opportunities of an ELFE accelerator.

"... In PAGE 2: ... In Table 1 I list the main features of the ELFE accelerator, and the opportunities that they provide. Compared to existing electron and muon beams, the advantages of ELFE are in luminosity (compared to the muon beams at CERN and Fermilab), in duty factor (compared to SLAC) and in energy (compared to TJNAF).... ..."

### Table 6. Average Accuracies for SVM classifier with different kernels and different features (columns: Features; SVM Kernel Type; % Overall; Male; Female)

"... In PAGE 13: ... More specifically, the basic idea behind Fisher Linear Discriminant analysis is to find the best set of vectors that minimize the intra-cluster variability while maximizing the inter-cluster distances. The simulation results for FLD are shown in Table 6. Here, we also show the results using PCA and direct image features for comparison purposes.... In PAGE 13: ... These 5 coefficients are used for classification. The results of our study and the overall performance are shown in Table 6. We notice a slight improvement in performance compared with directly using the image pixel intensities as features.... ..."
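The FLD idea described in the excerpt (minimize within-class scatter while maximizing between-class separation) can be sketched for the two-class case as follows. This is the generic textbook formulation, not the paper's exact pipeline (its 5-coefficient setup is not reproduced), and all names and the synthetic data are illustrative:

```python
import numpy as np

# Minimal two-class Fisher Linear Discriminant sketch (illustrative only).
def fld_fit(X0, X1):
    """Return the FLD direction w and a midpoint decision threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of per-class scatter matrices.
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    # w maximizes the between-class/within-class variance ratio.
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, threshold

def fld_predict(X, w, threshold):
    return (X @ w > threshold).astype(int)

rng = np.random.default_rng(0)
X0 = rng.normal([0, 0], 0.5, size=(100, 2))   # class 0 samples
X1 = rng.normal([3, 3], 0.5, size=(100, 2))   # class 1 samples
w, t = fld_fit(X0, X1)
labels = np.array([0] * 100 + [1] * 100)
acc = (fld_predict(np.vstack([X0, X1]), w, t) == labels).mean()
```

On well-separated synthetic clusters like these, the projected classes barely overlap and the accuracy is near 1.0.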

### Table 1 The comparison of correct recognition rates (%) of Kernel Eigenface, Kernel Fisherface and the proposed CKFD over 10 tests

"... In PAGE 5: ... The experimental results are shown in Table 1. From Table 1, we can see that CKFD is superior to Kernel Eigenface and Kernel Fisherface when its two categories of discriminant features (199 features of Category I and 15 features of Category II) are combined. The performance of Kernel Fisherface is similar to that of CKFD using only the features of Category I.... ..."
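Kernel Eigenface rests on kernel PCA. As a rough sketch of that shared ingredient (not the CKFD method itself, whose two feature categories and fusion rule are specific to the paper), kernel principal-component features can be extracted like this; the RBF kernel and its `gamma` are illustrative choices:

```python
import numpy as np

# Kernel-PCA feature extraction sketch (the core step shared by
# Kernel Eigenface-style methods); names here are illustrative.
def rbf_kernel(X, Y, gamma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca_features(X, n_components, gamma=0.5):
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    # Double-center the kernel matrix (centering in feature space).
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)            # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    # Normalize eigenvectors so projections have unit-norm feature axes.
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                          # projected training features

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
F = kernel_pca_features(X, n_components=3)
```

Each feature column carries variance proportional to its eigenvalue, so the columns come out in decreasing-variance order.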

### Table 2: Features that influence hospital mortality according to CART and stepwise logistic regression analysis

1996

Cited by 2

### Table 2. Comparison of acceleration techniques with our 2 analysis

2006

Cited by 4

### Table 1 Acceleration of (−1/10 + 10i; 1; 95/100) with the J transformation

1998

"... In PAGE 13: ... It is seen that the 2J transformation achieves the best results. The attainable accuracy for this transformation is limited to about 9 decimal digits by the fact that the stability index displayed in the column Dn of Table 1 grows relatively fast. Note that for n = 46, the number of digits (as given by the negative decadic logarithm of the relative error) and the decadic logarithm of the stability index sum up to approximately 32, i.... In PAGE 14: ... Note that this value was chosen to display basic features relevant to the stability analysis, and is not necessarily the optimal value. As in Table 1, the relative errors and the stability indices... ..."

Cited by 2
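The pJ, 2J and d(1) transformations compared in these tables are not reproduced here. As a generic illustration of what a nonlinear sequence transformation does to a linearly convergent sequence, here is the classical Aitken Δ² process, a textbook stand-in rather than the paper's J transformation:

```python
import numpy as np

# Aitken's Delta^2 process: a classical nonlinear sequence transformation
# for a linearly convergent sequence s_n -> s with ratio q.
# Illustrative stand-in, NOT the paper's J transformation.
def aitken(s):
    s = np.asarray(s, dtype=float)
    num = (s[2:] - s[1:-1]) ** 2
    # Difference-based denominator: the cancellation here is where rounding
    # errors get amplified, the effect a stability index quantifies.
    den = s[2:] - 2 * s[1:-1] + s[:-2]
    return s[2:] - num / den

q, s_limit = 0.95, 1.0
n = np.arange(40)
s = s_limit + q ** n                 # s_n converges linearly with ratio q
t = aitken(s)
err_plain = abs(s[-1] - s_limit)     # slow plain convergence (~0.13 here)
err_acc = abs(t[-1] - s_limit)       # transformed sequence: near roundoff
```

For an exactly geometric error term the transformation recovers the limit to roundoff in one step, which is why the remaining error is governed by the rounding amplification in the denominator rather than by convergence speed.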

### Table 2 Acceleration of (−1/10 + 10i; 1; 95/100) with the J transformation (β = 10)

1998

"... In PAGE 14: ... It is easily seen that the new sequence also converges linearly with λ = lim_{n→∞} (s_{β(n+1)} − s)/(s_{βn} − s) = q^β. For β > 1, both the effectiveness and the stability of the various transformations are increased, as shown in Table 2 for the case β = 10. Note that this value was chosen to display basic features relevant to the stability analysis, and is not necessarily the optimal value.... In PAGE 15: ... We remark that for β ≠ 1 this is not identical to the u variant of the Levin transformation as applied to the partial sums {s_0, s_β, s_{2β}, ...}, because in the case of the u variant one would have to use the remainder estimates ω_n = (βn + β_0)(s_{βn} − s_{β(n−1)}). It is seen from Table 2 that again the best accuracy is obtained for the 2J transformation. The d(1) transformation is worse, but better than the pJ transformations for p = 1 and p = 3.... ..."

Cited by 2
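The subsampling claim in the excerpt — that taking every β-th term of a linearly convergent sequence with ratio q yields a sequence converging linearly with ratio q to the power of the stride — is easy to check numerically. `beta` below is my stand-in name for the stride, since the excerpt's original symbol was lost in extraction:

```python
import numpy as np

# Numerical check: subsampling a linearly convergent sequence (ratio q)
# at every beta-th index gives a sequence with convergence ratio q**beta.
q, s_limit, beta = 0.95, 2.0, 10
n = np.arange(101)
s = s_limit + 3.0 * q ** n          # s_n - s = 3 q^n, so the ratio is q
sub = s[::beta]                     # the subsampled sums s_0, s_beta, ...
# Consecutive error ratios of the subsampled sequence.
ratios = (sub[1:] - s_limit) / (sub[:-1] - s_limit)
```

Every entry of `ratios` equals q**beta up to rounding, confirming the stated limit.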

### Table 1. Flow of the kernel MaxEnt procedure. There are two possible outputs: the input space kernel matrix Ky, and the kernel space data set y.

"... In PAGE 3: ... In terms of the eigenvectors of the kernel feature space correlation matrix, we project Φ(x_i) onto a subspace spanned by different eigenvectors, which is possibly not the most variance-preserving (remember that variance in the kernel feature space data set is given by the sum of the largest eigenvalues). The kernel MaxEnt procedure, as described above, is summarized in Table 1. It is important to realize that kernel MaxEnt outputs two quantities, which may be used for further data analysis.... ..."
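The excerpt describes projecting the implicit feature-space points onto a chosen subset of kernel-matrix eigenvectors, which need not be the top-variance ones. The sketch below mimics only that projection step, assuming a centered-kernel eigendecomposition as in kernel PCA; the function and variable names are mine, not the paper's, and the full kernel MaxEnt procedure is not reproduced:

```python
import numpy as np

# Project feature-space points onto an arbitrary subset of eigenvectors of
# the centered kernel matrix (not necessarily the most variance-preserving
# subset), returning both the projected data and its induced kernel matrix.
# Names are illustrative; this is not the paper's full procedure.
def project_on_eigensubset(K, indices):
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # center the kernel in feature space
    vals, vecs = np.linalg.eigh(Kc)      # ascending eigenvalue order
    order = np.argsort(vals)[::-1]       # rank eigenvectors by variance
    pick = order[indices]                # chosen (not necessarily top) subset
    alphas = vecs[:, pick] / np.sqrt(np.maximum(vals[pick], 1e-12))
    Y = Kc @ alphas                      # projected data set
    return Y, Y @ Y.T                    # data and its induced kernel matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 4))
K = X @ X.T                              # linear kernel, for the sketch only
Y, Ky = project_on_eigensubset(K, [1, 2])   # deliberately skip component 0
```

Skipping component 0 illustrates the excerpt's point: the retained subspace can exclude the largest-variance direction.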