### Table 1 Determinants and discriminants. (Columns: K ⊂ L unramified; K ⊂ L ramified non-dyadic.)

"... In PAGE 52: ...roof. It is clear that isometric hermitian lattices have equal determinants. Conversely, suppose d(h0) = d(h0') and assume that d(h0) ≠ if K ⊂ L is a ramified non-dyadic extension. By Table 1, M and M' have orthogonal bases. The orthogonal decomposition of M in (6.... In PAGE 52: ...14. Suppose that K ⊂ L is a ramified non-dyadic extension and d(h0) = . By Table 1, (M, h0) is isometric to (1) ⊥ ( ) or H(1), where ∈ Nr_{L/K}(S×). These integral hermitian lattices have equal determinants but they are not isometric, since their Jordan splittings are of different types. Since (1) ⊥ ( ) = (1) ⊥ ( ) by Proposition 5.... In PAGE 53: ...roof. In Proposition 6.2 we showed that if (M, h0) = (M', h0'), then (M, h) and (M', h) are isomorphic S-orders. First note that since h0 and h0' belong to the same class of hermitian forms on V, we have that d(h0) = d(h0') in K×/Nr_{L/K}(L×). In other words, j = l (l) ′ k for some l ∈ L×. From this it follows that d(h0) = d(h0') in R/Nr_{L/K}(S×) if and only if j = k. Let K ⊂ L be an unramified or a ramified non-dyadic extension, in which case we assume that (M, h) is not a maximal S-order in M_2(L). Suppose that (M, h) and (M', h) are isomorphic S-orders, so d( (M, h)) = d( (M', h)). Since d( (M, h)) = d( (M', h)) ≠ S in the ramified non-dyadic case, it follows from Table 1 that d(h0) = d(h0') for some ∈ R×. So j = k and hence d(h0) = d(h0'). By Proposition 6.8, (M, h0) = (M', h0') if d(h0) ≠ , and (M, h0) = (M', h0') = (1) ⊥ ( ) if d(h0) = . Assume that K ⊂ L is a ramified non-dyadic extension and suppose that (M, h) is a maximal S-order in M_2(L). Thus d( (M, h)) = S and, according to Table 1, M is isometric to (1) ⊥ ( ) or H(1). These lattices are not isometric, since their Jordan splittings are of different types. From Table 1, it follows that their corresponding S-orders are maximal. 6.... ..."

### Table 4 Mean test error rates and their standard deviations (AB Reg: Regularized AdaBoost, KFD: kernel Fisher discriminant). The best method is written in bold face and the second best is emphasized. (Columns: dataset, SKSP, KSP, AB Reg, SVM, KFD.)

"... In PAGE 6: ... We use all samples in the other class for i since the learning set is not large. The mean test error rates and their standard deviations are reported in Table 4. The results other than those for KSP and SKSP are quoted from the papers cited above.... ..."

### Table 1: The results of the experiments described in Section 3.1. N is the size of the training set, d the dimension, #SV the number of support vectors for the SVM, and #k.ev. the number of kernel evaluations required by a boosted hypercuts classifier. Means and standard deviations over 30 trials are reported for each data set. WBC, WPBC, WDBC are the Wisconsin Breast Cancer, Prognosis and Diagnosis data sets, respectively.

"... In PAGE 6: ... For each data set the above experiment was repeated 30 times. The columns of Table 1, left to right, show the following, with means and standard deviations... In PAGE 7: ... 3.2 Discussion The most important conclusion from these empirical results is that for all data sets, the RBF boosted dyadic hypercuts achieve test performance statistically equivalent to that of SVMs, and usually better than that of k-NN classifiers, while the complexity of the trained classifier is typically lower (in some cases, which appear in bold in Table 1, the difference in complexity is significant). In addition, our experiments demonstrate the trade-off between the complexity and accuracy of the hypercuts.... ..."
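The repeated-split protocol behind these means and standard deviations can be sketched as follows. This is a generic illustration, not code from the cited paper; `repeated_split_error` and the `fit_predict` callback are assumed names standing in for any of the compared classifiers:

```python
import numpy as np

def repeated_split_error(X, y, fit_predict, n_trials=30, test_frac=0.3, seed=0):
    """Mean and standard deviation of test error over repeated random splits.

    fit_predict(X_train, y_train, X_test) -> predicted labels for X_test.
    """
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_trials):
        idx = rng.permutation(len(y))          # fresh random split each trial
        n_test = int(test_frac * len(y))
        test, train = idx[:n_test], idx[n_test:]
        pred = fit_predict(X[train], y[train], X[test])
        errs.append(np.mean(pred != y[test]))  # test error rate for this trial
    return float(np.mean(errs)), float(np.std(errs))
```

Any classifier (SVM, k-NN, boosted hypercuts) can be plugged in via `fit_predict`, so all methods are scored on identical splits.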

### Table 1: Feature Selection by AdaBoost

"... In PAGE 4: ... Therefore, AdaBoost can be considered a feature selection algorithm. The process of AdaBoost is briefly described in Table 1, in which the T Gabor features corresponding to the T weak classifiers are selected as the most discriminating features. 2.... ..."
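The selection scheme the snippet describes (each boosting round picks one weak classifier, hence one feature) can be sketched with single-feature decision stumps. This is a generic illustration, not the cited paper's code; `adaboost_select_features` and its interface are assumptions:

```python
import numpy as np

def adaboost_select_features(X, y, T):
    """Select T features by boosting one-feature decision stumps.

    X: (n_samples, n_features) real-valued responses (e.g. Gabor features)
    y: labels in {-1, +1}
    Returns the feature indices chosen by the T weak classifiers.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                   # sample weights, initially uniform
    selected = []
    for _ in range(T):
        best = None                           # (error, feature, threshold, polarity)
        for j in range(d):                    # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()  # weighted training error
                    if best is None or err < best[0]:
                        best = (err, j, thr, pol)
        err, j, thr, pol = best
        err = max(err, 1e-12)                 # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)        # up-weight misclassified samples
        w /= w.sum()
        selected.append(j)                    # the round's feature is "selected"
    return selected
```

With Gabor responses as the columns of `X`, the returned indices are the T most discriminating features in the sense of this greedy boosting criterion.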

### Table 7.1: Comparison of percentage test error of AdaBoost (AB), Regularized AdaBoost (ABR), Support Vector Machines (SVM) and Transductive Linear Discrimination (TLD) on seven datasets.

1999


### Table 2. A well-known medical dataset used as a test dataset for binary discrimination with kernel models

"... In PAGE 15: ... 5.2 Discrete Density Estimation Table 2 presents a well-known medical binary data set described in Anderson et al. [26] and used throughout the statistical literature to test non-parametric statistical models [27, 28, 29].... In PAGE 15: ... The aim is then to build a model for the data showing which combinations of symptoms are most likely to indicate the presence of the disease in patients. For the data in Table 2, we obtained the Bernoulli mixture model given in Table 3. Note that we can read off the most weighty pattern in Table 1 from the mixture model in Table 3. More specifically, observations 20, 32 and 36 are representatives of the most predominant binary pattern in Table 2. Patients exhibiting these patterns of symptoms would therefore most likely be considered stricken with the disease.... ..."
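A Bernoulli mixture of the kind reported in Table 3 is typically fit by EM; the sketch below is a generic illustration of that standard procedure, not the authors' code, and `bernoulli_mixture_em` is an assumed name:

```python
import numpy as np

def bernoulli_mixture_em(X, k, n_iter=200, seed=0):
    """Fit a k-component Bernoulli mixture to binary data X by EM.

    Returns (weights, probs): mixing weights (k,) and per-component
    Bernoulli parameters (k, d).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.full(k, 1.0 / k)                     # mixing weights
    p = rng.uniform(0.25, 0.75, size=(k, d))    # Bernoulli parameters
    for _ in range(n_iter):
        # E-step: responsibilities from per-component log-likelihoods
        log_lik = (X @ np.log(p).T + (1 - X) @ np.log(1 - p).T + np.log(w))
        log_lik -= log_lik.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(log_lik)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted re-estimation of weights and parameters
        nk = r.sum(axis=0)
        w = nk / n
        p = np.clip((r.T @ X) / nk[:, None], 1e-6, 1 - 1e-6)
    return w, p
```

Rounding the heaviest component's parameter vector recovers the predominant binary symptom pattern, mirroring how the paper reads the dominant pattern of Table 2 off the mixture in Table 3.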

### Table 4.2: Phoneme recognition results comparing our kernel-based discriminative algorithm versus HMM.

2007