### Table 1: Effective VC dimension and constant K with confidence intervals

"... In PAGE 5: ... We have provided an experimental validation of this result and an extension of the formula in the case where q < 1. The regression model behaves well on our simulations and we provide a synthesis of experimental results in Table 1 and Figures 1, 2. Other examples: Our machinery has also been tested on other families (see [10] for further examples). ... ..."

### Table 1: The true and the measured values of VC-dimension in 4 control experiments

"... Figures 4a-d demonstrate how well the function Φ(l/h), where h is the estimate of the effective VC-dimension, describes the experimental data. 6.4 Smoothing experiments: In this section, we measure the effect of "smoothing" the input on the capacity. Contrary to the previous section, this effect was not predicted theoretically. As in the previous section, the learning machine ... ..."

1994

"... In PAGE 9: ... The resulting vector y was then fed to a linear classifier with n inputs. It is easy to see that the theoretical value of the effective VC-dimension is k, as this can be reduced to a linear classifier with k inputs. Table 1, which shows the estimated VC-dimension for k = 10, 20, 30, 40, indicates that there is a strong agreement between the estimated effective VC-dimension, and the theoretically ... ..."
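The reduction described in the snippet is easy to check numerically: after a rank-k linear preprocessing, a linear classifier on the n outputs can realize exactly the labelings a linear classifier on k inputs can. The sketch below is a hypothetical construction (not code from the paper) that verifies k random points are still shattered after the rank-k map, so the effective VC-dimension is at least k:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 10

# Rank-k preprocessing: y = A @ x has only k effective directions.
A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))

# k random points; their images Y = X @ A.T generically span a
# k-dimensional subspace, i.e. Y has full row rank k.
X = rng.standard_normal((k, n))
Y = X @ A.T

# Try all 2^k labelings: each is realized by some weight vector w,
# found here as the minimum-norm solution of Y @ w = signs.
shattered = True
for labeling in range(2 ** k):
    signs = np.array([1.0 if (labeling >> i) & 1 else -1.0
                      for i in range(k)])
    w, *_ = np.linalg.lstsq(Y, signs, rcond=None)
    if not np.array_equal(np.sign(Y @ w), signs):
        shattered = False
        break

print(shattered)   # True: k points shattered, effective VC-dim >= k
```

Since every function w . (A x) equals (A^T w) . x, the class is a k-dimensional space of linear functionals, which also gives the matching upper bound of k from the snippet.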

Cited by 29

### Table 1: Design structures for linear estimators

1969

"... In PAGE 15: ... .5 to produce class labels 0 and 1. Experimental results comparing the original uniform design structure (Vapnik et al., 1994) and the proposed optimized structure for measuring the VC-dimension of linear estimators are shown in Table 1 and Figure 2. Figure 2 shows the result of measuring the VC-dimension of a linear classifier for the 11-dimensional input vectors X = (x1, x2, ..., x11). ... In PAGE 15: ... Figure 2 compares ten independent estimation trials using the original uniform design (Figure 2(a)) with those using the optimized design structure (Figure 2(b)). The design structures are shown in Table 1, in which k is the number of repeated experiments at each sample size, and n/h is the ratio of the sample size and the VC-dimension. Experimental results in Figure 2 demonstrate two (interacting) factors affecting accurate estimation of the VC-dimension: 1. ... In PAGE 18: ... This explains why the VC-dimension estimated by the uniform design is smaller than that by the optimized design, which tends to use larger sample sizes. Finally, we point out that the optimized design structure shown in Table 1 can be used for measuring the VC-dimension of any linear estimator. 5 MEASURING THE VC-DIMENSION FOR PENALIZED ESTIMATORS This section describes experimental estimation of the VC-dimension for penalized linear estimators. ... In PAGE 21: ... Likewise, this optimized design structure can be used for measuring the VC-dimension of any penalized linear estimator. Note that the optimized design structure for penalized linear estimators is almost identical to that of linear estimators (see Table 1). We conjecture that the optimized design structure proposed in this paper may be used for other types of estimators as well. ... ..."
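A design structure here is a list of (n/h, k) rows: sample sizes n chosen as multiples of the VC-dimension h, each repeated k times. The sketch below runs one simplified trial per row in the spirit of the maximal-deviation experiment of Vapnik et al. (1994): label 2n random points at random, flip the labels of the second half for training, and record the gap between the two half-sample error rates. A linear least-squares fit stands in for the paper's empirical-risk minimizer, and the (n/h, k) rows are illustrative, not the paper's optimized design:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 11                     # input dimension, matching the snippet's example

def deviation(n):
    # One simplified trial: 2n random points with random +/-1 labels,
    # the second half's labels flipped for training.
    X = rng.standard_normal((2 * n, d))
    y = rng.choice([-1.0, 1.0], size=2 * n)
    y_train = y.copy()
    y_train[n:] *= -1.0
    w, *_ = np.linalg.lstsq(X, y_train, rcond=None)
    pred = np.sign(X @ w)
    nu1 = np.mean(pred[:n] != y[:n])   # error on half 1, original labels
    nu2 = np.mean(pred[n:] != y[n:])   # error on half 2, original labels
    return abs(nu2 - nu1)

# A uniform design structure: sample sizes as multiples of the known
# VC-dimension h = d, each repeated k times, like the table's (n/h, k) rows.
xis = []
for ratio, k in [(1, 20), (2, 20), (4, 20), (8, 20), (16, 20)]:
    xi = float(np.mean([deviation(ratio * d) for _ in range(k)]))
    xis.append(xi)
    print(f"n/h = {ratio:2d}  xi(n) = {xi:.3f}")
```

The averaged deviation xi(n) shrinks as n/h grows; fitting its decay against the theoretical curve is what yields the estimated effective VC-dimension, and the point of the optimized design is to place the (n/h, k) rows where that fit is most informative.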

Cited by 4

### Table 1: A survey of the results. If not otherwise stated, W, k, and n refer to the number of parameters, computation nodes, and input nodes, respectively. Upper bounds are valid for the pseudo dimension, lower bounds for the VC dimension.

2001

"... In PAGE 49: ... We have derived upper and lower bounds on these dimensions for neural networks in which multiplication occurs as a fundamental operation in the interaction of network elements. An overview of the results is given in Table 1, where we present the bounds mainly in asymptotic form, abstracting from most of the constant factors. The bounds are given in terms of the numbers of network parameters and computation nodes and, for classes, in terms of the restrictions that characterize the architectures in the respective class. ... ..."

Cited by 13

### TABLE 2.14: Performance of the classifiers with degree predicted by the VC-bound. Each row describes one two-class classifier separating one digit (stated in the first column) from the rest. The remaining columns contain: deg: the degree of the best polynomial as predicted by the described procedure; param.: the dimensionality of the high-dimensional space, which is also the VC-dimension for the set of all separating hyperplanes in that space; h_est: the VC-dimension estimate for the actual classifiers, which is much smaller than the number of free parameters of linear classifiers in that space; 1-7: the numbers of errors on the test set for polynomial classifiers of degrees 1 through 7. The table shows that the described procedure chooses polynomial degrees which are optimal or close to optimal.

1997

### Table 1: Results for the numbers of simple and star-shaped polygons.

1996

"... In PAGE 8: ... All these numbers are listed in Table 1. In our tests, all polygons which describe the same geometric figure were counted exactly once. ... ..."
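Counting each geometric figure exactly once requires collapsing the many vertex-cycle descriptions of one polygon (different start vertices, two orientations) to a single key. The snippet does not say how the paper does this; the sketch below shows one standard canonicalization, with hypothetical polygon data:

```python
def canonical(cycle):
    """Canonical representative of a polygon given as a vertex cycle:
    the lexicographically smallest rotation of the cycle or of its
    reversal, so every description of the same figure maps to one key."""
    best = None
    for seq in (cycle, cycle[::-1]):          # both orientations
        for i in range(len(seq)):             # all start vertices
            rot = tuple(seq[i:] + seq[:i])
            if best is None or rot < best:
                best = rot
    return best

# Example: three descriptions of the same square collapse to one key.
polys = [[(0, 0), (1, 0), (1, 1), (0, 1)],
         [(1, 0), (1, 1), (0, 1), (0, 0)],    # rotated start vertex
         [(0, 1), (1, 1), (1, 0), (0, 0)]]    # opposite orientation
distinct = {canonical(p) for p in polys}
print(len(distinct))   # 1
```

Deduplicating through a set of canonical keys makes the count independent of how the enumeration happens to emit each polygon.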

Cited by 2

### Table 2: Performance of the classifiers with degree predicted by the VC-bound. Each row describes one two-class classifier separating one digit (stated in the first column) from the rest. The remaining columns contain: degree: the degree of the best polynomial as predicted by the described procedure; parameters: the dimensionality of the high-dimensional space, which is also the maximum possible VC-dimension for linear classifiers in that space; h_estim: the VC-dimension estimate for the actual classifiers, which is much smaller than the number of free parameters of linear classifiers in that space; 1-7: the numbers of errors on the test set for polynomial classifiers of degrees 1 through 7. The table shows that the described procedure chooses polynomial degrees which are optimal or close to optimal.

1995

"... In PAGE 5: ... We can then compare this prediction with the actual polynomial degree which gives the best performance on the test set. The results are shown in Table 2; cf. ... ..."
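The prediction works by evaluating a VC risk bound for each candidate degree, using the measured h_estim rather than the raw parameter count, and picking the degree with the smallest bound. A minimal sketch with a Vapnik-style bound and entirely hypothetical per-degree error rates and VC-dimension estimates (the real values come from the paper's tables):

```python
import math

def vc_risk_bound(emp_err, h, n, eta=0.05):
    """Vapnik-style risk bound: empirical error plus a confidence term
    that grows with the VC-dimension h and shrinks with sample size n."""
    eps = (h * (math.log(2.0 * n / h) + 1.0) - math.log(eta / 4.0)) / n
    return emp_err + math.sqrt(eps)

n = 7000   # hypothetical training-set size
# degree -> (hypothetical empirical error, hypothetical estimated VC-dim):
# error keeps falling with degree, but capacity rises faster past degree 2.
stats = {1: (0.120, 150), 2: (0.035, 250), 3: (0.015, 330),
         4: (0.010, 450), 5: (0.008, 600), 6: (0.007, 800),
         7: (0.007, 1100)}

bounds = {deg: vc_risk_bound(e, h, n) for deg, (e, h) in stats.items()}
best = min(bounds, key=bounds.get)
print(best)   # the degree minimizing the bound
```

With these illustrative numbers the bound trades a small rise in the capacity term for a large drop in empirical error up to degree 2, then stops paying off, which is the qualitative behavior the table reports.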

Cited by 158