Results 1 - 10 of 5,893
Table 8. Summary of the results, expressed in terms of generalization accuracy.
Table 1: Various SDM designs specified from the generalized design
Table 1. Some results with generalized bisection and a variant of Algorithm 5.2
Table 3: Overview of average generalization accuracies obtained with ib1-gr, fambl-gr and rise on increasing portions of the gp, gs, and ms data sets. Generalization accuracy denotes the percentage of correctly classified test instances. '|' means that the experiment could not be performed.
1999
"... In PAGE 6: ... rise was not applied to the full data sets due to the computational limitations mentioned earlier. The generalization accuracies yielded by the three learning algorithms on the three tasks are listed in Table3... ..."
Cited by 5
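The caption above defines generalization accuracy as the percentage of correctly classified test instances. As a minimal sketch of that metric (my own illustration, not code from the paper; the function name and data are hypothetical):

```python
def generalization_accuracy(predicted, gold):
    """Percentage of test instances whose predicted class matches the gold label."""
    if len(predicted) != len(gold):
        raise ValueError("predicted and gold label sequences must have equal length")
    correct = sum(p == g for p, g in zip(predicted, gold))
    return 100.0 * correct / len(gold)

# Hypothetical example: 3 of 4 held-out instances correct -> 75.0
print(generalization_accuracy(["np", "vp", "np", "pp"], ["np", "vp", "pp", "pp"]))
```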
Table 4.4: Best accuracies for phone classification on the validation set using models generated from 3, 4 and 5 monophone iterations plus 3 generalized triphone iterations and the tree in Figure 4.5.
2003
Table 1. When I tested the tangent planes at these points with the algorithm of the previous section, I found a volume of 7.97690, which is smaller than 8.0, the volume for the Voronoi cell of D4. More extreme examples may exist. Thus D4 does not give the minimum solution to the generalized problem, though it may still be the best for the strict kissing sphere problem.
Tables 4.1, 4.2, and 4.3 show the performance of the multilevel scheme for the generalized Stokes problem on a two-dimensional domain with mesh sizes 64×64, 128×128, and 256×256. The number of unknowns for these problems is 12159, 48895, and 196095, respectively. The column titled p indicates the number of processors used; and the columns titled T(n, p), T_comp, and T_comm show the total time (in seconds), time spent on computation, and time spent in communication, respectively, to solve the problem. The speedup and efficiency on p processors with respect to 4 processors are shown in the columns S and E, respectively.
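The entry above reports speedup S and efficiency E "with respect to 4 processors". A minimal sketch of that normalization, assuming S(p) = T(n,4)/T(n,p) and E(p) = S(p)/(p/4) (the source does not spell the formulas out, so this reading is an assumption, and the timings below are invented):

```python
def relative_speedup_efficiency(timings, base_p=4):
    """Speedup S and efficiency E on p processors, relative to the base_p run.

    timings: dict mapping processor count p -> total solve time T(n, p) in seconds.
    Assumes S(p) = T(base_p) / T(p) and E(p) = S(p) / (p / base_p).
    """
    t_base = timings[base_p]
    return {p: (t_base / t, (t_base / t) / (p / base_p)) for p, t in timings.items()}

# Hypothetical timings for one mesh size (not from the thesis):
for p, (s, e) in relative_speedup_efficiency({4: 100.0, 8: 55.0, 16: 30.0}).items():
    print(f"p={p:2d}  S={s:.2f}  E={e:.2f}")
```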
Table 1: Errors and convergence rates for the x-component of the pressure gradient.
1997
Cited by 13
Table 2: Errors and convergence rates for the y-component of the pressure gradient.
1997
Cited by 13
Table 2: keeping back one example as an unseen case, we find once again that the generalization performance is typically worse than chance (see Figure 2). This tends to confirm the hypothesis that BP relies primarily on statistical information extracted from the example set.
Generalization and Statistical Neutrality
A statistically neutral mapping has no correlation between input values and output values. Thus, if the mapping has any input/output rule at all, it cannot be contingent upon states of individual input variables. It must be based on relational states between those variables. In other words, statistically neutral mappings have relational input/output rules. [1]
[1] We define a problem as relational if the output value depends upon relations between input variables, and not upon any individual input variable.
1995
"... In PAGE 5: ...- gt; yes computer consumes moisture - gt; no computer consumes silicon - gt; yes computer dislikes heat - gt; yes computer dislikes electricity - gt; no computer dislikes moisture - gt; yes computer dislikes silicon - gt; no We can establish the neutrality of this training set empirically by tabulat- ing the relevant conditional probabilities. ( Table2 shows the complete set of probabilities which have a rst or zeroth-order condition.) Note that, in con- trast to parity problems, the expected value of input component values are not at chance-level.... ..."
Cited by 4
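The excerpt above establishes statistical neutrality by tabulating conditional probabilities over the training set. A minimal sketch of that check (my own illustration; it uses 2-bit parity, the prototypical statistically neutral mapping mentioned in the excerpt, rather than the paper's "computer" training set):

```python
from itertools import product

def conditional_yes_rates(examples):
    """For each input position i and value v, P(out = 1 | x_i = v).

    A mapping is statistically neutral (at zeroth/first order) when every such
    conditional probability equals the overall base rate P(out = 1), i.e. no
    individual input variable carries information about the output.
    """
    n = len(examples[0][0])
    rates = {}
    for i in range(n):
        for v in {x[i] for x, _ in examples}:
            matching = [y for x, y in examples if x[i] == v]
            rates[(i, v)] = sum(matching) / len(matching)
    return rates

# 2-bit parity (XOR): output depends only on the relation between inputs.
xor = [((a, b), a ^ b) for a, b in product([0, 1], repeat=2)]
print(conditional_yes_rates(xor))  # every conditional rate is 0.5 = base rate
```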