### Table 14.4. A comparison of the power-tail asymptote for W^c_2(t) in the long-tail example to exact values obtained by numerical transform inversion.

1997

Cited by 38
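The "exact values obtained by numerical transform inversion" in tables like this one are typically computed with a Bromwich-integral routine such as the Abate–Whitt EULER algorithm. Below is a minimal Python sketch of that algorithm; the function name and default parameters (A = 18.4, 15 plain terms, 11 Euler-averaged partial sums) follow the commonly published variant and are illustrative, not taken from the cited paper.

```python
import math

def euler_laplace_inversion(Fhat, t, A=18.4, ntr=15, M=11):
    """Numerically invert a Laplace transform Fhat(s) at time t > 0.

    Discretizes the Bromwich integral with the trapezoidal rule, which
    yields a nearly alternating series, then accelerates convergence by
    binomially averaging the last M+1 partial sums (Euler summation).
    """
    base = math.exp(A / 2.0) / (2.0 * t)
    # k = 0 term of the trapezoidal sum
    total = Fhat(complex(A / (2.0 * t), 0.0)).real
    partial_sums = []
    for k in range(1, ntr + M + 1):
        s = complex(A / (2.0 * t), k * math.pi / t)
        total += 2.0 * ((-1) ** k) * Fhat(s).real
        if k >= ntr:
            partial_sums.append(total)
    # Euler (binomial) averaging of partial sums s_ntr .. s_(ntr+M)
    avg = sum(math.comb(M, j) * partial_sums[j] for j in range(M + 1)) / 2.0 ** M
    return base * avg
```

As a sanity check, inverting Fhat(s) = 1/(s + 1) at t = 1 recovers e^(-1) to several digits.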

### Table 5. A comparison of the power-tail asymptote for W^c_2(t) in the long-tail example to exact values obtained by numerical transform inversion. Columns: time, exact, asymptote (14.8).

1950

Cited by 2

### Table 14.3. A comparison of the non-exponential asymptote in the boundary case of the second M/M, M/1 example, having ρ_1 = 4/9 and ρ_2 = 2/9, to exact values obtained by numerical transform inversion. Our third example has two classes with a common long-tail distribution having mean 1. For this example we use a Pareto mixture of exponentials (PME) distribution,

1997

Cited by 38

### Table 4 shows p-values for pairwise comparisons of the methods, obtained from paired t-tests. Note that the test quantity in comparing the predictive performance of the models in Tables 2 and 4 is RMS error, even though the long-tailed residual models allow large errors, which are costly under the RMS error function. This serves as posterior model checking: the RMSE is the relevant error measure in the application, and we want to be sure that the long tails of the model residuals have not led to undermodeling. In general, making the residual model more flexible shifts the posterior mass towards simpler (a priori more probable) models, since high likelihood can be obtained by matching the residual model to the realized residuals. Some conclusions from the results are listed in the following.

2001

"... In PAGE 14: ... The presented RMSE values show the standardized model residuals relative to the standard deviation of the data. See Table 4 for pairwise comparison of the models. Method Noise model ARD RMSE std 1.... In PAGE 15: ... Table 4: Pairwise comparisons of various MLP models in predicting the air-% variable. The values in the matrix are p-values, obtained from paired t-tests.... ..."

Cited by 12
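The comparison described above combines two standard computations: an RMSE per model and a paired t-test on the per-case errors of each model pair. A minimal stdlib sketch of both is below; the function names are illustrative, and in practice the p-value step is usually delegated to a library routine such as scipy.stats.ttest_rel.

```python
import math

def rmse(residuals):
    """Root-mean-square error of a list of residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

def paired_t_statistic(errors_a, errors_b):
    """t statistic for a paired t-test on per-case errors of two models.

    The pairing matters: both models are evaluated on the same cases,
    so we test whether the mean of the per-case differences is zero.
    The p-value comes from a t distribution with n - 1 degrees of
    freedom (scipy.stats.ttest_rel does both steps at once).
    """
    d = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

Running the test for every pair of methods yields exactly the kind of p-value matrix the snippet describes.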

### Table 2. Prediction IPC loss for deep pipelines

"... In PAGE 9: ... Table 2 shows the resulting IPC loss from Baseline scheduling to the two feedback-adjusted prediction schemes. The relative depth in Table 2 is the constant by which the pipeline depths in Table 1 were multiplied. Table 2.... In PAGE 9: ... This means that the predictors can be expected to perform about as well on deep pipelines as they do on shorter ones. However, the results in Table 2 assume that the scheduler remains unpipelined and capable of scheduling dependent instructions back-to-back, which is somewhat unrealistic. Table 3 shows the same data for processors where the wakeup logic (predicted or otherwise) is pipelined.... ..."

### Table 1 displays the performance extrema for this deterministic proof-by-contradiction strategy on the testbed as well as the mean values over all successful runs. The values in brackets indicate the deviation from the mean. Fig. 2 shows the underlying distribution of the run time for these experiments. In fact, the distribution exhibits heavy-tailed behavior [2], which is manifested in the long tail of the distribution stretching over several orders of magnitude.

2001

"... In PAGE 4: ... Table 1. Statistics for successful runs (108 out of 160) on testbed using deterministic strategy.... ..."

Cited by 1
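Heavy-tailed run-time distributions of the kind described above are commonly summarized by a tail index α, with small α (roughly α < 2) indicating tails stretching over orders of magnitude. One standard tool for estimating it is the Hill estimator, sketched below; this is a generic illustration, not necessarily the method used in the cited paper, and the parameter k is a choice left to the analyst.

```python
import math

def hill_estimator(samples, k):
    """Hill estimator of the tail index alpha from the k largest samples.

    Uses the k largest order statistics, relative to the (k+1)-th largest
    value as threshold. For an exact Pareto tail P(X > x) ~ x^(-alpha),
    the estimate is consistent with standard error about alpha/sqrt(k).
    """
    xs = sorted(samples, reverse=True)
    threshold = xs[k]  # (k+1)-th largest value
    return k / sum(math.log(x / threshold) for x in xs[:k])
```

Choosing k is a bias/variance trade-off; plotting the estimate against many values of k (a "Hill plot") and looking for a stable region is the usual practice.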

### Table 8. Isolation variables showing significant effects on the probability of occurrence for 11 forest bird species. Results were derived from second-step regression models containing only one isolation variable. Regression coefficients were replaced by their significance levels.

"... In PAGE 11: ... In a biogeographical context woodland birds may show larger population sizes in certain regions as compared to others. A comparison between eastern and central/southern regions revealed that 16 out of 42 forest bird species showed a higher frequency of occurrence in either eastern or central/southern regions (Table 8). In eastern regions 9 species characteristic of mature woods were more frequent, whereas 7 more common and less typical woodland species were more frequent in central/southern regions.... ..."

### Table 1. Recognition as a Function of Eigenvector Range

"... In PAGE 6: ...3. Results Table 1 contains the d′, hit and false alarm rates, and the overall percent correct classifications, as well as the mean cosines between original and reconstructed images for the old and new faces. Several things are worth noting.... In PAGE 7: ... Error bars are not included in Fig. 2 because even with the lowest of the d′s in Table 1 the model's ability to discriminate old from new faces is significantly greater than chance (Binomial test, N = 159, p < .05). The third and most important point to notice from Table 1 is that despite this general monotonic increase in error of reconstruction as the range of eigenvectors changes to include only those in the higher dimensions of the face space, the ability of the model to discriminate between old and new faces does not similarly decrease. There is a dropoff in discriminability in the second two eigenvector ranges and then a steady increase in discriminability that peaks in the 50-70 range.... ..."
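The d′ values referenced in this snippet come from signal detection theory, where sensitivity is the distance between the z-transformed hit and false-alarm rates. A minimal stdlib sketch, assuming the standard equal-variance Gaussian model (a generic illustration, not the cited paper's exact code):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity index d' = z(hit) - z(false alarm).

    d' = 0 is chance performance; larger d' means better discrimination
    of old from new items. Rates of exactly 0 or 1 must be adjusted
    before use, since the inverse normal CDF diverges there.
    """
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, a hit rate of 0.84 against a false-alarm rate of 0.50 gives d′ of roughly 1.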