### Table 1: Implementation results for chaotic time-series prediction

1998

"... In PAGE 4: ... Matlab neural network toolbox and trained using conventional backpropagation algorithms. A summary of the implementation results obtained is presented in Table 1. Two different simulation approaches were used for the chaotic time-series prediction problem.... In PAGE 4: ... difference between the predicted and actual results, in terms of the prediction error, is illustrated in Fig. 5. This compares favourably with a conventional fuzzy approach which employed an even finer-grained partitioning strategy, ranging from 15 to 29 fuzzy sets, to achieve a similar accuracy [Wang92]. For further comparison, the results using a conventional neural network approach which contains 40 nodes in the hidden layer are also included in Table 1. Previous work demonstrated that this size of network resulted in a similar degree of accuracy as a conventional fuzzy reasoning approach employing seven fuzzy sets on each input domain [Wang92].... In PAGE 4: ... Previous work demonstrated that this size of network resulted in a similar degree of accuracy as a conventional fuzzy reasoning approach employing seven fuzzy sets on each input domain [Wang92]. Table 1 illustrates that the FNN approach provides a more accurate prediction of the time series than the conventional neural network approach. However, these results do not highlight that the training time of the conventional neural network was more than a factor of two slower than that of the largest FNN employed.... ..."

Cited by 2

### Table 1 and Table 2 can be thought of as a collection of observations made sequentially in time, and thus modeled using time series.

2001

"... In PAGE 6: ... Table 1.... In PAGE 6: ...ion in mSQL 1.0.7 and so on (cf. the row labelled Ident. in Table 1 and Table 2). During the ... experiment (i.... In PAGE 6: ... compared against the actual one (i.e., ...) in order to evaluate the one-step-ahead percent prediction error. If ŷ_k is the predicted value and y_k is the actual value, the percent prediction error is defined as follows: E1 = 100 · |y_k − ŷ_k| / y_k (12). The one-step-ahead percent prediction error related to the performed experiment is shown in both Table 1 and Table 2 in the rows labelled E1. For example, in the last column of Table 1 the one-step-ahead percent prediction error evaluated during the experiments ... is reported.... In PAGE 6: ... If ŷ_k is the predicted value and y_k is the actual value, the percent prediction error is defined as follows: E1 = 100 · |y_k − ŷ_k| / y_k (12). The one-step-ahead percent prediction error related to the performed experiment is shown in both Table 1 and Table 2 in the rows labelled E1. For example, in the last column of Table 1 the one-step-ahead percent prediction error evaluated during the experiments ... is reported. A plot of E1 is also shown in Figure 3.... ..."
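The one-step-ahead percent prediction error of Eq. (12) can be sketched as follows; the function name and the sample values are illustrative assumptions, not taken from the paper:

```python
def percent_prediction_error(actual, predicted):
    """One-step-ahead percent prediction error, as in Eq. (12):
    100 * |actual - predicted| / actual."""
    return 100.0 * abs(actual - predicted) / actual

# Illustrative values only: a prediction of 190 against a true value of 200.
print(percent_prediction_error(200.0, 190.0))  # -> 5.0
```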

Cited by 13

### Table 2: Bounds on ergodic averages for the reflecting random walk

1999

"... In PAGE 13: ...12 Example 1: Reflecting Random Walk (ctd) We first examine further the reflecting random walk given in Section 3, under the assumption that (V ) = 2. Table 2 shows four sets of parameter values, and in the first three cases the bounds are ordered with (39) better than (38) better than (32). In the last case (32) is better than (38), but again (39) represents a very substantial improvement.... ..."

Cited by 49

### Table 1: Timing results for sequential and concurrent garbage collection schemes. Stop-&-Copy is a simple sequential

"... In PAGE 8: ... IO buffers are dynamically allocated heap objects in SML/NJ and subject to collection, and hence require mutator synchronization when they are read or written. Table 1 gives times for collecting the garbage using the concurrent and the two sequential collectors. The concurrent collector performs extremely well in reducing the amount of time the mutator spends in the garbage collector (less than 1% of the stop-and-copy or generational times).... ..."

### Table 2. Behavior of the expansion of the prime 61 relative to different bases, compared with prime 31 ergodic to base 3.

"... In PAGE 11: ... Each line is a period, but it is not a minimal period except in the ergodic cases b = 2, 6, 7, 10 and 30. In the ergodic cases b = 2, 6, 10, 30 we see from Table 2 that each single digit 0, 1, ..., b−1 occurs exactly the same number of times, that is, we have exact equidistribution of single digits. This follows from the equidistribution theorem and the (accidental) fact that b | p − 1 for the cases chosen.... In PAGE 13: ...1/61 in Table 2; in fact, it is the binary expansion of 234/61 (mod 1) = 29/61. Irrespective of whether 61 is ergodic or not, all sequences in Table 2 exhibit certain intuitively acceptable features of randomness; of course, they also conform with our definition of randomness.... In PAGE 13: ...1/61 in Table 2; in fact, it is the binary expansion of 234/61 (mod 1) = 29/61. Irrespective of whether 61 is ergodic or not, all sequences in Table 2 exhibit certain intuitively acceptable features of randomness; of course, they also conform with our definition of randomness. This new class of random sequences of digits has one immediate practical application: it enables precise statements to be made in a debate with the "fanatical school" of probability theory.... ..."
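The equidistribution claim is easy to check numerically. The sketch below (the helper `expansion_digits` is hypothetical) generates the repeating base-b digits of 1/p by long division; for p = 61 and the ergodic base b = 2, the period is 60 and the digits 0 and 1 each occur exactly 30 times, as the snippet states:

```python
def expansion_digits(p, b):
    """Digits of the repeating base-b expansion of 1/p (gcd(b, p) = 1),
    produced by long division until the remainder recurs."""
    digits, r, seen = [], 1, set()
    while r not in seen:
        seen.add(r)
        r *= b
        digits.append(r // p)   # next digit of the expansion
        r %= p                  # remainder carried to the next step
    return digits

digits = expansion_digits(61, 2)
print(len(digits), digits.count(0), digits.count(1))  # -> 60 30 30
```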

### Table 3. Simple and Adaptive Randomization

2003

"... In PAGE 9: ...3.1 Simple and Adaptive Randomization Schemes Table 3 shows the performance and characteristics of the subgraphs discovered by multiple runs (ranging from one to ten) of GREW-SE and GREW-ME for the cases in which these multiple runs were performed using the simple and the adaptive randomization schemes. From these results we can see that the two randomization schemes are quite effective in allowing GREW to find a larger number of frequent subgraphs.... In PAGE 10: ... Their sizes are either 9 or 10 and their frequencies are all 13. On the other hand, as shown in Table 3, GREW-SE and GREW-ME with the adaptive randomization scheme can find patterns up to size 19 and 40, respectively, whose frequency is at least 100, which is about 10 times more than the frequency of the best three patterns reported by SUBDUE. The runtimes of GREW-SE and GREW-ME are also significantly shorter than that of SUBDUE.... ..."

Cited by 6

### Table 1: Timings of the sequential matrix multiplication algorithm

1996

"... In PAGE 9: ... Each execution time measurement was averaged over 20 experiments. The timings of the sequential version of the matrix multiplication algorithm run on a single T9000 transputer are shown in Table 1 (s defines the size of the matrices; Ave, σ, Max and Min denote the average execution time, the standard deviation, and the maximum and minimum execution times, respectively, among the 20 experiments). Experiments on SIM1 The two versions of the matrix multiplication algorithm were implemented.... ..."
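The summary statistics described for Table 1 (Ave, σ, Max, Min over 20 runs) amount to the following sketch; `timing_summary` and the sample timings are illustrative, and the use of the sample (rather than population) standard deviation is an assumption:

```python
import statistics

def timing_summary(times):
    """Ave, standard deviation, Max and Min over repeated timing runs,
    mirroring the columns described for Table 1."""
    return {
        "Ave": statistics.mean(times),
        "sigma": statistics.stdev(times),  # sample standard deviation (assumed)
        "Max": max(times),
        "Min": min(times),
    }

# Illustrative timings in seconds, not the paper's measurements.
print(timing_summary([1.02, 0.98, 1.05, 1.00, 0.95]))
```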

Cited by 1

### Table 2: Timings of the sequential matrix multiplication algorithm

"... In PAGE 12: ... Each execution time measurement was averaged over 20 experiments. The timings of the sequential version of the matrix multiplication algorithm run on a single T9000 transputer are shown in Table 2 (s defines the size of the matrices; Ave, σ, Max and Min denote the average execution time, the standard deviation, and the maximum and minimum execution times, respectively, among the 20 experiments). Experiments on SIM1 The two versions of the matrix multiplication algorithm were implemented.... ..."

### Table 1: On the left: NMSE of the predictions for time series A. On the right:

2000

"... In PAGE 5: ... We adopt for the series A an embedding model having the same dimension m = 16 proposed in [8], and for the series D an embedding model with m = 20 as reported in [10]. Table 1 (left) compares the NMSE (Normalized Mean Squared Error) on the A test set of the local predictor based on the consistency criterion (CC) with the local method based on cross-validation (Press) proposed in [4], and with the performance statistics reported by Sauer [8] and Wan [9]. Table 1 (right) compares the RMSE (Root Mean Squared Error) on the series D of the D-Facto public.... ..."
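As a reference for the error measure compared in Table 1 (left), a minimal NMSE sketch follows; normalizing the MSE by the variance of the actual series is the usual Santa Fe benchmark convention and is assumed here, since the snippet does not spell it out:

```python
def nmse(actual, predicted):
    """Normalized Mean Squared Error: MSE divided by the variance of the
    actual series, so that predicting the series mean gives NMSE = 1."""
    n = len(actual)
    mean_a = sum(actual) / n
    variance = sum((a - mean_a) ** 2 for a in actual) / n
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n
    return mse / variance

# Predicting the mean of the series yields NMSE = 1 by construction.
print(nmse([0.0, 1.0, 0.0, 1.0], [0.5, 0.5, 0.5, 0.5]))  # -> 1.0
```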

Cited by 1