### Table I. Multi-rover neuro-evolution (NE) and simulation parameter settings (sample entry: environment size 200 × 200)

### Table 1 also shows the minimum number of trials needed for learning (Best), the maximum number of trials (Worst), and the number of failures (out of the 100 tests) for both neurocontrollers. Note that although, on average, the AHC algorithm needs fewer attempts to learn to balance the pendulum during one simulated hour, it takes much more time than our system. This is due, in part, to the adaptive partitioning of the state space, and to the fact that the external reinforcement is more informative when the system fails than when it does not.

1997

"... In PAGE 7: ... We have found that our system presents better performance than the original system when applying other measures. Table 1 shows that the clustering-reinforcement algorithm performs 30 times faster (on average) than the Adaptive Heuristic Critic algorithm. In recent work, Moriarty and Miikkulainen have also presented a reinforcement learning method called SANE (Symbiotic, Adaptive Neuro-Evolution), which is 9 to 16 times faster than the AHC algorithm [16].... In PAGE 7: ... Table 1. Inverted pendulum balance attempts (1), simulated time needed for... ..."

Cited by 7

### Table 9. Average RRVG for the conventional NE (A) versus the CONE (B) method.

2006

"... In PAGE 13: ... 9.2 Neuro-Evolution Method Comparison Table 9 presents the performance results for the conventional NE method (A)... ..."

Cited by 1

### Table 2: Time series Gaussian Naive Bayes errors (column headers: FEATURE SELECTION — Avg., 4799, 4820, 4847, 5680, 5710)

"... In PAGE 5: ... In Table 1, the results of applying the fTAN algorithm to the data, averaged over the 54 time points, are shown. In Table 3, the corresponding Gaussian Naive Bayes classifier results are presented, whereas in Table 2 the algorithm was applied to the time series data. Table 4 contains the Support Vector Machine classifier results on the time series data, and Table 5 on the averaged data.... ..."

### Table 4: Time series Support Vector Machine errors (column headers: FEATURE SELECTION — Avg., 4799, 4820, 4847, 5680, 5710)

### Table 3. Mackey-Glass time series prediction results (normalized RMS Error on test set, averaged over 30 runs) with Baldwinian learning.

2004

"... In PAGE 8: ... For fitness evaluation of networks in (1), the training set is used instead of the validation set. Table 3 gives the mean and standard deviation of the normalized root-mean-squared (RMS) errors achieved on the test set by the co-evolutionary model with different credit assignment strategies, over 30 runs with different seeds. Here, normalized RMS errors are obtained by dividing the absolute RMS error values by the standard deviation of x(t) [13, 14, 16].... ..."

Cited by 3
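The excerpt above defines the reported metric by a simple formula: the absolute RMS error divided by the standard deviation of the target series x(t). A minimal sketch of that normalization (function name and NumPy usage are my own, not from the cited paper):

```python
import numpy as np

def normalized_rmse(y_true, y_pred):
    """Normalized RMS error as described in the excerpt:
    absolute RMSE divided by the standard deviation of the target series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / np.std(y_true)
```

For example, predictions offset by a constant 1 from targets `[0, 2]` (std = 1) give a normalized RMSE of 1.0; a perfect prediction gives 0.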

### Table 1 Prediction results of traffic series measured in different time intervals (Station 433)

"... In PAGE 7: ... and another 24-hour workday data set for testing. At station N27.9, we also use 1,440 and 480 approach-based data points for the 5-minute and 15-minute traffic series, respectively; thus a consecutive five-workday data set is selected for training and another consecutive five-workday data set for testing. The prediction results are summarized in Table 1 and Table 2. From the tables, all of the RMSEs, on both the training and testing sets, are small enough to show that the RBFNN model predicts the real-world short-interval flow, speed, and occupancy series satisfactorily.... ..."

### Table 2 Prediction results of traffic series measured in different time intervals (Station N27.9)

"... In PAGE 7: ... and another 24-hour workday data set for testing. At station N27.9, we also use 1,440 and 480 approach-based data points for the 5-minute and 15-minute traffic series, respectively; thus a consecutive five-workday data set is selected for training and another consecutive five-workday data set for testing. The prediction results are summarized in Table 1 and Table 2. From the tables, all of the RMSEs, on both the training and testing sets, are small enough to show that the RBFNN model predicts the real-world short-interval flow, speed, and occupancy series satisfactorily.... ..."
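The excerpt describes the evaluation protocol: split the 5-minute series into a 1,440-point training block and an equally long testing block, then report RMSE on each. A minimal sketch of that split and metric on a synthetic series (the sine-plus-noise data and the persistence predictor are stand-ins, not the paper's RBFNN or its real traffic data):

```python
import numpy as np

# Synthetic stand-in for a 5-minute flow series: 1,440 points for
# training (five workdays) and a later block of equal length for
# testing, mirroring the split described in the excerpt.
rng = np.random.default_rng(0)
series = 100 + 10 * np.sin(np.linspace(0, 40 * np.pi, 2880))
series += rng.normal(0.0, 1.0, 2880)
train, test = series[:1440], series[1440:]

def rmse(y_true, y_pred):
    """Root-mean-squared error between observed and predicted values."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# A naive persistence baseline stands in for the trained RBFNN:
# predict each value from the previous observation.
pred = test[:-1]
print(rmse(test[1:], pred))
```

In the cited study the same RMSE would be computed on both the training and testing blocks, with the RBFNN's outputs in place of the persistence baseline.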