### Table 13. The performance of C5.0 on the top genes selected by different techniques in the leukemia dataset (columns: Classification Accuracy, No. of Genes)

2005

"... In PAGE 29: ...0 is able to achieve a 94.1% rate when using the 7 genes selected by ACA and maintains the same accuracy level even when more genes selected by ACA are used (see Table 13). This again supports that using only the top genes in each cluster found by ACA is good enough for training C5.... In PAGE 30: ...0 (71.1% as shown in Table 13). Based on their best performance scenarios, the experimental results of using their optimal configuration of both 10 clusters (where 10 happens to be the cluster number determined by ACA as well) yields one of the best results (70.... In PAGE 30: ...6% for the k-means algorithm as shown in Table 14, whereas 71.1% for the biclustering algorithm as shown in Table 13). The performance by neural networks on the top genes selected by the k-means algorithm and that by C5.... In PAGE 30: ...1% (see Table 14) and 94.1% (see Table 13), respectively, far superior to their performance. It is interesting to observe that when the number of clusters determined by ACA (10 in this case) is used as a candidate for k, both the k-means algorithm and the biclustering algorithm yield their best results.... ..."

Cited by 2
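The ACA entry above describes keeping only the top-ranked gene from each cluster and training C5.0 on the reduced set. A minimal sketch of that cluster-then-select idea on synthetic data, with KMeans and a CART decision tree standing in for ACA and C5.0 (neither of which is available in scikit-learn):

```python
# Cluster the genes (features), keep the highest-scoring gene from each
# cluster, and train a decision tree on the reduced feature set.
# KMeans and DecisionTreeClassifier are stand-ins for ACA and C5.0.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import f_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 200))          # 72 samples x 200 genes (synthetic)
y = rng.integers(0, 2, size=72)         # binary class labels

n_clusters = 10
labels = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=0).fit_predict(X.T)   # cluster genes, not samples
scores, _ = f_classif(X, y)                        # per-gene relevance score

# keep the highest-scoring gene in each cluster
top = [np.flatnonzero(labels == c)[np.argmax(scores[labels == c])]
       for c in range(n_clusters)]

clf = DecisionTreeClassifier(random_state=0).fit(X[:, top], y)
print(len(top), clf.score(X[:, top], y))
```

With synthetic labels the tree simply memorizes the training set; the point is only the shape of the pipeline, one representative gene per cluster.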

### Table 4. Classification performance of RBF networks

"... In PAGE 8: ... The RBF training algorithm was stopped after adding 50, 100, 200 and 400 neurons. The RBF network performance (Table 4) was best for spread constant 20 and 100 allocated neurons (72.... ..."
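The excerpt refers to an RBF network built by allocating neurons under a fixed spread constant. A minimal sketch of the usual construction, assuming Gaussian kernels with a shared spread and least-squares output weights; the data and parameter values are illustrative:

```python
# An RBF network with fixed centres: Gaussian activations controlled by a
# "spread" constant, and output weights fitted by linear least squares.
import numpy as np

def rbf_design(X, centers, spread):
    # Gaussian activations exp(-||x - c||^2 / (2 * spread^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * spread ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2              # toy regression target

centers = X[rng.choice(len(X), 50, replace=False)]  # 50 allocated neurons
Phi = rbf_design(X, centers, spread=0.5)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # output weights

pred = Phi @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)                                         # training RMSE
```

Growing the network by adding neurons one at a time (as in the excerpt) amounts to appending columns to `Phi` and refitting `w`.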

### Table 4: Simulations of absolute delta-hedging errors for RBF networks for an at-the-money call option with X =50,

1994

"... In PAGE 9: ...4.2 Tracking Error Comparisons Table 4 reports selected raw simulation results for a call option with 3 months to expiration and a strike price X of $50. In each row, the absolute tracking errors for delta-hedging this option are reported for the network pricing formula trained on a single training path, the entries in each column corresponding to a different test path for which the absolute tracking error is calculated.... In PAGE 9: ... In such cases, an RBF pricing formula may well be more accurate since it is trained directly on the discretely-sampled data, and not based on a continuous-time approximation. Of course, other columns in Table 4 show that Black-Scholes can perform significantly better than the RBF formula [for example, compare the (1,1)-entry of 0.6968 with the Black-Scholes value of 0.... ..."

Cited by 67
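The table being excerpted compares absolute delta-hedging tracking errors against Black-Scholes. A hedged sketch of how one such error can be computed: simulate a GBM price path, rebalance a Black-Scholes delta hedge at each step, and record the absolute terminal shortfall. S0 = X = 50 and the 3-month maturity follow the caption; the volatility, rate, and rebalancing frequency are assumptions:

```python
# Delta-hedge a short call along one simulated price path and measure the
# absolute tracking error at expiry (hedge portfolio value minus payoff).
import numpy as np
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def bs_call(S, X, T, r, sigma):
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * N(d1) - X * exp(-r * T) * N(d2), N(d1)   # price, delta

rng = np.random.default_rng(2)
S0, X, T, r, sigma, steps = 50.0, 50.0, 0.25, 0.05, 0.2, 63
dt = T / steps
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * sqrt(dt) * rng.standard_normal(steps)))
S = np.concatenate(([S0], S))

price0, delta = bs_call(S0, X, T, r, sigma)
cash = price0 - delta * S0            # sell the option, buy delta shares
for i in range(1, steps):
    cash *= exp(r * dt)               # accrue interest on the cash account
    _, new_delta = bs_call(S[i], X, T - i * dt, r, sigma)
    cash -= (new_delta - delta) * S[i]
    delta = new_delta
cash = cash * exp(r * dt) + delta * S[-1]   # liquidate shares at expiry
payoff = max(S[-1] - X, 0.0)
print(abs(cash - payoff))                   # absolute tracking error
```

Averaging this quantity over many test paths gives the kind of entries the table reports for each pricing formula.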

### Table 1: Backpropagation network results. Results of training backpropagation networks of varying sizes for recognizing distinctive states.

"... In PAGE 4: ... 5 Discussion 5.1 Backpropagation performance The results in Table 1 show that backpropagation is an excellent method for creating view recognizers when all the views that need to be recognized are known in advance. Even with as few as 5 hidden units the network was able to correctly recognize more than 90% of the test views.... ..."

### Table 14: Learn schedule for back-propagation neural network (learn counts: 10000, 30000, 50000)

"... In PAGE 19: ... We reproduced this experiment with 14 hidden units, without pruning. We used the learn schedule displayed in Table 14 to train a network with 14 units in the hidden layer. Epoch size 1 was selected for this experiment.... ..."

### Table 4. The prediction accuracy of RBF networks. The RBF network constructed with 50 genes selected by the P-metric value shows the best performance.

"... In PAGE 12: ... So, we ran the active RAN algorithm on the second and the third set of genes ten times respectively. Table 4 shows the result of each experiment. The RBF network with 50 gene expression levels classifies all training samples correctly and its test error is 1.... ..."

### Table 5. Computing times and numbers of iterations for the comparison between our network and the RBF network.

"... In PAGE 14: ... Surfaces obtained for 48, 96, 160, 336 and 576 control points with our network (2nd row) and with the RBF network (3rd row). Table 5 gives the associated training times, as measured on a 1.7 GHz Pentium IV PC, and numbers of iterations.... In PAGE 14: ... Up to 576 control points are necessary to obtain an acceptable surface. Table 5 shows that our approach requires a longer training time, at least for limited numbers of control points. Indeed, it takes many iterations to match our highly nonlinear multilayer network to the surface of the torus.... ..."

### Table 5.1: The required precision on the training sets for the pruning experiments.

1. Initialization of the network of order n with random weights (compare chapter 4). In a real application of high-order perceptrons, the order n would initially be chosen as one and then increased by one whenever it becomes clear that the corresponding network is unable to learn the task. For research purposes, however, various orders are examined (see for example table 5.2 on page 49).
2. Application of the backpropagation algorithm until the percentage of training data given in table 5.1 is correctly classified by the network, or the mean square error on the training set reaches the value listed in this table (see table 5.2 for the learning rate).
3. Removal of connections from the network. Depending on the size of the network, from one connection up to 5% of the connections are removed from the network (see
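The removal step in the pruning procedure drops up to 5% of the connections at a time. A minimal sketch of one such pruning pass, assuming the common smallest-magnitude criterion (the thesis may rank connections differently):

```python
# Zero out the smallest-magnitude fraction of the remaining weights,
# one pass of an iterative train-then-prune loop.
import numpy as np

def prune_smallest(W, fraction=0.05):
    """Zero out the given fraction of the remaining nonzero weights."""
    flat = np.abs(W[W != 0])
    k = max(1, int(fraction * flat.size))
    threshold = np.sort(flat)[k - 1]      # k-th smallest magnitude
    W = W.copy()
    W[np.abs(W) <= threshold] = 0.0
    return W

rng = np.random.default_rng(3)
W = rng.normal(size=(10, 10))             # toy weight matrix
nonzero_before = np.count_nonzero(W)
W = prune_smallest(W, 0.05)
print(nonzero_before, np.count_nonzero(W))
```

In the full procedure this pass alternates with retraining (step 2) until the network can no longer reach the required precision from Table 5.1.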

### Table 4: Summary of recognition results. A summary of the best view recognition results from all experiments: Straight backpropagation recognizer, nearest-neighbor on raw sensor data, nearest-neighbor based on the output from RBF-based Chorus models with three different prototype sets, and nearest-neighbor based on backpropagation-based chorus models on the same three sets. The RBF-Chorus results use the best spread parameter for each set of prototypes. The BP-Chorus results use the 180x30x10 topology. Results are given for training with 10 and 100 sensor views of each place.

"... In PAGE 5: ... 5.3 Performance of chorus model clustering The results of Table 4 show that, with the correct parameter settings, Chorus of Prototypes produces a representation that preserves much of the distinctiveness of the raw laser scan data at significantly reduced dimensionality. In particular, the chorus model built from prototype set 3 achieved nearly the same performance as measuring the distance on raw laser data, and on average, the three prototype sets performed only slightly below the distance measures on the raw data.... ..."
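The chorus models in this summary represent each laser scan by its similarity to a small prototype set before nearest-neighbor matching. A minimal sketch of that embed-then-match idea on synthetic data; the sizes (180-beam scans, 10 prototypes, 10 places) follow the excerpt, everything else is illustrative:

```python
# Embed each scan as its similarity to a few prototypes, then classify a
# query by nearest neighbour in the low-dimensional embedding space.
import numpy as np

rng = np.random.default_rng(4)
train = rng.normal(size=(100, 180))          # e.g. 180-beam laser scans
labels = rng.integers(0, 10, size=100)       # place labels
prototypes = train[rng.choice(100, 10, replace=False)]

def embed(X, prototypes):
    # similarity (negative distance) of each scan to each prototype
    return -np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=-1)

E = embed(train, prototypes)                 # 100 x 10 instead of 100 x 180

def classify(x):
    e = embed(x[None, :], prototypes)
    return labels[np.argmin(np.linalg.norm(E - e, axis=1))]

print(classify(train[0]) == labels[0])
```

The quality of the prototype set controls how much distinctiveness survives the reduction, which is what the three prototype-set columns of the table compare.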