### Table XXIII: MLP comparison, one hidden layer vs. two hidden layers.

2005

### Table 5. Average percent correct classification for HeLa data using a BPNN with two hidden layers of various sizes on the 37 "best" features. Values shown are averages for test data over ten trials. Each value has a 95% confidence interval of approximately 5%. Percentages at or above 84% are shaded.

2000

Cited by 7

### Table 7: Cross-validated training of the two-hidden-layer neural network classifiers. Best performers in each training/testing run. Columns: Training data set (Positives), Sp, Sn, PPV, NPV.

"... In PAGE 32: ... Each cross-validation run tested networks with the number of first hidden layer neurons ranging from 1 to 40 and the number of second hidden layer neurons ranging from 1 to 20. Table 7 lists out the best performing neural networks on each cross-validation training/testing run. Each run provided us with at least one network which performed perfectly on the positives (i.... ..."

Cited by 2
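The excerpt above describes an exhaustive grid search over two-hidden-layer architectures (first layer 1–40 neurons, second layer 1–20), keeping the best performer per cross-validation run. A minimal sketch of that selection loop, with a purely illustrative stand-in for the cross-validation score (in the paper, each candidate network is trained and scored on held-out data):

```python
from itertools import product

# Toy surrogate for the per-architecture cross-validation score.
# The real score would come from training each network and evaluating
# it on the held-out fold; this function is illustrative only.
def cv_score(h1, h2):
    return -abs(h1 - 20) - abs(h2 - 5)

# Grid from the excerpt: first hidden layer 1-40 neurons, second 1-20.
candidates = list(product(range(1, 41), range(1, 21)))
best = max(candidates, key=lambda hw: cv_score(*hw))
print(best)  # → (20, 5): the pair maximizing the surrogate score
```

With 40 × 20 = 800 candidate architectures per run, exhaustive enumeration is feasible; for larger grids, random or Bayesian search is the usual substitute.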

### Table 1: Normalized root mean square error (NRMS) for a training set of n × n × n points, obtained by the two best performing standard MLP networks (out of 12 different architectures, with various linearly decreasing step size parameter schedules). 100,000 steepest gradient descent steps were performed for the MLP net and one pass through the set for the PSOM network. Visual inspection of (1e) and (1b) shows a very good approximation of the desired, highly non-linear mapping. In view of the extremely minimal training set of 3 × 3 × 3 in (1c), this appears to be a quite remarkable result. We recently compared another standard network type, the well-known back-prop net, in a one and two hidden layer (of tanh() type) configuration (with output layer 2

"... In PAGE 3: ... BP-network. Even for larger training set sizes, we didn't succeed in training them to a performance comparable to the PSOM network. Table 1 shows the result of two of the best BP-nets compared to the PSOM. How does the PSOM work? In the next section we explain the algorithm, albeit in a condensed form.... ..."
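The NRMS metric in this caption is a root mean square error scaled to be unit-free. A minimal sketch, assuming normalization by the target's range (conventions vary, and the table's exact choice is not stated in the excerpt):

```python
import math

def nrms(predicted, target):
    # RMSE divided by the target's range. Other papers normalize by the
    # target's standard deviation instead; this choice is an assumption.
    rmse = math.sqrt(sum((p - t) ** 2 for p, t in zip(predicted, target))
                     / len(target))
    return rmse / (max(target) - min(target))

print(round(nrms([1.1, 2.0, 2.9], [1.0, 2.0, 3.0]), 4))  # → 0.0408
```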

### Table 3: Results for benchmark problems cancer1, 2 and 3. The "net size" column contains the number of levels in the LVQ codebook, and the size of the two hidden layers in the RPROP case.

### Table 1: Backpropagation artificial neural network versus regression.

"... In PAGE 11: ... 4. Results and Discussion Results for the baseline performance of the best performing two hidden layer backpropagation neural network using all eight input variable values and the rhythmicity regression equation are shown in Table 1. Accuracy is the total percentage of correct predictions.... In PAGE 12: ...3.6 percent classification accuracy is established by two of the seven variable models. Because the sharpness variable produced the largest decrease in classification performance when it was left out, a regression model using just the sharpness variable is constructed to determine the correlation between sharpness and epileptiform seizures and another regression model using both the sharpness and physiologic state (the second largest performance decrease) is also constructed. Both of these new regression models have a smaller accuracy than the original rhythmicity regression model (shown in Table 1), with the sharpness regression model... ..."
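The excerpt describes leave-one-variable-out ablation: retrain with each input dropped and rank inputs by the resulting accuracy decrease. A sketch of that ranking step; the accuracy values and the two non-sharpness variable names are hypothetical stand-ins, not figures from the paper:

```python
# Leave-one-variable-out ablation: each entry is the (mock) accuracy of a
# model retrained without that variable. Values are illustrative only.
baseline = 0.90
accuracy_without = {
    "sharpness": 0.78,         # largest drop -> most informative input
    "physiologic_state": 0.83,
    "rhythmicity": 0.87,
}
drops = {v: round(baseline - acc, 2) for v, acc in accuracy_without.items()}
ranked = sorted(drops, key=drops.get, reverse=True)
print(ranked[0])  # → sharpness
```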

### Table 2: Results (averaged over 20 runs). The L2-model error is measured in relation to its initial value (= 100%).

"... In PAGE 20: ... Learning time was limited to 15,000 steps each. Table 2 summarizes the results. Random exploration: This exploration technique performed extremely poorly, not least because the task at hand was disadvantageous for random exploration.... In PAGE 21: ... This model is very accurate along the path, but clearly inaccurate elsewhere. It is worth noting that this approach did not minimize the number of crashes (costs, see Table 2), although it focused almost exclusively on exploitation. Directed exploration using the competence map: In our experiments, competence was estimated by a neural network with two hidden layers with six hidden units each.... ..."
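The competence estimator described above is a small feed-forward network with two hidden layers of six units each. A forward-pass sketch of that shape; the input size (4), tanh activations, and scalar output are assumptions for illustration, and the weights are untrained:

```python
import math
import random

random.seed(0)

def dense(x, weights, biases):
    # Fully connected layer with tanh activation.
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

def init(n_in, n_out):
    w = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return w, [0.0] * n_out

# Shape from the excerpt: two hidden layers with six units each.
w1, b1 = init(4, 6)
w2, b2 = init(6, 6)
w3, b3 = init(6, 1)   # scalar competence estimate

x = [0.1, -0.2, 0.3, 0.0]
h1 = dense(x, w1, b1)
h2 = dense(h1, w2, b2)
competence = dense(h2, w3, b3)[0]
print(len(h1), len(h2))  # → 6 6
```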

### Table 4: Confusion matrix for pen gesture classification using MLP (columns: gesture is recognized as).

"... In PAGE 10: ... The MLP used seven input nodes, two hidden layers with eight nodes each, and four output nodes. In Table 4, the confusion matrix for the MLP analysis is shown. As can be seen from the confusion matrix, the MLP is unable to distinguish between area and marker: Every area is recognized as a marker.... ..."
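A confusion matrix of this kind counts, for each true class (rows), how often each class was predicted (columns). A minimal sketch with toy labels; the gesture names and label sequences are illustrative, with "area" consistently predicted as "marker" to mirror the failure mode the excerpt describes:

```python
from collections import Counter

# Hypothetical gesture classes and toy label sequences.
classes = ["line", "arrow", "area", "marker"]
true_labels = ["line", "arrow", "area", "area", "marker"]
predictions = ["line", "arrow", "marker", "marker", "marker"]

# Rows = true class, columns = predicted class.
counts = Counter(zip(true_labels, predictions))
matrix = [[counts[(t, p)] for p in classes] for t in classes]
for cls, row in zip(classes, matrix):
    print(cls, row)
```

Here both "area" samples land in the "marker" column, so the "area" diagonal entry is zero, which is exactly how the confusion the authors report would show up.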

### TABLE VII: Best nets. (a) Two hidden slabs with two activation functions. (b) Three hidden slabs with three activation functions. (c) Two hidden slabs with two activation functions and a jump connection. (Sub-table (a) columns: Type, Two hidden layers.)

in Polymer

2004