### Table 1 Neural network architectures

2003

"... In PAGE 6: ...etter as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ...avelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ...sed. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...ion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."
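The metrics named in this excerpt can be sketched as follows. This is a minimal Python illustration, not code from the cited paper; the excerpt does not define DVS, so only NMSE and DS are shown, using their standard definitions.

```python
import numpy as np

def nmse(y_true, y_pred):
    # Normalized mean squared error: MSE divided by the variance of the
    # target, so a score of 1.0 matches a trivial mean predictor.
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def directional_symmetry(y_true, y_pred):
    # Fraction of time steps where the predicted change from the previous
    # actual value has the same sign as the actual change.
    d_true = np.diff(y_true)
    d_pred = y_pred[1:] - y_true[:-1]
    return np.mean(np.sign(d_true) == np.sign(d_pred))
```

With these definitions, a perfect prediction gives NMSE 0 and DS 1, while predicting the series mean everywhere gives NMSE 1.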

### Table 2. Neural network performance comparison.

"... In PAGE 5: ... Feedforward multi-layer perceptron (MLP) and Elman networks, with different complexity, were used and tested on a validation set formed by 784 independent samples. Table 2 shows the obtained results. Although both networks have similar performances, the Elman recurrent network, with 10 hidden neurons and tan-sigmoidal activation function, exhibits lower training times, converging more rapidly to the desired error value.... ..."

### Table 1: Complexity of the recurrent neural network generated by Algorithm 2.

1995

"... In PAGE 24: ... Boldface transitions have been introduced as escape rules from temporary states. According to Table 1, |N| = m+K+X+3n and |V| = m(m+1)+K(n+1)+4X+K+6n. The numbers K and X depend on the original state transition function.... ..."
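The two set sizes quoted from Table 1 can be evaluated directly. A minimal sketch, assuming m, n, K, and X are the integer parameters named in the excerpt (the excerpt itself does not spell out what each counts):

```python
def generated_network_complexity(m, n, K, X):
    # Set sizes from the excerpt's formulas:
    #   |N| = m + K + X + 3n
    #   |V| = m(m+1) + K(n+1) + 4X + K + 6n
    # K and X depend on the original state transition function.
    size_N = m + K + X + 3 * n
    size_V = m * (m + 1) + K * (n + 1) + 4 * X + K + 6 * n
    return size_N, size_V
```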

Cited by 14

### Table 6: Performance of a Neural Network in Estimating the Cost of a Complex User Defined Method.

1997

"... In PAGE 14: ...ll points. The experimented network had a structure of 2-4-3-1. About 3000 cycles over the learning set were sufficient for the network to converge, which took about 10 seconds. The performance of the network went beyond our hopes, and the average relative error between the estimated cost and the real execution cost for the 100 execution points was below 1% (Table 6). We can see from Figure 8 the shapes of both the real executions... ..."
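For illustration, a forward pass through a fully connected 2-4-3-1 network like the structure this excerpt reports can be sketched as follows. The weights here are random placeholders and the tanh/linear activation choice is an assumption; this is not the trained network from the cited paper.

```python
import numpy as np

def mlp_forward(x, sizes=(2, 4, 3, 1), seed=0):
    # Forward pass of a 2-4-3-1 fully connected network: tanh hidden
    # layers, linear output. Random weights for illustration only.
    rng = np.random.default_rng(seed)
    a = np.asarray(x, dtype=float)
    for i, (fan_in, fan_out) in enumerate(zip(sizes[:-1], sizes[1:])):
        W = rng.standard_normal((fan_in, fan_out))
        b = np.zeros(fan_out)
        z = a @ W + b
        a = z if i == len(sizes) - 2 else np.tanh(z)
    return a
```

The 2-4-3-1 notation reads as 2 inputs, two hidden layers of 4 and 3 neurons, and a single output (the estimated cost).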

Cited by 4

### Table 8 Average rms errors for second group of neural networks

2004

"... In PAGE 11: ... This is a more realistic approach to investigate the performance of the neural networks as if they were implemented as an on-line tool condition monitoring system. Average rms errors obtained after running simulations for these networks are given in Table 8. With the addition of cutting forces to inputs, substantially smaller average rms errors are obtained.... ..."

### Table 3: Typical terminal, function and root sets for evolving a recurrent neural network

1999

"... In PAGE 4: ... 4.3 Recurrent Neural Networks To evolve a GPN as a recurrent neural network, further changes are needed to the terminal, function and root sets (see Table 3). Each terminal has an associated weight.... ..."

Cited by 1

### Table 1: Classification performance of second-order recurrent neural network.

"... In PAGE 2: ... We extracted from the SWISS-PROT database 873 genuine globin sequences, two thirds of which were used for training and one third with non-globin sequences for testing. Experimental results (illustrated in Table 1) show that the trained network is able to distinguish members of the globin family from non-members with a high degree of accuracy. Table 1: Classification performance of second-order recurrent neural network.... ..."

### Table 1. Coding of the words in vector of 20 bits presented to the input layer of the simple recurrent neural network.

in Lack

"... In PAGE 5: ... As a result, all items are represented orthogonally with respect to each other. Table 1 lists the 20-bit binary vectors representing these words. The lexicon was divided into four separate groups of two nouns and two verbs each.... ..."
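A mutually orthogonal 20-bit coding like the one this excerpt describes can be sketched as a one-hot assignment: each word gets its own bit, so the dot product of any two distinct word vectors is zero. The excerpt does not give the actual bit assignments, so this is only an illustration of the scheme.

```python
import numpy as np

def orthogonal_codes(num_words=20, width=20):
    # One bit per word: every pair of distinct word vectors is orthogonal,
    # matching the snippet's description of the 20-word lexicon.
    assert num_words <= width
    return np.eye(width, dtype=int)[:num_words]
```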


### Table 2: Neural Network Prediction Error

"... In PAGE 5: ...able 1: Numbers of Extracted Data Set ... 7 Table 2: Neural Network Prediction Error.... In PAGE 13: ... We simulate the neural network using the testing input data set by: >> % Tein and Teout are testing input-output data sets >> simout = sim(net,Tein); >> % compare between simout and Teout The comparisons between simout and Teout are in terms of average absolute error and root mean square error as defined earlier. Table 2 shows the errors for each of the six outputs from the neural network. Table 2: Neural Network Prediction Error ... ..."
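The excerpt's comparison of simout against Teout uses MATLAB's `sim`; the two error measures it names can be sketched in Python as:

```python
import numpy as np

def average_absolute_error(y_true, y_pred):
    # Mean of the absolute errors over all samples.
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Root mean square error over all samples.
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```

In the excerpt these would be applied per output column, giving one pair of error values for each of the six network outputs.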