### Table 1. Neural network architectures

2003

"... In PAGE 6: ...better as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ...wavelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...A discussion of these results can be found in Ref. [4]. For further work involving the DRNN neural network ... resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."

### Table 1: Total number of Neural Network Models

"... In PAGE 3: ... Table 1: Total number of neural network models. One method used to speed the modeling process was to increase node additions by two. To illustrate these concepts, consider a neural network with 3 inputs and 1 output.... In PAGE 3: ...-1-1-1; ... ; 3-3-5-5-1; and 3-5-5-5-1. For N inputs, the number of neural network architecture permutations equals 1 + N + N² + N³, where 1 means there is only one neural network architecture with zero hidden layers, N is the number of ways of creating a neural network architecture with one hidden layer, N² is the number of ways of creating one with two hidden layers, and N³ is the number of ways of creating one with three hidden layers. Table 1 shows the total number of permutations of neural network architectures, metric categories, and group configurations. In order to build and train 33,190 neural networks, an automated neural network program was used.... ..."
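The architecture count quoted above can be reproduced with a short script (a minimal sketch: the function name is illustrative, and the cap of three hidden layers with 1..N nodes per layer follows the quoted formula, not the paper's code):

```python
def count_architectures(n_inputs: int) -> int:
    """Count feed-forward architecture permutations with 0 to 3 hidden
    layers, each hidden layer having between 1 and n_inputs nodes,
    per the quoted formula 1 + N + N^2 + N^3."""
    n = n_inputs
    return 1 + n + n**2 + n**3

# For the 3-input, 1-output example in the snippet:
print(count_architectures(3))  # 1 + 3 + 9 + 27 = 40
```

For the 3-input example this gives 40 distinct layer-size permutations, each of which would then be crossed with the metric categories and group configurations to reach the totals reported in Table 1.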

### Table 1. Architectural details of the individual neural networks comprising MASCOT.

2001

"... In PAGE 15: ...two gating networks. The input and output layer sizes of the eight neural network modules, and the number of connections in each, are displayed in Table 1. The details of the sizes of the individual expert and gating networks which feature in the mixture-of-experts model are shown in Table 2.... ..."

Cited by 2

### Table 8. Neural network parameters

"... In PAGE 30: ... The successful ANN models were saved in a file along with information about that particular model, such as variable selection, variable transformations, and the number of hidden nodes established. Table 8 gives the network parameters used to build the ANN models. Testing the ANN models: The ANN models were tested by running them using the test data sets prepared for each element.... ..."

### Table V. Architectures of SILL networks and standard neural networks for which the minimum MSE is obtained by the models in Experiment 2

### Table 1: Architectural specifications of the hybrid neural network architecture

"... In PAGE 3: ... These Hebbian connections are used to spread the activations from one Kohonen map to another, such that a localised activity pattern in either Kohonen map will cause a corresponding localised activity pattern in the other, and this would be the basis of concept lexicalisation. Table 1 gives the architectural specifications of the three neural networks to be used for the simulation, with a detailed description to follow in the forthcoming discussion.... ..."

### Table 2. Neural network modeling

2006

"... In PAGE 5: ... In model A, we calculated the IBIs from the steady-state solution, whereas in model B we continuously solved the network without discarding the transients. The model parameters are summarized in Table 2. The parameters were the same as previously described (4), except for TNI1 (see Table 2).... ..."

### Table 1. Neural Network Models.

### Table 1. Comparison of the HCMAC neural network with the MHCMAC neural network Models

"... In PAGE 15: ... D. Comparison of HCMAC Neural Network with the MHCMAC Neural Network. Table 1 compares the HCMAC neural network with the MHCMAC neural network in terms of memory requirement, topology structure, and input feature assignment approach. Table 1 shows that the memory requirement of the original HCMAC neural network grows with 2 to the power of the ceiling logarithm of the input dimensions, but the memory requirement of the MHCMAC neural network grows only linearly with the input feature dimensions. Moreover, the learning structure of the self-organizing HCMAC neural network is expanded based on a full binary tree topology, whereas the MHCMAC neural network is expanded based on an exact binary tree topology.... ..."
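The memory-growth contrast described in the snippet can be illustrated numerically. This is a minimal sketch: reading "power 2 of the ceiling logarithm" as 2^⌈log₂ d⌉ (the leaf count of a full binary tree over d inputs) and "linearly" as d is our interpretation of the snippet, and the function names are hypothetical:

```python
import math

def hcmac_units(d: int) -> int:
    """Full binary tree over d inputs: the leaf count rounds d up to
    the next power of two, i.e. 2**ceil(log2(d))."""
    return 2 ** math.ceil(math.log2(d))

def mhcmac_units(d: int) -> int:
    """MHCMAC memory grows linearly with the input feature dimension."""
    return d

# The gap widens just past each power of two:
for d in (5, 9, 17, 33):
    print(d, hcmac_units(d), mhcmac_units(d))
```

For example, 33 input features cost 64 units under the full-binary-tree reading but only 33 under linear growth, which matches the snippet's claim that MHCMAC is the more memory-efficient variant.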