### Table 2: For the test function with two inputs, mean (over 50 data samples) and 95% confidence interval for standardized MSE at 225 test locations, and for the temperature and ozone datasets, cross-validated standardized MSE, for the six methods. Columns: Method | Function with 2 inputs | Temp. data | Ozone data

2004

Cited by 4

### Table 1. Comparison of the HCMAC neural network with the MHCMAC neural network

"... In PAGE 15: ... D. Comparison of HCMAC Neural Network with the MHCMAC Neural Network Table1 compares the HCMAC neural network with the MHCMAC neural network in terms of memory requirement, topology structure and input feature assignment approach. Table 1 shows that the memory requirement of the original HCMAC neural network grows with the power 2 of the ceiling logarithm of the input dimensions, but the memory requirement of the MHCMAC neural network grows only linearly with the input feature dimensions.... In PAGE 15: ... Comparison of HCMAC Neural Network with the MHCMAC Neural Network Table 1 compares the HCMAC neural network with the MHCMAC neural network in terms of memory requirement, topology structure and input feature assignment approach. Table1 shows that the memory requirement of the original HCMAC neural network grows with the power 2 of the ceiling logarithm of the input dimensions, but the memory requirement of the MHCMAC neural network grows only linearly with the input feature dimensions. Moreover, the learning structure of the self-organizing HCMAC neural network is expanded based on a full binary tree topology, but the MHCMAC neural network is expanded based on an exact binary tree topology.... ..."

### Table 2 Neural network configurations

"... In PAGE 4: ...utput. A separate neural network was trained for identification of each solvent. The inputs corresponded to Raman spectra of mixtures and the single output corresponded to a prediction whether or not the solvent was present in the mixture. The neural network configurations for each solvent are detailed in Table2 . These settings were found through experimentation.... ..."

Cited by 2
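
The per-solvent scheme described in this snippet (one binary network per solvent: spectrum in, present/absent out) can be sketched as follows. The solvent names, data shapes, and the use of scikit-learn's MLPClassifier are assumptions for illustration; the paper's actual layer configurations are listed in its Table 2.

```python
# A minimal sketch of one binary network per solvent, under assumed
# shapes and labels; not the paper's actual configurations.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_mixtures, n_wavenumbers = 200, 1024              # assumed spectrum length
spectra = rng.random((n_mixtures, n_wavenumbers))  # stand-in Raman spectra
present = {                                        # hypothetical labels
    "acetone": rng.integers(0, 2, n_mixtures),
    "ethanol": rng.integers(0, 2, n_mixtures),
}

# One binary classifier per solvent: input = spectrum, output = present/absent.
models = {}
for solvent, labels in present.items():
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    clf.fit(spectra, labels)
    models[solvent] = clf
```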

### Table 3: Neural Network Results

"... In PAGE 8: ... This gives back the cost of the solution. Table3 illustrates the testing results of the five experiments. Table 3: Neural Network Results ... ..."

### Table 6: Neural network results.

"... In PAGE 5: ... Figure 5: Signal space for neural network. Table6 shows that the network using the 7- signal characteristic set gave the correct result 93.... ..."

### Table 5: Options for Neural Networks

1998

"... In PAGE 4: ... All but IBM have advanced learning options and employ cross-validation to govern when to stop. Table5 summarizes these properties. Table 5: Options for Neural Networks... ..."

Cited by 4
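
The "cross-validation to govern when to stop" behaviour mentioned in this snippet is classic validation-based early stopping. Below is a minimal, hedged sketch of that loop; the model interface (a partial_fit-style per-epoch update), the patience value, and the squared-error validation loss are assumptions for illustration, not details from the paper.

```python
# A minimal sketch of validation-based early stopping. The model is
# assumed to expose a partial_fit-style epoch update (as, e.g.,
# scikit-learn's MLPRegressor does); all hyperparameters are assumed.
import numpy as np

def train_with_early_stopping(model, X_train, y_train, X_val, y_val,
                              max_epochs=200, patience=10):
    """Stop training once validation loss fails to improve for
    `patience` consecutive epochs."""
    best_loss, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        model.partial_fit(X_train, y_train)              # one epoch of updates
        val_loss = np.mean((model.predict(X_val) - y_val) ** 2)
        if val_loss < best_loss:
            best_loss, best_epoch = val_loss, epoch      # new best epoch
        elif epoch - best_epoch >= patience:             # no recent improvement
            break
    return model

# Example usage with scikit-learn (assumed available):
# from sklearn.neural_network import MLPRegressor
# model = train_with_early_stopping(MLPRegressor(hidden_layer_sizes=(8,)),
#                                   X_train, y_train, X_val, y_val)
```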

### Table 4: Neural network tools.

2003

"... In PAGE 20: ... They are applicable in almost every situation where a relationship between input and output parameters exists, even in cases where this relationship is very complex and cannot be expressed or handled by mathematical or other modelling means. Table4 summarizes the features of the most commonly used neural network tools. Beyond general purpose and stand-alone tools, there exist library tools, such as the SPRLIB and the ANNLIB (developed by the Delft University Technology at Netherlands) emphasizing on image classification and pattern recognition applications.... ..."

Cited by 2

### Table 1 Neural network architectures

2003

"... In PAGE 6: ...etter as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t)in Table1 , a crude linear extrapolation estimate, i.e.... In PAGE 6: ...avelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table1 sum- marizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ...sed. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time- lagged vector of the same size as the memory order. Hence the window in Table1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.... In PAGE 6: ... Hence the window in Table 1 is the equivalent lagged vector length. In Table1 , NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...ion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table1 , we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."