### Table 1 lists the various neural networks used to implement the above-mentioned model of child language development, together with a specification of the typical input and output for each process that may be involved in child language development.

"... In PAGE 3: ... Table 1: The various neural networks implementing the above-mentioned model of child language development. The table legend is IP = Input Layer, OP = Output Layer, HI = Hidden Layer, INT = Intermediate Layer.... ..."

### Table 4. Generalization capabilities for feedforward neural networks with different numbers of hidden-layer neurons.

"... In PAGE 14: ... Table 4 presents the results concerning the generalization properties of the feedforward neural networks trained with both algorithms. The data employed in all the inversions... In PAGE 15: ... calculated by means of the method described in section 4. Table 4 concerns only stratified soils, for the neural inversion is not employed when the soils are homogeneous. The main reason for degrading the data is to simulate the effects on the inversions of environmental noise, and of discrepancies between the model and the geophysical reality.... ..."

### Table 1. Test-set sensitivity (Se) and specificity (Sp) of the neural network model in classifying cardiac beats, for various training algorithms and numbers of units in the hidden layer

"... In PAGE 9: ... Table 1 displays the experimental results obtained from the use of various training algorithms and different numbers of units in the hidden layer of the neural network. The Bayesian regularisation method, described in the previous section, was found to be the most effective.... ..."
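Se and Sp in the caption above are the standard sensitivity and specificity measures computed from a test-set confusion matrix; as a minimal illustrative sketch (the function name and the confusion counts below are hypothetical, not taken from the paper):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    # Sensitivity (Se): fraction of abnormal beats correctly detected.
    se = tp / (tp + fn)
    # Specificity (Sp): fraction of normal beats correctly identified.
    sp = tn / (tn + fp)
    return se, sp

# Hypothetical confusion counts for a test set of cardiac beats:
print(sensitivity_specificity(tp=90, fn=10, tn=80, fp=20))  # -> (0.9, 0.8)
```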

### Table 1. Comparison of the HCMAC neural network with the MHCMAC neural network

"... In PAGE 15: ... D. Comparison of HCMAC Neural Network with the MHCMAC Neural Network. Table 1 compares the HCMAC neural network with the MHCMAC neural network in terms of memory requirement, topology structure, and input-feature assignment approach. Table 1 shows that the memory requirement of the original HCMAC neural network grows as 2 raised to the ceiling of the logarithm of the input dimension, whereas the memory requirement of the MHCMAC neural network grows only linearly with the input-feature dimension. Moreover, the learning structure of the self-organizing HCMAC neural network is expanded based on a full binary tree topology, whereas the MHCMAC neural network is expanded based on an exact binary tree topology.... ..."
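The two growth rates contrasted in the excerpt can be illustrated with a small hypothetical sketch; the helpers below show only the asymptotic behaviour described (2^ceil(log2(d)) versus linear in d), not the papers' exact memory formulas:

```python
import math

def hcmac_memory_growth(d: int) -> int:
    # Full binary tree over d input features: memory grows as 2^ceil(log2(d)).
    return 2 ** math.ceil(math.log2(d))

def mhcmac_memory_growth(d: int) -> int:
    # Exact binary tree: memory grows only linearly with d.
    return d

# Compare the two growth rates for a few input dimensions:
for d in (2, 5, 9, 17):
    print(d, hcmac_memory_growth(d), mhcmac_memory_growth(d))
```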

### Table 1: Total number of Neural Network Models

"... In PAGE 3: ... Table 1: Total number of Neural Network Models. One method to speed the modeling process was to increase the hidden-node counts in steps of two. To illustrate these concepts, consider a neural network with 3 inputs and 1 output.... In PAGE 3: ...-1-1-1; ... ; 3-3-5-5-1; and 3-5-5-5-1. For N inputs, the number of neural-network architecture permutations equals 1 + N + N^2 + N^3, where 1 means there is only one neural network architecture with zero hidden layers, N is the number of ways of creating a neural network architecture with one layer, N^2 is the number of ways with two layers, and N^3 is the number of ways with three layers. Table 1 shows the total number of permutations of neural network architectures, metric categories, and group configurations. In order to build and train 33,190 neural networks, an automated neural network program was used.... ..."
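Taking the excerpt's permutation formula at face value, a quick sketch (the function name is illustrative, not from the paper):

```python
def architecture_permutations(n: int) -> int:
    # 1 (no hidden layers) + n (one layer) + n^2 (two layers) + n^3 (three layers).
    return 1 + n + n**2 + n**3

# For the 3-input example in the excerpt: 1 + 3 + 9 + 27
print(architecture_permutations(3))  # -> 40
```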

### Table 4. Progression in classification accuracy for 2-D subcellular patterns using a one-hidden-layer neural network with 20 hidden nodes. MV: majority voting

"... In PAGE 10: ... and image sets. Table 4 shows the performance of this classifier (Journal of Biomedical Optics, September/October 2004, Vol. 9, No. ...).... In PAGE 11: ... The same strategy of constructing the optimal majority-voting classifier was conducted on these two new feature subsets. As seen in Table 4, the result was a small improvement in classification accuracy (to 92%), and the same accuracy was obtained with and without the DNA features (indicating that some of the new features captured approximately the same information). The results in Table 4 summarize extensive work to optimize the classification of protein patterns in 2-D images, but the prior conclusion that including the DNA features provides an improvement of approximately 2% .... Since feature selection improved classification accuracy in the previous experiments, we conducted a comparison of eight different feature reduction methods (described in Sec.... ..."
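The majority-voting (MV) step named in the caption combines per-image predictions into a single set-level label; a minimal sketch (the class names below are hypothetical, not from the paper):

```python
from collections import Counter

def majority_vote(labels):
    # Pick the most frequent predicted class among a set of images;
    # Counter.most_common breaks ties by first occurrence.
    return Counter(labels).most_common(1)[0][0]

print(majority_vote(["nucleolar", "nucleolar", "golgi"]))  # -> nucleolar
```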

### Table 1. Principal statistical/neural network models used to categorize faces by sex.

### Table 3. Properties and statistical parameters of the neural network models (columns: Model, No. of inputs, No. of data, No. of hidden layers, No. of hidden neurons, AAPE, SSE)


"... In PAGE 9: ... Using 1012 input data entries, a correlation for ANN model-A was obtained with one hidden layer and 25 neurons. As illustrated in Table 3, the model correlated the experimental data with an average error of 32.... In PAGE 11: ... the best accuracy. A one-hidden-layer network was found to be suitable. The number of neurons in the hidden layer was varied until a minimum sum-squared error was obtained. Table 3 presents the final neural network properties and statistical parameters of these models. The number of neurons tabulated for the different models delivered acceptable results.... In PAGE 13: ... Fig. 9. Cross plot of ANN model-D. Models were considered to facilitate comparisons with models available in the literature and also to check whether or not more specialized models might lead to better accuracies than a generalized model. Table 3 compares the statistical parameters for these models. The results show that the comprehensive ANN model-D delivers a competitive correlation coefficient.... ..."