### Table 1: Architectural specifications of the hybrid neural network architecture

"... In PAGE 3: ... These Hebbian connections are used to spread the activations from one Kohonen map to another, such that a localised activity pattern in either Kohonen map will cause a corresponding localised activity pattern on the other Kohonen map; this would be the basis of concept lexicalisation. Table 1 gives the architectural specifications of the three neural networks to be used for the simulation, with a detailed description to follow in the forthcoming discussion. Table 1: Architectural specifications of the hybrid neural network architecture ... ..."

### Table 2: For test function with two inputs, mean (over 50 data samples) and 95% confidence interval for standardized MSE at 225 test locations, and for the temperature and ozone datasets, cross-validated standardized MSE, for the six methods. (Columns: Method; Function with 2 inputs; Temp. data; Ozone data)

2004

Cited by 4


### Table 2. The distribution of design methodology

"... In PAGE 13: ... Count simply generates the statistics of design methodologies and reuse metrics for each application. Ten different applications, from packages to special-purpose applications, are summarized with each design methodology in Table 2 according to the proposed design methodologies. Five out of the ten applications have a high percentage of functions which use an ad hoc design approach.... In PAGE 15: ... The non-bit-sliced and feature-oriented are the most common approaches for these types of applications. This finding is very much consistent with Table 2 as well.... ..."

### Table 1: Performance of Different Neural Network Training Functions

2003

"... In PAGE 7: ...We used the same testing data (6890), the same network architecture and the same activation functions to identify the best training function, which plays a vital role in classifying intrusions. Table 1 summarizes the results of three different networks: the network using SCG performed with an accuracy of 95.25%; the network using RP achieved an accuracy of 97.... ..."

Cited by 11

### Table 3: Weight discretization in multilayer neural networks: on-chip learning

... by allowing a dynamic rescaling of the weights (and hence the weight range) by adapting the gain of the activation function. The calculation of an activation value a_j in a multilayer network is done as follows:
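The excerpt breaks off at the colon, so the equation itself is missing. The standard form of this computation, with an explicit gain term matching the "adapting the gain" rescaling the caption refers to, is presumably:

```latex
a_j = F\!\left( g_j \sum_i w_{ji}\, a_i \right)
```

where $F$ is the activation function, $g_j$ the adaptable gain of unit $j$, $w_{ji}$ the (discretized) weights, and $a_i$ the activations of the preceding layer.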

"... In PAGE 5: ... This means specifically that at least the weight values are represented with only a limited precision. Simulations have shown that the popular backpropagation algorithm (see for example [Rumelhart-86]) is highly sensitive to the use of limited-precision weights and that training fails when the weight accuracy is lower than 16 bits (first two references in Table 3). This is mainly because the weight updates are often smaller than the quantization step, which prevents the weights from changing.... In PAGE 5: ... In order to reduce the chip area needed for weight storage and to overcome system noise, a further reduction of the number of allowed weight values is desirable. Several weight discretization algorithms have therefore been designed, and an extensive list of them and the attainable reduction in required precision is given in Table 3. Some of these weight discretization algorithms have already proven their usefulness in hardware implementations.... ..."
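The quantization-underflow failure mode the excerpt describes is easy to reproduce numerically. A minimal sketch, assuming an 8-bit uniform weight grid and a typical small backpropagation update (both hypothetical round numbers, not values from the cited papers):

```python
import numpy as np

bits = 8
step = 2.0 / (2 ** bits)                 # uniform grid on [-1, 1], step ~ 0.0078

def quantize(w):
    """Round a weight onto the discrete on-chip grid after each update."""
    return float(np.round(w / step) * step)

lr, grad = 0.01, 0.1                     # backprop update lr * grad = 0.001
w = 0.25                                 # 0.25 lies exactly on the grid
for _ in range(100):
    w = quantize(w - lr * grad)          # update < step/2: rounding undoes it

w_float = 0.25
for _ in range(100):
    w_float -= lr * grad                 # full precision: the weight does move

print(w, w_float)
```

The quantized weight is stuck at its initial value after 100 updates, while the full-precision weight has drifted to 0.15, which is exactly the "updates smaller than the quantization step" problem the excerpt attributes to low-precision backpropagation.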

### Table 2. Neural network performance comparison.

"... In PAGE 5: ... Feedforward multi-layer perceptron (MLP) and Elman networks, with different complexity, were used and tested on a validation set formed by 784 independent samples. Table 2 shows the obtained results. Although both networks have similar performances, the Elman recurrent network, with 10 hidden neurons and a tan-sigmoidal activation function, exhibits lower training times, converging more rapidly to the desired error value.... ..."
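The defining difference between the two architectures compared in the excerpt is the Elman network's context layer, a copy of the previous hidden state fed back alongside the input. A forward-pass sketch with arbitrary (hypothetical) sizes and random weights, not the trained networks of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_hid, n_out = 3, 10, 1                 # hypothetical layer sizes
W_x = rng.normal(size=(n_hid, n_in)) * 0.1    # input weights
W_h = rng.normal(size=(n_hid, n_hid)) * 0.1   # context (recurrent) weights
W_o = rng.normal(size=(n_out, n_hid)) * 0.1   # output weights

def elman_forward(xs):
    """Run a sequence through an Elman network, one step at a time."""
    h = np.zeros(n_hid)                       # context starts empty
    outs = []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h)        # tan-sigmoidal hidden layer
        outs.append(W_o @ h)
    return np.array(outs)

seq = rng.normal(size=(5, n_in))
out = elman_forward(seq)
print(out.shape)                              # one output per time step
```

A plain MLP would compute `np.tanh(W_x @ x)` with no `W_h @ h` term, so each step would be independent of the sequence history.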

### Table 2 Applications of neural networks for multivariate calibration

"... In PAGE 5: ... The extrapolation capabilities of neural network models have been shown to be very poor.13 Practical Applications Table 2 shows the practical applications of neural networks in multivariate calibration found in the recent literature. The last entry in Table 2 needs some clarification.... In PAGE 5: ...13 Practical Applications Table 2 shows the practical applications of neural networks in multivariate calibration found in the recent literature. The last entry in Table 2 needs some clarification. Here the neural network does not model a calibration process of an analytical technique, but represents an empirical model of the manufacturing process of the samples.... ..."
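The poor extrapolation the excerpt mentions follows from the bounded activations: outside the calibration range, tanh hidden units saturate and the model flattens. A toy demonstration, using a random-feature (extreme-learning-machine-style) least-squares fit purely to keep the sketch short, not the training procedures of the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy calibration data: response y depends linearly on concentration x
x_train = np.linspace(0.0, 1.0, 50)
y_train = 2.0 * x_train

# One-hidden-layer tanh network; random hidden layer, output weights
# fitted by ordinary least squares
H = 20
W_in = rng.normal(size=(H,)) * 3.0
b_in = rng.normal(size=(H,))

def hidden(x):
    return np.tanh(np.outer(x, W_in) + b_in)

w_out, *_ = np.linalg.lstsq(hidden(x_train), y_train, rcond=None)

def predict(x):
    return hidden(x) @ w_out

# Inside the calibration range the fit is close...
err_in = np.max(np.abs(predict(x_train) - 2.0 * x_train))
# ...but beyond it the saturated tanh units cannot follow even a
# linear trend, so the extrapolation error blows up
x_out = np.array([2.0, 3.0])
err_out = np.max(np.abs(predict(x_out) - 2.0 * x_out))
print(err_in, err_out)
```

Interpolation error stays small while extrapolation error is orders of magnitude larger, which is why calibration models of this kind are only trusted within the concentration range they were trained on.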