### Table 2. Scales of the 4-point Daubechies Discrete Wavelet Transform

1993

"... In PAGE 6: ... The DWT coefficients for each running-mean ERP were squared, summed, averaged, and plotted as a function of time relative to the stimulus. Each row of graphs represents one scale of the transform, beginning with the smallest scales at the top (see Table 2) and proceeding to the largest scale at the bottom. Each column of graphs corresponds to one electrode site in the order Fz, Cz, Pz, from left to right.... In PAGE 12: ...transform was based on the 4-point Daubechies filters, which appeared to be superior to the 20-point filters used in the initial linear regression models. Second, since low frequency information seemed valuable in the linear regression models, the range of the transform was extended, adding a fifth scale (Table 2). Third, selection of the coefficients was not performed by the decimation approach taken for the linear regression models.... ..."

Cited by 4
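The excerpt describes squaring, summing, and averaging DWT detail coefficients at each scale. A minimal sketch of that per-scale power computation, assuming the 4-tap Daubechies (D4) filters named in the excerpt and an illustrative ERP-like toy signal (the signal and level count here are assumptions, not the paper's data):

```python
import numpy as np

# 4-tap Daubechies (D4) analysis filters, matching the "4-point Daubechies"
# transform named in the excerpt.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))  # low-pass
g = np.array([h[3], -h[2], h[1], -h[0]])                            # high-pass

def dwt_step(x):
    """One analysis step (filter + downsample) with periodic boundaries."""
    n = len(x)
    approx = np.zeros(n // 2)
    detail = np.zeros(n // 2)
    for i in range(n // 2):
        for k in range(4):
            approx[i] += h[k] * x[(2 * i + k) % n]
            detail[i] += g[k] * x[(2 * i + k) % n]
    return approx, detail

def scale_power(x, levels):
    """Mean squared detail coefficients at each scale, finest scale first."""
    powers = []
    for _ in range(levels):
        x, d = dwt_step(x)
        powers.append(float(np.mean(d ** 2)))
    return powers

rng = np.random.default_rng(0)
erp = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
print(scale_power(erp, 5))  # one mean-squared-power value per scale
```

The five-level decomposition mirrors the "fifth scale" extension mentioned in the excerpt; a production implementation would typically use a wavelet library rather than the explicit loops above.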

### Table 3. Mean square error of F0 contours generated by neural network with respect to natural speech (columns: Neural net | No. of elements | MSE)

"... In PAGE 3: ... For the experiments, we utilized the SNNS neural network simulation software [8]. The results of F0 contour prediction on the test data set are shown in Table 3. In the table, the MSE (mean square error) value corresponds to the average squared difference between the generated F0 contour and the contour extracted from natural speech, in log scale.... ..."
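The excerpt defines its MSE as the average squared difference between generated and natural F0 contours on a log scale. A minimal sketch of that metric, with hypothetical F0 values in Hz (the contours below are illustrative only, not the paper's data):

```python
import numpy as np

def mse_log(f0_pred, f0_ref):
    """Mean squared difference between F0 contours, taken on a log scale."""
    f0_pred = np.asarray(f0_pred, dtype=float)
    f0_ref = np.asarray(f0_ref, dtype=float)
    return float(np.mean((np.log(f0_pred) - np.log(f0_ref)) ** 2))

# Hypothetical generated vs. natural F0 contours (Hz)
natural = [120.0, 125.0, 130.0, 128.0, 122.0]
generated = [118.0, 126.0, 133.0, 127.0, 121.0]
print(mse_log(generated, natural))
```

Working in log frequency makes the error relative rather than absolute, so a 5 Hz error matters more at 100 Hz than at 300 Hz.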

### Table 1. Conversion of binary codes to a thermometer scale

"... In PAGE 5: ... The signals from these inputs do not carry important information for the parity problem. The binary code change on thermometer scale In these experiments the neural network converts binary code to a thermometer scale as in Table 1. The input vector of the neural network consists of three inputs from Table 1.... In PAGE 5: ... The binary code change on thermometer scale In these experiments the neural network converts binary code to a thermometer scale as in Table 1. The input vector of the neural network consists of three inputs from Table 1 and two inputs with noisy information.... ..."
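A thermometer (unary) code represents the integer value n as n ones followed by zeros, so adjacent codes differ in exactly one element. A minimal sketch of the conversion the excerpt describes, assuming a 3-bit binary input like the network's three inputs (the 7-element output width is an assumption covering values 0–7):

```python
def binary_to_thermometer(bits, width=7):
    """Convert a binary code (list of 0/1, MSB first) to a thermometer code:
    value n becomes n ones followed by (width - n) zeros."""
    n = int("".join(str(b) for b in bits), 2)
    return [1] * n + [0] * (width - n)

# Example: binary 101 (= 5) on a 7-element thermometer scale
print(binary_to_thermometer([1, 0, 1]))  # -> [1, 1, 1, 1, 1, 0, 0]
```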

### Table 1 : Speech parameter settings for several emotions

2001

Cited by 6

### Table 3. Mapping Knowledge Base into Neural Network

"... In PAGE 11: ... Correspondences Between Rules and Neural Network In the KBANN approach [20, 21], a symbolic explanation-based learner uses a roughly correct domain theory to explain why an example belongs to the target concept. The explanation tree (hierarchical knowledge base) produced is mapped into a neural network: this mapping, specified by Table 3, defines the topology of networks created by KBANN as well as their initial link weights.... ..."

Cited by 1
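In KBANN (Towell & Shavlik), each rule becomes a unit whose antecedents get links of weight ±w and whose bias is set so the unit fires only when the rule body holds. A sketch of that correspondence for simple propositional rules; the weight value w = 4 follows the original KBANN work, but the helper names and the example rule are illustrative assumptions:

```python
# Sketch of the KBANN rule-to-unit mapping: positive antecedents get weight
# +W, negated antecedents -W, and the bias makes a (sigmoid) unit activate
# only when the rule body is satisfied.
W = 4.0  # link weight used in the original KBANN work

def map_conjunctive_rule(pos_antecedents, neg_antecedents):
    """Return (weights, bias) for a unit encoding a conjunctive rule."""
    weights = {a: W for a in pos_antecedents}
    weights.update({a: -W for a in neg_antecedents})
    # Net input exceeds bias only if all positive antecedents are true
    # and all negated antecedents are false.
    bias = (len(pos_antecedents) - 0.5) * W
    return weights, bias

def map_disjunctive_rules(antecedents):
    """Return (weights, bias) for a unit encoding a disjunction."""
    return {a: W for a in antecedents}, 0.5 * W

# Rule: target :- a, b, not c.  With a, b true and c false, the unit fires.
w, b = map_conjunctive_rule(["a", "b"], ["c"])
net = w["a"] * 1 + w["b"] * 1 + w["c"] * 0 - b
print(net > 0)  # -> True
```

After this mapping, KBANN adds low-weight links for features outside the domain theory and refines all weights with backpropagation.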

### Table 1 Emotion Classification Accuracy with KNN for each Emotion

"... In PAGE 8: ... The neural network structure used with the Marquardt Backpropagation Algorithm consisted of an input layer with 12 nodes, a hidden layer with 17 nodes, and an output layer with 3 nodes. Table 1 and Table 2 report the classification accuracy of each emotion set with KNN and MBP respectively.... ..."
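A minimal k-nearest-neighbour classifier of the kind the table evaluates, over 12-dimensional feature vectors matching the 12 input nodes mentioned for the MBP network. The features, class labels, and k value here are illustrative assumptions, not the paper's data:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Illustrative 12-dimensional feature vectors for two emotion classes
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (10, 12)),   # class "neutral"
               rng.normal(1.0, 0.3, (10, 12))])  # class "angry"
y = ["neutral"] * 10 + ["angry"] * 10

print(knn_predict(X, y, np.full(12, 0.95)))
```

KNN needs no training phase, which makes it a common baseline against trained networks such as the 12-17-3 MBP model in the excerpt.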

### Table 1: Knowledge Base – ANN Correspondences (columns: Knowledge Base | Neural Network)

1990

"... In PAGE 2: ...the knowledge base, as described in Table 1. The next section presents the approach KBANN uses to translate rules into neural networks.... ..."

Cited by 180

### Table 8 Neural network parameters

"... In PAGE 30: ... The successful ANN models were saved in a file along with information about each particular model, such as the variable selection, variable transformations, and number of hidden nodes established. Table 8 gives the network parameters used to build the ANN models. Testing the ANN Models The ANN models were tested by running them using the test data sets prepared for each element.... ..."

### Table 3-2. VLSI neural network chips

1996

"... In PAGE 38: ... Table 3-1. Analog VLSI vs.... ..."

### Table 1 Neural network architectures

2003

"... In PAGE 6: ...etter as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ...avelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ...sed. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.... In PAGE 7: ...ion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."
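The excerpt's evaluation metrics can be stated compactly. A sketch of NMSE and directional symmetry (DS), assuming their usual definitions — NMSE normalizes the squared error by the variance of the target series, and DS is the fraction of steps whose predicted direction of change matches the actual one (the series below are illustrative):

```python
import numpy as np

def nmse(pred, actual):
    """Normalized mean squared error: MSE divided by the target variance."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.mean((pred - actual) ** 2) / np.var(actual))

def directional_symmetry(pred, actual):
    """Fraction of steps where predicted and actual changes agree in sign."""
    dp = np.diff(np.asarray(pred, dtype=float))
    da = np.diff(np.asarray(actual, dtype=float))
    return float(np.mean(np.sign(dp) == np.sign(da)))

actual = [1.0, 1.2, 1.1, 1.4, 1.3]
pred   = [1.0, 1.3, 1.0, 1.5, 1.4]
print(nmse(pred, actual), directional_symmetry(pred, actual))
```

An NMSE of 1.0 corresponds to predicting the series mean everywhere, so values well below 1 indicate a model that captures real structure; DS is often the more relevant score when only the direction of change matters.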