### Table 1: Knowledge Base – ANN Correspondences

1990

"... In PAGE 2: ...the knowledge base, as described in Table 1. The next section presents the approach KBANN uses to translate rules into neural networks.... ..."

Cited by 180
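The rules-to-networks translation this entry refers to can be made concrete. Below is a minimal sketch of the standard KBANN scheme (positive antecedents of a conjunctive rule get weight +ω, negated ones −ω, and the bias is set so the unit fires only when the rule body is satisfied); the function names and the weight magnitude `OMEGA = 4.0` are illustrative choices, not values taken from the paper's table.

```python
import math

OMEGA = 4.0  # link-weight magnitude; KBANN uses a fixed value of this kind

def rule_to_unit(pos, neg):
    """Map a conjunctive rule into a sigmoid unit's weights and bias.

    Positive antecedents get weight +OMEGA, negated ones -OMEGA; the bias
    is chosen so the unit is active only when every positive antecedent
    is true and every negated one is false.
    """
    weights = {a: OMEGA for a in pos}
    weights.update({a: -OMEGA for a in neg})
    bias = -(len(pos) - 0.5) * OMEGA
    return weights, bias

def activate(weights, bias, truth):
    """Sigmoid activation of the unit on a truth assignment (0/1 values)."""
    net = bias + sum(w * truth[a] for a, w in weights.items())
    return 1.0 / (1.0 + math.exp(-net))

# Rule: fires :- b, c, not d.
w, b = rule_to_unit(pos=["b", "c"], neg=["d"])
print(activate(w, b, {"b": 1, "c": 1, "d": 0}))  # body satisfied: > 0.5
print(activate(w, b, {"b": 1, "c": 0, "d": 0}))  # missing antecedent: < 0.5
```

With this weight/bias choice the net input is +ω/2 when the body is satisfied and at most −ω/2 otherwise, so the unit's initial behavior mirrors the rule before any refinement by backpropagation.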

### Table 1 Comparison of core/periphery fitness measures using Beck et al. (2003; ND) data

2004

"... In PAGE 5: ...P. Boyd, W.J. Fitzgerald, R.J. Beck / Social Networks ... columns 4 and 5 of Table 1. Column 6 of Table 1 compares the results from the UCINET (Version 6.... In PAGE 5: ... For all 12 groups, all three of these algorithms matched the exhaustive search by consistently finding the global optimum from several starting configurations. [Table 1 about here] From the results in Table 1, the genetic algorithm in UCINET finds the global optimum in two out of our 12 cases. The UCINET fit statistic is among the five best for seven of the 12 cases, and among the ten best for nine of the 12 cases.... In PAGE 7: ... A low probability along with an intuitively high observed fitness value suggests that the observed data may have a core/periphery structure. To illustrate this permutation test, we used Mathematica to program a random permutation generator based upon the observed within-group distribution of messages for each of the 12 groups from Table 1. As with the observed data, diagonal cells were also ignored for these permutations.... In PAGE 7: ... For Group 1, for example, no random permutation in each of the 3 runs produced an optimal fitness value equal to or greater than the observed fitness value of 0.867 (see Table 1). For Group 3, 43 of the random permutations in the first run produced optimal fitness values equal to or greater than the observed fitness value (0.... ..."

Cited by 1
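The permutation test described in this entry's snippet can be sketched in a few lines. This is a hedged illustration, not the authors' Mathematica code: `cp_fitness` uses a correlation with an idealized core/periphery pattern (1 only for core–core ties), the search over core sets is exhaustive (feasible only for small groups), and off-diagonal cells are shuffled while the diagonal is ignored, as in the snippet. All function names are mine.

```python
import itertools
import random

def cp_fitness(adj, core):
    """Pearson correlation between off-diagonal adjacency entries and the
    ideal core/periphery pattern (1 iff both endpoints are in the core)."""
    n = len(adj)
    xs, ys = [], []
    for i in range(n):
        for j in range(n):
            if i != j:  # diagonal cells are ignored
                xs.append(adj[i][j])
                ys.append(1.0 if i in core and j in core else 0.0)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def best_fitness(adj):
    """Exhaustive search over core sets for the optimal fitness value."""
    n = len(adj)
    return max(cp_fitness(adj, set(c))
               for r in range(1, n)
               for c in itertools.combinations(range(n), r))

def permutation_p(adj, runs=200, rng=random.Random(0)):
    """Fraction of random permutations of the off-diagonal cells whose
    optimal fitness equals or exceeds the observed optimal fitness."""
    n = len(adj)
    observed = best_fitness(adj)
    cells = [adj[i][j] for i in range(n) for j in range(n) if i != j]
    hits = 0
    for _ in range(runs):
        rng.shuffle(cells)
        it = iter(cells)
        perm = [[0 if i == j else next(it) for j in range(n)] for i in range(n)]
        if best_fitness(perm) >= observed:
            hits += 1
    return observed, hits / runs

# Toy example: nodes 0 and 1 form a tight two-node core in a 5-node group.
adj = [[0] * 5 for _ in range(5)]
adj[0][1] = adj[1][0] = 1
obs, p = permutation_p(adj, runs=100)
```

A low `p` alongside a high `obs` is exactly the situation the snippet interprets as evidence of core/periphery structure.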

### Table 3. Mapping Knowledge Base into Neural Network

"... In PAGE 11: ...2. Correspondences Between Rules and Neural Network. In the KBANN approach [20, 21], a symbolic explanation-based learner uses a roughly correct domain theory to explain why an example belongs to the target concept. The explanation tree (hierarchical knowledge base) produced is mapped into a neural network: this mapping, specified by Table 3, defines the topology of networks created by KBANN as well as their initial link weights.... ..."

Cited by 1

### Table 1 Neural network architectures

2003

"... In PAGE 6: ...better as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ...wavelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ...used. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...ion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."
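The lagged-window construction and two of the error measures named in the snippet are easy to make concrete. A sketch under the usual definitions — NMSE as mean squared error normalized by the variance of the actual series (1.0 means no better than predicting the mean), DS as the fraction of one-step changes whose sign the prediction gets right. Function names are mine, and DVS is omitted since its exact definition sits in the paper's earlier text.

```python
def lagged_windows(series, window):
    """Time-lagged input vectors: each input is the previous `window`
    values of the series; the target is the next value."""
    X = [series[i - window:i] for i in range(window, len(series))]
    y = series[window:]
    return X, y

def nmse(actual, predicted):
    """Normalized mean squared error: MSE / variance of the actual series."""
    m = sum(actual) / len(actual)
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    var = sum((a - m) ** 2 for a in actual) / len(actual)
    return mse / var

def directional_symmetry(actual, predicted):
    """DS: fraction of steps where the predicted change agrees in sign
    with the actual change."""
    hits = sum((actual[t] - actual[t - 1]) * (predicted[t] - predicted[t - 1]) > 0
               for t in range(1, len(actual)))
    return hits / (len(actual) - 1)

X, y = lagged_windows([1, 2, 3, 4, 5, 6], window=3)
# X[0] == [1, 2, 3] predicts y[0] == 4, and so on
```

Under this reading, the "windows of effective length 10, 15, 20, and 25" in the snippet are simply calls with `window` set to those values.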

### Table 2. The manual sub-structuring approach employing neural networks for five sub-structures obtained by partitioning an electric transmission tower.

"... In PAGE 5: ... We first performed manual partitioning of the electric transmission tower into four legs and a head (Figure 4). The results of applying the neural network classifiers for predicting the existence of damage within the structures identified using the manual sub-structuring approach are given in Table 2. The prediction performance of the neural network classifiers is measured by computing the overall classification accuracy, as well as the recall, precision and F-value for each class (since in the damage detection problem the target variable (element) of interest is the damaged one, we consider the damage class and.... In PAGE 5: ... In order to alleviate the effect of neural network instability in our experiments, measures of prediction accuracy for each substructure are averaged over 20 trials of the neural network learning algorithm. From Table 2 it can be observed that the manual sub-structuring approach followed by building neural network classification models can be very accurate for predicting the presence of damage within the substructures at the first level of partitioning. The achieved accuracy was higher than 98% for four substructures, while for leg 2 the classification accuracy was slightly worse (95.... ..."
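The evaluation protocol in the snippet — accuracy plus recall, precision and F-value for the damage class, averaged over repeated training runs — follows the standard definitions, sketched below. The label strings and function names are illustrative, not taken from the paper.

```python
def accuracy(y_true, y_pred):
    """Overall classification accuracy: fraction of correct predictions."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def prf(y_true, y_pred, positive="damaged"):
    """Precision, recall and F-value with the damage class as positive."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

def mean_over_trials(values):
    """Average a metric over repeated runs to damp the run-to-run
    instability of neural network training (20 trials in the snippet)."""
    return sum(values) / len(values)
```

For example, `prf(["damaged", "ok", "damaged", "ok"], ["damaged", "damaged", "ok", "ok"])` yields precision, recall and F-value of 0.5 each (one true positive, one false positive, one false negative).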

### Table 4. Design Pattern Decision Tree Neural Network

2005

"... In PAGE 8: ... Table 4. Overall learning precision results. It can be seen that the two learning methods produced very similar results; however, the precision was worse in the case of Adapter Object.... ..."

Cited by 7

### Table 5. Illustration of fuzzy control for the neural network

### Table 5 shows the results of models from regression, neural networks and neuro-fuzzy approaches

"... In PAGE 20: ... ANFIS (Adaptive Neuro-Fuzzy Inference System) was used to develop the neuro-fuzzy model (Jang, 1993). Table 5. Criteria values for the tool wear data (jackknife approach) ... ..."
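The "jackknife approach" named in the caption — leave-one-out evaluation of each candidate model — can be sketched generically. The mean-predictor at the end is only a stand-in model to show the interface; the actual criteria and models compared (regression, neural network, ANFIS) are those in the paper, and every name here is mine.

```python
def jackknife_mse(X, y, fit, predict):
    """Leave-one-out (jackknife) estimate of squared prediction error:
    fit the model n times, each time holding one observation out, and
    average the squared error on the held-out point."""
    n = len(y)
    errs = []
    for i in range(n):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        model = fit(X_train, y_train)
        errs.append((predict(model, X[i]) - y[i]) ** 2)
    return sum(errs) / n

# Stand-in "model": predict the training mean, ignoring the inputs.
fit_mean = lambda X, y: sum(y) / len(y)
predict_mean = lambda model, x: model
```

Swapping `fit`/`predict` for each competing model and comparing the resulting criterion values is the comparison pattern the table reports.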

### TABLE 3. PERFORMANCE COMPARISON BETWEEN HYBRID LEARNING APPROACH AND CONVENTIONAL NEURAL NETWORKS

2002

Cited by 3