### Table 4: Neural Network Modeling of Bandpass Filter

"... In PAGE 4: ...oss was 1-2.5 dB. Sample measured results are shown in Figure 9. The measured data was used to obtain a neural network model ( Table4 ) for the center frequency, bandwidth, and minimum pass band insertion loss. Figure 8.... In PAGE 4: ...3 mm-Wave Filter Synthesis Results The neural network models were used to design LTCC mm-wave low pass and band pass filters using the neuro- genetic approach. The genetic algorithm parameters chosen for filter synthesis are shown in Table4 . These parameters were chosen such that the algorithm converged to the desired optimal point with few iterations.... ..."

### Table 5. Comparative classification percentages of the neural network with respect to the optimized neural network.

2007

"... In PAGE 9: ... Comparative classification percentages of the associative memory using median opera- tor with respect to the optimized associative memories M 32 M 64 M 96 M Bolt 100% 100% 100% 100% Washer 100% 100% 100% 100% Eyebolt 90% 90% 90% 90% Hook 50% 50% 50% 50% Dovetail 85% 85% 85% 85% For the case of the multilayer neural network trained with the back-propagation al- gorithm, the performance for , and was 89 %, 100% and 100%, while for M it was 98%. Table5 summarizes the classification results for all neural networks trained with the back-propagation algorithm. 32 M 64 M 94 M Table 5.... ..."

### Table 2 Optimal neural network configurations found using genetic algorithm

in Combining Genetic Algorithms, Neural Networks and Wavelet Transforms for Analysis of Raman Spectra

"... In PAGE 8: ...8 5 5.2 0 50 100 150 200 Generations R M SE P ( % ) Data compressed to 16 data points Data compressed to 32 data points Figure 4 Fitness of best individual on island number 16 The configurations for the neural network with 16 and 32 inputs chosen by the genetic algorithm are detailed in Table2 . Table 2 Optimal neural network configurations found using genetic algorithm ... ..."

### Table 3. The optimal neural network architecture for predicting financial indices

"... In PAGE 13: ... We increment the number of hidden nodes in a stepwise manner to achieve the optimal configuration. Table3 shows the final architectural and training details of neural networks used for predicting direction and series value of index returns. Each neural network consists of an input layer, hidden layer and output layer.... ..."

### Table 4. Variables optimized using neural networks trained with the AC algorithm and back-propagation algorithm

### Table 4: Validating accuracies obtained from the new approach and different FFBP neural networks using the raw data

"... In PAGE 45: ... Every time one of the structures was set aside for testing and the other four were used for learning. Results of the experiments can be found in Table4 , where a prediction is deemed correct if 4S prediction was correct after rounding to the nearest integer. If the prediction was exactly between two integers (e.... In PAGE 45: ... We can see that 4S performs better than FOIL and mFOIL, slightly better than GOLEM and CLAUDIEN, similar to MILP, and FORS and worse than FFOIL. Table4 Results of FEM domain experiments Minimal number of examples 2 5 10 Maximal number of literals 2 4 6 2 4 6 2 4 6 Structure used for testing A (55 edges) 22 22 22 22 20 19 21 21 22 B (42 edges) 10 12 12 11 10 7 14 10 13 C (28 edges) 5 10 10 6 5 7 8 8 8 D (57 edges) 16 21 22 14 20 12 14 21 18 E (96 edges) 20 22 22 11 6 3 9 28 4 Total correct (from 278) 73 87 88 64 61 48 66 88 65 Table 5 Comparison to other systems on FEM domain mFOIL Struc ture FO IL Lapl m=0 GOL EM MI LP FFOI L FOR S CLAU DIEN 4S A 16 23 22 21 21 21 22 31 22 B 9 12 12 12 12 15 12 9 12 C 8 9 9 10 11 11 8 5 10 D 12 6 6 16 16 22 16 19 22 E 16 12 12 21 30 54 29 15 22 Sum 61 62 61 80 90 123 87 79 88 % 22 22 22 29 32 44 31 28 32 7. DISCUSSION Experiments have shown that the 4S system is capable of combining ILP and numerical regression.... In PAGE 72: ... The optimization is difficult. Table4 shows the validating results of these neural networks. -5 0 5 10 15 20 110192837465564738291 data points r e l a t i v e i n je c t iv it y ( % ) measured 5-5-1 goal=0.... In PAGE 72: ...Table 4: Validating accuracies obtained from the new approach and different FFBP neural networks using the raw data Results in Table4 reflect the complexity in optimizing neural network setups. However, it is fairly easier to obtain an optimized model using the new approach.... In PAGE 78: ... Except in the special case of zero MQL, the approximation technique results are less than 9% of error as listed in table 4. Table4 . 
MQL Percentage of Error for Three Meshes.... ..."
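The evaluation protocol in the first excerpt (set one structure aside for testing, train on the other four) is leave-one-out cross-validation at the structure level. A generic sketch, with placeholder `train` and `evaluate` stand-ins rather than the 4S system itself:

```python
def leave_one_structure_out(structures, train, evaluate):
    """For each named structure, train on all the others and score the
    model on the held-out one, as in the excerpt's protocol."""
    results = {}
    for name in structures:
        held_out = structures[name]
        training = [s for k, s in structures.items() if k != name]
        model = train(training)
        results[name] = evaluate(model, held_out)
    return results

# Toy demonstration with trivial stand-ins: the "model" is just the
# total size of the training data, and the score adds the held-out size.
structures = {"A": [1, 2], "B": [3], "C": [4, 5, 6]}
train = lambda data: sum(len(s) for s in data)
evaluate = lambda model, held: model + len(held)
scores = leave_one_structure_out(structures, train, evaluate)
```

Holding out whole structures rather than individual examples matters here: edges within one structure are correlated, so example-level splits would leak information between training and test sets.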

### Table 3: Neural Network Models using BFGS optimization procedure.

### Table 1: Comparison of AIS to Genetic Algorithms and Neural Networks.

"... In PAGE 9: ... Some works have pointed out the similarities and the differences between AIS and other heuristics6,7. It should be noted that some of the items in Table1 are gross simplifications, both to benefit the design of the table and not to overwhelm the reader. Many of these points are debatable; however, we believe that this comparison is valuable nevertheless to show approximately where AIS fit in.... ..."

### Table 1. Correspondences between a domain theory and a neural network.

1994

"... In PAGE 4: ... One can think of this preexisting information as prior knowledge about the task at hand, and the question is: how can neural networks effectively use these quot;hints quot; (Abu-Mostafa, 1990)? One answer, the KBANN approach (Towell, Shavlik, amp; Noordewier, 1990; Towell, 1992), creates knowledge-based artificial neural networks by producing neural networks whose topological structure matches the dependency structure of the rules in an approximately-correct quot;domain theory quot; (a collection of inference rules about the current task). Table1 shows the correspondences between a domain theory and a neural network, and Figure 2 alcontains a simple example of the K approach to mapping a domain theory into a neural networks. KBANN has been applied to successfully refining domain theories for real-world problems such as gene finding (Towell et al.... ..."

Cited by 61
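The KBANN correspondence the excerpt describes can be illustrated with a minimal sketch: each rule becomes a sigmoid unit whose incoming weights encode its antecedents, with the bias set so the unit fires like an AND gate when the rule body is satisfied. The example rule and the weight magnitude below are illustrative assumptions, not taken from Towell and Shavlik's actual benchmarks.

```python
import math

W = 4.0  # large weight magnitude so the unit behaves near-logically

def rule_to_unit(antecedents):
    """Translate one propositional AND rule into a sigmoid unit.
    antecedents: list of (name, positive?) pairs; negated antecedents
    get weight -W. The bias places the threshold between n_pos - 1 and
    n_pos satisfied positive antecedents."""
    weights = {name: (W if pos else -W) for name, pos in antecedents}
    n_pos = sum(1 for _, pos in antecedents if pos)
    bias = -W * (n_pos - 0.5)
    return weights, bias

def activate(weights, bias, inputs):
    """Standard sigmoid unit over 0/1 inputs."""
    net = bias + sum(w * inputs[name] for name, w in weights.items())
    return 1 / (1 + math.exp(-net))

# Hypothetical rule: promoter :- contact, conformation.
weights, bias = rule_to_unit([("contact", True), ("conformation", True)])
on = activate(weights, bias, {"contact": 1, "conformation": 1})
off = activate(weights, bias, {"contact": 1, "conformation": 0})
```

Because the translated weights are finite rather than hard thresholds, the resulting network remains differentiable, which is what lets back-propagation subsequently refine an approximately correct domain theory against training data.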
