### Table 2 Two-Stage Query-Based Learning Results

"... In PAGE 4: ... The best network architecture was found to contain one hidden layer with 5 neurons. QBL was found effective when we started with only a small and sparse data set, as shown in Table 2. A larger training set of 2869 samples was also generated using a sampling rate of 2 along the pitch and roll angles, and 3 samples along the other parameters.... ..."

### Table 3 Two Stage Regression Estimates

"... In PAGE 12: ... Another set of findings is worth paying special attention to: the results above suggest that a developed legal and regulatory framework positively influences industrial growth both through investment and through total factor productivity. To put it differently, the fact that, after controlling for an estimated rate of investment, the coefficients on PRS indicators remain strong (see Table 3) suggests that institutional quality affects both the amount of investment in the economy and the efficiency of resource allocation (at least in the industrial sector). It can be inferred from the regression results that, ceteris paribus, a one percent increase in per capita industrial value added can be achieved by either a 7.... In PAGE 12: ... (-2.31) (-2.32) (0.86) (3.08) The results are consistent with those reported in Table 3 and, if contrasted with the latter, can hardly be viewed as indicative of multicollinearity. 10 Reported elasticities are based on the sixth specification, i.... ..."

### Table 3. The effectiveness of using the two-stage model prediction system for 2, 3, 4, and 5-class authorship attribution tasks.

"... In PAGE 9: ... The performance of the two-stage model prediction system is compared to the results shown in Figure 1 and Table 1. As shown in Table 3, 9 correct predictions are made out of 21 attribution tasks in total for binary classification. Also, 21 and 19 out of 35 for the 3- and 4-class attribution... ..."

### Table 1: Two-Stage LCGs Statistics

"... In PAGE 6: ... Hereafter is a detailed example of a PRNG in the TSRG family whose randomizer is an LCG with modulus 5, multiplier 3, and initial condition 1, denoted L(5, 3, 1), and an adapted Lehmer generator L(7, 5). Table 1 summarizes the results, which imply that 3 is the total average, 2 is the bad initial condition, and the TSRG period p = (6 × 4)/2 = 12.... ..."
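The multiplicative congruential recurrence named in this snippet is easy to sketch. The toy below (plain Python; `lcg_orbit` is a hypothetical helper name, not from the paper) generates the orbit of the randomizer LCG with modulus 5, multiplier 3, and seed 1, confirming that its period is 4; the TSRG construction built on top of it is not reproduced here.

```python
# Minimal sketch of the LCG instance described in the snippet:
# x_{n+1} = 3 * x_n mod 5, starting from x_0 = 1.

def lcg_orbit(mult, mod, seed):
    """Return the orbit of the multiplicative LCG until the seed repeats."""
    orbit, x = [seed], (mult * seed) % mod
    while x != seed:
        orbit.append(x)
        x = (mult * x) % mod
    return orbit

orbit = lcg_orbit(3, 5, 1)
print(orbit, len(orbit))  # [1, 3, 4, 2] 4
```

Since 3 is a primitive root modulo 5, the orbit visits every nonzero residue, giving the full period 4 used in the snippet's TSRG period calculation.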
### Table 8: Two-stage Algorithm Test Results

2000

"... In PAGE 5: ... 4.3 Testing the Two-stage Algorithm. Table 8 shows the results of the two-stage algorithm for our data sets. The maximally effective cut-off point for all sets lies closer.... ..."

Cited by 6

### Table 1

Table 1 summarizes the classification rate of the 10 neuro-fuzzy networks obtained by the two-stage learning process. As an illustrative example, Figure 5 shows the final fuzzy rules extracted by our network for the first trial. The fuzzy rule bases generated by our approach provide good performance, in terms of classification rate, when compared with other classifiers proposed in the literature for the same benchmark problem [12][13][14]. Most of these classifiers were able to predict the testing data with between 2 and 5 misclassified patterns. Moreover, it is worth mentioning that most of these results are obtained by computing the apparent misclassification rate, i.e. the error estimated with a one-shot train-and-test procedure, which is usually an over-optimistic estimate of the actual misclassification rate. In addition, our results outperform those reported in the literature in terms of simplicity, providing the smallest number of rules.

2000

"... In PAGE 7: ... Table1 . Results of the 10-fold cross validation.... ..."

Cited by 7
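The snippet's point about apparent versus cross-validated error can be illustrated with a toy 1-nearest-neighbour classifier on made-up 2-D points (hypothetical data, not the paper's benchmark): the apparent rate, measured on the training points themselves, is zero by construction, while leave-one-out cross-validation exposes a nonzero error.

```python
import math

# Toy illustration: apparent misclassification rate vs. leave-one-out
# cross-validation for a 1-NN classifier (invented data points).
data = [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'), ((0.2, 0.1), 'a'),
        ((1.0, 1.0), 'b'), ((0.9, 1.1), 'b'), ((0.4, 0.6), 'b')]

def nearest_label(x, train):
    """Label of the training point closest to x."""
    return min(train, key=lambda p: math.dist(x, p[0]))[1]

# Apparent error: classify the very points the model was "trained" on.
apparent = sum(nearest_label(x, data) != y for x, y in data) / len(data)

# Leave-one-out CV: hold each point out of the training set in turn.
loo = sum(nearest_label(x, data[:i] + data[i + 1:]) != y
          for i, (x, y) in enumerate(data)) / len(data)

print(apparent, loo)  # 0.0 vs. a strictly larger held-out error
```

Each training point is its own nearest neighbour at distance zero, so the apparent rate is exactly 0; the leave-one-out estimate is the honest one, which is why one-shot train-and-test figures tend to be over-optimistic.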

### Table 7: Two-stage opamp macromodeling results: gain response

1996

"... In PAGE 21: ... For example, for an opamp, such a macromodel would monitor the actual designed voltage gain corresponding to the input voltage gain specifications. Such a macromodel for a two-stage opamp was developed and is shown in Table 7. In this table the first column corresponds to the minimum input gain specification provided to the synthesis tool.... ..."

Cited by 10

### Table 3: Summary of results from using the two-stage approach and the

"... In PAGE 11: ...he cost for grade C is 0). Thus the total quality cost F = Q + C = 19.35. Unless otherwise stated, the unit for Q is million yen/year for the rest of the paper. The results of the initial design are summarized in Table 3. The same table also contains results given by Taguchi's two-stage approach as performed by Mori (4).... ..."

### Table 1: Design specifications for two-stage op-amp.

1998

"... In PAGE 5: ... The load capacitance is 5 pF and the supply voltages are Vdd = 5V and Vss = 0V. A simple design example. Table 1 describes the sample design problem, and shows the performance of the design obtained by GPCAD using GP1 models, and the simulated performance with BSIM1 models (HSPICE level 13). The objective was to maximize the unity-gain bandwidth subject to the other given constraints.... ..."

Cited by 19
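As a rough sketch of the kind of constrained sizing problem this snippet describes, the toy below maximizes a unity-gain-bandwidth proxy GBW = gm / (2π·CL) over transistor width under a hypothetical width bound, with gm from a simple square-law model. Every device number except the 5 pF load is invented for illustration; this is not the paper's GP1 or BSIM1 model, nor GPCAD's geometric-programming solver.

```python
import math

# Toy bandwidth-maximizing op-amp sizing (hypothetical square-law model).
CL = 5e-12       # 5 pF load capacitance, as in the snippet
ID = 100e-6      # assumed bias current, A
KP = 100e-6      # assumed transconductance parameter, A/V^2
L = 1e-6         # fixed channel length, m
W_MAX = 200e-6   # hypothetical width bound standing in for the constraints

def gbw(width):
    """Unity-gain bandwidth proxy: gm / (2*pi*CL), square-law gm."""
    gm = math.sqrt(2.0 * KP * (width / L) * ID)
    return gm / (2.0 * math.pi * CL)

# gbw() grows monotonically with width, so the width bound is active:
widths = [w * 1e-6 for w in range(1, 201)]
best = max(widths, key=gbw)
print(best, gbw(best) / 1e6)  # widest allowed device wins; GBW in MHz
```

In the real tool the feasible set is cut by many posynomial constraints (gain, phase margin, area, power) and the whole problem is solved as a geometric program rather than by this one-dimensional sweep; the sketch only shows why the bandwidth objective pushes the design against its constraints.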