### Table 1: Characteristics of neural network survival analysis methods.

2005

"... In PAGE 11: ... Again, no extension is presented to deal with time-varying inputs. Discussion Table 1 presents an overview of the characteristics of the neural network based methods for survival analysis discussed in the previous subsections. From the literature review above, it becomes clear that for large scale data sets, the approaches of Faraggi, Mani and... In PAGE 21: ... 3rd most imp. insurance premium insurance premium frequency paid Table 10: Predicting default in first 12 months on oversampled data set. gives the results for loan default between 12 and 24 months.... In PAGE 21: ... Note that when comparing Tables 10 and 11 with Tables 8 and 9, it becomes clear that the oversampling allowed a higher proportion of bads to be correctly detected as bad. Analogous to the previous subsection, we

|               | Actual | Logit | Cox  | NN   |
|---------------|--------|-------|------|------|
| G-predicted G | 2015   | 1753  | 1744 | 1757 |
| G-predicted B | 0      | 262   | 271  | 258  |
| B-predicted G | 0      | 262   | 271  | 258  |
| B-predicted B | 394    | 132   | 123  | 136  |

Table 11: Predicting default 12-24 months on oversampled data set. can also generate 3D surface plots from the neural network outputs in order to present a general view of the sensitivity of the survival probabilities with respect to the continuous inputs.... ..."
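From the Table 11 counts in the excerpt above, a per-model sensitivity on the bad loans can be recovered. A minimal sketch in Python; the pairing of counts with models follows our reading of the flattened excerpt, not an explicit statement in it:

```python
# (predicted-good, predicted-bad) counts for the 394 actual bads, per model,
# as read from the Table 11 excerpt (Actual / Logit / Cox / NN columns).
bads = {
    "Actual": (0, 394),
    "Logit": (262, 132),
    "Cox": (271, 123),
    "NN": (258, 136),
}

for model, (as_good, as_bad) in bads.items():
    sensitivity = as_bad / (as_good + as_bad)  # proportion of bads caught
    print(f"{model}: {sensitivity:.3f}")
```

On these counts the three fitted models catch roughly a third of the bads, which is the comparison the excerpt draws against the non-oversampled Tables 8 and 9.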

Cited by 4

### Table 1: Weight discretization in multilayer neural networks: off-chip learning.

"... In PAGE 4: ... neural network paradigms. A compact overview of a large variety of results on the effects of limited precision in neural networks can be found in Tables 1 to 4. These tables list the number of bits that are required for satisfactory (learning) performance and briefly describe the core idea of the algorithms.... In PAGE 4: ... Only the forward propagation pass in the recall phase is performed on-chip, which makes these quantization effects amenable for mathematical analysis using a statistical model. Some of the results have been summarized in Table 1, which indicate that the accuracy needed in the on-chip forward pass is around 8 bits. In [Piche-95] a comparison between Heaviside and sigmoidal multilayer networks is given, showing that the weight precision required in a Heaviside network is much higher and even doubles when a layer is added to the network.... In PAGE 6: ... algorithms with the entropy (number of bits) upper bounds of the data set [Beiu-96.2]. Finally, we would like to point out that a comparative benchmarking study of quantization effects on different neural network models and the improvements that can be obtained by weight discretization algorithms has not yet been done. The accuracies listed in Tables 1 to 4 are therefore highly biased by... ..."
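The ~8-bit figure quoted above refers to weight precision in the on-chip forward pass. As a rough illustration of what reducing that precision does, here is a generic symmetric uniform quantizer — not any specific algorithm from the surveyed tables:

```python
import numpy as np

def quantize_weights(w, bits):
    """Symmetric uniform quantization of a weight vector to `bits` bits.
    Generic scheme for illustration; the surveyed weight discretization
    algorithms differ in how the levels are chosen."""
    levels = 2 ** (bits - 1) - 1          # e.g. 127 positive levels for 8 bits
    scale = np.max(np.abs(w)) / levels    # quantization step size
    return np.round(w / scale) * scale

w = np.array([0.73, -0.12, 0.005, -0.98])
w8 = quantize_weights(w, 8)   # ~8 bits: quantization error stays small
w3 = quantize_weights(w, 3)   # 3 bits: error grows markedly
```

Comparing `w8` and `w3` against `w` makes the surveyed trend concrete: the maximum error at 8 bits is bounded by half a quantization step, while at 3 bits small weights collapse to zero.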

### Table 1. Comparison of learning time between two neural networks

2005

"... In PAGE 4: ... FGBNN. Fig. 4 is the segmentation result of the three-layer RBFNN, which uses the competitive learning algorithm at the first layer and the gradient-descent learning algorithm at the second layer. Table 1 compares the learning time of the two neural networks, and Table 2 compares the segmentation accuracy of the two methods. Figure 2.... In PAGE 5: ... 6.4% 78.7% 73.6% 94.3% From the results of Table 1, we can see that the learning speed of the FGBNN is faster than that of the three-layer RBFNN. From the results of Table 2, we can also conclude that the segmentation accuracy of the FGBNN is higher than that of the three-layer RBFNN.... ..."

### Table 1. Classification accuracy for each class of networks. Average sensitivity and specificity are reported from 603 neural network models after 800 epochs of training.

"... In PAGE 5: ...5 within 100 epochs. The results of classification as a random or scale-free network topology are summarized in Table 1. Since only two classes are discriminated, sensitivity and specificity display symmetric information.... ..."
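The symmetry claim in the excerpt — with only two classes, sensitivity and specificity carry mirrored information — can be checked directly. The confusion counts below are hypothetical, chosen only to make the point:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity (true positive rate) and specificity (true negative rate)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a two-class (random vs. scale-free) classifier.
tp, fn, tn, fp = 45, 5, 40, 10
sens, spec = sens_spec(tp, fn, tn, fp)

# Relabeling which class counts as "positive" merely swaps the two numbers,
# which is why the pair is symmetric for a binary problem.
assert sens_spec(tn, fp, tp, fn) == (spec, sens)
```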

### Table 1 Neural network architectures

2003

"... In PAGE 6: ... better as the scale is increased, i.e. as the data becomes smoother. On the final smooth trend curve, resid(t) in Table 1, a crude linear extrapolation estimate, i.e.... In PAGE 6: ... wavelet coefficients at higher frequency levels (i.e. lower scales) provided some benefit for estimating variation at less high frequency levels. Table 1 summarizes what we did, and the results obtained. DRNN is the dynamic recurrent neural network model used.... In PAGE 6: ... used. The architecture is shown in Fig. 3. The memory order of this network is equivalent to applying a time-lagged vector of the same size as the memory order. Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.... In PAGE 6: ... Hence the window in Table 1 is the equivalent lagged vector length. In Table 1, NMSE is normalized mean squared error, DVS is direction variation symmetry (see above), and DS is directional symmetry, i.e.... In PAGE 7: ...ion of these results can be found in Ref. [4]. For further work involving the DRNN neural network resolution scale. From Table 1, we saw how these windows were of effective length 10, 15, 20, and 25 in terms of inputs to be considered. Fig.... ..."
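The excerpt names NMSE and DS without giving formulas. Under the usual definitions — MSE normalized by the target variance, and the fraction of time steps whose predicted change has the correct sign — they can be sketched as:

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalized mean squared error: MSE divided by the variance of the
    target series, so a value of 1.0 matches predicting the series mean."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def directional_symmetry(y_true, y_pred):
    """Fraction of steps where the predicted change in the series has the
    same sign as the actual change (our assumed standard definition of
    the DS measure named in the excerpt)."""
    return np.mean(np.sign(np.diff(y_true)) == np.sign(np.diff(y_pred)))
```

For forecasting tasks like this one, DS complements NMSE: a model can have low squared error yet still miss turning points, which DS penalizes directly.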

### Table 7: Recommendations for Neural Network Use with Education Policy Analysis Questions

in Enhancing our Understanding of the Complexities of Education: "Knowledge Extraction from Data" using

"... In PAGE 22: ... (See Table 6)

Table 6: Over- and Under-representation of Asian/Pacific Island Students

| Group | CHI | FIL | JAP | KOR | SEA | PI  | SA  | WA  | ME  | OTH |
|-------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 1     | -1% | 3%  | -2% | -4% | 6%  | 4%  | -5% | -1% | -1% | 1%  |
| 2     | -1% | 1%  | 4%  | -9% | 5%  | 1%  | -1% | -2% | 4%  | -1% |
| 3     | 0%  | 0%  | 1%  | 1%  | -3% | -3% | 3%  | 1%  | 1%  | -1% |
| 4     | 9%  | -5% | 1%  | 2%  | -7% | 0%  | 0%  | -2% | -2% | 4%  |
| 5     | -2% | -3% | -1% | 6%  | -1% | 0%  | 2%  | 0%  | -1% | -1% |

Similar discrepancies appear among Hispanic subgroups. Table 7 suggests that the pattern of representation of the Hispanic aggregate group was substantially driven by the distribution of Mexican (MEX) students. Cuban students, to the contrary, were more likely to be found grouped with Asian/Pacific Island or White students than their Hispanic, Mexican counterparts.... In PAGE 22: ... Cuban students, to the contrary, were more likely to be found grouped with Asian/Pacific Island or White students than their Hispanic, Mexican counterparts. Table 7: Over- and Under-representation of Hispanic Students Group MEX CUB PR OTHH 1 4.3% -1.... In PAGE 23: ... Yet, similar problems are likely to occur even when conventional methods are used. Table 7 provides rough guidelines for applying neural networks to problems or questions related to education policy. Broadly speaking, the first two studies presented in this paper point to the particular value of hybrid neural/regression methods that apply neural or genetic algorithm estimation techniques to identify or construct a best-predicting non-linear regression equation.... ..."

### Table 5 The descriptions of the rules generated from neural networks

in data

"... In PAGE 7: ... The rules generated from inductive learning methods consist of 16 rules, 10 of which are nonbankrupt rules while the others are bankrupt rules, as shown in Table 4. The number of rules generated from neural networks is 12, as listed in Table 5. Half are nonbankrupt rules while the other half are bankrupt rules.... ..."

### Table 1: Neural network estimation results

"... In PAGE 8: ... The decision whether to use the neural net estimation or the analysis tool results can be based on a cost function reflecting the required fidelity and criticality of the results. Table 1 shows four test results of the neural network for the aerodynamic analysis tool. Best results were obtained when the training was done for 1000 cycles with the structure shown in Figure 5 and the learning rate was set to 0.... ..."

### Table 2: For the test function with two inputs, mean (over 50 data samples) and 95% confidence interval for standardized MSE at 225 test locations; for the temperature and ozone datasets, cross-validated standardized MSE, for the six methods. (Columns: Method | Function with 2 inputs | Temp. data | Ozone data)

2004

Cited by 4