Results 1 - 10 of 714
Table 3. The results of ANN
in BUSINESS FAILURE PREDICTION WITH SUPPORT VECTOR MACHINES AND NEURAL NETWORKS: A COMPARATIVE STUDY
"... In PAGE 9: ...on-bankruptcy: 75.00%). Three-layer BP ANN In the ANN, each data set is split into three subsets: a training set, a test set, and a validation set of 60%, 20%, and 20% of the data, respectively. Table 3 shows the results of the three-layer BP ANN. As the training epoch increases, the prediction accuracy on the training set becomes higher.... In PAGE 10: ...As seen in Table 3, the best prediction accuracy of the ANN (75.96%) on the test data is similar to that of the SVM (76.... ..."
TABLE III SETTINGS FOR THE ANN
Table 1 ANN architectures
2006
Table 4.5: Overall Accuracy (%) Using Mid(R-L)/FRONT, Mid(R-L)/HIND and Other Variables, Data Configurations 1 and 2, Evaluation Data Sets
2005
Table 2: Comparison between standalone ANN and GA with ANN after 40 Generations
1999
"... In PAGE 5: ...2. Genetic Algorithm with ANN after 40 Generations Table 2 shows the performance of the different feature sets after running under the GA for 40 generations. All of the datasets have their best performance in excess of 97.... ..."
Cited by 1
Table 10: Characteristics of ANN and ES Expert Systems ANN
"... In PAGE 9: ...Table 9: Financial Categorization of 18 Firms; An Example ... 39 Table 10: Characteristics of ANN and ES .... In PAGE 51: ... The main goal in expert networks research is to create a synergy by combining the advantages of each basic technology. Table 10... ..."
Table 1. Settings for the ANN Experiment
1998
"... In PAGE 3: ...1. The most successful settings for the ANN can be seen in Table 1. The settings which remained constant through all experiments included: learning rate and momentum, both set to 0.... ..."
Cited by 10
Table 1. Comparison of ANN and HIFAM.
1997
"... In PAGE 5: ... The results reported for the ANN are taken from [8] and represent a set of experiments with various runs and different architectures using the RPROP algorithm (12 architectures, 3 runs per architecture, at most 3000 epochs; 36 trials per data set). Table 1 shows the results of the HIFAM in comparison to those of the ANN (with the best result for each data set shown in boldface). The tree structures and input partitions for each run of a HIFAM test were set manually and chosen by trial and error, the number of trials per data set always being clearly below 36 (2 or 3 different structures, from 2 to 10 fuzzy sets per input, evenly distributed).... ..."
Cited by 2