### Table 1 gives the details of how large the data sets are and how many randomly selected positive and negative training examples we used. All experiments are carried out with five-fold cross-validation.

1997

"... In PAGE 24: ...

Table 3: number of nodes explored/evaluated.

| | AE | MSE | PD | PF |
|---|---|---|---|---|
| T1 | 2 + 6 | 2 + 6 | 2 + 6 | 2 + 6 |
| T2 | 1 + 70 | 1 + 70 | 1 + 70 | 3 + 207 |
| T3 | 2 + 105 | 2 + 105 | 2 + 105 | 2 + 105 |
| T4 | 8 + 244 | 8 + 244 | 8 + 244 | 8 + 250 |
| T5 | 11 + 580 | 11 + 580 | 15 + 719 | 0 + 180 |
| T6 | 17 + 516 | 17 + 516 | 25 + 758 | 13 + 517 |
| T7 | 1 + 229 | 1 + 229 | 7 + 832 | 0 + 375 |
| T8 | 13 + 1351 | 13 + 1351 | 5 + 530 | 0 + 375 |

Table 1: Statistics of data sets.

| Data Set | number of facts | number of positive examples | number of negative examples |
|---|---|---|---|
| T1 | 213 | 37 | 63 |
| T2 | 175 | 22 | 117 |
| T3 | 175 | 86 | 116 |
| T4 | 175 | 46 | 212 |
| T5 | 933 | 33 | 21 |
| T6 (a) | 717 | 64 | 24 |
| T6 (s) | | 24 | 66 |
| T6 (i) | | 12 | 88 |
| T7 | 1600 | 60 | 20 |
| T8 | 1600 | 20 | 60 |

AE MSE PD PF T1 1.... ..."

Cited by 3
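The five-fold cross-validation procedure mentioned in the caption above can be sketched in plain Python (names like `five_fold_splits` are illustrative, not from the cited paper):

```python
import random

def five_fold_splits(examples, seed=0):
    """Yield (train, test) pairs for five-fold cross-validation:
    the shuffled data is split into five folds, and each fold is
    held out once as the test set while the other four are used
    for training."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    folds = [shuffled[i::5] for i in range(5)]
    for i in range(5):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

# Every example appears in exactly one test fold across the five rounds,
# so each reported error rate is an average over five held-out sets.
for train, test in five_fold_splits(range(100)):
    assert len(train) + len(test) == 100
```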

### Table I. Computational Complexity Training Classification

2005

Cited by 24

### Table 1: Example compound word friedenspolitik "peace policy" with the counts of how many total words in the training corpus have the same beginning letters (middle row) and ending letters (bottom row).

"... In PAGE 2: ... Our data-driven splitting algorithm iteratively generates splittings that are statistically relevant with respect to the training corpus. For each distinct word of the training set we generate an array, illustrated in Table 1, tabulating for each letter of that word the total number of words in the training set which began or ended with the same sequence of letters, i.... ..."
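One plausible reading of the tabulation described in the snippet above is sketched below; `affix_counts` is a hypothetical name, and the exact alignment of the suffix row in the original paper may differ:

```python
def affix_counts(word, corpus):
    """For each prefix/suffix length of `word`, count how many words in
    the training corpus begin with the same letters (the table's middle
    row) and how many end with the same letters (the bottom row)."""
    prefix = [sum(w.startswith(word[:i]) for w in corpus)
              for i in range(1, len(word) + 1)]
    suffix = [sum(w.endswith(word[-i:]) for w in corpus)
              for i in range(1, len(word) + 1)]
    return prefix, suffix

# Toy corpus: sharp drops in these counts suggest plausible split points
# (e.g. between "friedens" and "politik").
corpus = ["frieden", "friedenspolitik", "politik", "polizei"]
pre, suf = affix_counts("friedenspolitik", corpus)
```

A splitting algorithm like the one described would then propose boundaries where a long prefix or suffix is still shared by many other corpus words.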

### Table 1: Summary of the data sets used in this paper. Shown are the number of examples in the data set; the number of output classes; the number of continuous and discrete features describing the examples; the number of input, output, and hidden units used in the networks; and how many epochs each network was trained. Features Neural Network

1997

"... In PAGE 3: ... Datasets Our data sets were drawn from the UCI repository with emphasis on ones that were previously investigated by other researchers. Table 1 gives the characteristics of the data sets we chose. The data sets chosen vary across a number of dimensions including: the type of the features in the data set (i.... In PAGE 3: ...he features in the data set (i.e., continuous, discrete, or a mix of the two); the number of output classes; and the number of examples in the data set. Table 1 also... In PAGE 4: ...8 in our neural networks experiments. Experimental Results Table 2 shows the neural network and decision tree error rates for the data sets described in Table 1 for the five neural network methods and three decision tree methods discussed in this paper. Discussion One conclusion that can be drawn from the results is that both the Simple Ensemble and Bagging approaches almost always produce better performance than just training a single classifier.... ..."

Cited by 86
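The bagging approach the snippet above compares against a single classifier can be sketched generically; `bagged_predict` and `fit` are illustrative names, not the cited paper's implementation:

```python
import random
from collections import Counter

def bagged_predict(train, x, fit, n_models=10, seed=0):
    """Bagging sketch: fit each model on a bootstrap resample of the
    training set (sampling with replacement), then combine the models'
    predictions for input x by majority vote."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_models):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample
        model = fit(boot)                          # fit returns a predictor
        votes[model(x)] += 1
    return votes.most_common(1)[0][0]

# Trivial demonstration with a deterministic "learner" that ignores its
# resample; real use would pass a neural-network or decision-tree trainer.
parity = lambda data: (lambda x: x % 2)
```

The variance reduction comes from the vote: individual models overfit their resamples differently, and averaging their errors tends to beat any single model trained once on the full data.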

### Table 1. Overlap matrix for selected GO categories

"... In PAGE 3: ...4% of all GO classes. Also, many of the categories are represented by the sequences which further reduces the set of categories to predict (Table 1). Predictors were successfully trained for the majority of the classes where many training examples were available.... ..."

### Table 1: Summary of the data sets used in this paper. Shown are the number of examples in the data set; the number of output classes; the number of continuous and discrete input features; the number of input, output, and hidden units used in the neural networks tested; and how many epochs each neural network was trained.

1999

"... In PAGE 8: ... These data sets were hand selected such that they (a) came from real-world problems, (b) varied in characteristics, and (c) were deemed useful by previous researchers. Table 1 gives the characteristics of our data sets. The data sets chosen vary across a number of dimensions including: the type of the features in the data set (i.... In PAGE 8: ...et (i.e., continuous, discrete, or a mix of the two); the number of output classes; and the number of examples in the data set. Table 1 also shows the architecture and training parameters used in our neural networks experiments. 3.... In PAGE 9: ... 3.3 Data Set Error Rates Table 2 shows test-set error rates for the data sets described in Table 1 for five neural network methods and four decision tree methods. (In Tables 4 and 5 we show these error rates as well as the standard deviation for each of these values.... ..."

Cited by 109
