### Table 1. Results on artificial data

"... In PAGE 6: ... Fig. 2. A graphical representation of the dataset, which can be optimally separated only with all three test types, and the tree obtained by the GDT-Mix system. This suggests that when such compound relationships exist in real data, our algorithm may tackle them successfully, revealing invaluable information for specialists in the domain to which it is applied. The results on the range of datasets designed for this investigation are collected in Table 1. Because we analyze artificial data in this experiment, we know how, in terms of the type of tests used in the tree, the optimal solution can be represented.... In PAGE 6: ... The main aim of our endeavor in this work is to show that GDT-Mix can easily adjust to the specific problem. The analysis of Table 1 proved that GDT-Mix in-... ..."

### Table 1. Results on artificial data

"... In PAGE 6: ...as presented. All systems were tested with a default set of parameters. 3.1 Artificial Datasets Results of experiments with artificial datasets are gathered in Table 1. For all domains GDT-MA and GDT-AP performed very well, both in terms of classification accuracy and tree complexity.... ..."

### Table 1. Artificial Data Sets.

1996

"... In PAGE 4: ... Both these problems are taken from Fukunaga's book [7], and are 8-dimensional, 2-class problems, where each class has a Gaussian distribution. For each problem, the class means and the diagonal elements of the covariance matrices (off-diagonal elements are zero) are given in Table 1. From these specifications, 1000 training patterns and 1000 test patterns were generated for each problem.... ..."

Cited by 7
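The excerpt above specifies 8-dimensional, 2-class problems where each class is Gaussian with a diagonal covariance matrix, and 1000 training plus 1000 test patterns per problem. A minimal sketch of that generation step in Python follows; the particular means and variances here are placeholders, since the actual Table 1 values are not reproduced in the snippet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder 8-dimensional class parameters standing in for the values
# the excerpt says appear in the paper's Table 1.
dim = 8
mean_a, var_a = np.zeros(dim), np.ones(dim)
mean_b, var_b = np.ones(dim), 2.0 * np.ones(dim)

def sample_class(mean, var, n):
    """Draw n patterns from a Gaussian with diagonal covariance
    (off-diagonal elements are zero, as in the excerpt)."""
    return rng.normal(loc=mean, scale=np.sqrt(var), size=(n, dim))

# 1000 training patterns, split evenly between the two classes here
# (the per-class split is an assumption; the snippet only gives totals).
train = np.vstack([sample_class(mean_a, var_a, 500),
                   sample_class(mean_b, var_b, 500)])
labels = np.array([0] * 500 + [1] * 500)
```

A test set of the same size would be drawn the same way with a fresh call to `sample_class`.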

### Table 5. Bonferroni flags on Artificial data set.

"... In PAGE 8: ... Table 2 and Table 3 show the number of associations flagged as significant after the Benjamini/Hochberg adjustment of p-values for the real and artificial data sets respectively. Table 4 and Table 5 show the number of associations flagged as significant after the Bonferroni adjustment of p-values for the real and artificial data sets respectively. Significance level (%): 5; number of associations: 810997; percentage of total non-zero associations: 25.... ..."
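The two multiple-testing corrections compared in these tables can be sketched as below. The p-values are made up for illustration; a real analysis would typically use a library routine such as `statsmodels.stats.multitest.multipletests`:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Flag p-values significant under the Bonferroni adjustment:
    compare each p-value against alpha / m."""
    p = np.asarray(pvals)
    return p <= alpha / p.size

def benjamini_hochberg(pvals, alpha=0.05):
    """Flag p-values significant under the Benjamini/Hochberg step-up
    procedure: find the largest rank i with p_(i) <= alpha * i / m and
    flag everything up to that rank."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranked = p[order]
    m = p.size
    below = ranked <= alpha * np.arange(1, m + 1) / m
    flags = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank passing its threshold
        flags[order[:k + 1]] = True
    return flags

pvals = [0.001, 0.008, 0.012, 0.041, 0.20, 0.74]
print(bonferroni(pvals).sum())           # -> 2 (more conservative)
print(benjamini_hochberg(pvals).sum())   # -> 3
```

As the excerpt's tables illustrate at scale, Bonferroni is the more conservative of the two and generally flags fewer associations than Benjamini/Hochberg at the same significance level.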

### Table 1 Artificial data generating processes

1997

"... In PAGE 9: ... As a byproduct, the experiments provide some indication of the ability of these methods to detect departures from the conventional probit model specification, and of the ability of the mixture-of-normals probit model to approximate other distributions of the disturbance. The latter questions are of no interest to a purely subjective Bayesian, but are probably of considerable concern to non-Bayesians. We used five data generating processes, shown in Table 1. In each process, there is a single explanatory variable with mean zero and standard deviation five, and a coefficient of one.... In PAGE 11: ...for the five data sets, respectively. Each figure corresponds to one of the artificial data generating processes shown in Table 1, and each contains six panels. Each panel shows a relevant range of x values on the horizontal axis.... ..."
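The common backbone of the data generating processes described above (a single explanatory variable with mean zero and standard deviation five, and a coefficient of one) might be sketched as follows under the conventional probit specification; the sample size, seed, and zero intercept are illustrative assumptions, and the five processes in the paper differ in the disturbance distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 1000  # illustrative sample size; not given in the snippet
# Single explanatory variable: mean zero, standard deviation five.
x = rng.normal(loc=0.0, scale=5.0, size=n)
# Conventional probit: standard normal disturbance. The other processes
# in the paper would swap in non-normal disturbances here.
eps = rng.normal(size=n)
latent = 1.0 * x + eps            # coefficient of one, zero intercept
y = (latent > 0).astype(int)      # observed binary outcome
```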

### Table 2. Artificial data sets under analysis

2005

"... In PAGE 7: ... Fig. 8 shows results obtained using the structure function estimated on image B. When estimating the structure function for each data set in Table 2, we use bins in the range 100–700 days (Haarsma et al. 1999).... ..."

### Table 1. Classification results for the artificial data

2002

"... In PAGE 5: ... Each kernel was combined with the polynomial kernel (Vapnik, 1995). Table 1 shows the classification accuracy averaged over several experiments. POLY denotes the degree of the polynomial kernel combined with the tree kernels or the BoL kernel.... ..."

Cited by 20

### Table 1: Learning statistics (artificial data)

1997

"... In PAGE 4: ... In the experiments, we changed the number of hidden units from 1 to 3 (h = 1, 2, 3) and performed 100 trials for each of them. Table 1 shows the basic statistics of the MSE values, MDL values, iterations, and processing times (sec.).... ..."

Cited by 11

### Table 2: Learning statistics (noisy artificial data)

1997

"... In PAGE 4: ... mean of 0 and a standard deviation of 0.1. The other experimental conditions were exactly the same as before. Table 2 shows the results. The best MSE values were minimized when h = 3, while the best MDL values... (¹Our experiments were done on an HP 9000/735 computer.).... ..."

Cited by 11