Results 1 - 10 of 12,789
Table 1: Statistics of real life datasets.
2006
"... In PAGE 10: ... The datasets include: the Mondial [18] geography dataset; the human subset of PIR protein information dataset from Protein Information Resource [1]; the DBLP [15] bibliography dataset. The statistics of each dataset are shown in Table 1: the schema and tuple class depth (the latter is usually smaller because of the skip-... ..."
Cited by 1
Table 1: Description of different real-life datasets.
"... In PAGE 4: ... These datasets differ in the values of parameters like the number of (tid, item) pairs, number of transactions (tids), number of items and the average number of items per transaction. Table 1... In PAGE 7: ... 5.3 Performance comparison of SQL-OR approaches We studied the performance of six SQL-OR approaches using the datasets summarized in Table 1. Figure 8 shows the results for only four approaches: GatherJoin, GatherCount, GatherPrune and Vertical.... In PAGE 9: ... 6.1 Timing comparison In Figure 10, we show the performance of Cache-Mine, Stored-procedure, UDF and the hybrid SQL-OR implementation for the datasets in Table 1. We do not show the times for the Loose-coupling option because its performance was very close to the Stored-procedure option.... ..."
Table 6: Results for the real-life datasets: p-value.
2007
"... In PAGE 18: ... The median and IQR are given to match the characteristic features of the boxplots. The p-value should be as low as possible, and from Figure 5 as well as Table 6 it is evident that the L2-penalized Cox regression has the lowest p-values for all three data sets. Moreover, its p-values also have the lowest spread of all methods.... ..."
Cited by 1
Table 7: Results for the real-life datasets: R2-value.
2007
Cited by 1
Table 8: Results for the real-life datasets: variance of the martingale residuals.
2007
Cited by 1
Table 2 Comparison of learning performances on real-life dataset
"... In PAGE 7: ...6. The comparison is shown in Table 2 and Fig. 3(b) where only the original structure and the structure learnt by SSEM are presented.... ..."
Table 5. Results on real-life datasets Satellite Pendigits Dig44 Vst method
2000
"... In PAGE 9: ... Each database was split into three disjoint parts: a learning set (LS), a pruning set (PS) and a test set (TS). Results are summarized in Table 5 (from top to bottom): { A single decision tree was built from LS, then post-pruned using PS with classical and median discretization. { Using the same 25 bootstrap samples, we built 25 fully developed trees ( = 1.0) with classical discretization, we pruned them individually, then in a combined way and tested the three bagged sets of trees using class-probability estimates averaging.... In PAGE 10: ... Combined pruning. From Table 5, it is clear that individual pruning tends to produce trees which are not complex enough given the reduction of variance due to bagging. On the other hand, combined pruning always decreases complexity (by 20% on average) without notable change in accuracy with respect to full... ..."
Cited by 1
Table 1: Datasets Used In Experiments. Top: Real- life Datasets; Bottom: Synthetic Datasets
2002
"... In PAGE 5: ... For the experimental study we used nine real-life and three synthetic datasets. Their characteristics are summarized in Table 1. All datasets except 3DSin have been used before extensively in experimental studies.... ..."
Cited by 9
Table 2. Results of running five methods on real-life datasets (average CPU time in seconds for each query point)
"... In PAGE 16: ... The datasets range from 8 to 160 dimensions. Table 2 shows the results of the five search methods using the Local-T pruning strategy. It is obvious that dynamic search with sampling-based learning process works best in all the real-life datasets.... ..."
Table 1: Performance of the algorithms on real life databases.
1998
"... In PAGE 6: ... The databases and their descriptions are available on the UCI Machine Learning Repository [13]. The number of rows, columns, and minimal dependencies found (N) in each database are shown in Table 1. The datasets labeled Wisconsin breast cancer n are concatenations of n copies of the Wisconsin breast cancer data.... In PAGE 6: ... To avoid duplicate rows, all values in each copy were appended with a unique string specific to that copy. The top three rows of Table 1 show the performance of the algorithms on three small databases. Our algorithms perform competitively in all cases.... In PAGE 7: ... This is a good demonstration of how different approaches to pruning the search space have different effects. The bottom part of Table 1 reports the performance of TANE on five large databases. For TANE/MEM and FDEP, some experiments are marked with (*) as infeasible; for TANE/MEM because of the lack of main memory, and for FDEP if it did not finish within 5 hours.... ..."
Cited by 35