### Table 1 summarizes the symbols and definitions introduced in this section. In the sequel we show how they can be applied for multiway spatial joins.

1999

"... In PAGE 3: ...CostWQ(Ri,q) number of node accesses for a window query q on Ri CostRJ(Ri,Rj) number of node accesses for a spatial join between two R-trees Ri and Rj Table1 Table of symbols 3. MULTIWAY SPATIAL JOINS A multiway spatial join can be represented by a graph Q where Q[i][j] denotes the join condition between Ri and Rj.... ..."

Cited by 31

### Table 1. Optimal Processors for Split and Merge Model

1991

"... In PAGE 10: ...o this ratio is 0.004 or less. In the connected components algorithm discussed later, the ratio is less than 0.001. Table1 gives the optimal processor numbers for two image sizes and three algorithms under various combining methods. We see from this table that the number of processors that can be effectively used under the split and merge model is on the order of hundreds, not thousands, except with enormous images and algorithms in which the combining function is trivial.... In PAGE 11: ....2.4 Interprocessor Bandwidth Interprocessor bandwidth is critical in the split and merge model. The numbers in Table1 indicate a severe limita- tion in the number of processors that can be effectively exploited, even assuming favorable ratios like 0.001 between the and functions.... In PAGE 41: ... Border replicated means that the first and last rows and columns of the image are replicated, so that pixels (-1,0), (-2,0), (-3,0), etc. have the same Table1 . Correspondence between Adapt and C types Adapt Type C Type signed_byte char unsigned_byte unsigned char byte unsigned char signed_integer int unsigned_integer unsigned int integer int float float long_float... ..."

Cited by 2

### Table 3. Average prediction accuracies (with standard deviations) of the decision trees obtained using the different splitting strategies and evaluation functions.

"... In PAGE 33: ...5 apos;s standard options IG and GR. Table3 records the average prediction accuracies (with standard deviations) that were obtained when the trees were not required to be reduced; i.e.... In PAGE 33: ... Moreover, in individual domains the di erences are relatively consistently either in favor or against GR throughout the strategies. The only outstanding value di erence in Table3 is the di erence on the domain Sonar. In this example set there are only few duplicate values, which appears to be an impairing factor for the GR function.... In PAGE 34: ...Table 4. Statistically signi cant di erences in the prediction accuracies of Table3 when comparing the evaluation functions within splitting strategies. Binary Greedy Optimal Diff.... In PAGE 35: ...35 Table 5. Statistically signi cant di erences in the prediction accuracies of Table3 when comparing the splitting strategies without changing the evaluation function. GR BGlog Diff.... In PAGE 35: ... These results support quite rmly that the choice of the numerical attribute evaluation function has an e ect on the prediction accuracy of the resulting decision tree, no matter which splitting strategy is used. The results in Table3 also seem to indicate that the choice of partitioning strategy has only a marginal e ect on the outcome of induction. In order to test this hypothesis statistically, we also paired the results obtained by all the three strategies with each other according to the evaluation function that was used.... ..."

### Table 3: Optimized candidate splits for node 1.

1995

Cited by 1

### Table 3: Average tree agreement with different splitting functions. (Columns: splitting function, data set.)

1998

"... In PAGE 13: ... This demonstrates the validity of the forest construction method and its independence of the splitting functions. Tree agreement Table3 shows the average tree agreement (averaged over all 1225 (50 49/2) pairs of 50 trees) for the eight splitting functions and the four datasets, estimated using the testing samples. The absolute magnitude of the measure is data dependent.... ..."

Cited by 176

### Table 4. Protein classification with split data sets. (Columns: Training set, Split1, Split2, Split3.)

2001

"... In PAGE 7: ... One of the partitions was used to train the decision tree, the other two were used to test the resulting classifier. The results (shown in Table4 ) demonstrate the ability of the decision tree to generalize effectively beyond the training data. PS50160 PS50010 PDOC00295 PS50003 PS50064 PS50048 PDOC50003 PDOC00605 PDOC00360 PDOC00360 PDOC00378 0 1 0 0 0 1 1 1 0 1 Figure 5.... ..."

Cited by 8

### Table 2 The splitting variables

"... In PAGE 6: ... No fraud com- panies with a high z score present high profitability, whereas fraud companies with a low z score present low profitability. Table2 depicts the splitting variables in the order they appear in the Decision Tree. In the second experiment we constructed the Neural Network model.... ..."

### Table 1: Benchmark circuit characteristics.

To generate multi-way partitionings from MELO orderings, we apply the "DP-RP" algorithm of [1]. DP-RP accepts a vertex ordering and returns a restricted partitioning, i.e., a k-way partitioning such that each cluster is a contiguous subset of the ordering. DP-RP uses dynamic programming to find the optimal set of k − 1 splitting points in the ordering; it applies to all of the partitioning objectives that we have discussed. We choose to minimize the Scaled Cost objective since it has no size constraints, provides a single quantity that measures the quality of a linear ordering, and permits comparisons to previous algorithms. With the Scaled Cost objective and no cluster size bounds, DP-RP has O(kn²) time complexity.
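The dynamic program behind a DP-RP-style restricted partitioning can be sketched generically: given a vertex ordering and a cost for each contiguous cluster, choose the k − 1 splitting points that minimize total cost. The toy cost function below (squared cluster weight) is a stand-in of my own, not the paper's Scaled Cost, which is evaluated against the actual netlist; function names are likewise illustrative.

```python
# Illustrative sketch of optimal k-way splitting of a fixed ordering by
# dynamic programming, in the spirit of DP-RP. Assumption: an additive
# per-cluster cost (here squared cluster weight, which favors balance).

def optimal_splits(weights, k):
    """Split positions 0..n-1 into k contiguous clusters, minimizing the
    sum of per-cluster costs; returns the k - 1 splitting points."""
    n = len(weights)
    prefix = [0] * (n + 1)
    for i, w in enumerate(weights):
        prefix[i + 1] = prefix[i] + w

    def cost(i, j):  # cost of one cluster covering positions i..j-1
        return (prefix[j] - prefix[i]) ** 2

    INF = float("inf")
    # best[m][j]: min cost of splitting the first j positions into m clusters
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0
    for m in range(1, k + 1):
        for j in range(m, n + 1):
            for i in range(m - 1, j):
                c = best[m - 1][i] + cost(i, j)
                if c < best[m][j]:
                    best[m][j], back[m][j] = c, i

    # Recover the k - 1 splitting points by walking the back pointers.
    splits, j = [], n
    for m in range(k, 0, -1):
        j = back[m][j]
        splits.append(j)
    splits.pop()  # drop the leading 0
    return sorted(splits)

print(optimal_splits([1, 1, 1, 1, 2, 2], k=3))  # -> [3, 5]
```

The triple loop gives the O(kn²) behavior the excerpt cites: for each of the k cluster counts and each of the n prefix lengths, the inner loop scans up to n candidate split positions.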

1995

"... In PAGE 13: ...Experimental Results In this section, we present four sets of experiments that we performed with MELO: Comparisons of the weighting schemes proposed in the previous section, Comparisons with di erent values of d, Multi-way partitioning comparisons with recursive spectral bipartitioning (RSB), KP [10], and SFC [1] algorithms, Balanced 2-way partitioning comparisons with SB [25] and PARABOLI [38]. Our experiments use the set of ACM/SIGDA benchmarks listed in Table1 (available via the World Wide Web at http://ballade.... ..."

Cited by 56

### Table 5. The most significant splitting points during regression tree construction.

2006

"... In PAGE 6: ... The regression tree splitting forms a significant part of the model construction, and contributes to its accuracy. We present in Table5 the initial and most significant splits. For mcf the most significant splits are for L2 latency, L1 data cache latency, and L2 size, while for vortex these are L1 data cache latency, instruction cache size and issue queue size.... ..."

Cited by 4