### Table 13: Evaluation of the generation method using FFT runtime predictors for a Pentium. Place = the Xth generated formula is the best known formula. Slower = the first generated formula is X% slower than the best known formula.

2002

"... In PAGE 29: ...rom the space we considered (see Section 2.4) for sizes 2^12 to 2^18. Recall that the regression trees were trained only on data for FFTs of size 2^16. Table 13 displays the results of using our generation method to construct fast FFT implementations. With any of the feature sets, our method was able to construct the fastest known FFT formula of sizes 2^14 to 2^16 within the first 10 formulas that it generated.... ..."

Cited by 6
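
The generate-and-test loop the excerpt describes (score candidate formulas with a trained runtime predictor and emit the predicted-fastest ones first) can be sketched as follows. The feature encoding and the stand-in predictor below are illustrative assumptions, not the paper's actual feature sets or trained regression trees.

```python
# Hypothetical sketch of predictor-guided formula generation: a model
# trained on measured runtimes scores candidate FFT formulas, and we
# keep the k formulas predicted fastest. All names are illustrative.

def generate_fastest(candidates, predict_runtime, k=10):
    """Return the k candidate formulas with the lowest predicted runtime."""
    return sorted(candidates, key=predict_runtime)[:k]

# Toy usage: candidates as (radix, depth) feature pairs, with a
# placeholder scoring function standing in for the trained tree.
candidates = [(2, 14), (4, 12), (8, 8), (16, 4)]
predict_runtime = lambda f: abs(f[0] - 8) + f[1]
best = generate_fastest(candidates, predict_runtime, k=2)
```

In the paper's setting the predictor was trained only on size-2^16 data, so a loop like this must generalize across problem sizes, which is exactly what the table evaluates.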

### Table 1: Hs of the 3-dimensional unit cube

2005

"... In PAGE 17: ...possible 3-dimensional unit cubes. Table 1 shows the performance of BAA and HAA for 3-dimensional unit cubes. It showed that HAA performs better than BAA except for 1-dimensional queries.... ..."

### Table 3. Conditional probabilities associated with the response from the slower release (10)

2004

"... In PAGE 21: ...less reliable release (normally the old release) but is now worse than the reliability of the better release (normally the new release). This observation, true for all types of responses, correct and incorrect, may be due to the specific way the correlation between the releases has been parameterised (Table 3). A more detailed study with a wider variety of values and different combinations of the conditional probabilities will provide further details about the interplay between the properties of the individual releases and of the chosen architecture for managed upgrade.... ..."
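
As a toy illustration of how conditional probabilities of this kind parameterise the correlation between two releases, consider the following calculation. The numeric values are made up for the example and are not the values from Table 3.

```python
# Toy numbers, NOT the paper's Table 3 values: the new release's
# response is conditioned on whether the old release responded correctly.
p_old_ok = 0.90            # P(old release correct) -- assumed
p_new_ok_given_ok = 0.98   # P(new correct | old correct) -- assumed
p_new_ok_given_bad = 0.60  # P(new correct | old incorrect) -- assumed

# Marginal reliability of the new release, by total probability.
p_new_ok = p_old_ok * p_new_ok_given_ok + (1 - p_old_ok) * p_new_ok_given_bad

# Probability that at least one release is correct, as in a managed
# upgrade running the old and new releases in parallel.
p_any_ok = 1 - (1 - p_old_ok) * (1 - p_new_ok_given_bad)
```

With positively correlated failures (a low `p_new_ok_given_bad`), the parallel arrangement gains less over the better single release, which is the kind of interplay the excerpt says merits further study.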

### Table 6: Work using max-degree abstraction, by radius of abstraction

1996

"... In PAGE 42: ... This means that an abstract solution will provide very little guidance, if any, and the search techniques will degenerate to breadth-first search. This degeneration can be seen in Table 6, where the work of the refinement techniques is much smaller than the work done by breadth-first search... ..."

Cited by 31

### Table 1: Comparison of the classification approach (CLA) and the abstraction approach (ABA). The image preprocessing workload in CLA is heavier than in ABA. In both cases, the images are segmented and a feature vector is computed for each connected component. In CLA, this is followed by a classification process, resulting in a more time-consuming preprocessing step in CLA compared to ABA. In terms of the time to insert the images into the database after preprocessing is complete, ABA is slower than CLA. In terms of user interaction during the image interpretation process, ABA requires no human interaction; in CLA, on the other hand, the user must assist in creating the training set which is used to automatically classify the images. Retrieving queries purely on the basis of content (with no spatial constraint, e.g., example query Q1) is slightly slower in ABA than in CLA: our results varied from a retrieval time greater by a factor of 1.2 for the small data set to a retrieval time greater by a factor of 2.4 for the large data set. A similar observation was made for hybrid queries (e.g., query Q2). In terms of flexibility, ABA has a few advantages. The first advantage is that it is applicable to a larger number of applications. CLA is only applicable when all the classes of the application are...

"... In PAGE 18: ...Table 1... ..."
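
The two preprocessing pipelines in the caption differ only in the final classification step. A minimal sketch, with `segment()`, `features()`, and `classify()` as hypothetical stand-ins for the actual image-processing routines (which the caption does not specify):

```python
# Hedged sketch of the two pipelines: both segment an image and compute a
# feature vector per connected component; CLA adds a classification step
# trained from user-labelled examples, which is its extra preprocessing cost.

def preprocess_aba(image, segment, features):
    """ABA: one feature vector per connected component, no classification."""
    return [features(c) for c in segment(image)]

def preprocess_cla(image, segment, features, classify):
    """CLA: same as ABA plus a class label per component."""
    return [(features(c), classify(features(c))) for c in segment(image)]

# Toy usage with trivial stand-ins: "components" are words, the feature
# vector is a length, and the classifier thresholds on it.
segment = lambda img: img.split()
features = lambda comp: len(comp)
classify = lambda f: "large" if f > 3 else "small"

aba = preprocess_aba("ab cdef g", segment, features)
cla = preprocess_cla("ab cdef g", segment, features, classify)
```

The structure makes the caption's trade-off visible: CLA pays per-component classification cost up front, which ABA avoids at the price of slower retrieval later.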

### Table 1: Hs of the 3-d unit cube

"... In PAGE 23: ... We calculate Hs(Qd) for all possible d-dimensional range queries over all possible 3-dimensional unit cubes. Table 1 shows the performance of BAA and HAA for 3-dimensional unit cubes. It showed that HAA performs better than BAA except for 1-dimensional queries.... ..."

### Table 2 Queries of 3-d unit cube

"... In PAGE 23: ...5 when using HAA, but the value increased to larger than 1 when using BAA. Table 2 shows the performance of each algorithm on queries. Although BAA was slightly better for 1-dimensional queries, HAA performed much better for higher-dimensional queries.... ..."

### Table 5: Illustration of Aggregation Cuboids and Data Cube.

2003

"... In PAGE 8: ... We notice that aggregation cuboids are solely decided by their *-dimensions, but the core cuboid also depends on the tuples missing from it. Table 5 illustrates the concepts defined in Definition 2, following the example in Table 4. First we obtain two identical augmented dimensions {1, 2, 3, 4, *}. As shown in the left cross tabulation in Table 5, the Cartesian product of the two augmented dimensions gives 25 vectors. Among them we have two 1-* aggregation cuboids, {(1,*), (2,*), (3,*), (4,*)} and {(*,1), (*,2), (*,3), (*,4)}, and one 2-* aggregation cuboid, (*,*). The right cross tabulation in Table 5 shows a 2-dimensional data cube <C_core, S_all>, where C_core is the same core cuboid as in Table 4 and S_all includes the three aggregation cuboids. Notice that the two augmented dimensions are shown only for clarity; they are not part of the data cube.... In PAGE 9: ... Aggregation cuboids in S_all or its subsets are sorted first in ascending order according to the number of *-elements in their aggregation vectors, and then in descending order on the index of the *-elements. For example, S_all shown in Table 5 is sorted as {{(1,*), (2,*), (3,*), (4,*)}, {(*,1), (*,2), (*,3), (*,4)}, {(*,*)}}. ... Table 6 illustrates the concept of the aggregation matrix. The cross tabulation shows the same data cube as in Table 5, but the tuples and 1-* aggregation vectors are indexed with subscripts according to our order convention. For clarity, normal font is used for the indexes of tuples and italic font for those of... In PAGE 15: ... Hence any aggregation set contains more than one tuple. For example, in Table 5 the aggregation sets of all the 1-* aggregation vectors contain at least two tuples, and the only 2-* aggregation vector contains all nine tuples in the core cuboid. On the other hand, a core cuboid containing fewer tuples than the upper bound given by Theorem 1 is always trivially compromised.... Two data cubes whose core cuboids have the same cardinality but different distributions of missing tuples can have different trivial compromiseability. For example, the core cuboid C_core in Table 5 is not trivially compromised. Without changing the cardinality of C_core, we delete the tuple (2,2) and add a new tuple (1,4) to obtain a new core cuboid C'_core.... In PAGE 16: ... The inductive hypothesis: for any d_1, d_2 >= 4, we can build a two-dimensional data cube <C_core, S_all> with dimensions d_1, d_2 such that C_core is non-trivially compromised by S_1. The base case: when d_1 = d_2 = 4, the data cube shown in Table 5 validates the base case of our inductive hypothesis. The inductive case: assuming that there exists a non-trivially compromiseable two-dimensional data cube <C_core, S_all> with dimensions [1, d_1] and [1, d_2], we show how to obtain a non-trivially compromiseable two-dimensional data cube with dimensions [1, d_1 + 1] and [1, d_2].... In PAGE 17: ... The second claim of Lemma 1 holds because for any aggregation vector containing more than one *-value, a set of 1-* aggregation vectors exists such that they have the same aggregation set. For example, in Table 5 the 2-* aggregation vector (*,*) and either of the two 1-* aggregation cuboids have the same aggregation set. Because of the second claim, it is sufficient to consider S_1 instead of S_all to determine the compromiseability of data cubes.... In PAGE 19: ... The base case: in Table 5, the core cuboid C_core satisfies |C_full \ C_core| = 2 d_1 + 2 d_2 - 9.... In PAGE 22: ... The first claim of Corollary 1 says that if the i-th 1-* aggregation cuboid is essential for any non-trivial compromise, then every slice of the core cuboid on the i-th dimension must contain at least one missing tuple. As an example, in the core cuboid shown in Table 5, every slice on the two dimensions contains either one or two missing tuples. The second claim of Corollary 1 proves non-compromiseability for those core cuboids that have full slices on k - 1 of the k dimensions.... ..."

Cited by 3
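
The augmented-dimension construction in the excerpt can be reproduced in a few lines. This sketch only enumerates the 25 vectors of the 4x4 example and groups them by their number of *-components; the inference-control analysis itself is not reproduced.

```python
# Each dimension {1, 2, 3, 4} is augmented with '*'; the Cartesian product
# of the two augmented dimensions yields 25 vectors, split into ordinary
# tuples (no '*'), 1-* aggregation vectors, and the 2-* vector (*, *).
from itertools import product

augmented = [1, 2, 3, 4, "*"]
vectors = list(product(augmented, augmented))  # 25 vectors

by_stars = {k: [v for v in vectors if v.count("*") == k] for k in range(3)}
# by_stars[0]: the 16 cells from which a core cuboid draws its tuples
# by_stars[1]: the 8 vectors of the two 1-* aggregation cuboids (rows, columns)
# by_stars[2]: the single 2-* aggregation vector (*, *)
```

Grouping by *-count mirrors the excerpt's sort order on aggregation cuboids: fewer *-elements first, then by the index of the *-element.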