### Table 1. Comparison of results for various approaches.

"... In PAGE 8: ... 4. Numerical Results: Table 1 compares the balance and uniformity (t, s) of (n, 2) de Bruijn sequences... In PAGE 9: ... In the case of Algorithm II, the characteristics of the sequences obtained by the optimal mappings with respect to both the balance and uniformity criteria are shown. ------------------------- Table 1 goes here ------------------------- In Table 1, we observe that: 1. Although Algorithm I generates sequences with optimal uniformity (minimum s), the corresponding balance criterion t is rather large.... ..."
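
The balance and uniformity criteria (t, s) are specific to the cited paper, but the objects being compared are ordinary binary de Bruijn sequences. As an illustrative sketch (using the standard Lyndon-word/FKM construction, not the paper's Algorithms I or II), one can generate a B(2, n) sequence and verify the de Bruijn property that every length-n window occurs exactly once cyclically:

```python
def de_bruijn(k, n):
    """Generate a de Bruijn sequence B(k, n) as a list of symbols 0..k-1,
    by concatenating Lyndon words whose length divides n (FKM algorithm)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])  # append the current Lyndon word
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 3)
# Cyclically, each of the 2^3 = 8 binary triples appears exactly once.
windows = {tuple((s + s[:2])[i:i + 3]) for i in range(len(s))}
```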

### Table 2: Technology Mapping results

"... In PAGE 8: ... The results show that the Boolean approach reduces the number of matching algorithm calls, finds smaller-area circuits in less CPU time, and reduces the initial network graph because generic 2-input base functions are used. Table 2 presents a comparison between SIS and Land for the library 44-2.genlib, which is distributed with the SIS package.... ..."

### Table 1. Comparison of Traditional and IQ.

"... In PAGE 2: ... According to the defined model, we deduce the lifetime formula for both query protocols. After we assign practical values of sensor parameters obtained from Berkeley motes to the formula, we get the deduced results listed in Table 1. The deduction procedure is available in the technical report version [8].... In PAGE 2: ... The deduction procedure is available in the technical report version [8]. Table 1 reports the RLSN and RLIS of different sensors under the two query protocols. From the table, we find that the sensor network using the IQ protocol has a larger RLSN than the one using Traditional (last row), because IQ provides a global optimization that balances the load across the whole sensor network (first three rows): sensors at different locations have different RLIS under Traditional but the same RLIS under IQ.... ..."

### Table 1: Comparison of Traditional and IQ.

"... In PAGE 2: ... According to the defined model, we deduce the lifetime formula for both query protocols. After we assign practical values of sensor parameters obtained from Berkeley motes to the formula, we get the deduced results listed in Table 1. The deduction procedure is available in the technical report version [5].... In PAGE 2: ... The deduction procedure is available in the technical report version [5]. Table 1 reports the RLSN and RLIS of different sensors under the two query protocols. From the table, we find that the sensor network using the IQ protocol has a larger RLSN than the one using Traditional (last row), because IQ provides a global optimization that balances the load across the whole sensor network (first three rows): sensors at different locations have different RLIS under Traditional but the same RLIS under IQ.... ..."

### Table 1. Comparison of our algorithms against traditional link analysis algorithms

"... In PAGE 5: ... of participants per query was 4.192. Most participants had a computer science background and extensive experience with web search. In Table 1, we present the average high relevance (HR) and relevance (R) ratios of all algorithms. In the table, an algorithm name followed by -A refers to the approximation version (location-independent) of the corresponding algorithm.... In PAGE 5: ... In the table, an algorithm name followed by -A refers to the approximation version (location-independent) of the corresponding algorithm. One can observe from Table 1 that when the semantics of geographic entities are combined with link analysis of pages, the performance of geographically-oriented search is clearly improved. The performance of traditional link-analysis algorithms... ..."

### Table 1: Traditional estimates and bounds.

1999

"... In PAGE 11: ...xpression on the items in the set. Let v1, …, vn be a uniform random sample of the multiset {x1, …, xm}. We wish to estimate the aggregate (AVG, SUM, and COUNT) on all m values based on this sample of n values. Table 1 summarizes the traditional estimates and the bounds for AVG, SUM and COUNT with no predicates, where p is the desired confidence probability. Shown are upper bounds for t such that Pr(|e − μ| ≤ t) ≥ p, where μ is the precise result of an aggregate, and e is an estimate based on n samples.... In PAGE 11: ... The last column indicates whether a bound is guaranteed with probability p or holds with probability p only under large-sample assumptions [HHW97, Haa97]. Comparing the bounds in Table 1, we see that among the two bounds using σ̂, the Chebychev (estimated σ) bound is better than the CLT bound whenever n > 1/(z_p²(1 − p)). Since n must be sufficiently large for either approximation to hold, the Chebychev bound is better unless the desired error probability is inversely proportional to n.... In PAGE 12: ... We report an estimate and a bound based on the e_j. We can apply any of the methods in Table 1 to obtain the chunk estimators, e_j, and the confidence bounds on the estimators. Since each chunk estimator is based on only a subsample, the confidence in a single chunk estimator is less than if it were based on the entire sample.... In PAGE 13: ... Thus the best choice for k depends on the relationship of … and t in Equation 2 as a function of k, and the desired confidence p = 1 − q^k. In the remainder of this section, we highlight our results analyzing and comparing the effects of applying the various methods in Table 1, and determining the optimal number of chunks. Table 2 summarizes our analysis on the use of Chebychev for Equation 2 in conjunction with various values for p, with and without chunking.... In PAGE 14: ... The bounds are shown for Chebychev (known σ). Alternatively, as in Table 1, we can obtain bounds for Chebychev (estimated σ) by plugging in σ̂ for σ in Table 2, where σ̂ is computed over all the sample points, not just those in one chunk. We can also obtain bounds for Chebychev (conservative) by plugging in (MAX − MIN)/2 for σ.... In PAGE 15: ...s queries without joins (i.e., as single-table queries). There are several popular methods (see Table 1) for obtaining error bounds for approximate answers to (single-table) aggregation queries. We have presented a detailed analysis that demonstrates the precise trade-offs among these methods, as well as a method based on subsampling which we call chunking.... In PAGE 20: ... Figure 5 plots the error bounds for the PropJoin allocation scheme for a summary size of 2%. It shows the 90% confidence bounds of three of the five techniques in Table 1, namely, Hoeffding, Chebychev (estimated σ), and Chebychev (conservative). These bounds are compared with bounds based on chunk statistics.... ..."
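
As a rough sketch of the Chebychev-versus-CLT comparison described in the excerpt (illustrative only; the confidence level p and the use of the sample standard deviation σ̂ in place of σ are assumptions here, not the paper's exact table entries), the two half-widths for the AVG estimator can be computed from a sample:

```python
import math
from statistics import NormalDist, mean, stdev

def avg_bounds(sample, p=0.90):
    """Return (estimate, chebychev_t, clt_t) for AVG at confidence p.

    Chebychev: Pr(|e - mu| >= t) <= sigma^2 / (n t^2)  gives  t = sigma / sqrt(n (1 - p)).
    CLT:       t = z * sigma / sqrt(n), with z the (1 + p)/2 standard-normal quantile.
    sigma is estimated by the sample standard deviation (an assumption).
    """
    n = len(sample)
    e = mean(sample)
    s = stdev(sample)
    chebychev_t = s / math.sqrt(n * (1.0 - p))
    z = NormalDist().inv_cdf((1.0 + p) / 2.0)
    clt_t = z * s / math.sqrt(n)
    return e, chebychev_t, clt_t
```

At p = 0.90 the Chebychev factor is 1/√0.1 ≈ 3.16 versus z ≈ 1.64 for the CLT, so the Chebychev interval is wider (more conservative) for the same sample, which is the trade-off the excerpt analyzes.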

Cited by 116


### Table 1: Operators and algorithms in a centralized query optimizer and their additional parameters

1995

"... In PAGE 4: ... Figure 1: Example of an operator tree and access plan. Table 1 lists some operators and algorithms implementing them together with their additional parameters. Operator Trees.... In PAGE 12: ...i.e., an operator O has algorithms A1 through An, and Null, as implementations. The pre-processor classifies O as an enforcer-operator, and algorithms A1 through An as enforcer-algorithms. An example of an enforcer-operator is the SORT operator, and an enforcer-algorithm is the Merge sort algorithm (shown in Table 1). Enforcer-algorithms in the Prairie model are translated into enforcers in the Volcano model; enforcer-operators disappear in Volcano when the P2V pre-processor combines several I-rules to generate a Volcano rule (this is described in Section 3.... ..."
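
The classification rule quoted above (an operator whose implementations include a Null algorithm is an enforcer-operator) can be sketched as follows; the operator-to-algorithms table is a made-up example for illustration, not Prairie's actual catalog:

```python
# Hypothetical operator -> implementing-algorithms table (illustrative only).
OPERATORS = {
    "JOIN": ["Nested-loops", "Sort-merge", "Hash"],
    "SORT": ["Merge sort", "Null"],  # Null: input may already satisfy the required order
}

def is_enforcer_operator(op):
    """An operator is an enforcer-operator if Null is among its implementations."""
    return "Null" in OPERATORS[op]

def enforcer_algorithms(op):
    """The non-Null algorithms of an enforcer-operator are its enforcer-algorithms."""
    if not is_enforcer_operator(op):
        return []
    return [a for a in OPERATORS[op] if a != "Null"]
```

Under this rule SORT is classified as an enforcer-operator (Merge sort being its enforcer-algorithm), matching the example in the excerpt, while JOIN is not.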

Cited by 13

### Table 1. Query Optimization and Query Plan Evaluation Times (in seconds)

2003

"... In PAGE 9: ...2 Quality of Plans and Optimization Time. In this section, we examine the time taken to execute the query plans produced by the optimization algorithms, and the time taken by each algorithm to optimize the queries (the total query evaluation time is the sum of these two times). Both results are presented in Table 1. In this table, the query optimization time is shown in a boldface font, and the plan execution time is shown in an italics font.... In PAGE 9: ...2.1 Quality of Plans. To put the query plan execution times in perspective, we randomly (but not exhaustively) generated a number of query plans for each query, and picked the worst of these plans. This bad plan, which is shown in the last column of Table 1, is not necessarily the worst plan for a query. It is simply shown here to quantify the impact of a good query optimization algorithm.... In PAGE 9: ... It is simply shown here to quantify the impact of a good query optimization algorithm. By examining the plan execution times for the algorithms in Table 1 (see the columns under Eval.), we observe that the query plan execution times vary dramatically across different evaluation plans.... ..."

Cited by 34