### Table 1. Time-space tradeoffs for boolean BPs computing certain fundamental functions

2004

"... In PAGE 4: ... Specific results for certain target functions. We have applied the general method described in the foregoing paragraph to several well-studied target functions, and our results are summarized in Table 1 below. In each case, we were able to find families of codes that encode the desired function on one hand and have a sufficiently large minimum distance on the other (see Section 4 for more details).... In PAGE 4: ... We note that the bounds in Table 1 are based on Theorem 4 and Theorem 5, which are slightly stronger than (1). We also point out that the branching programs we consider are multi-output.... In PAGE 4: ... In all cases, except for the third row in Table 1, the underlying computation model is a deterministic boolean branching program that is not restricted in any way (not necessarily oblivious, leveled, or read/write limited, and so forth). This makes it somewhat difficult to compare our results to the best previously known bounds, since those bounds usually apply to more restricted computation models.... In PAGE 4: ... One of these applies only to q-way BPs, where q grows as n^O(1) and must be at least 2^120. Since we are concerned with boolean (2-way) BPs, bounds of this kind are not directly comparable to those in Table 1. For boolean BPs, Sauerhoff and Woelfel [34] prove the following.... In PAGE 4: ... There exists a positive constant c such that for all r ≤ c log n, the space of all the BPs in this set is bounded by n/(r² 3^(4r)). There are several important differences between Theorem 1 and our bounds for IMUL in Table 1.
First, Theorem 1 applies to nondeterministic BPs whereas our results do not; in this sense Theorem 1 is more general. (Footnote 1: All the logarithms in Table 1, and throughout this paper, are to base 2.)... In PAGE 5: ... This difference does not seem to be significant, since it is known [12, 38] that the middle bit is the hardest one to compute. A third difference is that the number of reads r in Theorem 1 is restricted to O(log n), whereas our bounds in the second and third rows of Table 1 hold without this restriction. Note that when the number of reads is limited to r, the computation time T is also limited, since T ≤ rn.... In PAGE 5: ... nondeterministic BPs and read-r vs. unrestricted BPs. However, ignoring these differences, we can try to make a comparison as follows. If r is constant and m = Θ(n), then Theorem 2 reduces to S = Ω(n), which is exactly the same result we get from the first row of Table 1 for the case T = O(n). On the other hand, if r is allowed to grow, say r = (1/4) log n, then the bound on S in Table 1 becomes stronger than Theorem 2.... In PAGE 5: ... With regard to DFT, the best known (to us) lower bound on the time-space tradeoff of boolean BPs, due to [1, 40], establishes TS = Ω(n²).... Here, if time is superlinear in n, then the resulting bound on the space is sublinear.
In contrast, the bound in the fourth row of Table 1 makes it possible to obtain superlinear bounds on space when time is also superlinear. For example, for T = ω(n log^(1−ε_T) n) our results imply that S = ω(n log^(1−ε_S) n), where ε_T, ε_S are arbitrary positive constants.... In PAGE 5: ... We then explain how these results lead to lower bounds on the time-space tradeoff of branching programs. In Section 4, we deal with specific target functions and prove the bounds compiled in Table 1. In particular, in Section 4.... In PAGE 5: ... From this, we infer the lower bound for the DFT operation. Finally, in Section 5, we describe several typical functional forms for the general lower bounds in Table 1. We also compare these results with the complexity of known algorithms [19].... In PAGE 11: ...

| Operation | Time | Space | Model |
| --- | --- | --- | --- |
| n-bit FMUL, CONV, MVMUL | ω(n) | Ω(n) | general BP2 |
| n-bit FMUL, CONV, MVMUL | ω(n log n / log^(1+ε_T) log n), ∀ε_T > 0 | ω(n^(1−ε_S)), ∀ε_S > 0 | general BP2 |
| n-bit IMUL | ω(n log^(1+ε_T) log n), ∀ε_T > 0 | ω(n^(1−ε_S)), ∀ε_S > 0 | general BP2 |
| n-bit IMUL | t = ω(log n / log^(1+ε_T) log n), ∀ε_T > 0 | ω(n^(1−ε_S)), ∀ε_S > 0 | read-r/write-w BP2, t = max{r, w} |
| n-point DFT | ω(n log^(1−ε_T) n), ∀ε_T > 0 | ω(n log^(1−ε_S) n), ∀ε_S > 0 | general BP2 |
| n-point DFT | ω(n log² n / log² log n) | ω(n^(1−ε_S)), ∀ε_S > 0 | general BP2 |

Table 2. Typical functional forms for the lower bounds in Table 1. It can easily be shown (using MATHEMATICA, for instance) that if we substitute the lower bounds in Table 2 for T and S into the corresponding time-space tradeoff expressions given in Table 1, they vanish asymptotically as n → ∞, thereby verifying the results in Table 2. Upper bounds from known efficient algorithms.... ..."
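The snippet's closing claim is that substituting the Table 2 forms into the Table 1 tradeoffs can be verified with a computer-algebra system. A small numeric sketch of the same style of check is below; the exponents ε_T = ε_S = 1/4 are sample values of my own choosing, not from the paper. It compares the DFT pair T = ω(n log^(1−ε_T) n), S = ω(n log^(1−ε_S) n) against the older bound TS = Ω(n²) cited from [1, 40].

```python
import math

# Sample exponents (my choice for illustration, not from the paper):
# eps_T = eps_S = 1/4 in the DFT row T = omega(n log^(1-eps_T) n),
# S = omega(n log^(1-eps_S) n).
def T(n):
    return n * math.log(n) ** 0.75

def S(n):
    return n * math.log(n) ** 0.75

# The earlier bound from [1, 40] is T*S = Omega(n^2). Here
# n^2 / (T(n)*S(n)) = 1 / log(n)^1.5, which tends to 0 as n grows,
# so this T, S pair grows strictly faster than the older tradeoff requires.
for n in (10**3, 10**6, 10**9, 10**12):
    print(f"n = {n:>14}: n^2 / (T*S) = {n**2 / (T(n) * S(n)):.5f}")
```

The ratio shrinks monotonically, illustrating (not proving) the asymptotic dominance; a symbolic limit in a CAS gives the exact statement.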

### Table 2. Relative scale and displacement in time and space for sensor and display.

"... In PAGE 9: ... These possibilities may be summed up by saying that, for both time and space, the scan and display may be aligned, displaced, differing in scale, or related by a distortion mapping, as shown in Table 2. The relationship of Table 2 to the overall taxonomy of Table 1 is that Table 2 emphasizes the similarity of the values that can be assigned to the two dimensions, time and space, of the taxonomy.... ..."
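To make the four relations concrete, here is a minimal sketch of how a sensor coordinate (a time instant or a spatial position) could map to its display counterpart in each case. The function names and the example warp are mine for illustration, not the paper's.

```python
# Illustrative sketch of the four sensor-to-display relations of Table 2
# (aligned, displaced, differing in scale, distortion-mapped), expressed
# as mappings on a single coordinate in time or in space.

def aligned(x):
    return x                    # display value equals sensor value

def displaced(x, offset):
    return x + offset           # constant shift, e.g. a delayed replay

def scaled(x, factor):
    return factor * x           # rescaling, e.g. slow motion or zoom

def distorted(x):
    return x / (1 + abs(x))     # a monotone warp, e.g. a fisheye-style view

t = 2.0  # a sensor-side time stamp
print(aligned(t), displaced(t, 5.0), scaled(t, 0.5), distorted(t))
```

The point of the taxonomy is that the same four relations apply uniformly to both the time and the space dimension, which is what the snippet says Table 2 emphasizes.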

### Table 6. Waiting Time Spacings under different traffic loading distributions

2000

"... In PAGE 6: ... In all the cases considered, the system utilization is held at the same value. The results are shown in Table 6. We can observe that under different traffic distributions, our proposed algorithm is highly effective in achieving the specified waiting time ratios.... ..."

Cited by 10

### Table 1: Asymptotic complexity for matrix construction

1997

"... In PAGE 1: ... Yet, other polynomial multiplication methods, such as Karatsuba's, may offer simpler though asymptotically slower alternatives; the latter may be advantageous in certain circumstances, as discussed in section 8. Table 1 compares the existing and the achieved complexities, in terms of matrix row and column dimension, respectively denoted a and c, and the number of variables n, as explained in section 6. Note that a > c and typically a, c ≫ n.... ..."
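Karatsuba's method mentioned in the snippet is easy to state concretely. The following is a generic textbook sketch for dense polynomials as coefficient lists (lowest degree first), not the paper's matrix-construction code.

```python
# Karatsuba polynomial multiplication: three recursive half-size
# products instead of four, giving O(n^log2(3)) ~ O(n^1.585)
# coefficient operations instead of the schoolbook O(n^2).

def poly_add(p, q):
    m = max(len(p), len(q))
    p = p + [0] * (m - len(p))
    q = q + [0] * (m - len(q))
    return [a + b for a, b in zip(p, q)]

def karatsuba(p, q):
    if len(p) <= 1 or len(q) <= 1:          # base case: schoolbook
        return [a * b for a in p for b in q] if p and q else []
    k = max(len(p), len(q)) // 2
    p0, p1 = p[:k], p[k:]                   # p = p0 + x^k * p1
    q0, q1 = q[:k], q[k:]                   # q = q0 + x^k * q1
    low = karatsuba(p0, q0)
    high = karatsuba(p1, q1)
    mid = karatsuba(poly_add(p0, p1), poly_add(q0, q1))
    mid = poly_add(mid, [-c for c in poly_add(low, high)])
    out = poly_add(low, [0] * k + mid)      # low + x^k * mid
    return poly_add(out, [0] * (2 * k) + high)  # ... + x^(2k) * high

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(karatsuba([1, 2], [3, 4]))  # [3, 10, 8]
```

As the snippet notes, the asymptotically slower method can still win in practice on small inputs because its constant factors and structure are simpler.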

Cited by 21

### Table 2. General Time and Space Results for the DT Framework

1997

"... In PAGE 16: ... For the ParcPlace and Geode libraries, we assume that a completely random ordering of the classes and selectors is representative of the natural ordering. Table 2 presents the total time and memory requirements for each of these data samples, applied to each of the techniques on the best, worst, and natural (real) input orderings. The DT code is implemented in C++, was compiled with g++ -O2, and executed on a SPARCstation 20/50.... In PAGE 16: ... In [AR92], the incremental algorithm for SC took 12 minutes on a Sun 3/80 when applied to the Smalltalk-80 Version 2.5 hierarchy (which is slightly smaller than the ParcPlace1 library presented in Table 2), where this time excludes the processing of... (Footnote 4: A more accurate measure of fill-rate is possible, but is not relevant to this paper.) So as not to... ..."

Cited by 8


### Table 1. Asymptotic complexity for matrix construction

2002

"... In PAGE 2: ... Yet, for smaller input sizes, other polynomial multiplication methods, such as Karatsuba's, may offer simpler though asymptotically slower alternatives. Table 1 compares the existing and the achieved complexities, in terms of row and column dimension, respectively denoted a and c, and the number of variables n, as explained in section 6. Note that a > c and typically c ≫ n.... ..."

Cited by 7

### Table 1. Time and space requirements to construct the tangible reachability set T

2004

"... In PAGE 11: ... For all models, we assume that immediate transitions have equal priority. Table 1 compares the costs, in terms of computational and storage requirements, of constructing the tangible reachability set T for elimination during generation versus elimination after generation. When vanishing states are eliminated during generation, we construct the next-state function N0 using Equation 3 and then generate T directly.... In PAGE 13: ... Substantial reduction in the cost of the transitive and reflexive closure computation might make elimination during generation more competitive; until this occurs, elimination after generation appears to be a better solution in practice. The first row for each model in Table 1 corresponds to the largest tangible reachability set that could be generated using explicit techniques. For a rough comparison of times with explicit approaches, generation of the tangible states for the FMS model with N = 8 required about 2.... ..."
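The idea of elimination during generation can be sketched abstractly. Below is a toy illustration of my own, not the paper's algorithm or its next-state function N0: a breadth-first generation that stores only tangible states, chasing immediate-transition successors through vanishing states so they never enter the stored set. It assumes there are no cycles among vanishing states.

```python
from collections import deque

def tangible_reach(initial, timed_succ, immediate_succ):
    """timed_succ / immediate_succ map a state to its successor lists;
    a state with any immediate successors is vanishing (toy convention)."""
    def tangible_closure(s):
        nxt = immediate_succ(s)
        if not nxt:                      # tangible: keep it
            return {s}
        out = set()                      # vanishing: follow through it
        for t in nxt:                    # (assumes no vanishing cycles)
            out |= tangible_closure(t)
        return out

    seen, queue = set(), deque()
    for s in tangible_closure(initial):
        seen.add(s)
        queue.append(s)
    while queue:                         # BFS over tangible states only
        s = queue.popleft()
        for t in timed_succ(s):
            for u in tangible_closure(t):
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
    return seen

# Toy chain: 0 --timed--> 1 (vanishing) --immediate--> 2
timed, imm = {0: [1], 2: []}, {1: [2]}
print(sorted(tangible_reach(0,
                            lambda s: timed.get(s, []),
                            lambda s: imm.get(s, []))))  # [0, 2]
```

State 1 is traversed but never stored, which is the storage saving the snippet's Table 1 quantifies; the paper's actual comparison concerns the cost of building N0 (via a transitive and reflexive closure) versus eliminating vanishing states after generation.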

Cited by 2

### Table 4: Weighted Time and Space Averages Based on File Type Frequencies

1993

"... In PAGE 9: ... For example, a small, quickly generated index would not be a reasonable tradeoff if one could not use this index to locate desired data. Second, aggregate measurements (as given in Table 4) are affected by the distribution of different file types in the sample file systems. Ideally, we would have measured each indexing system against the same file system data.... In PAGE 10: ... This machine is approximately one-third as fast as the Sun 4/280. Table 4 shows that Essence can index data faster than WAIS. Taking into account the slower machine on which SFS was measured, SFS appears to index data somewhat faster than Essence does.... ..."

Cited by 36
