### Table 4: Weighted Time and Space Averages Based on File Type Frequencies

1993

"... In PAGE 9: ... For example, a small, quickly generated index would not be a reasonable tradeoff if one could not use this index to locate desired data. Second, aggregate measurements (as given in Table 4) are affected by the distribution of different file types in the sample file systems. Ideally, we would have measured each indexing system against the same file system data.... In PAGE 10: ... This machine is approximately one-third as fast as the Sun 4/280. Table 4 shows that Essence can index data faster than WAIS. Taking into account the slower machine on which SFS was measured, SFS appears to index data somewhat faster than Essence does.... ..."

Cited by 36

### Table 1. Time-space tradeoffs for boolean BPs computing certain fundamental functions

2004

"... In PAGE 4: ... Specific results for certain target functions. We have applied the general method described in the foregoing paragraph to several well-studied target functions, and our results are summarized in Table 1 below. In each case, we were able to find families of codes that encode the desired function on one hand and have a sufficiently large minimum distance on the other hand (see Section 4 for more details). We note that the bounds in Table 1 are based on Theorem 4 and Theorem 5, which are slightly stronger than (1). We also point out that the branching programs we consider are multi-output.... In PAGE 4: ... In all cases, except for the third row in Table 1, the underlying computation model is a deterministic boolean branching program that is not restricted in any way (not necessarily oblivious, or leveled, or read/write limited, and so forth). This makes it somewhat difficult to compare our results to the best previously known bounds, since these bounds usually apply to more restricted computation models.... In PAGE 4: ... One of these applies only to q-way BPs, where q grows as n^O(1) and must be at least 2^120. Since we are concerned with boolean (2-way) BPs, bounds of this kind are not directly comparable to those in Table 1. For boolean BPs, Sauerhoff and Woelfel [34] prove the following.... In PAGE 4: ... There exists a positive constant c such that for all r ≤ c log n, the space of all the BPs in this set is bounded by n/(r^2 3^(4r)). There are several important differences between Theorem 1 and our bounds for IMUL in Table 1. First, Theorem 1 applies to nondeterministic BPs whereas our results do not; in this sense Theorem 1 is more ... (footnote: all the logarithms in Table 1, and throughout this paper, are to base 2).... In PAGE 5: ... This difference does not seem to be significant, since it is known [12, 38] that the middle bit is the hardest one to compute. A third difference is that the number of reads r in Theorem 1 is restricted to O(log n), whereas our bounds in the second and third rows of Table 1 hold without this restriction. Note that when the number of reads is limited to r, the computation time T is also limited, since T ≤ rn.... In PAGE 5: ... nondeterministic BPs and read-r vs. unrestricted BPs. However, ignoring these differences, we can try to make a comparison as follows. If r is constant and m = Θ(n), then Theorem 2 reduces to S = Ω(n), which is exactly the same result we get from the first row of Table 1 for the case T = O(n). On the other hand, if r is allowed to grow, say r = (1/4) log n, then the bound on S in Table 1 becomes stronger than Theorem 2. With regard to DFT, the best known (to us) lower bound on the time-space tradeoff of boolean BPs, due to [1, 40], establishes TS = Ω(n^2).... In PAGE 5: ... Here, if time is superlinear in n, then the resulting bound on the space is sublinear.
In contrast, the bound in the fourth row of Table 1 makes it possible to provide superlinear bounds on space when time is also superlinear. For example, for T = ω(n log^(1−εT) n) our results imply that S = ω(n log^(1−εS) n), where εT, εS are arbitrary positive constants.... In PAGE 5: ... We then explain how these results lead to lower bounds on the time-space tradeoff of branching programs. In Section 4, we deal with specific target functions and prove the bounds compiled in Table 1. In particular, in Section 4.... In PAGE 5: ... From this, we infer the lower bound for the DFT operation. Finally, in Section 5, we describe several typical functional forms for the general lower bounds in Table 1. We also compare these results with the complexity of known algorithms [19].... In PAGE 11: ...

| Operation | Time | Space | Model |
| --- | --- | --- | --- |
| n-bit FMUL, CONV, MVMUL | ω(n) | Ω(n) | General BP2 |
| n-bit FMUL, CONV, MVMUL | ω(n log n / log^(1+εT) log n), ∀εT > 0 | ω(n^(1−εS)), ∀εS > 0 | General BP2 |
| n-bit IMUL | ω(n log^(1+εT) log n), ∀εT > 0 | ω(n^(1−εS)), ∀εS > 0 | General BP2 |
| n-bit IMUL | t = ω(log n · log^(1+εT) log n), ∀εT > 0 | ω(n^(1−εS)), ∀εS > 0 | read-r/write-w BP2, t = max{r, w} |
| n-point DFT | ω(n log^(1−εT) n), ∀εT > 0 | ω(n log^(1−εS) n), ∀εS > 0 | General BP2 |
| n-point DFT | ω(n log^2 n / log^2 log n) | ω(n^(1−εS)), ∀εS > 0 | General BP2 |

Table 2. Typical functional forms for the lower bounds in Table 1. It can be easily shown (using MATHEMATICA for instance) that if we substitute the lower bounds in Table 2 for T and S in the corresponding time-space tradeoff expressions given in Table 1, they vanish asymptotically as n → ∞, thereby verifying the results in Table 2. Upper bounds from known efficient algorithms.... ..."

### Table 1: MAX-DICUT: Mean number of satisfied clauses with standard deviation. OB is the oblivious search, NOB the non-oblivious one, see text for details.

"... In PAGE 7: ... OB is the oblivious search, NOB the non-oblivious one, see text for details. Table 1 summarizes the mean number of satisfied clauses obtained by a simple local search algorithm, using either the oblivious (LS-OB) or the non-oblivious (LS-NOB) function. The main result is that the NOB local search does lead to local optima of a better average quality with respect to OB.... In PAGE 7: ... This result confirms what has been found in [8] for the case of the usual disjunctive SAT problem. Better local optima are found if OB local search starts from a local optimum of NOB (line NOB & OB in Table 1), and still better ones if 10 n additional iterations of LS+ are allowed (the best move is accepted even if it leads to worse function values). By considering the dependence on the density, let us note that the relative improvement of the average number of clauses satisfied by LS-OB and LS-NOB decreases for larger densities, passing from approximately 0.... ..."

### Table 1 MAX-DICUT: Mean number of satisfied clauses with standard deviation. OB is the oblivious search, NOB the non-oblivious one, see text for details.

1999

"... In PAGE 11: ... OB is the oblivious search, NOB the non-oblivious one, see text for details. Table 1 summarizes the mean number of satisfied clauses obtained by simple Local Search algorithms, using either the oblivious (LS-OB) or the non-oblivious (LS-NOB) function. The main result is that the NOB Local Search does lead to local optima of a better average quality with respect to OB.... In PAGE 11: ... This result confirms what has been found in (8) for the case of the usual disjunctive SAT problem. Better local optima are found if OB Local Search starts from a local optimum of NOB (line NOB & OB in Table 1), and still better ones if 10 n additional iterations of LS+ are allowed (the best move is accepted even if it leads to worse function values). By considering the dependence on the density, let us note that the relative improvement (clauses satisfied by LS-NOB minus clauses satisfied by LS-OB, divided by clauses satisfied by LS-OB) of the average number of clauses satisfied by LS-OB and LS-NOB decreases for larger densities, ranging from approximately 0.... ..."

Cited by 1
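The two MAX-DICUT entries above contrast an oblivious local search, whose score is simply the number of satisfied clauses (arcs leaving the chosen side S), with a non-oblivious one that reweights clauses. The sketch below shows only the oblivious variant under illustrative assumptions: the flip-one-vertex move, the function names, and the random start are mine, not the papers' implementation; the non-oblivious variant would swap `cut_value` for a weighted scoring function.

```python
import random

def cut_value(arcs, in_s):
    # Number of arcs going from S to V \ S -- the "satisfied clauses"
    # in the MAX-DICUT formulation.
    return sum(1 for (u, v) in arcs if in_s[u] and not in_s[v])

def local_search(arcs, n, seed=0):
    """Oblivious local search: flip one vertex at a time, keep a flip
    only if it strictly increases the number of cut arcs, and stop at
    a local optimum."""
    rng = random.Random(seed)
    in_s = [rng.random() < 0.5 for _ in range(n)]  # random initial side
    best = cut_value(arcs, in_s)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            in_s[v] = not in_s[v]          # tentative flip
            val = cut_value(arcs, in_s)
            if val > best:
                best, improved = val, True  # keep the improving flip
            else:
                in_s[v] = not in_s[v]       # revert
    return in_s, best
```

On small instances this routine can stop at a local optimum below the true maximum cut, which is exactly the quality gap the non-oblivious objective is designed to reduce.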

### Table 2. Checking time/space for JML Annotated Java Programs

2006

"... In PAGE 18: ... 2 Experiments. We applied Bogor to reason about six Java programs, most of which are multi-threaded and manipulate non-trivial heap-allocated data structures. Table 2 reports several measures of program size: loc is the number of control points in the source text, threads is the number of threads of control in the instance of the program, and objects is the maximum number of allocated objects on any program execution. All programs were annotated with JML invariants, pre/postconditions, and assignable clauses; the table highlights the challenging features used in the specifications for each program.... In PAGE 19: ... In the following subsections, we give a brief description of each of the six programs used in the experiments and the driver used to perform each experiment. As mentioned before, Table 2 gives details about further configuration of the test drivers: number of threads and number of objects. We also give an overview of the kind of properties verified in each system.... ..."

Cited by 3

### Table 1. Trade-offs between N and the total solution time on the SPACE-960-r MINLP

"... In PAGE 11: ... Experimentally, we have observed a convex relationship between N and the total solution time. We illustrate this observation in Table 1 for various values of N on the SPACE-960-r MINLP from the MacMINLP library [12]. It shows the average time to solve a subproblem, the total time to solve N subproblems in one iteration, the number of iterations needed to resolve the inconsistent global constraints, and the overall time to solve the problem.... In PAGE 12: ... close to the optimal value (as illustrated in Table 1). Next, we reduce N by half (Line 8) and repeat the process.... In PAGE 12: ... As a result, we only evaluate one subproblem in each iteration of Figure 7 in order to estimate Tp(N) (Line 4). For the SPACE-960-r MINLP in Table 1, we set N to 480, 240, 120, 60, 30, 15. We stop at N = 15 and report N = 30 when the overall time starts to increase.... ..."

### Table 1. Trade-offs between N and the total solution time on the SPACE-960-r MINLP

2005

"... In PAGE 11: ... Fig. 7. An iterative algorithm to estimate the optimal number of partitions. ... the total solution time. We illustrate this observation in Table 1 for various values of N on the SPACE-960-r MINLP from the MacMINLP library [12]. It shows the average time to solve a subproblem, the total time to solve N subproblems in one iteration, the number of iterations needed to resolve the inconsistent global constraints, and the overall time to solve the problem.... In PAGE 11: ... Assuming the number of iterations for resolving the global constraints to be small, overall time will be related to the time to solve the original problem by a constant factor. This assumption is generally true for the benchmarks tested when N is close to the optimal value (as illustrated in Table 1). Next, we reduce N by half (Line 8) and repeat the process.... In PAGE 12: ... For the SPACE-960-r MINLP in Table 1, we set N to 480, 240, 120, 60, 30, 15. We stop at N = 15 and report N = 30 when the overall time starts to increase.... ..."

Cited by 4
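The partitioning snippets describe an iterative scheme: start from a large N, halve the number of partitions, measure the total solution time, and stop (reporting the previous N) once the time starts to increase, relying on the observed convex relationship between N and total time. A minimal sketch under those assumptions follows; `total_time` is a hypothetical callback standing in for actually solving the N subproblems, not code from the paper.

```python
def estimate_optimal_partitions(total_time, n_start, n_min=1):
    """Halve N until the measured total solution time starts to rise.

    total_time(N) -> measured overall time when the problem is split
    into N subproblems (assumed to be roughly convex in N).
    Returns the last N before the time started to increase."""
    best_n = n_start
    best_t = total_time(n_start)
    n = n_start // 2
    while n >= n_min:
        t = total_time(n)
        if t > best_t:
            break  # time started to increase; the previous N was better
        best_n, best_t = n, t
        n //= 2
    return best_n
```

With a convex timing profile minimized near N = 30 and a starting value of 480, this halving schedule visits 480, 240, 120, 60, 30, 15 and reports 30, matching the behavior described for SPACE-960-r.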

### Table 2 lists the time and space complexities of SVD, 2DPCA, and GLRAM. It is clear that GLRAM and 2DPCA have much smaller costs in time and space than SVD.

2005

"... In PAGE 16: ... Thus choosing a large d in general improves the performance of GLRAM in reconstruction and classification. However, the computation cost of GLRAM also increases as d increases, as shown in Table 2 (note that d = ℓ1 = ℓ2). There is a tradeoff between the performance and the computation cost when choosing the best d in GLRAM.... ..."
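As a rough illustration of why GLRAM's space cost stays small, the counts below compare what GLRAM stores (two shared projections of sizes r×d and c×d plus a d×d core matrix per image, taking d = ℓ1 = ℓ2 as in the snippet) against a PCA/SVD-style representation (d principal components of length r·c plus d coefficients per image). These formulas are a back-of-the-envelope sketch of the standard storage accounting, not figures taken from the paper.

```python
def glram_storage(n, r, c, d):
    # Shared left/right transforms (r*d + c*d) plus one d-by-d
    # core matrix per image (n * d * d).
    return r * d + c * d + n * d * d

def svd_storage(n, r, c, d):
    # d principal vectors of length r*c, plus d coefficients
    # per image (n * d).
    return d * r * c + n * d
```

For example, for 100 images of size 32×32 with d = 4, GLRAM stores 1,856 numbers versus 4,496 for the SVD-style representation, consistent with the table's claim that GLRAM has a much smaller space cost.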