### TABLE I Algorithm Evaluation Linear Time Space

1995

### TABLE III Algorithm Evaluation Linear Time Space

1995

### Table 1 shows the volume/bend number tradeoff apparent in the existing grid drawing algorithms. The algorithm of Eades, Stirk and Whitesides [6] (denoted A) and the compact algorithm of Eades, Symvonis and Whitesides [7] (B) require the least volume at the cost of more bends per edge. Their 3-bend algorithm (C) and the linear-time incremental algorithm of Papakostas and Tollis [16] (D) establish an upper bound of 3 for the bend number.

1998

"... In PAGE 2: ... volume O(√n³) O(√n³) 27n³ 4.66n³; bend number 16 7 3 3 (Table 1: Upper Bounds of Grid Drawing Algorithms). Eades, Symvonis and Whitesides [7] had conjectured that there does not exist a 2-bend grid drawing of K7. Wood [21] presents a counterexample to this conjecture. ..."

Cited by 4

### Table 7.2: Pseudocode for the graph construction algorithm, x < y. The above sketch implies the O(n³) algorithm shown in Table 7.2. With the constraint y < x + c, it can be turned into a linear-time algorithm with a large constant factor. The algorithm is perhaps too simple: it doesn't consider the effect groups of tones have on temporal coherence. Also, different grouping cues interact in complex ways [10]. However, we believe it leads to plausible results as long as the functions fm and fr are reasonable. We have derived sample fr and fm functions in a slightly ad hoc manner, guided mainly by common sense. The functions are
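The speedup described above, where the constraint y < x + c turns a polynomial pairwise loop into a linear one, can be illustrated with a generic sketch. This is not the paper's tone-grouping algorithm: the names `nodes`, `score`, and `c` are illustrative stand-ins, since the original fm and fr functions are not reproduced here.

```python
def build_graph_quadratic(nodes, score):
    """Consider every ordered pair x < y: O(n^2) pairs examined."""
    edges = []
    for i, x in enumerate(nodes):
        for y in nodes[i + 1:]:
            edges.append((x, y, score(x, y)))
    return edges


def build_graph_linear(nodes, score, c):
    """With the constraint y < x + c, each x (in a sorted list) is
    paired with a bounded number of successors, so the total work is
    O(n * c): linear in n, with c as the large constant factor."""
    edges = []
    for i, x in enumerate(nodes):
        for y in nodes[i + 1:]:
            if y >= x + c:  # constraint prunes all more distant pairs
                break
            edges.append((x, y, score(x, y)))
    return edges
```

The pruning relies on `nodes` being sorted, so the inner loop can stop at the first y that violates y < x + c.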

### Table 2: Time of two nearest neighbor query approaches. Columns: variation ratio, time (sec., linear), time (sec., hierarchical).

"... In PAGE 20: ... In [14], we present an efficient algorithm which uses a hierarchical quasi-Voronoi diagram to search for the nearest neighbor. Table 2 shows average computation time for each sequence on the SGI INDIGO 2. The time was obtained based on the two different nearest neighbor query approaches, namely, the... ..."
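The two approaches compared in Table 2 can be sketched generically. This is not the paper's quasi-Voronoi structure; it is a minimal two-level pruning hierarchy (fixed-size clusters with a representative and covering radius) that shows why a hierarchical search can beat a linear scan while returning the same answer.

```python
import math
import random


def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])


def nn_linear(query, points):
    """Brute force: examine every point, O(n) per query."""
    return min(points, key=lambda p: dist(query, p))


def build_hierarchy(points, bucket=8):
    """Group points into fixed-size clusters; each cluster keeps a
    representative point and a radius covering all its members."""
    clusters = []
    for i in range(0, len(points), bucket):
        members = points[i:i + bucket]
        rep = members[0]
        radius = max(dist(rep, m) for m in members)
        clusters.append((rep, radius, members))
    return clusters


def nn_hierarchical(query, clusters):
    """Visit clusters in order of a triangle-inequality lower bound,
    stopping once no remaining cluster can beat the best distance."""
    order = sorted(clusters, key=lambda c: dist(query, c[0]) - c[1])
    best, best_d = None, float("inf")
    for rep, radius, members in order:
        if dist(query, rep) - radius >= best_d:
            break  # this and all later clusters are provably too far
        for m in members:
            d = dist(query, m)
            if d < best_d:
                best, best_d = m, d
    return best
```

The lower bound dist(query, rep) − radius is valid by the triangle inequality, so the pruned search is exact, not approximate.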

### Table 2: Weight and cardinality statistics of the covers produced by the various algorithms on 30 random permutations of a data set consisting of biological data (56 sequences of 75 nucleotides each). Here GREEDY2 does outperform GREEDY1 on many instances.

1996

"... In PAGE 20: ... Table 2 shows the performance of the various algorithms for the (weighted) WOPC problem, where the objective is to minimize the total weight of the cover rather than its cardinality. Both the weight and cardinality of the solutions produced by GREEDY1 and GREEDY2 for the data sets are shown in the table. ..."

Cited by 15

### Table 8: Closest Pair. Given a set of line segments in the plane, the line intersection problem is the problem of determining all intersections of line segments in this set. For the first four problems, algorithms running in O(n log n) time were implemented for the first execution. The second execution, using certification trails, runs in linear time. The first-execution algorithm used for line intersection runs in O((k + n) log n) time, where k is the number of intersections and n the number of points. The second execution runs in O(k + n) time. Note that k may be quadratic in n.
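The certification-trail pattern above can be illustrated with a deliberately simple stand-in problem (sorting rather than closest pair; the structure is the same): the first execution does the full O(n log n) work and records a trail, and the second execution reconstructs and certifies the answer in O(n) using that trail. The function names are illustrative, not from the paper.

```python
def first_execution(data):
    """O(n log n): solve the problem (here, sorting) and record a
    certification trail -- the permutation that sorts the input."""
    trail = sorted(range(len(data)), key=lambda i: data[i])
    return [data[i] for i in trail], trail


def second_execution(data, trail):
    """O(n): rebuild the answer from the trail and certify it,
    without repeating the O(n log n) search. A corrupt trail is
    detected rather than silently accepted."""
    n = len(data)
    seen = [False] * n
    for i in trail:                       # O(n): trail must be a permutation
        if not (0 <= i < n) or seen[i]:
            raise ValueError("corrupt certification trail")
        seen[i] = True
    result = [data[i] for i in trail]
    for a, b in zip(result, result[1:]):  # O(n): certify the ordering
        if a > b:
            raise ValueError("corrupt certification trail")
    return result
```

The key property is that the second execution either returns the same certified answer or rejects the trail; it never needs the first execution's search structure.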

1993

Cited by 4

### Table 1 shows that the extraction of rules can be performed quite efficiently. Our first algorithm, which has an exponential running time, cannot scale to process large corpora and extract a sufficient number of rules that a syntax-based statistical MT system would require. The second algorithm, which runs in linear time, is on the other hand barely affected by the size of the rules it extracts.

"... In PAGE 7: ... Table 1: Running time in seconds of the two algorithms on 1000 sentences. k represents the maximum size of rules to extract. ..."
