### Table 1: Summary of Main Results for I/O Complexity of Parallel Disk Sorting Algorithms. Algorithms with boldface names are asymptotically optimal: M = merge sort, D = distribution sort. SM = merge sort with any striping (S) allocation. SRM and SRD use Simple Randomized striping (SR). FRM and FRD use Fully Random (FR) allocation. RCM and RCD use Randomized Cycling (RC).

2001

"... In PAGE 22: ... One way to interpret m is to view it as the amount of additional memory needed to match the performance of the algorithm on the multihead I/O model [1] (where load balancing disk accesses is not an issue). Table 1 summarizes our new results as applied to sorting, as well as the complexities of the algorithms mentioned in Section 1.1 for comparison purposes.... ..."

Cited by 12
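The Simple Randomized (SR) striping allocation named in the caption can be sketched as follows. This is an illustrative reading of the scheme (random starting disk, then round-robin placement of consecutive blocks); the function name and interface are assumptions, not taken from the paper:

```python
import random

def sr_stripe(num_blocks, num_disks, rng=random.Random(0)):
    """Sketch of Simple Randomized striping: pick a random starting
    disk for the run, then place consecutive blocks on consecutive
    disks round-robin modulo the number of disks."""
    start = rng.randrange(num_disks)
    return [(start + i) % num_disks for i in range(num_blocks)]

# Any window of num_disks consecutive blocks touches every disk
# exactly once, so sequential reads of one run are load-balanced.
placement = sr_stripe(10, 4)
```

Because only the starting disk is random, distinct runs start at independent offsets, which is what distinguishes SR from fully random (FR) allocation, where every block's disk is chosen independently.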

### Table 9: Sparse Matrix Graphs. In the leftmost column, graphs are described by file name. When the optimal coloring size is known, it is noted in parentheses in the first column. For each of RLF, Saturation, and Hybrid, the following data is listed: (i) the size of the best coloring found by each algorithm; (ii) the number of runs achieving this coloring, over the total number of runs; (iii) the average running time over all runs obtaining the best coloring. These runs were done on a single CM-5 processor, which requires much less time to do I/O than a parallel run. These tests were not done on the Hybrid, due to lack of memory (the parallel memory allocator greatly overallocates memory on the CM-5).

1993

"... In PAGE 11: ... However, adding a simple clique finder, such as Matula and Johri's dfmax, to the algorithm to find lower bounds would have enabled the algorithm to quickly prove the colorings were optimal. Graphs for Parallelizing Iterative Solutions of Sparse Linear Systems (See Table 9). The RLF algorithms optimally colored all of the graphs.... ..."

Cited by 19
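For context on what these coloring heuristics compute, here is a minimal sequential greedy coloring sketch in largest-degree-first order. This is not the RLF or Saturation heuristic from the entry above, just the simplest member of the same family; all names are illustrative:

```python
def greedy_color(adj):
    """Greedy graph coloring, visiting vertices in order of
    decreasing degree. adj maps each vertex to its neighbor set;
    returns a dict mapping each vertex to a 0-based color such
    that adjacent vertices never share a color."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:  # smallest color absent from the neighborhood
            c += 1
        color[v] = c
    return color

# A 4-cycle is 2-colorable, and greedy finds a proper 2-coloring.
square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = greedy_color(square)
```

RLF differs by building one color class at a time as a maximal independent set, which typically uses fewer colors than this simple ordering.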

### Tables I, II and III summarize the results for the cubic capacitor, bus crossing and woven bus structures, respectively, employing up to eight processors. It is observed that on one processor the grid convolution algorithm takes between 50-70% of the total CPU time for larger problems. When the computation of the convolution is significant, good speedups and parallel efficiencies are obtained, as expected. Typical results include a speedup of about 5 on 8 processors and a parallel efficiency of about 60%. The speedup on two processors is

### Table 1: A taxonomy of memory allocation algorithms discussed in this paper.

2000

"... In PAGE 3: ... Unlike Hoard, both of these allocators passively induce false sharing by allowing pieces of the same cache line to be recycled. Table 1 presents a summary of allocator algorithms, along with their speed, scalability, false sharing and blowup characteristics. Hoard is the only one that solves all four problems.... ..."

Cited by 55
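The contention-avoidance idea behind the multiple-heap allocators in this taxonomy can be sketched in a few lines: give each thread its own free list so allocations do not serialize on one shared lock. This is an illustrative sketch only, not Hoard's actual algorithm, and the class name is invented:

```python
import threading

class PerThreadPool:
    """Sketch of a pure-private-heaps allocator: each thread keeps
    its own free list of recycled blocks, so concurrent alloc/free
    pairs within a thread never touch shared state."""

    def __init__(self):
        self._local = threading.local()  # one free list per thread

    def _freelist(self):
        if not hasattr(self._local, "blocks"):
            self._local.blocks = []
        return self._local.blocks

    def alloc(self, size):
        free = self._freelist()
        # Reuse a locally freed block if available, else allocate.
        return free.pop() if free else bytearray(size)

    def free(self, block):
        self._freelist().append(block)
```

Note the weakness the snippet above alludes to: when one thread frees blocks allocated by another (a producer-consumer pattern), memory accumulates on the wrong private list, which is the unbounded "blowup" that Hoard is designed to avoid.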

### Table 7: A taxonomy of memory allocation algorithms discussed in this paper.

2000

"... In PAGE 11: ... Table 7 presents a summary of the above allocator algorithms, along with their speed, scalability, false sharing and blowup characteristics. As can be seen from the table, the algorithms closest to Hoard are Vee and Hsu, DYNIX, and LKmalloc.... ..."

Cited by 55
