### Table 2: Comparison of Parallel Game-Tree Search Implementations

"... In PAGE 3: ... 2.2 Comparison of the Implementations. Table 2 summarizes an implementation of each algorithm given in Table 1. The first column gives the name of the algorithm and the reference to the paper that contains the details about the implementation.... In PAGE 10: ... By always selecting only one candidate move, DM-PVSplit generalizes into PV-Split. Since the PV set does not respect the structure of the minimal tree, the last two columns in Table 2 reflect this by referring to the PV set, and not Knuth and Moore's classification of minimal-tree nodes. The algorithm is designed for use on strongly ordered trees.... ..."

### Table 2 Random search and parallel genetic algorithm comparison

1998

"... In PAGE 4: ... EXTENDED PARALLEL GENETIC ALGORITHM 2 { initialize population; create several equal evolution threads; wait while termination criterion is not reached; delete all threads; } Evolution thread: forever { perform tournament selection; delete selected individual; perform crossover; replace deleted individual; perform mutation; } Figure 5: The structure of the parallel genetic algorithm with equal threads (EPGA_2). The parallel genetic algorithm was tested on several multidimensional problems. Table 2 shows the results of the optimization of a 38-dimensional approximation problem [14]. The global minimum of that problem is greater than or equal to 0 (a smaller solution value is a better one).... ..."
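The flattened pseudocode in the excerpt above can be sketched as follows. This is only a minimal illustration of the EPGA_2 structure (equal evolution threads, tournament selection, crossover-based replacement, mutation), not the paper's implementation; the real-vector encoding, one-point crossover, Gaussian mutation, and all parameter values are assumptions:

```python
import random
import threading

def evolution_thread(population, fitness, steps, lock):
    """One EPGA_2-style evolution thread (operators are illustrative)."""
    for _ in range(steps):
        with lock:
            # Tournament selection: of a random pair, the individual with
            # the worse (larger) fitness is the one deleted.
            i, j = random.sample(range(len(population)), 2)
            loser = i if fitness(population[i]) > fitness(population[j]) else j
            winner = j if loser == i else i
            # Crossover: an offspring of the tournament winner and a random
            # mate replaces the deleted individual.
            mate = population[random.randrange(len(population))]
            cut = random.randrange(1, len(mate))
            child = population[winner][:cut] + mate[cut:]
            # Mutation: perturb one randomly chosen gene.
            k = random.randrange(len(child))
            child[k] += random.gauss(0.0, 0.1)
            population[loser] = child

def epga2(fitness, dim, pop_size=20, threads=4, steps=100):
    """Main loop: initialize population, run equal threads, join them."""
    population = [[random.uniform(-1.0, 1.0) for _ in range(dim)]
                  for _ in range(pop_size)]
    lock = threading.Lock()
    workers = [threading.Thread(target=evolution_thread,
                                args=(population, fitness, steps, lock))
               for _ in range(threads)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return min(population, key=fitness)

# Smaller is better, matching the excerpt; the sphere function is a
# stand-in for the 38-dimensional approximation problem.
best = epga2(lambda x: sum(v * v for v in x), dim=3)
```

A coarse per-step lock serializes the genetic operators here; a finer-grained scheme would be needed for real parallel speed-up.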

Cited by 2

### Table 1 The parallel multidirectional search algorithm.

1991

"... In PAGE 14: ... A distributed-memory implementation. We begin with a statement of the basic algorithm, shown in Table 1. Each of the p processors constructs one vertex v_i and its function value f(v_i).... ..."
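The per-processor step the excerpt describes (each of the p processors constructing one trial vertex and its function value) might look like this sketch of a single multidirectional-search iteration. The reflection/expansion/contraction coefficients and the use of a thread pool in place of p processors are assumptions, not details from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def mds_step(f, simplex, expand=2.0, contract=0.5):
    """One multidirectional-search iteration (sketch): reflect every
    non-best vertex through the best one, evaluating the trial vertices
    in parallel; expand on success, otherwise contract."""
    best = min(simplex, key=f)
    others = [v for v in simplex if v is not best]

    def reflect(v, step):
        # step = 1.0 gives the reflection 2*best - v; step = expand
        # pushes further in the same direction.
        return [b + step * (b - x) for b, x in zip(best, v)]

    with ThreadPoolExecutor() as pool:
        reflected = list(pool.map(lambda v: reflect(v, 1.0), others))
        f_ref = list(pool.map(f, reflected))  # p evaluations in parallel
    if min(f_ref) < f(best):
        # Reflection improved on the best vertex: try the expanded simplex.
        with ThreadPoolExecutor() as pool:
            expanded = list(pool.map(lambda v: reflect(v, expand), others))
            f_exp = list(pool.map(f, expanded))
        trial = expanded if min(f_exp) < min(f_ref) else reflected
    else:
        # Reflection failed: contract every vertex toward the best one.
        trial = [[b + contract * (x - b) for b, x in zip(best, v)]
                 for v in others]
    return [best] + trial

# One step on a 2-D quadratic: the simplex moves toward the minimum.
simplex = mds_step(lambda v: sum(x * x for x in v),
                   [[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
```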

Cited by 101

### Table 4: Some parallel genetic algorithm implementations. Platform GA type Topology Researcher/year

"... In PAGE 63: ... For this reason, the scope of many parallel genetic algorithm implementations and experiments is to find out how the population should be divided among parallel processors and how possible sub-populations should interact for quick convergence. Some experiments are listed in Table 4, where it can be seen how the same platform has been used to study different topologies for centralized, distributed, and network-model parallel genetic algorithms [44]. The most popular computing platforms have been transputer-based systems, which is unsurprising given their low cost and simple interconnection scheme [125].... ..."

### Table 2: The performance of different implementations of the multilevel k-way partitioning algorithm. This table shows the performance of the MPI- and SHMEM-based parallel algorithm, of the coarse-grain parallel multilevel refinement algorithm, and of the serial algorithm on an SGI workstation. For the parallel algorithms, the performance on each graph is shown for 16-, 32-, 64-, and 128-way partitions on 16, 32, 64, and 128 processors, respectively. All times are in seconds.

1997

"... In PAGE 9: ... of Edges Description AUTO 448695 3314611 3D Finite element mesh MDUAL 258569 513132 Dual of a 3D Finite element mesh MDUAL2 988605 1947069 Dual of a 3D Finite element mesh. Table 1: Various graphs used in evaluating the parallel multilevel k-way graph partitioning algorithm. Table 2 shows the performance of various implementations of the multilevel k-way partitioning algorithm. The first two subtables show the performance of the coarse-grain and SHMEM-based parallel partitioning algorithms, respec-... In PAGE 10: ... Also, because the coarse-grain implementation is memory efficient, this increases the amount of time spent in the algorithm to set up the appropriate data structures. The third subtable in Table 2 shows the performance achieved by the coarse-grain parallel multilevel refinement algorithm. These results were obtained by using, as the initial graph distribution, the partitioning obtained by the parallel multilevel k-way partitioning algorithm.... ..."

Cited by 25



### Table 3: Comparison of the performance of different volume rendering implementations on different parallel platforms.

### Table 3. Efficiency index

1994

"... In PAGE 12: ... The closer the index EI is to unity, the better the algorithm is in terms of efficiency. Table 3 reports the values of the indices, calculated according to equation (6), on the best tests of all the algorithms... ..."

Cited by 104

### Table 1 Performance of initial parallel implementation.

1992

"... In PAGE 8: ... 4.1. Denormalized numbers. Table 1 contains the performance numbers for our initial implementation of the algorithm described above. Here P is the number of processors used, T is the total execution time (in seconds) for 10 timesteps, S is the observed speed-up over the execution time on one processor, E is the parallel efficiency, C is the maximum communication time (in seconds) observed on a single processor, and C/T is the ratio of the maximum communication time to the total execution time.... ..."
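The quantities the excerpt defines reduce to two standard one-line formulas, S = T(1)/T(P) and E = S/P. A minimal sketch, in which the timing values in the example call are invented placeholders rather than numbers from the paper's Table 1:

```python
def speedup_and_efficiency(t1, tp, p):
    """Speed-up S = T(1)/T(p) over the one-processor time, and parallel
    efficiency E = S/p, as defined in the excerpt above."""
    s = t1 / tp
    return s, s / p

# Hypothetical timings: 100 s on one processor, 7 s on 16 processors.
s, e = speedup_and_efficiency(t1=100.0, tp=7.0, p=16)
```

An efficiency near 1.0 means near-linear scaling; the excerpt's C/T ratio indicates how much of the gap from 1.0 is attributable to communication.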

Cited by 22