### Table 2: Implementation Results of Parallel Connected Components of Images Algorithms

1994

"... In PAGE 2: ...2 ms 231 ns Table 1: Implementation Results of Parallel Histogramming Algorithms As with the histogramming algorithms, most of the previous parallel connected components algorithms are architecture- or machine-specific as well, and do not port easily to other platforms. Table 2 shows some previous running times for implementations of connected components for images on parallel machines. Again, several of these machines are special-purpose image processing machines.... ..."

Cited by 27
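As background for the entry above, the following is a minimal sequential sketch (not from the cited paper, which concerns parallel machines) of what "connected components of images" computes: labeling 4-connected regions of equal pixel value with a union-find structure. All names are ours.

```python
# Illustrative sketch: connected-component labeling of an image grid
# using union-find with path halving. Pixels are 4-connected and two
# neighbors belong to the same component iff their values are equal.

def find(parent, x):
    """Find the representative of x, halving paths as we go."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def label_components(image):
    """Return a grid where touching equal-valued pixels share a label."""
    h, w = len(image), len(image[0])
    parent = {(r, c): (r, c) for r in range(h) for c in range(w)}
    for r in range(h):
        for c in range(w):
            for dr, dc in ((0, -1), (-1, 0)):       # left and up neighbors
                nr, nc = r + dr, c + dc
                if nr >= 0 and nc >= 0 and image[nr][nc] == image[r][c]:
                    ra = find(parent, (r, c))
                    rb = find(parent, (nr, nc))
                    parent[ra] = rb                  # union the two regions
    return [[find(parent, (r, c)) for c in range(w)] for r in range(h)]

img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
labels = label_components(img)
```

The parallel implementations surveyed in the table partition the image across processors and merge labels along the partition boundaries; the sequential union step above is the part those algorithms must coordinate.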

### Table 4: Results of the Boltzmann machine implemented for the mapping problem. Parallel Implementation: The Boltzmann machine is by its nature a massively parallel algorithm that should be easy to implement on existing parallel systems. Unfortunately, even small problems result in a large and very dense unit graph. Representing each unit by one process cuts down the efficiency due to the large number of context switches on each processor. Therefore each processor runs only a single process simulating all units that are mapped onto it.

1993

"... In PAGE 24: ...between the processors. Table 4 shows the results of the Boltzmann machine applied to the mapping problem. The solution quality is slightly better than that of the genetic algorithm, although the results are far from those achieved by our simulated annealing implementation.... ..."

Cited by 3
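The entry above describes assigning many Boltzmann machine units to each processor, which then sweeps over its block sequentially instead of running one process per unit. A minimal sketch of that scheme, with hypothetical names and a standard stochastic unit-update rule (none of this is from the cited paper):

```python
import math
import random

def partition(units, n_proc):
    """Split the unit list into n_proc contiguous blocks, one per processor."""
    size = -(-len(units) // n_proc)          # ceiling division
    return [units[i:i + size] for i in range(0, len(units), size)]

def sweep(block, state, weights, temperature):
    """One sequential sweep over the units owned by a single processor.

    Each processor simulates all of its units in one process, avoiding
    the per-unit context-switch overhead the text describes.
    """
    for u in block:
        # Energy gap if unit u turns on (standard Boltzmann update rule).
        gap = sum(weights.get((u, v), 0.0) * state[v]
                  for v in state if v != u)
        p_on = 1.0 / (1.0 + math.exp(-gap / temperature))
        state[u] = 1 if random.random() < p_on else 0

units = list(range(8))
state = {u: random.randint(0, 1) for u in units}
weights = {(u, v): -1.0 for u in units for v in units if u != v}
blocks = partition(units, n_proc=4)          # 4 simulated processors
for block in blocks:                          # each block = one worker's units
    sweep(block, state, weights, temperature=2.0)
```

In a real parallel run the blocks would execute concurrently and exchange boundary states between sweeps; here they run in sequence purely to illustrate the unit-to-processor mapping.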

### Table 1. Interprocessor communication networks for a massively parallel machine (CRAY T3E-1200) with 512 processors and two Linux-based PC clusters with 32 (Zampano) and 40 (MPCB) processors.

### Table 4: Comparison of the times taken by the split radix sort and the bitonic sort (n keys, each with d bits). The constants in the theoretical times are very small for both algorithms. On the Connection Machine, the bitonic sort is implemented in microcode whereas the split radix sort is implemented in macrocode, giving the bitonic sort an edge.

1989

"... In PAGE 14: ... The split radix sort is fast in the scan model, but is it fast in practice? After all, our architectural justification claimed that the scan primitives bring the P-RAM models closer to reality. Table 4 compares implementations of the split radix sort and Batcher's bitonic sort [4] on the Connection Machine. We chose the bitonic sort for comparison because it is commonly cited as the most practical parallel sorting algorithm.... ..."

Cited by 138
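The split radix sort compared in the entry above sorts d-bit keys by repeatedly applying a stable "split" built from scan (prefix-sum) primitives. A minimal sequential sketch of the idea, assuming least-significant-bit-first order (function names are ours, not the paper's):

```python
def scan_add(xs):
    """Exclusive plus-scan (prefix sum) -- the core scan primitive."""
    out, acc = [], 0
    for x in xs:
        out.append(acc)
        acc += x
    return out

def split(keys, flags):
    """Stable split: all keys with flag 0 first, then all with flag 1.

    Two scans give each key its target position, so on a parallel
    machine every key can be moved in one permutation step.
    """
    n = len(keys)
    n_zeros = n - sum(flags)
    down = scan_add([1 - f for f in flags])   # positions among 0-flagged keys
    up = scan_add(flags)                       # positions among 1-flagged keys
    result = [None] * n
    for i, k in enumerate(keys):
        result[up[i] + n_zeros if flags[i] else down[i]] = k
    return result

def split_radix_sort(keys, d):
    """Sort d-bit keys by splitting on each bit, least significant first."""
    for bit in range(d):
        flags = [(k >> bit) & 1 for k in keys]
        keys = split(keys, flags)
    return keys
```

Because the split is stable, sorting one bit at a time from the lowest bit upward yields a fully sorted sequence after d rounds, each round costing a constant number of scans.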

### Table 1. Timings (in seconds) for the NAS CG benchmark on 3 parallel machines. (Column headers: Parallel, Number of, Baseline, Improvements, Final.)

1995

"... In PAGE 12: ... We ran the benchmark problem on two massively parallel machines: the 1024-processor nCUBE 2 at Sandia and the 128-node Intel iPSC/860 at NASA/Ames. The timing results for the benchmark calculation are shown in Table 1. Five timings are given for each machine.... ..."

Cited by 32

### Table 11. The transition system T: guarded choice, with x fresh distinct variables. Concerning predicate definitions, let p be defined by the following Flat GHC clauses:

1993

Cited by 11

### Table 3-1. Analog VLSI vs. Digital VLSI Computing Performance Comparison

1996

### Table 2: The influence of tree and node scheduling on the performance of the parallel Descartes method. All experiments were conducted on 16 processors of the Parsytec GC/PP system, a massively parallel system with PowerPC 601 processors clocked at 80 MHz and connected by a grid. All times are in seconds. In the case of the random polynomials, we show the average computation time over 50 experiments.

1999

"... In PAGE 10: ... Profile (b) shows the same polynomial in the same part of the computation, but, since the number of processors is different, the level schedules differ as well. Table 2 shows to what extent the node and tree scheduling heuristics contribute to the performance of the parallel Descartes method, for random polynomials of degree 2000 and 5000, and for Chebyshev as well as Mignotte polynomials of degree 200 and 500.... ..."

Cited by 8

### Table 1: slave structure. 5. PROBLEM AND CAUSES The parallel efficiency of this first version is very disappointing. Several causes can be found for this lack of efficiency: communications are still huge and very frequent relative to the computation load; communications themselves are slow and unstable on an Ethernet network compared to the networks interconnecting the processors of massively parallel machines; each

1996

"... In PAGE 2: ... If convergence is assumed, convergence loops are over and slaves give their results back to the master. Table 1 shows the structure of a slave.... ..."

Cited by 1