### Table 3: A comparison of the reconstruction time (min:sec), the triangle count, and the accuracy of the reconstructions of the pelvis and Armadillo Man models using Radial Basis Functions, Multi-Level Partition of Unity Implicits, and our method.

"... In PAGE 8: ... In the first experiment, the point sets were non-uniformly sampled from the surface of the model, and in the second experiment, the points were uniformly sampled from the surface of the model and then noise was added to the samples prior to the reconstruction. The results of our experiments are shown in Figures 9 and 10, and the complexity and accuracy of the reconstructions are described in Table 3. In both experiments, the point sets consisted of N = 100,000 samples and the surfaces were reconstructed at a band-width of b = 128. ... In PAGE 9: ... In contrast, the additive nature of our method (as described in the previous section) gives rise to a surface reconstruction that averages out the noise and returns a surface that resembles a smoothed version of the initial model and is, on average, about twice as accurate as the reconstructions of the competing methods. The local nature of the optimizations in the Radial Basis Function and Partition of Unity approaches is further highlighted by the timing results in Table 3. Since the input samples are noisy, the iterative local optimizations converge less ef- ... ..."

### Table 9: Performance data for two- and three-dimensional, unordered CCFFT on a 2048-processor CM-200.

1992

"... In PAGE 18: ... The latter uses only radix-8 kernels, which are the most efficient. Timings for two- and three-dimensional CCFFT are given in Table 9 and shown in Figure 7. The significant increase in performance for the two-dimensional CCFFT between the 1024 × 1024 array and the 2048 × 2048 array is due to one of the axes being local to a processor for the larger array. ... ..."

Cited by 1

### Table 10: Performance data for two- and three-dimensional, ordered CCFFT on a 2048-processor CM-200.

1992

"... In PAGE 19: ... This part of the axis requires a radix-2 kernel, which is less efficient than the radix-4 and radix-8 kernels normally used. For reference, performance data for ordered two- and three-dimensional transforms are given in Table 10. The execution time increases by 50–100% for our examples, considerably more than for entirely local transforms. ... ..."

Cited by 1

### Table 4. Multi-Level Threshold Results

"... In PAGE 7: ... Multi-Level Threshold Results. Threshold Level: 4, 3, 2, 1; Value (in meters): 30, 18, 8, 2 (Table 5: Multi-Level Threshold Values). Table 4 shows the number of PDUs generated and the average error in AOI and SR when our multi-level threshold dead reckoning algorithm is used. The threshold values used at different levels are listed in Table 5. ... In PAGE 7: ... The threshold values used at different levels are listed in Table 5. It can be seen from Table 4 that there is a great reduction in the average error in SR, compared to the average error in AOI. In our algorithm, if entity A is in entity B's SR, a minimum threshold will be used in the dead reckoning so that B will receive A's update packets most frequently. ... ..."
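The excerpt describes a dead-reckoning scheme where the update threshold tightens as an entity becomes more relevant to an observer (tightest inside the sensor range, SR). A minimal sketch under stated assumptions — the threshold values follow Table 5 of the excerpt, but the function names, the level assignment for AOI-only entities, and the error metric are illustrative guesses, not the paper's definitions:

```python
# Hypothetical sketch of a multi-level threshold dead-reckoning check.
# Threshold levels/values follow Table 5 (levels 4..1 -> 30, 18, 8, 2 m);
# everything else (names, level assignment, error metric) is assumed.

THRESHOLDS = {4: 30.0, 3: 18.0, 2: 8.0, 1: 2.0}  # meters, per Table 5

def select_level(in_sensor_range: bool, in_aoi: bool) -> int:
    """Pick a threshold level: entities inside the observer's SR get the
    minimum threshold so their updates are sent most frequently."""
    if in_sensor_range:
        return 1          # minimum threshold -> most frequent PDUs
    if in_aoi:
        return 3          # assumed intermediate level for AOI-only
    return 4              # distant entities tolerate the largest error

def needs_update(predicted, actual, level: int) -> bool:
    """Emit a PDU only when the dead-reckoning error exceeds the threshold."""
    error = sum((p - a) ** 2 for p, a in zip(predicted, actual)) ** 0.5
    return error > THRESHOLDS[level]
```

With this sketch, a 3 m prediction error triggers an update at level 1 (threshold 2 m) but not at level 2 (threshold 8 m), which is the mechanism behind the reduced average error in SR that the excerpt reports.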

### Table 2. Statistics of the multi-level preconditioner

"... In PAGE 5: ... In the table, the 5th and 6th columns indicate the total number of Newton iterations and Krylov iterations used in the Newton loop and by the GMRES solver, respectively, before simulation convergence is reached. The performance of the proposed multi-level preconditioner is summarized in Table 2 on the same set of designs, where the total number of Krylov iterations corresponds to that used by the top-level FGMRES solver. Different from the previous experiments, we have adopted a multi-level structure where the largest sub-problem size on the next level is approximately one fourth of that on the current level. ... ..."
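The quoted sizing rule — each coarser level caps its largest sub-problem at roughly one fourth of the current level's — implies a geometric level schedule. A minimal sketch, assuming a simple integer-division schedule and an invented `min_size` cutoff (the paper's actual stopping criterion is not given in the excerpt):

```python
# Sketch of the level-size schedule implied by the excerpt: each coarser
# level's largest sub-problem is ~1/4 of the current level's size.
# The min_size cutoff is an assumption for illustration only.
def level_sizes(n0: int, min_size: int = 64) -> list[int]:
    sizes = [n0]
    while sizes[-1] // 4 >= min_size:   # stop once levels get too small
        sizes.append(sizes[-1] // 4)
    return sizes
```

For a top-level problem of size 1024 with the assumed cutoff, this yields levels of size 1024, 256, and 64 — the kind of rapidly shrinking hierarchy that keeps the coarse solves cheap relative to the top-level FGMRES iterations.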

### Table 1: An example of a three-dimensional table.

1993

"... In PAGE 3: ... Table 1 has three categories, D1, D2 and D3; thus, it is a three-dimensional table. The logical relationship among the data items of a table is the association between labels and entries. ... In PAGE 3: ... Each entry is associated with one or more sets of labels of different categories simultaneously. For example, in Table 1, entry e1 is associated with a set of labels {d11, d21, d311} simultaneously; entry e7 is associated with both {d12, d21, d312} and {d12, d22, d312} simultaneously. The data items and the logical relationship among them provide the logical structure of the table, which is the primary information that a table conveys and which is independent of its format. ... In PAGE 6: ... This function guarantees that every entry in E is mapped from at least one {f1, ..., fn} ∈ D1 × ... × Dn. Using this model, Table 1 can be abstracted by (3, {D1, D2, D3}, E, δ), where

D1 = {d11, d12}; D2 = {d21, d22, d23}; D3 = {d31, d32}; d31 = {d311, d312};
d11 = d12 = d21 = d22 = d23 = d32 = d311 = d312 = {};
E = {e1, e2, e3, e4, e5, e6, e7, e8, e9};

δ({D1.d11, D2.d21, D3.d31.d311}) = e1; δ({D1.d11, D2.d21, D3.d31.d312}) = e2;
δ({D1.d11, D2.d22, D3.d31.d311}) = e3; δ({D1.d11, D2.d22, D3.d31.d312}) = e3;
δ({D1.d11, D2.d23, D3.d31.d311}) = e4;
δ({D1.d11, D2.d21, D3.d32}) = e5; δ({D1.d11, D2.d22, D3.d32}) = e5; δ({D1.d11, D2.d23, D3.d32}) = e5;
δ({D1.d12, D2.d21, D3.d31.d311}) = e6;
δ({D1.d12, D2.d21, D3.d31.d312}) = e7; δ({D1.d12, D2.d22, D3.d31.d312}) = e7;
δ({D1.d12, D2.d23, D3.d31.d312}) = e8;
δ({D1.d12, D2.d21, D3.d32}) = e9; δ({D1.d12, D2.d22, D3.d32}) = e9; δ({D1.d12, D2.d23, D3.d32}) = e9.

4.2 Basic operators in the tabular model. We first describe the syntax of all basic operators in function form by giving the operator identifiers and the types of their operands and results. ... ..."
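The core of the abstracted table is a function from label sets to entries, where one entry may be reached by several label sets (e.g. e7 above). A minimal sketch of that mapping as a dictionary keyed on frozensets — the dotted label strings and the `lookup` helper are illustrative encodings, not the paper's notation:

```python
# Minimal sketch of the abstract table model from the excerpt: entries are
# addressed by sets of labels, one per category. This dict encoding and the
# dotted sub-label strings (e.g. "d31.d311") are assumptions.

# A fragment of the mapping for Table 1 (entries e1 and e7 only)
delta = {
    frozenset({"d11", "d21", "d31.d311"}): "e1",
    frozenset({"d12", "d21", "d31.d312"}): "e7",
    frozenset({"d12", "d22", "d31.d312"}): "e7",  # e7 has two label sets
}

def lookup(labels):
    """Resolve a set of labels (one per category) to its entry, if any."""
    return delta.get(frozenset(labels))
```

Using frozensets makes the lookup order-independent, matching the model's treatment of a label set as an unordered collection with one label drawn from each category.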

Cited by 2

### Table 4: The performance of the single-level and multi-level multi-constraint bisection algorithms.

1998

"... In PAGE 6: ... Comparison of Multilevel vs Single-level Partitioners. In our third set of experiments, we compare the performance of the FM2-based single-level multi-constraint partitioning algorithm against the multilevel multi-constraint partitioning algorithm that uses BFC for coarsening and FM2 for refinement. Table 4 shows a variety of statistics for the two partitioning algorithms. For each circuit we performed 50 different runs of the single-level partitioning algorithm and 10 different runs of the multilevel partitioning algorithm. ... ..."

Cited by 117


### Table 1: The number of iterations of the parallel domain decomposition algorithm required to solve a typical three-dimensional convection-diffusion problem in [12].

"... In PAGE 5: ... Furthermore, it is applied to a class of convection-diffusion equations in three dimensions that is not covered by the underlying theory in [3]. Nevertheless, it proves to be surprisingly robust, as illustrated by the iteration counts shown in Table 1, which are typical of the results in [12]. Furthermore, very creditable parallel performances are recorded, including parallel speed-ups in excess of 12 when using locally refined ... In PAGE 5: ... Table 1: The number of iterations of the parallel domain decomposition algorithm required to solve a typical three-dimensional convection-diffusion problem in [12]. The iteration counts shown in Table 1 illustrate that the number of iterations of the parallel solver required to obtain a converged solution is essentially independent of the level of the finest mesh and the number of subdomains used. Hence, provided the sequential solver used on each processor (at step 4 of the algorithm in Figure 4) has a computational cost of O(N), the total cost of the parallel algorithm will also be approximately proportional to N. ... ..."