### Table 1: Convergence rates for several β = (β1, β2)^T for uniform refinement (upper) and adaptive refinement (lower).

### Table 1: Sequence and structure statistics for the HP model for N = 16 and 18.

"... In PAGE 7: ... Hence such systems have been extensively used for gauging algorithm performances. In Table1 properties for N = 16 and 18 systems are listed [18, 19]. A structure is designable if there exists a sequence for which it represents... In PAGE 9: ...Table1 ) are subject to design by minimizing E(r0; ) for all NH. If the resulting minima are non-degenerate for xed NH, the sequences are kept as candidates for good sequences, otherwise they are discarded.... In PAGE 20: ....91 0.01 Table 6: Design results for six N = 20 o -lattice target structures. The corresponding sequences are those from Table1 in [11] but here ordered according to decreasing h 2i. Same notation as in Table 5.... ..."

### Table 1: Load Balancing Statistics

1992

"... In PAGE 16: ... With hot spots the variation is much greater, indicating the nice e ect load balancing has for smoothing the variation and reducing the gradient. Finally Table1 shows the calculated average number of moves made by a node in the entire system, with and without hot spots and with and without load balancing, and the normalized variation of the capacity at each processor from the mean. The table shows that the load balancing reduces the coe cient of variation at the cost of a very small increase in the average moves in the system, indicating that load balancing is e ective with low overhead.... ..."

Cited by 4

### Table 6.5: Settings for Site Observation Case. Flight: Long Beach, WA - San Diego, CA Launch Time (UTC): November 4, 2003 (00:00)

in Abstract Long Range Evolution-based Path Planning for UAVs through Realistic Weather Environments

2004

### Table 3 A comparison of the performance of the two algorithms on an SP2 parallel computer using three communication mechanisms. The table compares the running time of a standard parallel FFT with the running time of the new approximate DFT algorithm. Running times are in seconds. The three communication mechanisms that were used are user-space communication over the high-performance switch (US-HPS), internet protocol over the high-performance switch (IP-HPS), and internet protocol over Ethernet (IP-EN). The last two rows give the minimum and maximum ratios of the timings reported in the table to what we would expect from the sum of (16)-(18) for TC or (19)-(21) for TN.

1999

"... In PAGE 16: ... The high-performance switch allows additional processors to decrease the absolute running times of both algorithms. Table3 also shows that the conventional algorithm is more sensitive to degra- dation in communication bandwidth. For example, on an FFT of 1,048,576 points on 4 processors, the running time of the conventional algorithm increased by 0:932 seconds when we switched from user-space to IP communication over the HPS, but the running time of the new algorithm increased by only 0:423 seconds.... ..."

Cited by 6
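The excerpt's bandwidth-sensitivity observation can be illustrated with a back-of-the-envelope cost model. The bandwidths, the 4x communication-volume ratio, and the one-second compute term below are hypothetical stand-ins, not the paper's Equations (16)-(21):

```python
# Simplified cost model: time = compute + bytes_sent / bandwidth.
# The algorithm that communicates more data is hurt more when the
# network slows down, matching the US-HPS vs IP-HPS comparison above.
def run_time(compute_s: float, comm_bytes: float, bandwidth_bps: float) -> float:
    return compute_s + comm_bytes / bandwidth_bps

n = 1_048_576                        # FFT size quoted in the excerpt
word = 16                            # bytes per double-precision complex value
full_exchange = n * word             # conventional FFT: all data crosses the network
approx_exchange = full_exchange / 4  # hypothetical: approximate DFT sends a quarter

fast, slow = 4e7, 1e7                # hypothetical US-HPS vs IP-HPS bandwidths (B/s)

penalty_fft = run_time(1.0, full_exchange, slow) - run_time(1.0, full_exchange, fast)
penalty_dft = run_time(1.0, approx_exchange, slow) - run_time(1.0, approx_exchange, fast)
print(penalty_fft, penalty_dft)      # the conventional FFT degrades more
```

Under this model the slowdown penalty is proportional to communication volume, so the conventional algorithm's penalty is exactly the volume ratio times the approximate algorithm's — the qualitative effect the excerpt reports.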


### Table 2: Number of structures that get designed by the different approaches for N = 16 and 18; E(r0; )-minimization with fixed NH = N/2 and with scanning through all NH, respectively, the nested MC approach of [7] (NMC), and the multisequence method (MS). Also shown is the computational demand for N = 18 (DEC Alpha 200).

"... In PAGE 9: ... The cost of doing this is for long chains much larger than that of the energy minimization itself. In Table2 the performance of the E(r0; )-minimization methods for N = 16 and 18 is compared with other approaches with respect to design ability and CPU consumption. As can be seen, the multisequence method with its 100% performance, is indeed very fast.... In PAGE 10: ... Indeed, it turns out that this structure has 296 crossing sequences. With these crossing phenomena, it is not surprising that the high-T expansion frequently fails as can be seen from the summary in Table2 , from which it is also clear that the performance deteriorates when increasing N from 16 to 18. MC methods have the advantage that the design temperature can be taken low enough to avoid crossing problems, without introducing any systematic bias.... ..."

### Table 1: 1-3 4-6 7-9 10-12 13-15 16-18 19-21 22-24

1996

"... In PAGE 4: ... JPEG is sensitive to these factors. Table1 below shows the results of a byte by byte comparison of the original image files and the JPEG processed versions, normalized to 100,000 bytes for each image. Here we see that the seagull picture has fewer than half as many errors in the most significant bits (MSB) as the glasses picture.... In PAGE 5: ... Again, the seagull picture has fewer errors. Given the information in Table1 , it is apparent that data embedded in any or all of the lower 5 bits would be corrupted beyond recognition. Attempts to embed data in these bits and recover it after JPEG processing showed that the recovered data was completely garbled by JPEG.... ..."

Cited by 4
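The excerpt's conclusion — payload bits placed in low-order bit planes do not survive lossy processing — can be demonstrated with a minimal LSB embed/extract pair. The byte values and the flip-every-LSB "noise" standing in for JPEG's lossy rounding are illustrative assumptions:

```python
# Minimal LSB-embedding sketch: hide bits in the k lowest bits of each
# cover byte, then show that any noise touching those bits destroys the
# payload, which is why lossy JPEG processing garbles embedded data.
def embed(cover: bytes, payload_bits: list, k: int = 1) -> bytes:
    """Overwrite the k low bits of each cover byte with payload bits."""
    out, bits = [], iter(payload_bits)
    for byte in cover:
        chunk = 0
        for _ in range(k):                     # pack k payload bits per byte
            chunk = (chunk << 1) | next(bits, 0)
        out.append((byte & ~((1 << k) - 1)) | chunk)
    return bytes(out)

def extract(stego: bytes, k: int = 1) -> list:
    """Read the k low bits back out of each byte, MSB of the chunk first."""
    bits = []
    for byte in stego:
        for shift in range(k - 1, -1, -1):
            bits.append((byte >> shift) & 1)
    return bits

cover = bytes([200, 13, 77, 254])
payload = [1, 0, 1, 1]
stego = embed(cover, payload)
assert extract(stego)[:4] == payload           # survives a lossless channel
noisy = bytes(b ^ 1 for b in stego)            # flip every LSB (toy "JPEG")
assert extract(noisy)[:4] != payload           # payload destroyed
```

The same fragility holds a fortiori for the lower 5 bits the excerpt tested: any processing that perturbs those bit planes erases the hidden data while leaving the visible image nearly unchanged.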
