### Table 2: Elapsed CPU time in seconds for the well-conditioned problems.

1996

"... In PAGE 21: ... Sensitivity to the termination tolerance is minimal unless the parameter is very large, since the algorithm always terminates with a Step 8 "jump" to a solution that is accurate to at least 10 digits of precision. Table 2 gives run times for the well-conditioned test problems. The solutions obtained by all our methods agreed to seven significant digits. ... In PAGE 21: ... In Table 2, "Memory" means that a given problem would not fit in physical memory, and "Time" indicates that the specified code could not solve the problem within 14,400 seconds (4 hours). The results are also plotted graphically in Figures 1 through 5. ... In PAGE 21: ... Table 3 gives run times for the more ill-conditioned problems. The format is the same as for Table 2, except that there are some "Failure" entries in the CONOPT column. These entries indicate that, once the problems were converted to quadratic programs, CONOPT rejected them as having no feasible solution. ..."

Cited by 5

### TABLE 6 Well-Conditioned Groups and Valid Transformed Views

### Table 1: c1 = c2 = 1, stopping tolerance = 3.0 × 10⁻⁴. ... iterates the algorithm took (our results merely show that the theoretical estimates are sharp), but rather the efficiency of the algorithm in terms of computation time. We have timed the experiments in Table 1, and we found that, for these small "well-conditioned" problems, execution time is smallest when we choose not to precondition. This is due ...

"... In PAGE 24: ... Case 5.1.1: c1 = c2 = 1. Here, as there are no jumps in a, the condition number of S varies only with the number of unknowns in the problem. In Table 1, consider first the results without preconditioning. ..."
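The trade-off this excerpt describes — preconditioning can reduce iteration counts, yet unpreconditioned CG may still win on wall-clock time for small well-conditioned problems because each preconditioned step costs more — can be sketched numerically. The following is our own illustrative reconstruction, not the paper's code: the test matrix, the Jacobi (diagonal) preconditioner, and all names are assumptions made for the sketch.

```python
import numpy as np

def cg(A, b, M_inv=None, tol=1e-8, maxit=1000):
    """Conjugate gradients; M_inv, if given, applies the preconditioner inverse."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r) if M_inv else r
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        z = M_inv(r) if M_inv else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# SPD tridiagonal test matrix whose diagonal varies over four orders of
# magnitude, loosely mimicking "jumps" in the coefficient a.
n = 200
d = np.logspace(0, 4, n)
A = np.diag(d) + 0.3 * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.ones(n)

x_plain, it_plain = cg(A, b)                     # no preconditioner
x_jac, it_jac = cg(A, b, M_inv=lambda r: r / d)  # Jacobi (diagonal) scaling
```

Jacobi scaling here cuts the iteration count sharply, but each iteration performs an extra vector operation; on a problem this small, either variant finishes in milliseconds, which is the regime in which the excerpt reports plain CG being fastest overall.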

### Table 2 also shows how many triangles NAR are generated in the AR-improving procedure after triangulation, where the number N of triangles is 222. We have also performed the triangulation in fully rounded interval arithmetic (RIA); Table 3 shows the result. As far as time cost is concerned, RIA is one order of magnitude more expensive than FPA. However, we need to keep in mind that crucial computations, such as the intersection test between the edges of an approximately developed surface net, point mapping between the triangulation domain and the parametric space, the Delaunay test, and the self-intersection check of the approximating triangles, may result in system failure if the algorithm operates purely in FPA. The global behavior of the other realistic examples shown in Figure 9 is consistent with that of our first example, where we set the approximation tolerance to 10⁻³ and the AR-threshold to 4. Figure 9-(a) shows the result of meshing (Ntotal = 7832) for a surface of revolution, a typical example of rational B-spline surfaces. We have also applied the method to a composite integral B-spline surface representing a part of a ship hull with a bulbous bow (see Figure 9-(b)), where Ntotal = 2580. The next example is another composite integral B-spline surface that represents an airfoil; Figure 9-(c) illustrates the performance (Ntotal = 3310). In these examples, we can verify the well-conditioned meshing as well as the local adaptivity near high-curvature regions.

"... In PAGE 10: ... Table 2: Trimmed bi-cubic NURBS surface patch; AR-improving triangulation, tolerance = 10⁻², N = 222 ... ble and dominant with respect to Cinit. Also in Table 1, CAR, Cfin and Ctotal show the time cost of the AR-improving procedure, the intersection test/final triangulation, and the whole process, respectively. ... In PAGE 10: ... It is worthwhile to mention that the satisfactory average aspect ratio ARavg results from the coupled effects of the features of the Delaunay-based point-insertion algorithm; the AR-improving procedure in the planar triangulation domain; and the preservation of the triangles' shape during the mapping from the triangulation domain into three-dimensional space. Table 2 shows the dependency of the AR-improving procedure upon the AR-threshold for a fixed approximation tolerance of 10⁻². In this example, 3% of Ntotal have AR greater than the threshold value 3.5, due to slight deformation occurring in the process of the approximate locally isometric mapping. ..."
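The excerpt's point — rounded interval arithmetic (RIA) costs roughly an order of magnitude more than plain floating point (FPA), but protects predicates such as intersection and Delaunay tests from silent failure — can be illustrated with a minimal interval type. This is a sketch under our own assumptions (Python, outward rounding via `math.nextafter`, an exact rational fallback), not the paper's implementation.

```python
import math
from fractions import Fraction

class Interval:
    """Enclosure [lo, hi] of a real value; endpoints are pushed outward
    after every operation, so the true result is never lost to rounding."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    @staticmethod
    def _widen(lo, hi):
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, o):
        return Interval._widen(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval._widen(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval._widen(min(p), max(p))

    def sign(self):
        if self.lo > 0:
            return 1
        if self.hi < 0:
            return -1
        return None  # enclosure straddles zero: verdict uncertain

def cross(a, b, c, num):
    """2-D cross product (b - a) x (c - a) in a chosen number type."""
    ax, ay = num(a[0]), num(a[1])
    bx, by = num(b[0]), num(b[1])
    cx, cy = num(c[0]), num(c[1])
    return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay)

def orient(a, b, c):
    """Certified orientation predicate: interval arithmetic first,
    exact rationals only when the interval cannot decide the sign."""
    s = cross(a, b, c, Interval).sign()
    if s is not None:
        return s
    v = cross(a, b, c, Fraction)  # slow but exact fallback
    return (v > 0) - (v < 0)
```

The fast interval path answers the vast majority of queries, and the expensive exact path fires only on (near-)degenerate inputs — one common way to keep RIA-style reliability while paying its higher cost only where FPA alone could misclassify a predicate.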

### Table 4.2: Comparison of EBE and GEN EBE on a well-conditioned problem (number of iterations / time to converge)

### Table 4.8: Comparison of EBE and GEN EBE on a well-conditioned problem (number of iterations / time to converge)

### Table 2: The algorithm COPRIME. ... associated vectors given in Theorem A.1 below, it becomes clear that our "look-ahead" strategy allows us to encounter only subproblems of type (9) with a corresponding well-conditioned matrix of coefficients Sk(a, b). In contrast, in the classical Euclidean algorithm there is no freedom in choosing a step size s, since we only encounter "small" triangular systems. In other words, we just take the first existing UR, though the corresponding quantity |det U(0)| might be very small. Thus it might happen that some of the unimodular reductions of the Euclidean algorithm are ill-conditioned problems. This is the fundamental problem of using the Euclidean algorithm in a numerical setting: solutions are sometimes built upon solutions of ill-conditioned subproblems, making the final answers highly inaccurate. Our observation may be nicely illustrated with the help of the polynomials a, b of Example 2.1. Here ...

"... In PAGE 8: ... We also introduce a set A ⊆ {0, 1, 2, ..., m} of indices of scaled UR accepted by our criterion. The order of computation is schematically described in Table 2. For further details and proofs we refer to [1, 7]. ... In PAGE 14: ... In other words, the ideal <a(k), b(k)> provides as much information with regard to coprimeness as the original one if |det U(k)(0)| is not too small. □ 5 Numerical Experiments: The algorithm COPRIME of Table 2 was implemented in Matlab, and experiments were run in order to verify the predicted behavior. In this section we report the results of some of these experiments. ..."
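The role of the "matrix of coefficients" Sk(a, b) in the excerpt can be illustrated with the classical Sylvester matrix: two polynomials are coprime exactly when their Sylvester matrix is nonsingular, and how well-conditioned that matrix is governs how reliably floating-point arithmetic can certify coprimeness. The construction below is our own generic sketch of that idea, not the COPRIME code.

```python
import numpy as np

def sylvester(a, b):
    """Sylvester matrix of polynomials a and b, coefficients listed from
    the highest degree down; it is nonsingular iff a and b are coprime."""
    m, n = len(a) - 1, len(b) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                # n shifted copies of a's coefficients
        S[i, i:i + m + 1] = a
    for i in range(m):                # m shifted copies of b's coefficients
        S[n + i, i:i + n + 1] = b
    return S

S_coprime = sylvester([1, 0, -2], [1, 1])   # x^2 - 2 and x + 1: coprime
S_common = sylvester([1, 0, -1], [1, -1])   # x^2 - 1 and x - 1: share the root x = 1
```

The determinant of the first matrix is the (nonzero) resultant, while the second matrix is singular. In a numerical setting it is the conditioning of such coefficient systems — not merely the size of a determinant — that decides whether the verdict can be trusted, which is exactly the weakness of the classical Euclidean algorithm the excerpt describes.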

### Tables 1–4 show the numbers of iterations required for convergence for different choices of preconditioners. In the tables, I denotes no preconditioner, S is the Strang preconditioner [21], Km,2r are the preconditioners from the generalized Jackson kernel Km,2r defined in (2), and T = Km,2 is the T. Chan preconditioner [11]. Iteration numbers greater than 3,000 are denoted by "†". We note that S in general is not positive definite, as the Dirichlet kernel Dn is not positive; see [9]. When some of its eigenvalues are negative, we denote the iteration number by "–", as the PCG method does not apply to indefinite systems and the solution thus obtained may be inaccurate. The first two test functions in Table 1 are positive functions and therefore correspond to well-conditioned systems. Notice that the iteration number for the non-preconditioned systems tends to a constant when n is large, indicating that the convergence is linear. In this case, we see that all preconditioners work well and the convergence is fast; see Theorem 4.4 and [9].
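The spectral clustering behind the fast PCG convergence reported in this excerpt can be sketched numerically: Strang's circulant preconditioner wraps the central diagonals of a symmetric Toeplitz matrix into a circulant, and for a positive generating sequence the preconditioned spectrum clusters at 1. The matrix size and the geometric generating sequence below are our own illustrative choices, not the paper's test functions.

```python
import numpy as np

n = 64
t = 0.5 ** np.arange(n)                      # decaying Toeplitz coefficients
idx = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
T = t[idx]                                   # symmetric positive definite Toeplitz

# Strang-style preconditioner: wrap the central diagonals into a circulant.
c = t.copy()
for j in range(n // 2 + 1, n):
    c[j] = t[n - j]
C = c[(np.arange(n)[:, None] - np.arange(n)[None, :]) % n]

# Spectra of the plain and preconditioned operators.
plain_eigs = np.linalg.eigvalsh(T)
pre_eigs = np.sort(np.linalg.eigvals(np.linalg.solve(C, T)).real)
```

The eigenvalues of T spread over the range of its generating function, while most eigenvalues of C⁻¹T fall within a few percent of 1 (a handful of outliers come from the wrapped corners). Since circulant systems are solvable in O(n log n) via the FFT, each PCG step stays cheap while the clustered spectrum keeps the iteration count small.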

1989

Cited by 27