### Table 1. Calculated error for the bilinear tensor product response surface fit. Average of three trials for each value of m.

1994

"... In PAGE 9: ... Note that the results for m = 4, 6, and 9 points represent the minimum number of points needed to construct the bilinear, quadratic, and biquadratic response surfaces, respectively. The errors associated with the bilinear tensor product response surfaces are given in Table 1 and show that the error for the D-optimal points decreased only slightly as the number of points ranged from four to twenty. This was to be expected, since the D-optimality criterion specified points on the perimeter of the design space when bilinear response surface functions were used to form X in equation (7). ... ..."

Cited by 8
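The bilinear tensor product surface in the excerpt has exactly four coefficients, which is why m = 4 points is the minimum needed for a fit. A minimal least-squares sketch (the surface, sample points, and function names below are illustrative, not the paper's data or code):

```python
import numpy as np

# A bilinear tensor product response surface in two variables,
# f(x, y) = c0 + c1*x + c2*y + c3*x*y, fitted by least squares.
def bilinear_design(x, y):
    # One row per sample point; columns are the tensor product basis
    # {1, x} x {1, y} = {1, x, y, x*y} (the role of X in equation (7)).
    return np.column_stack([np.ones_like(x), x, y, x * y])

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 20)
y = rng.uniform(-1.0, 1.0, 20)
t = 2.0 + 0.5 * x - 1.5 * y + 3.0 * x * y   # a true bilinear response
coeffs, *_ = np.linalg.lstsq(bilinear_design(x, y), t, rcond=None)
print(coeffs)
```

Because the sampled response is itself bilinear, the four coefficients are recovered exactly regardless of how many points beyond four are used, consistent with the small error changes reported in the excerpt.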

### Table 2. Results for the tensor product Haar basis for full and sparse grids

"... In PAGE 25: ... We start with the piecewise constant case. For the full grid spaces V_L and for the sparse grid spaces ~V_L, we obtain the results given in Table 2. The computations are performed by using the diagonally preconditioned discretization matrices B_L = B_{J_L} and ^B_L = ^B_{^J_L}, respectively, as introduced at the end of subsection 2. ... In PAGE 26: ... ε > 0 can be taken arbitrarily small (see, e.g., [28, section 7]). For the sparse grid spaces, a formal extrapolation of (37) in conjunction with Proposition 2 and (49) would only give a rate of O(2^(−L(1/2−ε))). Even though slightly better estimates can be proved due to the specific type of singularity functions involved (as should be expected from comparing with the numerical evidence given in Table 2), there is no hope of obtaining asymptotically the same approximation rate as for full grid spaces. More importantly, the ultimate goal should be adaptive methods, since the presence of edge-corner singularities in the solution f of (47) leads to results that are far from the theoretical optimum. ... In PAGE 26: ... See [40] for approximation schemes using graded tensor-product meshes towards the edges that restore these rates asymptotically for the true, low-regularity solution in (2). Furthermore, we see from Table 2 that the use of tensor product Haar functions results in still slightly growing condition numbers for the diagonally preconditioned stiffness matrices and for the cg- ..."
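The cost gap between full and sparse grids that the excerpt alludes to can be made concrete by standard hierarchical-increment counting (the function names are ours, not the paper's; this is a sketch under the usual convention that the increment space at multilevel l = (l1, ..., ld) contributes 2^(l1+...+ld) Haar basis functions):

```python
from itertools import product

# Full grid V_L keeps every increment with max(l) <= L;
# sparse grid ~V_L keeps only increments with l1 + ... + ld <= L.
def full_grid_size(L, d=2):
    return sum(2 ** sum(l) for l in product(range(L + 1), repeat=d))

def sparse_grid_size(L, d=2):
    return sum(2 ** sum(l)
               for l in product(range(L + 1), repeat=d)
               if sum(l) <= L)

for L in range(1, 6):
    # full grid grows like O(4^L) in 2-D, sparse grid like O(L * 2^L)
    print(L, full_grid_size(L), sparse_grid_size(L))
```

This is the trade-off behind the excerpt's point: the sparse grid is exponentially smaller, but pays for it with a slightly worse approximation rate for singular solutions.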

### Table I. Regression techniques for families of aggregation operators.

| Operator | Approximation method |
| --- | --- |
| General aggregation operator | Monotone tensor product splines |
| Commutative | Explicit: tensor product spline on the simplex. Implicit: symmetrize the data |

2003

Cited by 4
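The table's "implicit" route to a commutative operator can be sketched in a few lines: symmetrize the training data by adding every permutation of each input tuple with the same target, so that any regressor fitted to the augmented data is (approximately) symmetric in its arguments. The function name and data are illustrative, not the paper's:

```python
from itertools import permutations

def symmetrize(data):
    # data: list of (input_tuple, target) pairs.
    # Emit one copy of each pair per permutation of its inputs.
    out = []
    for xs, y in data:
        for perm in permutations(xs):
            out.append((list(perm), y))
    return out

augmented = symmetrize([([0.2, 0.8], 0.5)])
print(augmented)
```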

### Table 1. Performance of direct sum kernel and tensor product ker- nel in robust KEV adaptation. Results are word accuracies.

"... In PAGE 3: ... Experiment I: Direct Sum Kernel vs. Tensor Product Kernel. We first compare the two types of composite kernels, the direct sum kernel and the tensor product kernel, using robust KEV adaptation. The results are shown in Table 1. There is no significant difference between their performance. ... ..."
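A hedged sketch of the two composite-kernel constructions being compared: given base kernels K1 and K2 on two feature streams, the direct sum kernel is K1 + K2 and the tensor product kernel is the elementwise product K1 * K2. The Gaussian base kernel, data, and names here are illustrative, not the paper's setup:

```python
import numpy as np

def rbf_gram(a, b, gamma=1.0):
    # Gram matrix of a Gaussian RBF base kernel.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def direct_sum_kernel(a1, b1, a2, b2):
    return rbf_gram(a1, b1) + rbf_gram(a2, b2)

def tensor_product_kernel(a1, b1, a2, b2):
    # positive definite by the Schur product theorem
    return rbf_gram(a1, b1) * rbf_gram(a2, b2)

x1 = np.random.default_rng(1).normal(size=(5, 3))   # stream 1 features
x2 = np.random.default_rng(2).normal(size=(5, 4))   # stream 2 features
K = tensor_product_kernel(x1, x1, x2, x2)
print(np.linalg.eigvalsh(K).min() > -1e-10)         # PSD check
```

Both combinations of positive-definite kernels are again positive definite (closure under sums; Schur product theorem for the elementwise product), so either can be dropped into a kernel method unchanged.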

### Table 1: Some possible kernel functions and the type of decision surface they define. The last two kernels are one-dimensional: multidimensional kernels can be built by tensor products of one-dimensional ones. The functions Bn are piecewise polynomials of degree n, whose exact definition can be found in (Schumaker, 1981)

1998

Cited by 158

### Table 6: Number of I/O passes for the tensor product I_R ⊗ A_V ⊗ I_C with various data distributions. D = 16, B_d = 512, M = 2^22, and N = RVC.

1996

"... In PAGE 27: ... We now show that by using an appropriate cyclic(B) data distribution, a better-performing program can be synthesized for most of the cases. Several typical examples are shown in Table 6. We notice that when we increase B, we can reduce the number of passes of data access for most of the cases, and the decrease in the number of passes can be as large as a factor of eight. ..."

Cited by 7

### Table 1, i.e. k = PBc + Bp + b and l = Bc + b. The distribution basis for a multi-dimensional array can be expressed as a tensor product of the distribution bases for each dimension.

1994

"... In PAGE 5: ... Table 1: Index mapping functions for regular data distributions.

| | BLOCK | CYCLIC | CYCLIC(b) |
| --- | --- | --- | --- |
| local to global | k = p⌈N/P⌉ + l | k = lP + p | k = (l div b)bP + bp + (l mod b) |
| global to local | l = k mod ⌈N/P⌉ | l = k div P | l = (k div Pb)b + (k mod b) |
| global to proc | p = k div ⌈N/P⌉ | p = k mod P | p = (k div b) mod P |

Here k is the global index, 0 ≤ k ≤ N − 1; l is the local index, 0 ≤ l < b⌈N/(Pb)⌉; and p is the processor index, 0 ≤ p < P. ... In PAGE 5: ... Techniques developed in [11] can be used for the array redistribution in the general case. For identity alignments, the relationships between the global index, the local index, and the processor index for regular data distributions of a one-dimensional array are shown in Table 1. The indexing for arrays A and A_loc begins at zero and the processors are numbered from 0 to P − 1. ... In PAGE 8: ... For example, under a BLOCK distribution the array is partitioned into segments of size ⌈N/P⌉. The relationship between the global index k, the processor index p, and the local index l shown in Table 1 can be represented by the equality e^N_k = e^P_p ⊗ e^{⌈N/P⌉}_l, where p = k div ⌈N/P⌉ and l = k mod ⌈N/P⌉. In this identity, the index of the vector basis e^P_p is associated with the processor index on which element A(k) is located after being distributed using a BLOCK distribution. ..."

Cited by 8
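The index mapping functions quoted from the paper's Table 1 translate directly into code. A sketch for identity alignment of N elements over P processors (function names are ours; each map sends a global index k to its (processor, local) pair):

```python
import math

def block(k, N, P):
    # BLOCK: contiguous segments of size ceil(N/P) per processor.
    c = math.ceil(N / P)
    return k // c, k % c            # p = k div c, l = k mod c

def cyclic(k, N, P):
    # CYCLIC: elements dealt out round-robin one at a time.
    return k % P, k // P            # p = k mod P, l = k div P

def cyclic_b(k, N, P, b):
    # CYCLIC(b): round-robin in blocks of size b.
    return (k // b) % P, (k // (P * b)) * b + k % b

def cyclic_b_inverse(l, p, P, b):
    # local-to-global: k = (l div b)*bP + bp + (l mod b)
    return (l // b) * b * P + b * p + l % b

# Round-trip check of the CYCLIC(b) maps on a small example.
N, P, b = 32, 4, 2
for k in range(N):
    p, l = cyclic_b(k, N, P, b)
    assert cyclic_b_inverse(l, p, P, b) == k
print("CYCLIC(b) maps round-trip")
```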
