### Table 3: Compression rates, number of quadrature knots, and computation times.

"... In PAGE 20: ... Though the compression is asymptotically not optimal, we believe that the presented choice of wavelets and compression parameters leads to faster computation times for N ≤ 10 000. In Table 3 we show the compression rates (compression rate = number N² of entries in the full matrix divided by the number of entries in the compressed matrix), the number of quadrature knots, and the computation times (including the time for the set-up of the stiffness matrix). The time tC and the number of knots kC are given for the BEM without wavelets. ... In PAGE 21: ... The last error is taken over eight points in the exterior of the earth close to (0.5, 0.5). Note that in the computation with the wavelet algorithms presented in Table 3 we solve the linear system (44) iteratively. The multiplication of a vector by Aj, however, is realized by applying the wavelet transform to the vector, by multiplying with the compressed and wavelet-transformed matrix, and by applying the inverse wavelet transform. ... In PAGE 25: ... v) If the subdomain of the starting partition in iii) belongs to level j, then the quadrature technique for conventional collocation methods should be applied. Note that the numbers of quadrature knots and the computation times (on a DEC 3000 AXP 400 workstation) presented in Table 3 are obtained with a quadrature algorithm based on i), iii)-v). The reduction in computation time is much less than the reduction in storage. ... ..."
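The matrix-vector product described in the excerpt (wavelet-transform the vector, multiply by the compressed matrix, apply the inverse transform) can be sketched as follows. The 4×4 orthonormal Haar basis, the test matrix, and the threshold are illustrative assumptions, not taken from the cited paper.

```python
# Sketch of matrix-vector multiplication via a wavelet-transformed,
# thresholded ("compressed") matrix. All numbers are illustrative.
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Orthonormal Haar transform matrix for length-4 vectors (W^{-1} = W^T).
s = 1 / math.sqrt(2)
W = [[0.5, 0.5, 0.5, 0.5],
     [0.5, 0.5, -0.5, -0.5],
     [s, -s, 0.0, 0.0],
     [0.0, 0.0, s, -s]]

# A toy diagonally dominant matrix standing in for the stiffness matrix.
A = [[4.0, 1.0, 0.5, 0.25],
     [1.0, 4.0, 1.0, 0.5],
     [0.5, 1.0, 4.0, 1.0],
     [0.25, 0.5, 1.0, 4.0]]

# Wavelet-transform the matrix, then drop small entries ("compression").
A_wav = matmul(matmul(W, A), transpose(W))
threshold = 0.3
A_comp = [[a if abs(a) > threshold else 0.0 for a in row] for row in A_wav]

# Apply A approximately: transform x, multiply by the compressed matrix,
# transform back with the inverse wavelet transform.
x = [1.0, 2.0, 3.0, 4.0]
y_approx = matvec(transpose(W), matvec(A_comp, matvec(W, x)))
y_exact = matvec(A, x)
err = max(abs(a - b) for a, b in zip(y_approx, y_exact))
print(err)  # approximation error introduced by the thresholding
```

With the threshold set to zero the product is exact, since W is orthonormal; the compression trades a controlled error for a sparser matrix.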

### Table I: Task times and speedup parameters for three machines executing a linear sequence of three tasks. The task time is at least 1 but otherwise arbitrary; the communication time is c. Proposition 2: for linear task graphs G_P and mapping φ, we have S_P ≤ (1 − …) + …

1994

Cited by 15


### Table 1. Common graph embedding view for the most popular dimensionality reduction algorithms. Note that type D means direct graph embedding, while L and K mean the linearization and kernelization of the graph embedding, respectively.

2005

"... In PAGE 4: ... (2-4). Table 1 lists the similarity and constraint matrices for all the above-mentioned methods, and their corresponding graph embedding types are also demonstrated. ... In PAGE 4: ... From Eqs. (6) and (7), we can easily obtain the listed formulations of the similarity matrix W and constraint matrix B for PCA/KPCA and LDA/KDA, as given in Table 1. Figure 2 plots the intrinsic graphs for PCA and LDA, respectively. ... ..."

Cited by 8

### Table 1: Tests for knot creation.

1998

"... In PAGE 17: ... Knot creation summary: Each interval is recursively bisected until all the checks for conditions (a) through (d) are met. For purposes of the algorithm, it is useful to regroup these checks into four tests, as described in Table 1. In particular, Tests L and R collect... ..."

Cited by 3

### Table 3: Modulo graph embedding results for the dedicated register file CGRA.

2006

"... In PAGE 9: ... This shows that the modulo graph embedding scheduler is able to achieve quality solutions for significantly lower-cost CGRAs. The modulo scheduler runtimes (last column of Table 3) are reasonably fast, as all benchmarks are scheduled within 5 seconds on a 3 GHz Pentium 4 machine with 1 GB of RAM. This is because the search space is limited to operations in the DFG with the same height; thus, fewer than 20 operations are generally considered at a time. ... ..."
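The excerpt's pruning idea, considering only DFG operations of the same height at each scheduling step, can be sketched as below. The dataflow graph and the height definition (longest path to a sink) are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch: group dataflow-graph operations by height
# (longest path to a sink), so a scheduler can restrict its search
# space to same-height operations at each step.
from functools import lru_cache

# Hypothetical dataflow graph: operation -> list of consumer operations.
dfg = {
    "load": ["mul"],
    "mul": ["add"],
    "const": ["add"],
    "add": ["store"],
    "store": [],
}

@lru_cache(maxsize=None)
def height(op):
    succs = dfg[op]
    return 0 if not succs else 1 + max(height(s) for s in succs)

groups = {}
for op in dfg:
    groups.setdefault(height(op), []).append(op)

# A scheduler would then visit the groups from highest to lowest height,
# placing fewer candidate operations at a time.
for h in sorted(groups, reverse=True):
    print(h, sorted(groups[h]))
```

Restricting each step to one height level is what keeps the number of simultaneously considered operations small, as the excerpt notes.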

Cited by 5

### Table 1: The class of virtual knots arising from Dn.

"... In PAGE 21: ... Conjecture 5.2: The flat knots Un in (3) are distinct for all n ≥ 1. One interesting result we have found is a related infinite class of virtual knot diagrams, depicted in Table 1. As we shall see, they are all distinct. ... In PAGE 24: ... Then choose two empty arcs within it, and apply (9) an arbitrary number of times. The Kn in Table 1 are an example of this. Theorem 5. ... In PAGE 24: ... The rest of the An are also Jones equivalent to the unknot, because for even n, the An are Jones equivalent to A0, and for odd n, the An are Jones equivalent to A1. This is because each An in Table 1 is Jones equivalent to An+2 by a single application of (9) on the horizontal arrows. As a result, each Kn is Jones equivalent to the unknot, and hence the normalized bracket will be trivial on all of them. ... In PAGE 27: ... Table 1 ... we have representative virtual knots Kn for each An. ... In PAGE 27: ... for each An. By Theorem 3.4, if we prove … = 0 on each of these, we are done. We will prove this via the quandle. First note that each Kn in Table 1 is of the general form shown in (11), where the box is replaced by a horizontal sum of elementary 4-tangles. In (11), we have labeled the bridge arc emanating from the lower left side of the tangle by the generator a. ... ..."

### Table 2. Two grammars used to generate training and test sentences for experiments with the Grids algorithm. The first grammar (a) includes arbitrary strings of adjectives, whereas the second (b) supports arbitrarily embedded relative clauses.

2000

"... In PAGE 4: ... Grammatical Domains and Experimental Design. We decided to use artificial grammars in our experiments, since they let us both control characteristics of the domain and measure the correctness of the induced knowledge structures. In particular, we designed the two subsets of English grammar shown in Table 2. The first (a) includes declarative sentences with arbitrarily long strings of adjectives and both transitive and intransitive verbs, but no relative clauses, prepositional phrases, adverbs, or inflections. ... In PAGE 6: ... 1. Average learning curves for the adjective phrase grammar from Table 2, with (a) measuring the probability of parsing a legal test sentence and (b) the probability of generating a legal sentence. language), which indicate an overgeneral one. ... In PAGE 6: ... However, we were also interested in the rate of learning, so we explicitly varied the number of training sentences available to the system, at each level measuring the two accuracies of the learned grammar, averaged over 20 different training sets. Figure 1 presents the learning curves for the adjective phrase grammar from Table 2, with (a) showing results on the first measure, the probability of parsing a legal test sentence, and (b) showing those for the second, the probability of generating a sentence parsed by the target grammar. The curves show both the average accuracy and 95% confidence intervals as a function of different numbers of training sentences. ... In PAGE 7: ... 2. Learning curves for the relative clause grammar from Table 2, and for analogous grammars that involve larger word classes, with (a) measuring the probability of parsing a legal test sentence and (b) the probability of generating a legal sentence. One goal of future research should be to explain the underlying causes of these distinctive patterns, as well as the widely differing rates of learning. ... ..."
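Generating training sentences from such a grammar amounts to recursively expanding a context-free grammar. The toy grammar below, with a recursive adjective list like the excerpt's grammar (a), is an illustrative stand-in, not the paper's grammar.

```python
# Minimal random-sentence generator for a toy CFG, in the spirit of the
# artificial grammars used in the Grids experiments. The grammar is
# illustrative only.
import random

grammar = {
    "S": [["NP", "VP"]],
    "NP": [["the", "noun"], ["the", "AdjList", "noun"]],
    "AdjList": [["adj"], ["adj", "AdjList"]],   # arbitrary adjective strings
    "VP": [["verb"], ["verb", "NP"]],
    "noun": [["dog"], ["cat"]],
    "adj": [["big"], ["old"]],
    "verb": [["sees"], ["sleeps"]],
}

def generate(symbol, rng):
    """Expand a symbol into a list of terminal words."""
    if symbol not in grammar:        # terminal word
        return [symbol]
    rule = rng.choice(grammar[symbol])
    return [word for part in rule for word in generate(part, rng)]

rng = random.Random(0)
for _ in range(3):
    print(" ".join(generate("S", rng)))
```

Varying the number of sentences drawn this way is how one would build training sets of different sizes, as the excerpt's learning-curve experiments require.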

Cited by 13

### Table 1 summarizes the current status of the newly formulated problem of colored simultaneous embedding. A check indicates that it is always possible to simultaneously embed the type of graphs, a cross indicates that it is not always possible, and a "?" indicates an open problem.

2007

"... In PAGE 11: ... Table 1. k-colored simultaneous embeddings: results and open problems. ... ..."

Cited by 3
