### Table 1: Orthogonalization Choices in the Rational Krylov Algorithm

"... In PAGE 8: ... It is stressed that the placement of orthogonality or biorthogonality constraints on V and Z is purely an implementation decision. Various biorthogonality/orthogonality possibilities are explored in Table 1 of Section 4. Yet these orthogonalization choices are in no way fundamental to model reduction via projection. ... In PAGE 12: ... Furthermore, it is these vectors that primarily distinguish the specific implementations. Several important options (but certainly not all) are summarized in Table 1. The first, second and fourth cases in Table 1 are implemented in detail in Sections 4. ... In PAGE 13: ... q and v. The last column of Table 1, titled "restriction", lists the conditions that must be met in each case by the choice of the four parameters q, v, w and z. In the second row, for example, orthogonal V and Z are required: V^T V = Z^T Z = I. ... In PAGE 15: ... Section 4.4. Insights into alternatives to p_m are provided by Example 3. Example 3: Consider the construction of an orthogonal V3 (see the second row of Table 1) with the interpolation point-ordering σ_1 = σ^(1), σ_2 = σ^(2) and σ_3 = σ^(1). The first two columns of V are therefore V2 = [ γ_1 (A - σ^(1) E)^{-1} b,  γ_2 (A - σ^(2) E)^{-1} b + γ_{2,1} v_1 ], where the parameters γ_1, γ_2 and γ_{2,1} are chosen so that V2 is orthogonal. ... In PAGE 16: ... By taking this step, one need only store two rather than four sequences of vectors in memory. In general, one or more of q_m, v_m, w_m, or z_m equals the corresponding vector ~q_m, ~v_m, ~w_m, or ~z_m (see Table 1). When these equalities occur, it is possible that the corresponding vector sequence need not be stored in memory. ... ..."

Cited by 1
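
The construction in Example 3 above (an orthogonal V built from shift-and-invert directions, with a repeated interpolation point) can be sketched numerically. The code below is a hypothetical NumPy illustration, not the paper's implementation: it assumes the common rational Krylov recursion in which each new direction solves against the previous basis vector, with modified Gram-Schmidt supplying the orthogonality and the normalization playing the role of the γ parameters.

```python
import numpy as np

def rational_krylov_basis(A, E, b, shifts):
    # Hypothetical sketch of the "orthogonal V" variant (row 2 of Table 1):
    # each direction is a shift-and-invert solve (A - sigma E)^{-1} E v
    # against the previous basis vector, then orthogonalized by modified
    # Gram-Schmidt against all earlier columns.
    n = len(b)
    V = np.zeros((n, len(shifts)))
    v_prev = b
    for m, sigma in enumerate(shifts):
        w = np.linalg.solve(A - sigma * E, E @ v_prev)
        for j in range(m):                 # orthogonalize against V_{m-1}
            w = w - (V[:, j] @ w) * V[:, j]
        V[:, m] = w / np.linalg.norm(w)    # normalization = the gamma scaling
        v_prev = V[:, m]
    return V
```

Because the continuation vector changes at every step, a repeated shift (as in Example 3, where σ_3 = σ_1) still yields a new, linearly independent direction.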

### Table 3 Randomization Supports Recoding

"... In PAGE 20: ... 1) chaotic, i.e., unpredictable deterministic, activity fluctuations; 2) random initialization of the state space, Z(0), at the beginning of each trial; 3) quantal synaptic failures at recurrent excitatory synapses; and 4) the relative strength of the external inputs. Table 3 summarizes the variety of paradigms and methods used to study the effects of randomization, and it outlines interactions of randomization with such fundamental parameters as activity and connectivity. The overall interpretation of these results is that randomization drives an undirected code word search. ... In PAGE 20: ... Randomization does this by counteracting, from one training trial to the next, a tendency for too much similarity between sequences of state space representations. [Insert Table 3 ... ..."

### Table 4. The performance of our best deterministic code on

1996

"... In PAGE 13: ... The performance data for Code E with the features of removing duplicated edges and using hybrid concurrent write operations is shown in Table 4. The performance data for our randomized code is shown in Tables 5 and 6. ... ..."

### Table 3); for this second test sequence, we also computed the number of deterministic cubes (column MaxRand) that have to be implemented. Finally, we estimated the area overhead (in gate equivalents) of the additional mapping logic for both test sequences using [12] and [11]. For each of these mixed-mode techniques we computed the ratio between the area overhead needed using our proposed seed and that of the random seed (columns [11]Ratio and [12]Ratio).

1999

"... In PAGE 5: ... Table 3: Comparison with the best randomly selected seed coverage for a given computation time. In Table 3, we compare the fault coverage provided by our method (column OurSeed, with 10 test cubes from Ccomp evaluated in Algorithm-3) and the best fault coverage of a randomly selected seed test sequence (column MaxRand). For these experiments, the test length is 1000, and the computation time (in seconds, in the second column) required to find the best randomly selected seed is equal to that used by our method. ... In PAGE 5: ... The third column of Table 3 indicates the number of random test sequences fully simulated during the allotted time. Through this experiment, it appears that the results provided by our method are always better than those of a random seed selection. ... ..."

Cited by 6

### Table 7. Transducer automaton for calculating a minimal joint expansion from the binary expansions, read from left to right. The symbol ? denotes the end of the sequence.

"... In PAGE 15: ... automaton is shown in Table 7 and in Figure 3 (note that the final state has not been drawn in Figure 3). [The transition table of Table 7 did not survive text extraction.] ... In PAGE 17: ... Let x, y ∈ Z with binary expansions x = Σ_{j=0}^{J} x_j 2^j and y = Σ_{j=0}^{J} y_j 2^j. Then the output (ε_{J+1} ... ε_0) of the transducer in Table 7 when reading (x_J y_J ... x_0 y_0 ?) is a joint expansion of x and y of minimal joint Hamming weight. ... ..."

Cited by 7
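
For context, the classic right-to-left recoding for this problem is Solinas's joint sparse form (JSF), which achieves the same minimal joint Hamming weight that the cited transducer computes left to right. The sketch below (for nonnegative integers) is offered as background, not as the paper's construction:

```python
def joint_sparse_form(k0, k1):
    # Solinas's JSF recoding: emits signed digit columns (u0, u1),
    # least significant first, with digits in {-1, 0, 1} and minimal
    # joint Hamming weight (number of nonzero columns).
    digits = []
    while k0 > 0 or k1 > 0:
        col = []
        for a, b in ((k0, k1), (k1, k0)):
            if a % 2 == 0:
                u = 0
            else:
                u = 2 - (a % 4)                    # a mod 4 -> +1 or -1
                if a % 8 in (3, 5) and b % 4 == 2:
                    u = -u                          # look-ahead correction
            col.append(u)
        digits.append(tuple(col))
        k0 = (k0 - col[0]) // 2
        k1 = (k1 - col[1]) // 2
    return digits

def joint_weight(digits):
    # Columns in which at least one of the two digits is nonzero.
    return sum(1 for u0, u1 in digits if u0 or u1)
```

For example, the pair (53, 102) has joint weight 6 in plain binary but joint weight 5 in JSF.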

### Table 10 Improvement factor and relative background model sensitivity of seed ((1,1,1,1,1),14)

"... In PAGE 6: ... Table 10: Improvement factor and relative background model sensitivity of seed ((1,1,1,1,1),14). ... In PAGE 37: ... We show the improvement and relative background model sensitivity of the new seed against two data sets in Table 10. It has significantly higher improvements than the three original seeds at each level (minimum, median, and maximum) (see also Table 5 and Table 8). ... ..."

### Table 1: Comparison of speed of original deterministic algorithms and randomized versions on test-bed problems.

1998

"... In PAGE 2: ... grows as ((N/2)(N-1))!, i.e., the search space size grows as the factorial of the square of N/2. Published algorithms for this problem all scale poorly, and the times for our deterministic solver (as shown in Table 1) are among the best (see also Gomes et al. 1998b). ... In PAGE 5: ... In addition, a very low cutoff value can also be used to exploit the heavy tails to the left of the median, and will allow us to solve previously unsolved problem instances after a sufficient number of restarts. In Table 1, the mean solution times in the Randomized column are based on empirically determined near-optimal cutoff values. For each randomized solution time, the standard deviation is of the same order of magnitude as the mean. ... In PAGE 6: ... Our fast restart strategy exploits this. See Table 1 for other improvements due to randomization. Until now, the 3bit-adder problems had not been solved by any backtrack-style procedure. ... ..."

Cited by 230
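
The cutoff-and-restart strategy described in this excerpt can be sketched generically. The solver and predicate below are hypothetical stand-ins, not the paper's benchmark problems; the point is the structure: cap each randomized run at a near-optimal cutoff, then restart with fresh randomness so that no single heavy-tailed run dominates the total solution time.

```python
import random

def solve_with_restarts(randomized_solve, cutoff, max_restarts, seed=0):
    # Generic rapid-restart wrapper (names hypothetical): run the
    # randomized solver for at most `cutoff` steps; on failure, restart
    # instead of letting one unlucky run drag on.
    rng = random.Random(seed)
    for restart in range(max_restarts):
        result = randomized_solve(rng, cutoff)
        if result is not None:
            return result, restart
    return None, max_restarts

def probe(rng, cutoff):
    # Toy randomized search: probe random candidates against an
    # arbitrary "hard" predicate; any single short run may fail.
    for _ in range(cutoff):
        x = rng.randrange(10_000)
        if x % 271 == 0 and x % 7 == 3:
            return x
    return None
```

Here the number of restarts, rather than the length of any one run, governs the mean solution time.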


### Table 2: Size of the Deterministic Equivalent Problems

"... In PAGE 6: ... In Table 1 we report the dimensions of the first-stage and second-stage problems for both the original model (SLP 2S) and the limited recourse model (SLP LR). Table 2 reports the size of the deterministic equivalent problems with increasing number of scenarios.

Table 1: Size of Randomly Generated Test Problems

| Test Problem | SLP 2S: n1 = n2 | m1 | m2 | SLP LR: n1 = n2 | m1 | m2 |
|---|---|---|---|---|---|---|
| P1 | 4 | 3 | 2 | 4 | 5 | 4 |
| P2 | 37 | 28 | 28 | 37 | 56 | 56 |
| P3 | 13 | 9 | 7 | 13 | 16 | 14 |
| P4 | 11 | 2 | 7 | 11 | 9 | 14 |

Our objective here is to illustrate how the objective function value increases when a progressively tighter tolerance is imposed on the model. ... ..."
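
The growth that Table 2 documents follows from the extensive form: with S scenarios, one second-stage block is stacked per scenario. A minimal sketch under that standard assumption (the paper's exact counts may differ):

```python
def deterministic_equivalent_size(n1, m1, n2, m2, scenarios):
    # Extensive-form dimensions of a two-stage stochastic LP:
    # the first-stage constraints (m1 x n1) appear once, and the
    # recourse block (m2 x n2) is replicated once per scenario.
    rows = m1 + scenarios * m2
    cols = n1 + scenarios * n2
    return rows, cols
```

For problem P1 above (n1 = n2 = 4, m1 = 3, m2 = 2), 100 scenarios already give a 203 x 404 deterministic equivalent, so the problem size grows linearly in the number of scenarios.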

### Table 7: Results of combining evidence; all configurations were combined with the sequence of six subjects
2000

"... In PAGE 4: ... all statistically significant, with p < 0.01. Likewise, the edge that the sequence-of-subjects configurations had over the other configurations was also statistically significant. The results from combining the evidence from different configurations, in Table 7, showed a much higher accuracy, but a sharp drop in the total number of associated words found. The most fruitful pairs of experiments were those that combined distinct approaches, for example, the five-subject configuration with either full paragraphs or with sentences with prepositional phrases. ... ..."

Cited by 5