### Table 1. Comparison of results between grids with and without diagonals. New results

1994

"... In PAGE 2: ... For two-dimensional n n meshes without diagonals 1-1 problems have been studied for more than twenty years. The so far fastest solutions for 1-1 problems and for h-h problems with small h 9 are summarized in Table1 . In that table we also present our new results on grids with diagonals and compare them with those for grids without diagonals.... ..."

Cited by 11

### Table 1

…are used, with the additional work per iteration of (2s)·w_dot/s for Orthomin and (2s+1)·w_dot/s + (s+1)·w_sax/s for GMRES, where s denotes the length of the current cycle. The modified Orthomin and GMRES algorithms are summarized as follows:

1. Initialization; n = 1.
2. Perform stopping test.
3. Form A·r(n−1).
4. Calculate all dot products for the new basis vector.
5. Estimate norms for the next residual with/without restart.
6. Perform test to determine whether to restart.
7. Calculate new basis vector p(n−1) based on the decision.
8. Form new iterate u(n) and residual r(n).
9. n ← n + 1; go to 2.

Orthomin with Adaptive Restarting
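The nine steps above can be sketched in code. This is a minimal illustration, not the paper's implementation: it runs truncated Orthomin and, as a stand-in for the adaptive estimate of steps 5–6, restarts at a fixed frequency `s_max` (an assumption for brevity; the paper replaces this fixed rule with a test on estimated residual norms).

```python
import numpy as np

def orthomin_restarted(A, b, s_max=5, tol=1e-10, max_iter=200):
    """Sketch of restarted Orthomin following the nine steps quoted above.
    The adaptive restart test is simplified to a fixed frequency s_max."""
    u = np.zeros(len(b))
    r = b - A @ u                      # step 1: initialization
    P, AP = [], []                     # directions kept in the current cycle
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:    # step 2: stopping test
            break
        Ar = A @ r                     # step 3: form A r
        # step 4: dot products of A r against the stored A p_j
        betas = [(Ar @ Apj) / (Apj @ Apj) for Apj in AP]
        # steps 5-6: restart decision (fixed-frequency stand-in here)
        if len(P) >= s_max:
            P, AP, betas = [], [], []
        # step 7: new direction, A^T A-orthogonal to the kept ones
        p = r - sum(bj * pj for bj, pj in zip(betas, P))
        Ap = Ar - sum(bj * Apj for bj, Apj in zip(betas, AP))
        # step 8: minimal-residual update of iterate and residual
        alpha = (r @ Ap) / (Ap @ Ap)
        u += alpha * p
        r -= alpha * Ap
        P.append(p); AP.append(Ap)     # step 9: next iteration
    return u, np.linalg.norm(r)
```

One matrix–vector product per step (`A @ r`), as the excerpt below notes; the per-iteration dot products and SAXPYs are the quantities Table 1 averages over a cycle.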

in On the Convergence Behavior of the Restarted GMRES Algorithm for Solving Nonsymmetric Linear Systems

1994

"... In PAGE 4: ... For general A, the cost to perform n steps of a minimal residual algorithm is proportional to n2. Table1 summarizes the average work per iteration for the restarted versions of Orthomin and GMRES, where s denotes the restart frequency. For Orthomin, the cost of a residual norm computation jjr(n)jj for the stopping test at each iteration is included; for GMRES, this quantity is available without extra vector work.... In PAGE 4: ... For both algorithms, one matrix- vector product is required per step. Table1 . Average work per iteration, restart frequency s (excluding matrix-vector products) method dot products: u v SAXPYs: y y + x total Orthomin (s+5)2 s + 1 3 2s + 72 GMRES (s+1)(s+2) (2s) (s2+5s+2) (2s)... In PAGE 16: ... Let us de ne e (r(old); r(new)) = ? log jjr(new)jj jjr(old)jj work(old; new) : a measure of the e ciency of calculating u(new) from u(old). Here, work(old; new) is the CPU time used to calculate u(new) given u(old), measured for example by the formulas of Table1 . This e ciency function corresponds roughly to the reciprocal... ..."

Cited by 12

### Table 2: Cutsize Reduction from Various Enhancements

The result reveals the following:

- Prior-clustering as well as declustering can be effectively incorporated into LR/LSR to improve the partitioning solution quality even further. This is evident from the cutsize reduction trend along L(S)R → L(S)Rn → L(S)Rb.
- The idea of combining static stable net removal with dynamic loose net removal (LSR) is very effective in removing nets in the cutset. This is evident from the cutsize comparison between LR-based algorithms (LR, LRn, LRb) and LSR-based algorithms (LSR, LSRn, LSRb).
- The hierarchical MFFS clustering reduces the complexity of the given netlist and thus the partitioning time. This is evident from the runtime comparison between L(S)R and L(S)Rn.
- LSR-based algorithms are always faster than LR-based algorithms, since SNT promotes a higher rate of convergence due to its minor perturbation of the current partition compared to an entirely new random initial partition.

1997

"... In PAGE 7: ... 4.1 Cutsize Reduction Trend The experimental result summerized in Table2 shows the cutsize reduction trend starting from the basic FM to our most enhanced FM-LSRb. FM100 refers to the 100 runs of basic FM with conventional bucket structure, while all the other algorithms are based on 20 runs.... ..."

Cited by 34

### Table 1. Comparison between the total numbers of operations for Algorithm 1 depending on the base extension method.

2003

"... In PAGE 7: ... For the sake of simplicity, we decided to discard solutions using an approxi- mate base conversion. We compare in Table1 the efficiency of Algorithm 1... In PAGE 7: ...2, if the basic operations are well orga- nized, the bottleneck of the system is not the operations themselves but the data transmission between the memory and the different basic cells. In Table1 , we took into account all basic operations in our comparison. The method of Shenoy et al.... ..."

Cited by 5

### Table 1: Gradient descent image alignment algorithms can either be additive or compositional, and either forwards or inverse. Our framework leads immediately to two new algorithms, the inverse compositional algorithm and its extension for fitting FAMs.

2001

"... In PAGE 8: ... The naive algorithm takes over 6 iterations to reach the same degree of fit that the inverse compositional algorithm reaches in 3. 5 Discussion We have presented a framework (see Table1 ) for gradient descent image alignment. Algorithms can either be additive or compositional, and either forwards or inverse.... ..."

Cited by 68

### Table 3: Cost reductions attained by algorithms RN-STM-TS and UN-STM-TS according to problem size (nh)

1996

"... In PAGE 32: ... The algorithm UN-STM-TS was implemented using the MS-SP parallelization strategy and tested using 8 processors of an IBM-SP1. Some slight quality changes were obtained with the algorithm UN-STM-TS and are shown in Table3 . In some cases (nh = 8; 12; 18) the solutions obtained with UN-STM-TS are better, although this behavior is not systematic and in some cases the new solutions are even of inferior quality.... ..."

Cited by 4

### Table 7. Average execution time (s)

1997

Cited by 3

### Table 2: New Signal constructs and extensions.

"... In PAGE 9: ...Irina Smarandache, Paul Le Guernic (| H := sample{ max(0, - apos;), n }(I) | K := sample{ max(0, apos;), d }(I) clk affinef n, apos;,d g( X,Y ) , | H ^= X) (n; d gt; 0 and apos; 2 6Z) | K ^= Y |) / H, K, I Finally, Y := unsamplef n, apos; g( X, Z), with n gt; 0 and apos; 2 6Z, de nes Y as an a ne oversampling using X and Z: (| clk_affine{ n, apos;, 1 }(X, Y) Y := unsamplef n, apos; g( X, Z) , | Y := ( X when ^Y ) default Z (n gt; 0 and apos; 2 6Z) | Y ^= Z |) Table2 summarizes the a ne processes and their associated clock equations. For an arbitrary clock H, we note [H]( apos;;n) the clock obtained by down-sampling with phase apos; and period n on H and (H)(n; apos;;d) the result of an a ne transformation of parameters (n; apos;; d) applied to the clock H.... ..."

### Table 14. Proportion of Urban Population by Labor Market Category and Gender, 1998

2000

"... In PAGE 47: ... Comparing Ecuador to other countries in the Region, women apos;s labor force participation in 1990 was the second lowest in Latin America after Guatemala (FLACO, 1995). According to INEC data, by 1998 46 percent of urban women were economically active, having increased from 44 percent in 1993 (see Table14 ). Over the same period, the proportion of men who were economically active decreased from 74 to 72 percent.... ..."