
### Table 1: Decidability and complexity of higher order matching

1999

"... In PAGE 2: ... The proof may be however in an obvious way changed to apply also to the pure language. Thus all lower bounds shown in Table1 apply to pure languages (without constants), while upper bounds hold for arbitrary signatures. In this paper we show that although the second and third order matching are both NP-complete, the third order case is somewhat harder in the following sense: the second order matching is in PTIME if the number of unknowns is bounded by a constant, while the third order case is NP-hard even with only one unknown variable.... ..."

Cited by 2
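As a concrete illustration of the kind of problem classified in this table (the example is ours, not taken from the cited paper), a standard second-order matching instance is

```latex
% Solve for the unknown $F : o \to o$ against a constant $c : o$:
%   F(c) =_{\beta\eta} c
% This instance has exactly two solutions:
%   F = \lambda x.\, x   (the identity, which copies the argument)
%   F = \lambda x.\, c   (the constant function, which ignores it)
F(c) =_{\beta\eta} c \qquad\Longrightarrow\qquad F = \lambda x.\, x \;\text{ or }\; F = \lambda x.\, c
```

Deciding whether such an equation has a solution, where unknowns may occur only on the left-hand side, is the matching problem whose complexity the table summarizes by order.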

### Table 1. Efficiency and accuracy of Algorithms 3 and 6 on a complex symmetric matrix of order 2048.

2005

"... In PAGE 8: ... Algorithm 3 (normwise detection) and Algorithm 6 were run on a random complex symmetric matrix of order 2048. Table1 shows the total number of orthogonalizations and errors for various block sizes.... ..."

Cited by 3
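The normwise detection referred to in this excerpt can be sketched with the classical "twice is enough" criterion: if one Gram-Schmidt sweep shrinks the vector's norm by more than a fixed factor, severe cancellation occurred and a second sweep is performed. This is a minimal sketch of that standard criterion, assuming the usual threshold η = 1/√2; the function name is ours and the cited Algorithm 3 may differ in detail.

```python
import numpy as np

def orthogonalize_normwise(Q, v, eta=1.0 / np.sqrt(2.0)):
    """Orthogonalize v against the orthonormal columns of Q.

    Normwise test: if the norm of v drops below eta times its value
    before the sweep, reorthogonalize once more ("twice is enough").
    """
    norm_before = np.linalg.norm(v)
    v = v - Q @ (Q.conj().T @ v)          # first Gram-Schmidt sweep
    if np.linalg.norm(v) < eta * norm_before:
        v = v - Q @ (Q.conj().T @ v)      # second sweep on detected cancellation
    return v
```

The block sizes varied in the table control how many columns of Q each such test is applied against at once.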

### Table 1 shows that keeping higher-order terms tends, although not monotonically, to provide more accurate power estimation results. One significant improvement shown in the table is from the 1st data column (keeping only order-1 terms) to the 3rd data column (keeping order-1, -2, and -3 terms). From this point on, the complexity of the reduced-order function increases much faster than the percentage error decreases. This observation is also supported by the experimental results presented in Section IV. We therefore approximate Eqn. (3.4) by ignoring terms with order higher than 3. The reduced-order function is written as:

1997

"... In PAGE 4: ... The second question is what the cost will be if we keep more terms in the original function. Table1 shows some examples of the percentage error caused by ignoring the high order input correlations. Column 1 gives the circuit name.... In PAGE 5: ...Table1 from the 1st data column to the 8th data column. The integer number i on top of each column indicates the maximum order to which the function terms are kept.... In PAGE 5: ... The last row of the table shows the total number of variables in the reduced-order functions for each of the circuits. Table1 Average percentage error in power dissipation when using reduced-order functions Circuit 1 2 3 4 5 6 7 8 A 99.... ..."

Cited by 40
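The reduction described in this entry amounts to discarding every term of the correlation expansion whose order exceeds a cutoff. A minimal sketch of that truncation, assuming terms are stored as a map from tuples of variable indices to coefficients (our representation, not the paper's):

```python
from math import prod

def truncate_expansion(coeffs, max_order=3):
    """Drop every term whose order (number of variables in the
    product) exceeds max_order.

    `coeffs` maps tuples of variable indices to coefficients,
    e.g. {(): c0, (0,): c1, (0, 1): c01, ...}.
    """
    return {vars_: c for vars_, c in coeffs.items() if len(vars_) <= max_order}

def evaluate(coeffs, x):
    """Evaluate the multilinear expansion at point x.

    An empty index tuple is the constant term (prod of nothing is 1).
    """
    return sum(c * prod(x[i] for i in vars_) for vars_, c in coeffs.items())
```

The trade-off the excerpt reports, error versus number of variables kept, corresponds to how much `truncate_expansion` discards for each circuit.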

### Table I: Coefficients for the higher order FDTD schemes

### Table 2 Summary of key mask parameters as a function of the mask-writing technology. Note that the mask dimensions are reduced by a factor of 4 when imaged on the wafer.

"... In PAGE 8: ... Under this scheme, resist images in very dense gratings receive a lower primary e-beam dose than do isolated images in order to compensate for the unintended backscattering. A summary of the key parameters attainable with each of the mask writers is given in Table2 . As shown, masks fabricated with the VSB writer were significantly improved compared with their earlier counterparts fabricated on an RSG system.... ..."

### Table 1: Average cutsize results for 100 runs of FM (column 3) and Krishnamurthy higher-level gains (columns 4-6), using LIFO (Last-In-First-Out), random, and FIFO (First-In-First-Out) organization schemes for the gain buckets.

Footnote 3: For space reasons, all tables give average cutsize results over 100 runs. Minimum cutsize results are qualitatively similar and are separately available.

Footnote 4: Note that for bipartitioning, the cost at the end of the pass is exactly the same as the cost at the beginning of the pass, meaning that improvement results from an initial decrease in cost during the pass, followed by a corresponding increase in cost later in the pass.

"... In PAGE 2: ... The results are shown in Table 2. Note that the third column (pure FM) results are the same as in the third column of Table1 since our new for- mulation does not a ect the rst-level gain. As was observed with the Krishnamurthy formulation, the re- sults using a LIFO selection scheme with our new for- mulation are signi cantly better than the results us- ing random or FIFO selection schemes.... In PAGE 3: ...Experimental Results The third column of Table1 clearly shows the ef- fects of the selection methodology.3 Surprisingly, the FIFO scheme is no better than random selection.... In PAGE 3: ... 4Note that for bipartitioning, the cost at the end of the pass is exactly the same as the cost at the beginning of the pass, meaning that improvement results from an initial decrease in cost during the pass, followed by a corresponding increase in cost later in the pass. Columns 4-6 of Table1 show the e ects of LIFO, random and FIFO selection on higher-level gains as de ned by Krishnamurthy [5]. Introducing second- level (k = 2) gain and in some cases third-level (k = 3) gain seems to improve the solution quality for random and FIFO selection.... ..."

### Table 2. Efficiency and accuracy of Algorithms 5 and 6 on a complex symmetric matrix of order 2048. (Column headers: algorithm; total number of ...; error in ...; error in ...; run time.)

2005

"... In PAGE 8: ... Algorithm 5 (componentwise detection) and Algorithm 6 were run on a random complex symmetric matrix of order 2048. Table2 shows the total number of orthogonalizations and errors for various block sizes. This example shows that the componentwise algorithm performed fewer orthogonalizations than the normwise algorithm.... ..."

Cited by 3
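The componentwise detection contrasted here triggers on individual inner products rather than on an aggregate norm drop, which is why it can fire less often. A minimal sketch of that idea, assuming an absolute tolerance; the function name and threshold are ours, and the cited Algorithm 5 may differ in detail:

```python
import numpy as np

def needs_reorth_componentwise(Q, v, tol=1e-8):
    """Componentwise test: flag loss of orthogonality only when some
    single inner product between v and a column of Q exceeds tol,
    instead of testing the norm of the whole projection at once.
    """
    return bool(np.max(np.abs(Q.conj().T @ v)) > tol)
```

Under this test a vector that is mildly non-orthogonal in every direction, but below `tol` in each, triggers no extra sweep, consistent with the excerpt's observation that the componentwise algorithm performed fewer orthogonalizations.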
