### Table 2: TBV algorithms in order of their runtimes. COM merges functionally equivalent gates using low-complexity analysis [17]. EQV makes intelligent guesses about equivalences [5] and performs expensive checks that, when they pass, allow huge gate-merging reductions. It also exploits structural ...

"... In PAGE 8: ... The decision techniques aim to find a satisfying trace, that is, an assignment to the initial value functions of registers and a sequence of input valuations that result in asserting the target gate to a true value at the last cycle of the trace. In Table 2, we briefly describe various transforms that were used in the context of this work and comment on their efficiency. We had to experiment manually with various techniques in order to find a successful flow of transforms for each of the problems.... ..."
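The notion of a satisfying trace described in the snippet can be made concrete with a toy enumeration: pick an initial state, drive a sequence of input valuations through the transition function, and check whether the target holds at the last cycle. This is only an illustrative sketch; the paper's flows rely on SAT-based decision procedures and the transforms of Table 2, not brute force, and all names below are hypothetical.

```python
from itertools import product

def find_trace(step, is_target, init_states, input_space, depth):
    """Exhaustively search for a satisfying trace: an initial state plus a
    sequence of input valuations that asserts the target at the last cycle.
    Toy enumeration only; real target-based verification uses SAT solvers."""
    for s0 in init_states:
        for inputs in product(input_space, repeat=depth):
            s = s0
            for x in inputs:
                s = step(s, x)      # advance one cycle
            if is_target(s):
                return s0, inputs   # the satisfying trace
    return None

# Hypothetical 2-bit counter netlist: one input bit, target = (counter == 3).
trace = find_trace(step=lambda s, x: (s + x) % 4,
                   is_target=lambda s: s == 3,
                   init_states=[0],
                   input_space=[0, 1],
                   depth=3)
```

For the toy counter the search returns initial state 0 with the input sequence (1, 1, 1), i.e. three consecutive increments.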

### TABLE XIIa. Timetable for the Low Complexity BinDCT Architecture

### Table 4.2: Comparison between typical and low-complexity implementations of the HDD circuit for different frame lengths (= N bits)

2005

### Table 1. Sizes of the compressed versions of the context model for order 2 in Figure a) and order 3 in Figure b). A low-complexity device can maintain a set of models for different types of data.

2006

"... In PAGE 5: ...wo bytes. Typically, there exist many streams of zero elements in the data model. The maximum stream size that can be denoted is 256 empty elements. Table 1 gives the sizes of the different compressed data models in bytes, for order 2 in table a) and order 3 in table b). We use the models constructed this way to obtain the compression results in... ..."

Cited by 2
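The snippet above describes run-length coding of zero streams in a context-model table: nonzero counts are stored directly, while a run of up to 256 zeros is packed into two bytes. The exact byte layout is not given, so the sketch below assumes one common convention (an escape byte 0 followed by run length minus one); the function names are illustrative.

```python
def compress_model(counts):
    """Run-length encode the zero entries of a context-model table.
    Assumed layout: a nonzero count is one byte; a run of up to 256
    zeros is two bytes, (0, run_length - 1)."""
    out = bytearray()
    i = 0
    while i < len(counts):
        if counts[i] == 0:
            run = 0
            while i < len(counts) and counts[i] == 0 and run < 256:
                run += 1
                i += 1
            out += bytes([0, run - 1])   # two bytes per zero run
        else:
            out.append(counts[i])        # one byte per nonzero count
            i += 1
    return bytes(out)

def decompress_model(data):
    """Inverse of compress_model."""
    counts = []
    i = 0
    while i < len(data):
        if data[i] == 0:
            counts += [0] * (data[i + 1] + 1)
            i += 2
        else:
            counts.append(data[i])
            i += 1
    return counts
```

A run longer than 256 zeros simply splits into several two-byte groups, which is why the scheme stays cheap enough for a low-complexity device.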

### Table 3: Low complexity performance comparison with respect to FBMA (in dB)

"... In PAGE 11: ... All simulation results are compared against the FBMA. In Table 3, the average PSNRs of the reconstructed frames from frame 1 to 29 are listed. The second column shows the average PSNRs obtained using FBMA, which serve as benchmarks for the comparisons.... ..."

### Table: Table-driven variables and data (33%), low complexity of base code (32%), Y2K and special search engines (30%)

2006

### Table 8: Performance comparison of synchronous dynamic bandwidth allocation schemes using non-predictive bandwidth estimation, RLS bandwidth prediction and PSN-TDNN bandwidth prediction in the worst scenario (page 39). ... which is desirable for low-complexity network management, but at the expense of increased transmission bandwidth. This study indicates the significant performance improvement due to dynamic allocation and the feasibility of its implementation at a reasonably long adaptation interval such as 0.56 seconds for video transmission. To further verify the significance of bandwidth prediction, we also study a non-predictive bandwidth estimation scheme where the bandwidth for the interval [(n + D), (n + D + M)) is approximated at time n by the maximum of the (M + 1) most recent observations, i.e., C · max{x_L(n - M), x_L(n - M + 1), ..., x_L(n)}. The queueing performance of such a non-predictive scheme is compared with those of the RLS and PSN-TDNN schemes in the case of synchronous dynamic bandwidth allocation with an adaptation interval of M = 0.56 seconds. We use the same RLS and PSN-TDNN schemes as designed in Section 3. Specifically, the prediction lead time (D + M) is set at 0.7 seconds for the RLS scheme and 0.84 seconds for the PSN-TDNN scheme. For comparison, we use the same lead time for the non-predictive estimator. Listed in

1995

"... In PAGE 20: ... For comparison, we use the same lead time for the non-predictive estimator. Listed in Table 8 are the results when C = 1.25. The RLS and PSN-TDNN schemes almost achieve the ideal queueing performance, whereas the non-predictive scheme greatly degrades it.... ..."

Cited by 61
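The non-predictive estimator above is simple enough to state in a few lines: scale the maximum of the last (M + 1) observations by the factor C (1.25 in the Table 8 experiment). A minimal sketch, with the function name and array interface assumed for illustration:

```python
def nonpredictive_bandwidth(x, n, M, C=1.25):
    """Non-predictive bandwidth estimate at time n for the upcoming
    interval: C times the maximum of the (M + 1) most recent
    observations x[n - M], ..., x[n].  C = 1.25 matches the Table 8
    experiment; no prediction model is involved."""
    return C * max(x[n - M: n + 1])
```

Because a sliding-window maximum decays only as fast as the window slides, this estimator over-provisions during traffic bursts, which is the "increased transmission bandwidth" cost the snippet notes.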

### Table 2. Comparisons Between Low-Complexity-bias and No-Complexity-bias Evolutionary Computation Algorithms for Individuals Less Than 0.1441

1999

"... In PAGE 7: ...1441 were generated from 36 of the 50 trials (72%). Table 2 presents a summary of some interesting statistical features of the algorithms. While there is very little difference in the mean generalization error of the algorithms, there is an almost threefold difference in the number of individuals with generalization errors less than 0.... ..."

Cited by 8

### TABLE I THE NUMBER OF REAL SUBTRACTIONS, MULTIPLICATIONS, ADDITIONS AND COMPARISONS NEEDED FOR EACH BIT METRIC USING THE ORIGINAL BICM METRIC (3) AND THE LOW COMPLEXITY METRIC (8)

### Table 4. Comparison of transform's coding gain with AR(1) image model, ρ = 0.95.

1998

"... In PAGE 11: ... Table 3 tabulates the number of multiplications, additions, and/or shifting operations needed to process 8 input samples. Table 4 lists the coding gain of each transform given an AR(1) signal with intersample autocorrelation coefficient ρ = 0.95. The evidence in Tables 3-4 shows that high-performance, yet low-complexity, FBs can be constructed based on our proposed lattice structure.... ..."

Cited by 21
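The coding-gain figure of merit in Table 4 can be reproduced for any orthonormal transform: build the AR(1) autocorrelation matrix with coefficient ρ = 0.95, take the subband variances from the diagonal of T R Tᵀ, and form the ratio of their arithmetic to geometric mean. The sketch below uses an 8-point DCT-II as a stand-in, since the paper's lattice filter banks are not reproduced here; both function names are hypothetical.

```python
import numpy as np

def dct2_matrix(N):
    # Orthonormal DCT-II matrix: rows are the transform's basis vectors.
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    T = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    T[0, :] /= np.sqrt(2.0)   # DC row scaled for orthonormality
    return T

def ar1_coding_gain_db(T, rho=0.95):
    """Transform coding gain (dB) over a unit-variance AR(1) source:
    ratio of the arithmetic to the geometric mean of the subband
    variances; valid for an orthonormal transform T."""
    N = T.shape[0]
    idx = np.arange(N)
    R = rho ** np.abs(np.subtract.outer(idx, idx))   # AR(1) autocorrelation
    var = np.diag(T @ R @ T.T)                       # subband variances
    return 10.0 * np.log10(var.mean() / np.exp(np.log(var).mean()))
```

For N = 8 and ρ = 0.95 this evaluates to roughly 8.8 dB, the classical DCT benchmark against which low-complexity filter banks of this kind are measured.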