### Table 4: Comparison with the results of the gradient-based method

1997

"... In PAGE 5: ... The results also show that with a few more test points, the timing-driven TPI can achieve the same level of fault coverage as the area-driven TPI does. Furthermore, we compare the results with those of the gradient-based method as shown in Table 4. The same number of test points is selected using both approaches.... ..."

Cited by 5

### Table 1: Complexity of Gradient-Based Direct and Indirect Methods

2003

Cited by 1

### Table 1 A summary of the gradient-based methods and their time complexities. The variable n denotes the number of processing nodes.

1999

Cited by 3

### Table 3. Comparison of GBC and gradient-based minimization algorithm

2000

"... In PAGE 5: ...1). But as shown in Table 3 the simulation effort for the gradient-based algorithm was significantly... ..."

Cited by 2

### Table III: Matching rates of two fingerprint recognition systems using the model-based orientation field estimation method and the hierarchical gradient-based method [6], [7], respectively, on the database using the leave-one-out strategy

### Table 2: Summary of Diverging Tree results

Our gradient-based algorithm was also tested on a real data set, namely the rotating Rubik's cube. Figure 1 shows one frame of the Rubik's cube sequence and the computed optical flow after 40 iterations of the incomplete Cholesky preconditioned conjugate gradient algorithm.

"... In PAGE 5: ... We use the regularization parameter = 0.1 and = = 1 in this example. The error in the computed optical flow after 40 iterations of the incomplete Cholesky preconditioned CG algorithm is presented in Table 2, along with results from other gradient-based methods in the literature that yield 100% flow density [2]. Once again, our modified gradient-based regularization method produces more accurate optical flow than the other methods.... ..."
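The excerpt above refers to a preconditioned conjugate gradient (PCG) solver for the symmetric positive-definite system arising from regularized optical-flow estimation. As a minimal sketch of the PCG iteration itself (using a simple Jacobi/diagonal preconditioner as a stand-in for the paper's incomplete Cholesky factorization, and a 1-D Laplacian as a stand-in test system; none of these specifics come from the cited paper):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=100):
    """Preconditioned conjugate gradient for SPD A.

    M_inv_diag is the inverse of a diagonal (Jacobi) preconditioner,
    standing in here for the incomplete Cholesky preconditioner used
    in the cited optical-flow work.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()                  # initial search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)     # step length along p
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p # conjugate direction update
        rz = rz_new
    return x

# Small SPD test system: a 1-D Laplacian, structurally similar to the
# normal equations produced by quadratic regularization.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

A stronger preconditioner (such as the incomplete Cholesky factorization mentioned in the excerpt) would replace the element-wise `M_inv_diag * r` step with a sparse triangular solve, typically reducing the iteration count on ill-conditioned systems.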

Cited by 1

### Table 2: Results of the statistical tests applied to the best gradient-based and the best Newton-based DID or MFDID method for each database. The parameters histogram equalization (hist-eq) and Butterworth cutoff frequency (cutoff), and the median angular error (med. AE) are given for each method. The last column shows whether the median angular errors of the two compared methods differ significantly (** = significant with α = 1%, n.s. = not significant). Columns: gradient, Newton, significant.

"... In PAGE 8: ... Histogram equalization seems to improve the performance for both methods; we will discuss this in section 10. Table 2 compares the best result over the parameters cutoff frequency and histogram... In PAGE 9: ... It improves the performance of the gradient method in A1originalHh, Chall1Hh, and Chall2Hh, and reduces the performance of the Newton method in Chall1Hh, but has a relatively small effect on Moeller1Hh. Table 2 lists the performance of the best gradient method and the best Newton method for each database. In three of the real-world databases, Newton-MFDID performs significantly better than the gradient-based MFDID method.... ..."
