### Table 1. Test results and performance comparison of forex forecasting

2003

"... In PAGE 9: ... Test results: Table 1 summarizes the training and test performances of the neuro-fuzzy system and the neural network. Figure 4 shows the developed Takagi-Sugeno type fuzzy inference model for forex prediction. ..."

Cited by 1

### Table 2. Performance of the different paradigms.

"... In PAGE 9: ... Figures 9 and 10 illustrate the meta-learning approach combining evolutionary learning and the gradient descent technique during the 35 generations. Table 2 summarizes the performance of the developed i-Miner for training and test data. Performance is compared with the previous results reported by Wang et al.... In PAGE 9: ... (2002), wherein the trends were analyzed using a Takagi-Sugeno Fuzzy Inference System (ANFIS), an Artificial Neural Network (ANN), and Linear Genetic Programming (LGP). The Correlation Coefficient (CC) for the test data set is also given in Table 2. The 35 generations of the meta-learning approach created 62 if-then Takagi-Sugeno type fuzzy rules (daily traffic trends) and 64 rules (hourly traffic trends), compared to the 81 rules reported by Wang et al.... In PAGE 13: ... The knowledge discovered from the developed FCM clusters and SOM could make a good comparison study and is left as a future research topic. As illustrated in Table 2, the i-Miner framework gave the overall best results, with the lowest RMSE on the test error and the highest correlation coefficient. It is interesting to note that the three considered soft computing paradigms could easily pick up the daily and hourly Web-access trend patterns.... ..."
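The two comparison metrics named in this excerpt, RMSE on the test error and the Correlation Coefficient (CC), can be sketched in a few lines of NumPy. The function and variable names below are illustrative, not from the paper, and the sample values are made up for demonstration:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error between observed and predicted trend values."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def corr_coef(y_true, y_pred):
    """Pearson Correlation Coefficient (CC) between observed and predicted trends."""
    return np.corrcoef(y_true, y_pred)[0, 1]

# Illustrative hourly-trend values (not data from the paper):
observed  = [0.20, 0.50, 0.90, 0.40]
predicted = [0.25, 0.45, 0.85, 0.50]
```

A lower RMSE together with a CC close to 1 is the criterion the excerpt uses to rank the paradigms.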

### Table 2: Summary of Diverging Tree results. Our gradient-based algorithm was also tested on a real data set, namely the Rotating Rubik Cube. Figure 1 shows one frame of the Rubik Cube sequence and the computed optical flow after 40 iterations of the incomplete Cholesky preconditioned conjugate gradient algorithm.

"... In PAGE 5: ... We use a regularization parameter of 0.1 (with the remaining two parameters set to 1) in this example. The error in the computed optical flow after 40 iterations of the incomplete Cholesky preconditioned CG algorithm is presented in Table 2, along with results from other gradient-based methods in the literature that yield 100% flow density [2]. Once again, our modified gradient-based regularization method produces more accurate optical flow than the other methods.... ..."

Cited by 1
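The solver these excerpts refer to, an incomplete Cholesky preconditioned conjugate gradient method, can be sketched in NumPy. This is a minimal illustration on a generic symmetric positive-definite matrix, not the optical-flow system from the paper; the IC(0) factorization simply restricts Cholesky updates to the original sparsity pattern:

```python
import numpy as np

def ichol(A):
    """IC(0) incomplete Cholesky: Cholesky restricted to A's lower-triangular pattern."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):
            for i in range(j, n):
                if L[i, j] != 0.0:          # skip fill-in outside the pattern
                    L[i, j] -= L[i, k] * L[j, k]
    return L

def pcg(A, b, L, tol=1e-10, maxiter=100):
    """Conjugate gradient preconditioned with M = L L^T."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = np.linalg.solve(L.T, np.linalg.solve(L, r))   # z = M^{-1} r
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = np.linalg.solve(L.T, np.linalg.solve(L, r))
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

For a tridiagonal SPD matrix the IC(0) factor coincides with the exact Cholesky factor, so PCG converges almost immediately; on the large sparse systems arising from optical-flow regularization the factorization is only approximate but still accelerates convergence substantially.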

### Table 4: Comparison with the results of the gradient-based

1997

"... In PAGE 5: ... The results also show that, with a few more test points, the timing-driven TPI can achieve the same level of fault coverage as the area-driven TPI does. Furthermore, we compare the results with those of the gradient-based method, as shown in Table 4. The same number of test points is selected using both approaches.... ..."

Cited by 5

### Table 1. CPU times required for solution via LU decomposition of 3 different representations of the same model. The numbers in each column are relative to the corresponding single-processor times shown in the first row. Each column corresponds to a different order of field interpolation on the mesh: the first column corresponds to the 0.5 order edge element, the second column to the 1.0 order edge element, and the last column to the 1.5 order edge element. When an iterative solver is used to solve the matrix equation, a simple modification to the equation can significantly improve the convergence of a conjugate gradient-based solver [8]. The normal coefficient matrix obtained from hierarchical edge elements is augmented with nodal degrees of freedom (DOFs) associated with the scalar potential. The addition of these DOFs explicitly enforces the divergence relation on the electric field, and the resulting modification to the matrix equation can be realized without changing the original coefficient matrix (aside from a renumbering exercise), i.e., the new matrix equation becomes:

"... In PAGE 4: ... with element interpolation level). The results are shown in Table 1. Note that for small models, for which the inter-processor communication requirements per local mesh element are relatively large, the addition of an externally networked processor actually slows the solution of the problem.... ..."

### Table 3. Comparison of GBC and gradient-based minimization algorithm

2000

"... In PAGE 5: ... 1). But as shown in Table 3, the simulation effort for the gradient-based algorithm was significantly ..."

Cited by 2

### Table 1: Complexity of Gradient-Based Direct and Indirect Methods

2003

Cited by 1

### Table 1: Summary of Square 2 results. The other synthetic data set is the more realistic Diverging Tree image sequence. The underlying motion in this example is a bilinear expansion from the center of the image. We use the regularization parameter ...

"... In PAGE 5: ... The optical flow was obtained after 40 iterations of the incomplete Cholesky preconditioned conjugate gradient algorithm. The errors of our modified gradient-based regularization method and the other gradient-based methods for this translating square example are summarized in Table 1. Only the results from those gradient-based methods that yield 100% density flow estimates reported in [2] are included in this table for comparison.... ..."

Cited by 1

### Table 2. Results of the statistical tests applied to the best gradient-based and the best Newton-based DID or MFDID method for each database. The parameters histogram equalization (hist-eq) and Butterworth cutoff frequency (cutoff), and the median angular error (med. AE), are given for each method. The last column shows whether the median angular errors of the two compared methods differ significantly (** = significant at α = 1%, n.s. = not significant).

"... In PAGE 8: ... Histogram equalization seems to improve the performance for both methods; we will discuss this in section 10. Table 2 compares the best result over the parameters cutoff frequency and histogram... In PAGE 9: ... It improves the performance of the gradient method in A1originalHh, Chall1Hh, and Chall2Hh, and reduces the performance of the Newton method in Chall1Hh, but has a relatively small effect on Moeller1Hh. Table 2 lists the performance of the best gradient method and the best Newton method for each database. In three of the real-world databases, Newton-MFDID performs significantly better than the gradient-based MFDID method.... ..."
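The gradient-versus-Newton distinction this excerpt draws (first-order versus second-order minimization) can be illustrated with a minimal 1-D sketch. The toy objective and step sizes below are illustrative only, not the DID/MFDID image-distance objectives from the paper:

```python
def gradient_descent(grad, x0, lr=0.1, iters=500):
    """First-order update: x <- x - lr * f'(x)."""
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

def newton(grad, hess, x0, iters=20):
    """Second-order update: x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(iters):
        x -= grad(x) / hess(x)
    return x

# Toy objective f(x) = (x - 3)^2 with minimum at x = 3 (illustrative only).
grad = lambda x: 2.0 * (x - 3.0)
hess = lambda x: 2.0
```

On this quadratic, Newton's method lands on the minimizer in a single step, while gradient descent approaches it geometrically; this is the usual trade-off of fewer iterations against the extra cost of second-derivative information.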

### Table 1: Several discrete-time gradient-based algorithms: EGU (Unnormalized Exponentiated Gradient [KW97b]), EG (Exponentiated Gradient [KW97b]), and BEG (Bounded Exponentiated Gradient [Byl97]). Here ∇_{t,i} is shorthand for ∂L_t(ω_t)/∂ω_t[i].

1997

"... In PAGE 3: ... The Euler discretization of the dual update (2) gives ω_{t+h} := ω_t − h∇L_t(ω_t), or θ_{t+h} := f(f^{−1}(θ_t) − h∇L_t(ω_t)). (4) For example, if f is the identity function, then both the main update and its dual collapse to the conventional gradient descent update. However, if f(x) = ln(x), then the discretized version (3) of the main update with h = 1 gives the Unnormalized Exponentiated Gradient Update (EGU) of [KW97b] (see Table 1 for more examples). In the next section we discuss the purpose and desired properties of link functions.... ..."

Cited by 5
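The link-function update described in this excerpt is simple to sketch: applying f, taking a gradient step, and mapping back through f^{-1}. The function names below are illustrative; the identity link recovers plain gradient descent and the logarithmic link recovers EGU, as the excerpt states:

```python
import numpy as np

def link_step(w, grad, h, f, f_inv):
    """Main update from the excerpt: w_{t+h} = f^{-1}(f(w_t) - h * grad)."""
    return f_inv(f(w) - h * grad)

def gd_step(w, grad, h):
    # Identity link f(x) = x -> conventional gradient descent.
    return link_step(w, grad, h, lambda x: x, lambda x: x)

def egu_step(w, grad, h=1.0):
    # Logarithmic link f(x) = ln(x) -> Unnormalized Exponentiated Gradient (EGU):
    # w_{t+1} = w_t * exp(-h * grad), a multiplicative update on positive weights.
    return link_step(w, grad, h, np.log, np.exp)
```

The multiplicative form of EGU keeps the weights positive, which is one of the "desired properties of link functions" the paper goes on to discuss.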