### Table 12: The six gradient descent approximations that we considered: Gauss-Newton, Newton, steepest descent, diagonal Hessian (Gauss-Newton & Newton), and Levenberg-Marquardt. When combined with the inverse compositional algorithm, the six alternatives are all equally efficient except Newton. When combined with a forwards algorithm, only steepest descent and the diagonal Hessian algorithms are efficient. Only Gauss-Newton and Levenberg-Marquardt converge well empirically.

2004

"... In PAGE 42: ... We have exhibited five alternatives: (1) Newton, (2) steepest descent, (3) diagonal approximation to the Gauss-Newton Hessian, (4) diagonal approximation to the Newton Hessian, and (5) Levenberg-Marquardt. Table 12 contains a summary of the six gradient descent approximations we considered. We found that steepest descent and the diagonal approximations to the Hessian all perform very poorly, both in terms of the convergence rate and in terms of the frequency of convergence. ... ..."

Cited by 144
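The pattern this entry reports, Gauss-Newton converging well where plain steepest descent does not, can be sketched on a small ill-conditioned least-squares problem; the matrix, right-hand side, and step size below are illustrative assumptions, not data from the cited paper.

```python
import numpy as np

# Toy ill-conditioned linear least-squares problem min_p ||A p - b||^2.
# A, b, and the step size are made-up values for illustration only.
A = np.array([[1.0, 0.0],
              [0.0, 0.1]])      # A^T A has condition number 100
b = np.array([1.0, 0.1])        # exact minimizer is p = (1, 1)

def residual(p):
    return A @ p - b

def gauss_newton_step(p):
    J = A                        # the Jacobian of the residual is A itself
    # Gauss-Newton: solve (J^T J) delta = -J^T r for the update
    return p + np.linalg.solve(J.T @ J, -J.T @ residual(p))

def steepest_descent(p, iters, lr=1.0):
    for _ in range(iters):
        p = p - lr * (A.T @ residual(p))   # follow the raw negative gradient
    return p

p_gn = gauss_newton_step(np.zeros(2))     # one step lands on (1, 1)
p_sd = steepest_descent(np.zeros(2), 50)  # after 50 steps the flat direction lags
print(p_gn, p_sd)
```

Because the problem is linear, the Gauss-Newton step is the exact Newton step and reaches the minimizer at once, while steepest descent contracts the poorly scaled coordinate by only 1% per iteration — the same conditioning effect behind the convergence results summarized in Table 12.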

### Table 12: The six gradient descent approximations that we considered: Gauss-Newton, Newton, steepest descent, diagonal Hessian (Gauss-Newton & Newton), and Levenberg-Marquardt. When combined with the inverse compositional algorithm the six alternatives are all efficient except Newton. When combined with the forwards compositional algorithm, only the steepest descent and the diagonal Hessian algorithms are efficient. Only Gauss-Newton and Levenberg-Marquardt converge well empirically.

2004

"... In PAGE 47: ... approximation to the Newton Hessian, and (5) Levenberg-Marquardt. Table 12 contains a summary of the six gradient descent approximations we considered. We found that steepest descent and the diagonal approximations to the Hessian all perform very poorly, both in terms of the convergence rate and in terms of the frequency of convergence. ... ..."

Cited by 144

### Table 3-4: Comparison of original and modified steepest descent methods

2006

"... In PAGE 56: ... The primal decision variables are reduced to the configuration of the equipment. By dualizing and solving this geometric problem, we get the parameters of each amplifier, as shown in Table 3-1. ... In PAGE 57: ... Table 3-1: Solution of the primal problem. Node Attenuation Gain Noise Figure X-MOD 25 0.448159 4. ... In PAGE 57: ... 4.520926 0.950686×10^7. Step IV: Adding Reverse Modules in Amplifiers. We add the reverse modules into the amplifiers to amplify the upstream signal and conform to the signal-quality constraints. By dualizing and solving this geometric problem, we get the parameters of each amplifier in Table 3-2. ... In PAGE 58: ... Table 3-2: Solution of the primal problem. Reverse NF (dB): 8.808512. Input signal strength to reverse module (dB): 5. ... In PAGE 59: ... We have run experiments on several network examples to compare the results of the steepest descent method. As shown in Table 3-3, the steepest descent method converged at some points that are not optimal, because the steepest descent method terminates at the iteration with no or too little improvement. ... In PAGE 60: ... Table 3-3: Results of the Steepest Descent Method. Net# Steepest Descent Method Optimal Dual Primal Dual(converted) Optimal c00 -2.83746 17. ... In PAGE 61: ... Because of the ill structure of CATV planning problems, the modified steepest descent method takes a different approach. The results generated by the modified steepest descent method are 51% to 92% better than those of the original steepest descent method, as shown in Table 3-4. Based on our findings, we suggest that the modified steepest descent method is more suitable for the dual problems of CATV network planning problems. ... ..."

### Table 3-3: Results of Steepest Descent Method

2006

"... In PAGE 56: ... The primal decision variables are reduced to the configuration of the equipment. By dualizing and solving this geometric problem, we get the parameters of each amplifier, as shown in Table 3-1. ... In PAGE 57: ... Table 3-1: Solution of the primal problem. Node Attenuation Gain Noise Figure X-MOD 25 0.448159 4. ... In PAGE 57: ... 4.520926 0.950686×10^7. Step IV: Adding Reverse Modules in Amplifiers. We add the reverse modules into the amplifiers to amplify the upstream signal and conform to the signal-quality constraints. By dualizing and solving this geometric problem, we get the parameters of each amplifier in Table 3-2. ... In PAGE 58: ... Table 3-2: Solution of the primal problem. Reverse NF (dB): 8.808512. Input signal strength to reverse module (dB): 5. ... In PAGE 59: ... We have run experiments on several network examples to compare the results of the steepest descent method. As shown in Table 3-3, the steepest descent method converged at some points that are not optimal, because the steepest descent method terminates at the iteration with no or too little improvement. ... In PAGE 61: ... Because of the ill structure of CATV planning problems, the modified steepest descent method takes a different approach. The results generated by the modified steepest descent method are 51% to 92% better than those of the original steepest descent method, as shown in Table 3-4. Based on our findings, we suggest that the modified steepest descent method is more suitable for the dual problems of CATV network planning problems. ... In PAGE 62: ... Table 3-4: Comparison of original and modified steepest descent methods

| Net# | Original | Modified | Difference | Improvement |
|------|---------:|---------:|-----------:|------------:|
| c00  | 16212    | 2921     | 13292      | 82%         |
| c01  | 14821    | 2372     | 12448      | 84%         |
| c02  | 8008     | 3900     | 4108       | 51%         |
| c03  | 11673    | 1714     | 9959       | 85%         |
| c04  | 43524    | 3482     | 40042      | 92%         |
| c05  | 5402     | 1657     | 3745       | 69%         |
| c06  | 5770     | 1558     | 4212       | 73%         |
| c07  | 5025     | 1602     | 3423       | 68%         |
| c08  | 6332     | 1286     | 5046       | 80%         |
| c09  | 10425    | 1851     | 8574       | 82%         |

In order to improve the efficiency and effectiveness of the modified steepest descent method, we have done more experiments on its parameter settings. In these experiments, we found that the initial step size plays an important role in converging to the optimal value. ... ..."
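The failure mode described in this entry — steepest descent terminating once per-iteration improvement falls below a threshold, at a point that is not optimal — can be reproduced on a made-up badly scaled quadratic; the objective, step size, and tolerance are assumptions for illustration, not the thesis' CATV planning model.

```python
import numpy as np

# Badly scaled quadratic: the objective is very flat along p[1], so a
# "too little improvement" stopping rule fires while p[1] is far from 0.
# All constants here are hypothetical, chosen only to expose the effect.
def f(p):
    return 0.5 * (p[0] ** 2 + 0.001 * p[1] ** 2)

def grad(p):
    return np.array([p[0], 0.001 * p[1]])

def steepest_descent(p, step=0.1, tol=1e-6, max_iters=10_000):
    for i in range(max_iters):
        p_new = p - step * grad(p)
        if f(p) - f(p_new) < tol:   # terminate on negligible improvement
            return p_new, i + 1
        p = p_new
    return p, max_iters

p_stop, n_iters = steepest_descent(np.array([1.0, 1.0]))
print(p_stop, n_iters, f(p_stop))   # stops early with p[1] still near 1
```

The true minimum is f = 0 at the origin, yet the run halts after a few dozen iterations with p[1] barely moved — consistent with the thesis' observation that the stopping rule, rather than optimality, ends the search, and that the step size matters.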

### Table 1: Solutions Costs Using Greedy Steepest Descent with Best Values

1997

Cited by 6

### Table 1: Result of Our Steepest Descent Algorithm

"... In PAGE 7: ... Setting (1) ignores the time hardness, and Setting (2) includes it. Table 1 shows the experimental results using these settings. Three order instances, whose numbers of orders are 5,000, 4,000, and 3,000, were used with the two carrier instances. ... In PAGE 7: ... The columns ini, fin, and time show the objective value for the initial solution, the objective value for the final solution, and the calculation time taken by our steepest descent algorithm. From Table 1, we cannot see any significant differences in their solution qualities. Regarding the calculation times, Setting (2) was relatively slower to converge, but the slow convergence did not ensure obtaining good final solutions. ... ..."

### Table 1 Block coordinate descent algorithm

"... In PAGE 6: ... The resulting CG scheme is ensured to converge to the unique minimizer of J as a function of k, under constraint (4). See Table 1 for the detailed algorithm. ... ..."
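A generic sketch of the block coordinate descent scheme this entry names: alternately minimize the joint objective over one block of variables while the other is held fixed. The two-variable quadratic and closed-form block updates are illustrative assumptions; the cited work minimizes its own criterion J with a conjugate-gradient inner solver.

```python
# Block coordinate descent on a toy two-variable quadratic (a hypothetical
# stand-in for the criterion J of the cited work).
def f(x, y):
    return (x - 2 * y) ** 2 + x ** 2 + y ** 2

def block_coordinate_descent(x, y, iters=20):
    for _ in range(iters):
        x = y          # argmin over x with y fixed: solves df/dx = 4x - 4y = 0
        y = 0.4 * x    # argmin over y with x fixed: solves df/dy = -4x + 10y = 0
    return x, y

x_opt, y_opt = block_coordinate_descent(5.0, 3.0)
print(x_opt, y_opt)   # both approach the joint minimizer (0, 0)
```

Each block update is an exact minimization, so the objective never increases, and because this quadratic is strictly convex the iterates contract to the unique minimizer — the same monotone-convergence argument invoked for the CG scheme in the entry.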

### Table 5.1.3 displays the algorithms, number of nodes, and average number of constraint checks for the algorithms with the highest convergence percentages. The steepest-descent breakout algorithms outperformed the next-descent breakout algorithm in terms of average constraint checks. The Wallace & Freuder algorithms were not ranked due to non-convergence. The BAs required approximately one half the constraint checks required by the Wallace & Freuder algorithms.

### Table 2: Average deviation from best results for steepest descent with different neighbourhoods.

1999

"... In PAGE 11: ... At first, we neglected the pilot heuristic as construction method, as the high quality of the starting solutions would hinder a differentiation of the factors examined here. Table 2 shows the results for steepest descent (SD), Table 3 shows the results for simulated annealing (SA) with the parameters set as described in Section 3.1, and Table 4 shows the results for static tabu search (1,000 iterations) with the tabu list length l set to the number of the jobs and the tabu threshold set corresponding to a full move. ... ..."

Cited by 12
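In neighbourhood-based local search such as the study above, "steepest descent" means best-improvement search: evaluate every move in the neighbourhood and apply the best improving one until none remains. Below is a minimal sketch on a toy single-machine weighted-completion-time instance with a pairwise-swap neighbourhood; the instance and objective are assumptions, not the cited study's scheduling problem.

```python
import itertools

# Best-improvement ("steepest descent") local search over job sequences.
# Objective: total weighted completion time on one machine (toy instance).
def weighted_completion_time(seq, proc, weight):
    t, total = 0, 0
    for job in seq:
        t += proc[job]             # job finishes at time t
        total += weight[job] * t
    return total

def steepest_descent(seq, proc, weight):
    seq = list(seq)
    while True:
        best_cost = weighted_completion_time(seq, proc, weight)
        best_move = None
        # Scan the whole pairwise-swap neighbourhood for the best move.
        for i, j in itertools.combinations(range(len(seq)), 2):
            seq[i], seq[j] = seq[j], seq[i]      # trial swap
            cost = weighted_completion_time(seq, proc, weight)
            if cost < best_cost:
                best_cost, best_move = cost, (i, j)
            seq[i], seq[j] = seq[j], seq[i]      # undo trial swap
        if best_move is None:                    # no improving move: local optimum
            return seq
        i, j = best_move
        seq[i], seq[j] = seq[j], seq[i]          # apply best improving move

proc = [3, 1, 2, 4]      # processing times (hypothetical)
weight = [1, 4, 2, 3]    # job weights (hypothetical)
print(steepest_descent([0, 1, 2, 3], proc, weight))   # -> [1, 2, 3, 0]
```

For this objective the swap-neighbourhood local optimum coincides with the weighted-shortest-processing-time order, so steepest descent reaches the global optimum here; on harder landscapes it can stall, which is why the cited study compares different neighbourhoods.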