### Table 1 Block coordinate descent algorithm

"... In PAGE 6: ... The resulting CG scheme is ensured to converge to the unique minimizer of J as a function of k, under constraint (4). See Table 1 for the detailed algorithm. ..."

### Table 6: A framework for gradient descent image alignment algorithms. Gradient descent image alignment algorithms can be either additive or compositional, and either forwards or inverse. The inverse algorithms are computationally efficient whereas the forwards algorithms are not. The various algorithms can be applied to different sets of warps. Most sets of warps in computer vision form groups, and so the forwards additive, the forwards compositional, and the inverse compositional algorithms can be applied to most sets of warps. The inverse additive algorithm can only be applied to a very small class of warps, mostly linear 2D warps. (Columns: Algorithm | For Example | Efficient? | Can be Applied To)

2004

"... In PAGE 25: ... iteration to first order in Δp. In Section 3.4 we validated this equivalence empirically. The four algorithms do differ, however, in two other respects. See Table 6 for a summary. Although the computational requirements of the two forwards algorithms are almost identical and... ..."

Cited by 144
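The efficiency claim in the caption above rests on a precomputation trick: the inverse compositional algorithm evaluates the steepest-descent images and the Gauss-Newton Hessian on the fixed template once, outside the loop, so each iteration only warps the image and solves a small linear system. A minimal sketch for the simplest possible case, a 1D translation-only warp (the Gaussian signal, grid `xs`, and shift `p_true` are illustrative assumptions, not data from the paper):

```python
import numpy as np

# Template: a smooth bump sampled on a grid.
xs = np.linspace(-5.0, 5.0, 401)
template = np.exp(-xs**2)

p_true = 0.7                         # unknown shift to recover
image = np.exp(-(xs - p_true)**2)    # image = template translated by p_true

# --- Precomputation (done once, on the template only) ---
grad_T = np.gradient(template, xs)   # steepest-descent image (warp Jacobian = 1 for translation)
H = np.sum(grad_T**2)                # 1x1 Gauss-Newton Hessian

# --- Iteration (only the cheap per-step work remains) ---
p = 0.0
for _ in range(20):
    warped = np.interp(xs + p, xs, image)   # I(W(x; p)) for the translation warp
    error = warped - template
    dp = np.sum(grad_T * error) / H         # Gauss-Newton step
    p -= dp                                 # inverse-compositional update: compose with W(dp)^-1
    if abs(dp) < 1e-8:
        break
```

A forwards algorithm would instead have to re-evaluate the gradient and Hessian on the warped image at every iteration, which is exactly the cost the table contrasts.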

### Table 6: A framework for gradient descent image alignment algorithms. Gradient descent image alignment algorithms can be either additive or compositional, and either forwards or inverse. The inverse algorithms are computationally efficient whereas the forwards algorithms are not. The various algorithms can be applied to different sets of warps. Most sets of warps in computer vision form groups, and so the forwards additive, the forwards compositional, and the inverse compositional algorithms can be applied to most sets of warps. The inverse additive algorithm can only be applied to a very small class of warps, mostly linear 2D warps. (Columns: Algorithm | For Example | Complexity | Can be Applied To)

2004

"... In PAGE 30: ... In Section 3.4 we validated this equivalence empirically. The four algorithms do differ, however, in two other respects. See Table 6 for a summary. Although the computational requirements of the two forwards algorithms are almost identical and the computational requirements of the two inverse algorithms are also almost identical, the two inverse algorithms are far more efficient than the two forwards algorithms. ... In PAGE 49: ... approximations, and the Levenberg-Marquardt approximation. These two choices are orthogonal. For example, one could derive a forwards compositional steepest descent algorithm. The results of the first half are summarized in Table 6. All four algorithms empirically perform equivalently. ..."

Cited by 144

### Table 12: The six gradient descent approximations that we considered: Gauss-Newton, Newton, steepest descent, diagonal Hessian (Gauss-Newton & Newton), and Levenberg-Marquardt. When combined with the inverse compositional algorithm the six alternatives are all equally efficient except Newton. When combined with a forwards algorithm, only steepest descent and the diagonal Hessian algorithms are efficient. Only Gauss-Newton and Levenberg-Marquardt converge well empirically. (Columns: Algorithm | Efficient As | Efficient As | Convergence | Frequency of)

2004

"... In PAGE 42: ... We have exhibited five alternatives: (1) Newton, (2) steepest descent, (3) diagonal approximation to the Gauss-Newton Hessian, (4) diagonal approximation to the Newton Hessian, and (5) Levenberg-Marquardt. Table 12 contains a summary of the six gradient descent approximations we considered. We found that steepest descent and the diagonal approximations to the Hessian all perform very poorly, both in terms of the convergence rate and in terms of the frequency of convergence. ..."

Cited by 144
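The caption's empirical finding, that Gauss-Newton converges well while plain steepest descent does not, can be reproduced on a toy nonlinear least-squares problem. A minimal sketch comparing the two update rules on a one-parameter exponential fit (the model, starting point, and fixed learning rate are illustrative assumptions, not the paper's image-alignment setup):

```python
import numpy as np

# Toy nonlinear least squares: recover the rate p in y = exp(-p*t).
t = np.linspace(0.0, 2.0, 50)
p_true = 1.3
y = np.exp(-p_true * t)

def residual(p):
    return np.exp(-p * t) - y

def jacobian(p):                      # dr/dp for each sample
    return -t * np.exp(-p * t)

p_gn = p_sd = 0.2                     # common starting point
for _ in range(30):
    # Gauss-Newton: solve the (here 1x1) normal equations (J^T J) dp = J^T r.
    J, r = jacobian(p_gn), residual(p_gn)
    p_gn -= (J @ r) / (J @ J)
    # Steepest descent: step along -J^T r with a fixed learning rate.
    J, r = jacobian(p_sd), residual(p_sd)
    p_sd -= 0.02 * (J @ r)
```

After the same number of iterations the Gauss-Newton iterate sits at the solution to machine precision, while the steepest-descent iterate is still far from it, because the latter ignores the curvature information in J^T J.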

### TABLE 4 2D fused lasso applied to the toy problem. The table shows the number of CPU seconds required for the standard and pathwise coordinate descent algorithms, as n increases. The regularization parameters were set at the values that yielded the solution in the bottom left panel of Figure 9

2007

Cited by 2

### Table 12: The six gradient descent approximations that we considered: Gauss-Newton, Newton, steepest descent, diagonal Hessian (Gauss-Newton & Newton), and Levenberg-Marquardt. When combined with the inverse compositional algorithm the six alternatives are all efficient except Newton. When combined with the forwards compositional algorithm, only the steepest descent and the diagonal Hessian algorithms are efficient. Only Gauss-Newton and Levenberg-Marquardt converge well empirically. (Columns: Algorithm | Complexity w/ | Complexity w/ | Convergence | Convergence)

2004

"... In PAGE 47: ... approximation to the Newton Hessian, and (5) Levenberg-Marquardt. Table 12 contains a summary of the six gradient descent approximations we considered. We found that steepest descent and the diagonal approximations to the Hessian all perform very poorly, both in terms of the convergence rate and in terms of the frequency of convergence. ..."

Cited by 144

### Table 1: Run times (CPU seconds) for lasso problems of various sizes n, p and different correlation between the features. Methods are the coordinate-wise optimization (Fortran), LARS (R and Fortran versions) and lasso2 (C language), the homotopy procedure of Osborne et al. (2000).

2007

"... In PAGE 19: ... The signal-to-noise ratio is 3.0. The coefficients are constructed to have alternating signs and to be exponentially decreasing. Table 1 shows the average CPU timings for the coordinatewise algorithm, two versions of the LARS procedure and lasso2, an implementation of the homotopy algorithm of Osborne et al. (2000). ... In PAGE 21: ... Figure 8 shows the CPU times for coordinate descent, for the same problem as in Table 1. We varied n and p, and averaged the times over five runs. ..."

Cited by 2
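The coordinate-wise optimization being timed above cycles through the coefficients, soft-thresholding each one while holding the rest fixed. A minimal sketch of that update for the lasso objective ½‖y − Xβ‖² + λ‖β‖₁, with the residual maintained incrementally so each coordinate step is O(n) (the random problem data below are illustrative, not the paper's benchmark):

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.astype(float).copy()        # residual y - X @ beta, kept up to date
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * beta[j]    # remove coordinate j's contribution
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
            r -= X[:, j] * beta[j]    # restore it with the new value
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true                     # noiseless responses for illustration
beta_hat = lasso_cd(X, y, lam=0.1)
```

Each coordinate update has a closed form, which is what makes this competitive with LARS and the homotopy method in the table's timings.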

### TABLE I The general grouped coordinate descent algorithm. Note that the updates of x̂_j are done "in place."

1997

Cited by 17
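The "in place" remark above means each coordinate is overwritten as soon as it is updated, so later coordinates in the same sweep already see the new values (Gauss-Seidel order) rather than the values from the previous sweep. A minimal single-coordinate sketch for a quadratic objective J(x) = ½xᵀAx − bᵀx (the matrix and vector are illustrative; a grouped variant would update blocks of coordinates the same way):

```python
import numpy as np

def coordinate_descent(A, b, n_sweeps=100):
    """Minimize 0.5*x^T A x - b^T x for symmetric positive definite A.
    Each x[j] is overwritten in place, so the next coordinate update
    within the same sweep uses the freshly updated value."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_sweeps):
        for j in range(len(b)):
            # Exact minimization over x[j] with all other coordinates fixed.
            x[j] = (b[j] - A[j] @ x + A[j, j] * x[j]) / A[j, j]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_hat = coordinate_descent(A, b)
```

For a quadratic, the in-place sweep is exactly the Gauss-Seidel iteration for the linear system Ax = b, so x_hat converges to np.linalg.solve(A, b).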