### Table 1 Block coordinate descent algorithm

"... In PAGE 6: ... The resulting CG scheme is ensured to converge to the unique minimizer of J as a function of k, under constraint (4). See Table 1 for the detailed algorithm. ..."

### TABLE I The general grouped coordinate descent algorithm. Note that the updates of x̂_j are done "in place."

1997

Cited by 17

### TABLE I Algorithm outline for a paraboloidal surrogates algorithm with coordinate descent (PSCD). The curvature choice shown here is the maximum second derivative.

1999

Cited by 38

### TABLE 4 2D fused lasso applied to the toy problem. The table shows the number of CPU seconds required for the standard and pathwise coordinate descent algorithms, as n increases. The regularization parameters were set at the values that yielded the solution in the bottom left panel of Figure 9

2007

Cited by 2

### Table A.17: Descent test results. y is the y-coordinate value. b is the bank angle. clb represents climb rate and h represents heading. The subscripts i and f indicate initial and final values. NED means Normalised Euclidean Distance (see Section 1.2.2).

### Table A.18: (Continued) Descent test results. y is the y-coordinate value. b is the bank angle. clb represents climb rate and h represents heading. The subscripts i and f indicate initial and final values. NED means Normalised Euclidean Distance (see Section 1.2.2).

### Table 1: Coarse outline of PSCD algorithm.

1998

"... In PAGE 3: ... We call this method the Paraboloidal Surrogates Coordinate Descent (PSCD) method. A coarse outline of the algorithm is given in Table 1. For computational considerations and the detailed algorithm flow table, see [7]. ..."

Cited by 7

### Table 1: Run times (CPU seconds) for lasso problems of various sizes n, p and different correlation between the features. Methods are the coordinate-wise optimization (Fortran), LARS (R and Fortran versions) and lasso2 (C language), the homotopy procedure of Osborne et al. (2000).

2007

"... In PAGE 19: ... The signal-to-noise ratio is 3.0. The coefficients are constructed to have alternating signs and to be exponentially decreasing. Table 1 shows the average CPU timings for the coordinatewise algorithm, two versions of the LARS procedure and lasso2, an implementation of the homotopy algorithm of Osborne et al. (2000). ... In PAGE 21: ... COMPARISON OF RUN TIMES 21 Figure 8 shows the CPU times for coordinate descent, for the same problem as in Table 1. We varied n and p, and averaged the times over five runs. ..."

Cited by 2
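The coordinate-wise lasso optimization timed in this entry can be illustrated with a minimal sketch of cyclic coordinate descent with soft-thresholding. This is a generic illustration, not the paper's Fortran implementation; all function and variable names here are illustrative.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: S(z, gamma) = sign(z) * max(|z| - gamma, 0)
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1.

    Columns need not be exactly standardized: each update divides
    by the per-column scale X_j'X_j / n.
    """
    n, p = X.shape
    b = np.zeros(p)
    r = y - X @ b  # residual
    for _ in range(n_iter):
        for j in range(p):
            # Form the partial residual that excludes coordinate j
            r += X[:, j] * b[j]
            rho = X[:, j] @ r / n
            # Closed-form univariate lasso solution for coordinate j
            b[j] = soft_threshold(rho, lam) / (X[:, j] @ X[:, j] / n)
            r -= X[:, j] * b[j]
    return b
```

Each coordinate update is a closed-form soft-thresholding step, which is why the coordinate-wise approach can be so fast relative to LARS or homotopy methods on large problems.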

### Table 9 - Star-shape - overall RMSE for the GPOF method (used as initialization) and the Direct ML approach.

2004

"... In PAGE 24: ... Figure 4: The Star shape estimation improvement - the penalty function as a function of each vertex separately. Experiment 4: Table 9 summarizes the results of the average error obtained over 20 runs using the star-shape, applying the GPOF method for initialization, and applying 20 iterations of the coordinate descent algorithm. Each such iteration updates every vertex once, and so we have 200 overall updates. ..."

Cited by 7
