### Table 9: Active set algorithm: Boston housing data


"... In PAGE 30: ... This now contains a number of terms equal to the number of observations, so that it is distinctly more complex than in the lasso. The active set algorithm proves reasonably efficient on the Boston housing data set and results are summarized in Table 9. Here the data presented are the number of iterations to convergence (nits), the number of residuals in the ε-insensitive region (n0) and the number of residuals at the ε bound (ne) for a range of values of the tuning parameter and ε. ... ..."
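The snippet above reports counts of residuals strictly inside the ε-insensitive region (n0) and exactly at the ε bound (ne). A minimal sketch of how such counts could be tallied, assuming a list of residuals and a small numerical tolerance; the function name, data values, and `tol` parameter are illustrative, not taken from the paper:

```python
def classify_residuals(residuals, eps, tol=1e-9):
    """Count residuals strictly inside the eps-insensitive region (n0)
    and residuals sitting at the eps bound (ne), up to a numerical
    tolerance `tol`. Names and thresholds are illustrative only."""
    n0 = sum(1 for r in residuals if abs(r) < eps - tol)
    ne = sum(1 for r in residuals if abs(abs(r) - eps) <= tol)
    return n0, ne

# Two residuals fall inside the region, two sit exactly at the bound,
# and 0.8 lies outside both.
n0, ne = classify_residuals([0.1, -0.3, 0.5, -0.5, 0.8], eps=0.5)
print(n0, ne)  # 2 2
```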

### Table 7. Average computational time for 30-activity and 20-activity sets

"... In PAGE 22: ... Table 7 shows the average computational time, in seconds, spent by the column generation method, Lagrangean relaxation, scatter search and the cutting plane exact algorithm. The column generation method and Lagrangean relaxation were run by their authors on a Pentium II, 300 MHz, and 64 MB RAM (Drexl and Kimms, 2001). ... ..."

### Table 1. Solution quality and optimization time comparison for the code without active set and the parallel active set code.

### Table 3: Computational results with optimized active set code.

2000

"... In PAGE 13: ... which CPLEX [36] does not solve well. All problems are available from the authors. Several problems are so large that a code like CPLEX is not able to find a solution even to the LP-relaxation. In Table 3 we give a comparison of typical results between the old prob1 code which is in use at Carmen Systems, and the new paqs code without and with the active set strategy, on one processor of a Sun Ultra Enterprise 10000/249. The runtimes are given as user time, that is, the CPU time in seconds dedicated to the computing process. ... In PAGE 21: ... Table 6 shows some typical running time results for the global scan on a Sun Enterprise 10000/249, and Table 7 for a network of Sun Ultra-1/140 workstations connected by a shared Fast Ethernet. The problems selected are all the larger problems in Table 3. We see that the global scan can be parallelized with a speedup of up to three on four processors even on a network of workstations. ... ..."

Cited by 4

### Table 1: Performance of the ε-active set algorithm on QPECgen problems.

1999

"... In PAGE 11: ... The problems thus generated are special cases of (1) with f convex quadratic; H_i(x, y) = y_i for all i; q = 0; where x ∈ ℝ^{n−m} and y ∈ ℝ^m. The dimensions n, m, p for the generated problems are shown in Table 1. The values of second_deg and mono_M, which are QPECgen parameters that control the degree of degeneracy (failure of strict complementarity) and the monotonicity of [G_i(x, y)]_{i=1}^m in y, are also shown. ... In PAGE 11: ... The performance of the algorithm on the test problems is reported in Table 1. As can be seen from Table 1, the algorithm terminated in a finite number of iterations on each problem. Moreover, on all except problems 9 and 10, the final z is verified to be a B-stationary point, since the gradients of the active constraints were linearly independent. ... In PAGE 11: ... The reason for this is not well understood, but it does not appear to be related to degeneracy. For example, on the last problem of Table 1, the final solution has 3 degenerate indices, and yet the number of iterations is small relative to the problem size. The work at each iteration k varies, depending on the effort to solve the quadratic program (3) using the warm starting point z_k and the effort ... ..."

Cited by 9

### Table 3.3: Active set vs projected gradient algorithms

### Table 10: Active set algorithm: Iowa wheat data


"... In PAGE 30: ... The total work corresponds very roughly to O(10) solutions of the least squares problem for the corresponding design matrix. For comparison, the corresponding values for the Iowa wheat data are given in Table 10. Here the increase in computing cost for the housing data example suggests a stronger dependence on n than in the lasso computations. ... ..."

### Table 1. LP Test Problems (Final Active Set) The methods are compared against the Markowitz ordering for calculating sparse LU factors, which is widely regarded as the best practical method for limiting fill-in. As with the SRT and SPK1 methods, the Markowitz method has been coded without regard to possible numerical instability. It is assumed that the Markowitz method stores factors of A rather than A2, but takes account of the unit columns by not storing their unit elements. This would seem to be the most efficient way to proceed (just storing factors of A2 would require A to be accessed in the solves as in (4.2) and (4.3), and would increase the time required for a solve). The SRT and SPK1 methods are used in the context of implicit LU factors and so require to store the spikes of the factor IL2 corresponding to A2, and the diagonal matrix D2 which contains the diagonal of the implicit upper triangular matrix IL2A2 (assuming that any permutations have been subsumed into A). Thus the total storage requirement for the factors is the total spike length plus the length of D2, which is m2.

"... In PAGE 17: ...11) adapt readily to this form without needing to partition the non-unit columns of A explicitly. The dimensions of the test problems are given in Table 1, and the columns headed A1 + A2 and A2 give the numbers of non-zero elements in the corresponding parts of A. The final column gives the average number of elements per row in the matrix A2 and ... ..."
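The Markowitz ordering mentioned in the caption above selects each pivot to minimise the count (r_i − 1)(c_j − 1) over the nonzero entries, where r_i and c_j are the nonzero counts of row i and column j, as a cheap proxy for fill-in. A minimal sketch of that selection rule, deliberately omitting numerical-stability checks just as the text describes; the function name and the sparsity pattern are illustrative, not from the paper:

```python
def markowitz_pivot(nonzeros):
    """Return the position (i, j) with the smallest Markowitz count
    (r_i - 1) * (c_j - 1), where r_i and c_j are the nonzero counts
    of row i and column j. `nonzeros` is a set of (row, col) pairs.
    No stability test is applied, mirroring the comparison above."""
    row_counts, col_counts = {}, {}
    for i, j in nonzeros:
        row_counts[i] = row_counts.get(i, 0) + 1
        col_counts[j] = col_counts.get(j, 0) + 1
    return min(nonzeros,
               key=lambda ij: (row_counts[ij[0]] - 1) * (col_counts[ij[1]] - 1))

# Entry (3, 3) sits in a row and a column with only two nonzeros each,
# so it uniquely minimises the predicted fill-in.
pattern = {(0, 0), (0, 1), (0, 2), (0, 3),
           (1, 0), (1, 1), (1, 2),
           (2, 0), (2, 1),
           (3, 0), (3, 3)}
print(markowitz_pivot(pattern))  # (3, 3)
```

In a full factorization this selection would be repeated after each elimination step, with the counts updated as fill-in appears.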

### Table 5.2: List of activity sets and the activities they contain

2006