### Table 3: Primal Dual algorithm

"... In PAGE 9: ... The running time increases with the accuracy needed. The next Theorem states the running time and the correctness of the algorithm shown in Table3 . The proof is omitted here due to lack of space, but is similar to the one in [31].... In PAGE 9: ... Theorem 4. The algorithm in Table3 computes a (1 ) 3 optimal solution to the ow scaling problem in time polynomial in Q; L; n and 1 , where Q is the number of com- modities, L is the number of constraining sets, and n is the number of nodes. 6.... ..."

### TABLE I COMPARISON BETWEEN THE RESULT OF THE PROPOSED PRIMAL-DUAL ALGORITHM AND THE OPTIMAL SOLUTION

2006

Cited by 2

### Table 1: Results of the first experiment: total number of conjugate gradient iterations, number of outer iterations of the primal-dual algorithm, and average number of conjugate gradient iterations per step of the primal-dual algorithm. Each figure is the average of ten randomly generated problems.

1991

"... In PAGE 25: ...Table1 . Each entry is an average over ten instances of the problem.... ..."

Cited by 2

### Table IV gives the relative accuracy of the maximal delay for several benchmarks related to the primal-dual gap. Notice that for the formulations for minimal power the maximal delay increases and for the formulations for speed the maximal delay decreases as the primal-dual gap becomes smaller. Notice also that for a primal-dual gap of 1e-4 the maximal delays of all examples have reached their optimal value. TABLE IV Relative Accuracy of Maximal Delay Relating to the Primal-Dual Gap

### Table 1. Comparison of two implementations of the primal-dual Newton interior point method. If [the proximity measure] 0.1 then [the allowed number of corrections] = 7, otherwise = 40. Fixed values of q = 6, q1 = 3.

"... In PAGE 4: ...n [2]. The test problems are from the Netlib set [5] of linear programs. The test code is implemented in MATLAB The variable is a proximity measure of the interior point iterations to a solution of the linear programming problem and is the number of iterations (or corrections) allowed for the preconditioned conjugate gradient method. The percentages in Table1 are based on that approximately half of the direct solves are replaced by an iterative solution. The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time.... In PAGE 4: ... The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time. The results in Table1 show that the mixed primal-dual Newton (mixed PDN) interior-point method, which alternatively uses a direct (Cholesky factorization)... ..."

### Table 1: Comparison of two implementations of the primal-dual Newton interior point method. If [the proximity measure] 0.1 then [the allowed number of corrections] = 7, otherwise = 40. Fixed values of q = 6, q1 = 3.

1999

"... In PAGE 21: ...19 C(1) = D for i = 1,: : : , m do p(i) = C(i)V T i ^ Dii = Dii + Vi p(i) u(i) = 1 ^ Dii p(i) C(i+1) = C(i) ? u(i)p(i)T Table1 : Computing ^ D and u(i) Set W(1) = LV for i = 1; : : : ; m do for j = i + 1; : : : ; m do W(i+1) j = W(i) j ? LjiVi Rji = Lji + W(i+1) j u Table 2: Computing R to obtain the factorization of D + V DV T . Given V; D and D we can show that the recurrence relations for computing ^ Dii and u(i) are as given in Table 1.... In PAGE 21: ...19 C(1) = D for i = 1,: : : , m do p(i) = C(i)V T i ^ Dii = Dii + Vi p(i) u(i) = 1 ^ Dii p(i) C(i+1) = C(i) ? u(i)p(i)T Table 1: Computing ^ D and u(i) Set W(1) = LV for i = 1; : : : ; m do for j = i + 1; : : : ; m do W(i+1) j = W(i) j ? LjiVi Rji = Lji + W(i+1) j u Table 2: Computing R to obtain the factorization of D + V DV T . Given V; D and D we can show that the recurrence relations for computing ^ Dii and u(i) are as given in Table1 . We can compute R = L^ L in terms of the u(i), for i = 1,: : : , m by forward recurrence using Lemma V in [8].... In PAGE 21: ... This is re ected in the sparse algorithm by setting Rji = 0 whenever Lji = 0. For Vi = 0 we have from Table1 that u(i) = 0 and ^ Lri = 0 for r = i + 1; : : : ; m in (11). Thus Rri = Lri for r = i + 1; : : : ; m.... In PAGE 28: ... The test code is implemented in MATLAB The variable is a proximity measure of the interior point iterations to a solution of the linear programming problem and is the number of iterations (or corrections) allowed for the preconditioned conjugate gradient method. The percentages in Table1 are based on that approximately half of the direct solves are replaced by an iterative solution. The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time.... In PAGE 28: ... The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time. 
The results in Table1 show that the mixed primal-dual Newton (mixed PDN) interior-point method, which alternatively uses a direct (Cholesky factorization) method and a preconditioned (with the preconditioner described in Section 3) conjugate gradient method to solve (2), competes favourably with the primal-dual Newton (PDN) interior-point method on large-scale problems. The numerical results show that the mixed PDN method is promising and merits further study.... In PAGE 34: ...1 Dense algorithm De ne V = A; C = D for i = 1; : : : ; m do p = (Vi C)T Tii = Dii + Vi p u = (1=Tii) p C C ? u pT for j = i + 1; : : : ; m do Vj Vj ? LjiVi Rji = Lji + Vj u Algorithm C.2 Sparse algorithm De ne V = A; C = D; R = L; T = D for i = 1; : : : ; m do if Vi 6 = 0 then p = (Vi C)T Tii = Dii + Vi p u = (1=Tii) p C C ? u pT for j = i + 1; : : : ; m do if Lji 6 = 0 then Vj Vj ? LjiVi Rji = Lji + Vj u Table1 : Updating the triangular factors for matrices of the form: RTRT = LDLT + A D AT Algorithm D.1 PCGLS y0 is the initial starting vector r0 = h ? AT y0, v0 = AGr0, p0 = s0 = Mv0, 0 = sT 0 v0 k 0 While not converged do qk = AT pk k = k qT k Gqk yk+1 = yk + kpk rk+1 = rk ? kqk (1.... ..."
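The dense algorithm quoted in this excerpt (Algorithm C.1, updating the triangular factors after a rank-k modification) can be turned into runnable code. The sketch below, with helper names and a 3×3 test problem of my own choosing, numerically checks the defining identity R T R^T = L D L^T + A D̄ A^T; it is an illustration of the quoted recurrences, not the paper's code:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def update_ldl(L, D, A, Dbar):
    """Dense rank-k update: given unit lower triangular L and diagonal D
    (as a list), return unit lower triangular R and diagonal T with
    R diag(T) R^T = L diag(D) L^T + A diag(Dbar) A^T."""
    m, k = len(L), len(Dbar)
    V = [row[:] for row in A]                                   # V = A
    C = [[Dbar[s] if s == t else 0.0 for t in range(k)] for s in range(k)]
    R = [row[:] for row in L]
    T = D[:]
    for i in range(m):
        p = [sum(V[i][s] * C[s][t] for s in range(k)) for t in range(k)]  # p = (V_i C)^T
        T[i] = D[i] + dot(V[i], p)          # T_ii = D_ii + V_i p
        u = [pt / T[i] for pt in p]         # u = (1/T_ii) p
        for s in range(k):                  # C <- C - u p^T
            for t in range(k):
                C[s][t] -= u[s] * p[t]
        for j in range(i + 1, m):
            for t in range(k):              # V_j <- V_j - L_ji V_i
                V[j][t] -= L[j][i] * V[i][t]
            R[j][i] = L[j][i] + dot(V[j], u)
    return R, T

def form(X, d):
    """X diag(d) X^T as a dense list-of-lists matrix."""
    m = len(X)
    return [[sum(X[r][t] * d[t] * X[c][t] for t in range(len(d)))
             for c in range(m)] for r in range(m)]

# Made-up factors and a rank-2 modification (illustrative data only).
L = [[1.0, 0.0, 0.0], [0.5, 1.0, 0.0], [0.25, -0.5, 1.0]]
D = [4.0, 2.0, 1.0]
A = [[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]]
Dbar = [0.5, 1.0]

R, T = update_ldl(L, D, A, Dbar)
target = form(L, D)
update = form(A, Dbar)
result = form(R, T)
err = max(abs(target[r][c] + update[r][c] - result[r][c])
          for r in range(3) for c in range(3))
```

The loops touch only the sub-diagonal entries below each column, so the update costs O(m²k + mk²) rather than the O(m³) of a fresh factorization, which is what makes this attractive inside an interior-point iteration.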

Cited by 2

### Table 9: Comparison of the primal barrier method and the primal-dual method for lNa13

2000

"... In PAGE 21: ... Compared with the primal barrier method in [And96b], there is a signi cant reduction in the number of itera- tions and in CPU time. This is shown in Table9 . The primal-dual algorithm also obtains signi cantly more zero norms in the optimal solution.... In PAGE 22: ... As shown in Table 10, the number of zero norms varies from 62 to 96 percent of the total number of terms. Comparison with [And96b] con rms the observations from Table9 : for the primal-dual method the iteration count is signi cantly reduced and increases very slowly with the problem size. The CPU time is reduced by a factor 4 or more, and we are able to solve larger instances of the problem.... ..."

Cited by 10

### Table 3 Primal Dual

"... In PAGE 10: ... If the ith primal variable yi is urs, the ith dual constraint is an equality constraint. Table3 gives a more complete relationship between nonnormal primal and dual problems. Table 3 Primal Dual... ..."

### Table 5: The number of function evaluations for the primal-dual version of NITRO versus LANCELOT, on problems from the Hock and Schittkowski collection. An asterisk indicates that the convergence test was not satisfied after 10,000 iterations. In problem HS75, LANCELOT stopped but was not able to satisfy the termination test on the projected gradient.

1999

"... In PAGE 23: ...results are given in Table5 , and include all the problems that we tested. Since these problems contain a very small number of variables, we do not report CPU time.... ..."

Cited by 61