### Table 3: Primal Dual algorithm

"... In PAGE 9: ... The running time increases with the accuracy needed. The next Theorem states the running time and the correctness of the algorithm shown in Table3 . The proof is omitted here due to lack of space, but is similar to the one in [31].... In PAGE 9: ... Theorem 4. The algorithm in Table3 computes a (1 ) 3 optimal solution to the ow scaling problem in time polynomial in Q; L; n and 1 , where Q is the number of com- modities, L is the number of constraining sets, and n is the number of nodes. 6.... ..."

### Table 1: Comparison of two implementations of the primal-dual Newton interior-point method. Depending on whether [the proximity measure] is above or below 0.1, [the allowed number of corrections] is 7 or 40. Fixed values of q = 6, q1 = 3.

1999

"... In PAGE 21: ...19 C(1) = D for i = 1,: : : , m do p(i) = C(i)V T i ^ Dii = Dii + Vi p(i) u(i) = 1 ^ Dii p(i) C(i+1) = C(i) ? u(i)p(i)T Table1 : Computing ^ D and u(i) Set W(1) = LV for i = 1; : : : ; m do for j = i + 1; : : : ; m do W(i+1) j = W(i) j ? LjiVi Rji = Lji + W(i+1) j u Table 2: Computing R to obtain the factorization of D + V DV T . Given V; D and D we can show that the recurrence relations for computing ^ Dii and u(i) are as given in Table 1.... In PAGE 21: ...19 C(1) = D for i = 1,: : : , m do p(i) = C(i)V T i ^ Dii = Dii + Vi p(i) u(i) = 1 ^ Dii p(i) C(i+1) = C(i) ? u(i)p(i)T Table 1: Computing ^ D and u(i) Set W(1) = LV for i = 1; : : : ; m do for j = i + 1; : : : ; m do W(i+1) j = W(i) j ? LjiVi Rji = Lji + W(i+1) j u Table 2: Computing R to obtain the factorization of D + V DV T . Given V; D and D we can show that the recurrence relations for computing ^ Dii and u(i) are as given in Table1 . We can compute R = L^ L in terms of the u(i), for i = 1,: : : , m by forward recurrence using Lemma V in [8].... In PAGE 21: ... This is re ected in the sparse algorithm by setting Rji = 0 whenever Lji = 0. For Vi = 0 we have from Table1 that u(i) = 0 and ^ Lri = 0 for r = i + 1; : : : ; m in (11). Thus Rri = Lri for r = i + 1; : : : ; m.... In PAGE 28: ... The test code is implemented in MATLAB The variable is a proximity measure of the interior point iterations to a solution of the linear programming problem and is the number of iterations (or corrections) allowed for the preconditioned conjugate gradient method. The percentages in Table1 are based on that approximately half of the direct solves are replaced by an iterative solution. The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time.... In PAGE 28: ... The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time. 
The results in Table1 show that the mixed primal-dual Newton (mixed PDN) interior-point method, which alternatively uses a direct (Cholesky factorization) method and a preconditioned (with the preconditioner described in Section 3) conjugate gradient method to solve (2), competes favourably with the primal-dual Newton (PDN) interior-point method on large-scale problems. The numerical results show that the mixed PDN method is promising and merits further study.... In PAGE 34: ...1 Dense algorithm De ne V = A; C = D for i = 1; : : : ; m do p = (Vi C)T Tii = Dii + Vi p u = (1=Tii) p C C ? u pT for j = i + 1; : : : ; m do Vj Vj ? LjiVi Rji = Lji + Vj u Algorithm C.2 Sparse algorithm De ne V = A; C = D; R = L; T = D for i = 1; : : : ; m do if Vi 6 = 0 then p = (Vi C)T Tii = Dii + Vi p u = (1=Tii) p C C ? u pT for j = i + 1; : : : ; m do if Lji 6 = 0 then Vj Vj ? LjiVi Rji = Lji + Vj u Table1 : Updating the triangular factors for matrices of the form: RTRT = LDLT + A D AT Algorithm D.1 PCGLS y0 is the initial starting vector r0 = h ? AT y0, v0 = AGr0, p0 = s0 = Mv0, 0 = sT 0 v0 k 0 While not converged do qk = AT pk k = k qT k Gqk yk+1 = yk + kpk rk+1 = rk ? kqk (1.... ..."
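The dense recurrence excerpted from PAGE 34 (Algorithm C.1) can be checked numerically. The sketch below is a direct transcription into NumPy, assuming L is unit lower triangular and D, D̄ are the diagonals of the two factors; the name `update_ldlt` is chosen here and is not from the paper.

```python
import numpy as np

def update_ldlt(L, D, A, Dbar):
    """Dense rank-k update sketch (Algorithm C.1 style): returns unit lower
    triangular R and diagonal T with
        R @ diag(T) @ R.T == L @ diag(D) @ L.T + A @ diag(Dbar) @ A.T."""
    m, k = A.shape
    V = A.astype(float).copy()
    C = np.diag(Dbar).astype(float)   # k x k; stays symmetric throughout
    R = L.astype(float).copy()        # diagonal of L (all ones) is never touched
    T = np.array(D, dtype=float)
    for i in range(m):
        p = C @ V[i]                  # p = (V_i C)^T, using symmetry of C
        T[i] = D[i] + V[i] @ p
        u = p / T[i]
        C -= np.outer(u, p)
        for j in range(i + 1, m):
            V[j] -= L[j, i] * V[i]
            R[j, i] = L[j, i] + V[j] @ u
    return R, T

# quick check on a random well-conditioned instance
rng = np.random.default_rng(0)
m, k = 5, 2
L = np.tril(rng.standard_normal((m, m)), -1) + np.eye(m)
D = rng.uniform(1.0, 2.0, m)
A = rng.standard_normal((m, k))
Dbar = rng.uniform(1.0, 2.0, k)
R, T = update_ldlt(L, D, A, Dbar)
lhs = R @ np.diag(T) @ R.T
rhs = L @ np.diag(D) @ L.T + A @ np.diag(Dbar) @ A.T
print(np.allclose(lhs, rhs))  # True
```

Since D > 0 and D̄ > 0 make both terms positive (semi)definite, every pivot T_ii stays positive and the recurrence never divides by zero on this instance.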

Cited by 2

### Table 3 Primal Dual

"... In PAGE 10: ... If the ith primal variable yi is urs, the ith dual constraint is an equality constraint. Table3 gives a more complete relationship between nonnormal primal and dual problems. Table 3 Primal Dual... ..."

### Table 1. Comparison of two implementations of the primal-dual Newton interior-point method. Depending on whether [the proximity measure] is above or below 0.1, [the allowed number of corrections] is 7 or 40. Fixed values of q = 6, q1 = 3.

"... In PAGE 4: ...n [2]. The test problems are from the Netlib set [5] of linear programs. The test code is implemented in MATLAB The variable is a proximity measure of the interior point iterations to a solution of the linear programming problem and is the number of iterations (or corrections) allowed for the preconditioned conjugate gradient method. The percentages in Table1 are based on that approximately half of the direct solves are replaced by an iterative solution. The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time.... In PAGE 4: ... The gain is therefor approximately the ratio of the di erence between the two methods and half of the total time. The results in Table1 show that the mixed primal-dual Newton (mixed PDN) interior-point method, which alternatively uses a direct (Cholesky factorization)... ..."

### Table 1: Relationships Between Primal and Dual Problems (page 241 in [3])

"... In PAGE 3: ...2 Rules for Formulating the Dual In the dual of a linear program, there is one dual variable corresponding to one primal constraint, and one dual constraint for each primal variable. Table1 gives the rules in formulating the dual of a primal LP. Connection with Lagrange dual function (in x5.... ..."

### Table 2. Overall efficiency of the primal-dual cutting-plane method.

1996

"... In PAGE 16: ...Table2... In PAGE 16: ... All cuts mentioned in the second column of the table are appended to the restricted master problem but some of them are eliminated in later iterations because they are inactive #28see #5B12#5D for details#29. The remaining columns of Table2 report the number of iterations of the cutting-plane method #28Outer#29 and the number of interior-point iterations #28Inner#29. In the latter wehave distinguished the iterations needed to reach the approximate analytic center #28to be saved for the future warm start#29, AC, the iterations to reach the desired accuracy of solution to the restricted master problem, Opt, and their sum, respectively.... In PAGE 16: ... In the latter wehave distinguished the iterations needed to reach the approximate analytic center #28to be saved for the future warm start#29, AC, the iterations to reach the desired accuracy of solution to the restricted master problem, Opt, and their sum, respectively. From the results collected in Table2 one can see that we really deal with nontrivial warm- start examples. The sizes of the restricted master problems always reach tens of thousand columns with, on the average, thousands of new cuts to be accommodated at every reoptimiza- tion.... In PAGE 16: ... The sizes of the restricted master problems always reach tens of thousand columns with, on the average, thousands of new cuts to be accommodated at every reoptimiza- tion. The results collected in Table2 con#0Crm the overall good performance of the primal-dual analytic center cutting-plane method, but they do not givemuch insightinto the behavior of the warm-start procedure proposed in this paper. Such insight is given by the results reported in Table 3.... ..."