## On the solution of equality constrained quadratic programming problems arising . . . (1998)

Citations: 43 (2 self)

### BibTeX

@MISC{Gould98onthe,

author = {Nicholas I. M. Gould and Mary E. Hribar and Jorge Nocedal},

title = {On the solution of equality constrained quadratic programming problems arising . . . },

year = {1998}

}

### Citations

2196 |
Numerical optimization
- Nocedal, Wright
- 1999
Citation Context: ...the nonzero singular values of A. 2. The CG method and linear constraints. A common approach for solving linearly constrained problems is to eliminate the constraints and solve a reduced problem (cf. [20, 38]). More specifically, suppose that Z is an n × (n − m) matrix spanning the null space of A. Then AZ = 0, the columns of A^T together with the columns of Z span R^n, and any solution x of the linear equati... |
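The reduced-space idea quoted in this excerpt is easy to exercise numerically. The sketch below builds a null-space basis Z with an SVD, which is one of several valid choices (the paper itself discusses alternatives such as LUSOL bases), and checks the defining property AZ = 0. The function name is illustrative, not from the paper.

```python
import numpy as np

def nullspace_basis(A, tol=1e-12):
    """Return Z, an n x (n - m) matrix whose columns span null(A), so that
    A @ Z = 0. An SVD-based basis is used here purely for illustration."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s[0]))
    return Vt[rank:].T   # right singular vectors beyond the rank span null(A)

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5))        # m = 2 constraints, n = 5 variables
Z = nullspace_basis(A)
assert Z.shape == (5, 3)               # n - m columns
assert np.allclose(A @ Z, 0.0, atol=1e-10)
```

Any x satisfying Ax = b can then be written as x = x_p + Z x_Z for a particular solution x_p, which is exactly the elimination the excerpt describes.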

1544 |
Practical optimization
- Gill, Murray, et al.
- 1981
Citation Context: ...the nonzero singular values of A. 2. The CG method and linear constraints. A common approach for solving linearly constrained problems is to eliminate the constraints and solve a reduced problem (cf. [20, 38]). More specifically, suppose that Z is an n × (n − m) matrix spanning the null space of A. Then AZ = 0, the columns of A^T together with the columns of Z span R^n, and any solution x of the linear equati... |

1222 | Practical Methods of Optimization - Fletcher - 1987 |

950 | Accuracy and Stability of Numerical Algorithms - Higham - 2002 |

779 |
Methods of conjugate gradients for solving linear systems
- Hestenes, Stiefel
- 1952
Citation Context: ...Energy grant DE-FG02-87ER25047-A004. 1. Introduction. A variety of algorithms for linearly and nonlinearly constrained optimization (e.g., [8, 13, 14, 35, 36]) use the conjugate gradient (CG) method [28] to solve subproblems of the form: minimize_x q(x) = (1/2) x^T H x + c^T x (1.1), subject to Ax = b (1.2). In nonlinear optimization, the n-vector c usually represents the gradient ∇f of the objective funct... |
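Subproblem (1.1)–(1.2) can be solved, as the excerpt says, by eliminating the constraints and running CG on the reduced system. A minimal NumPy/SciPy sketch, assuming a positive definite H for simplicity (the function name is ours):

```python
import numpy as np
from scipy.linalg import null_space
from scipy.sparse.linalg import cg, LinearOperator

def reduced_cg_qp(H, c, A, b):
    """Solve min 0.5 x^T H x + c^T x  s.t.  A x = b by elimination:
    write x = x_p + Z x_Z with A x_p = b and A Z = 0, then run CG on
    the reduced system (Z^T H Z) x_Z = -Z^T (H x_p + c)."""
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # any particular solution
    Z = null_space(A)                            # orthonormal basis, A Z = 0
    k = Z.shape[1]
    ZHZ = LinearOperator((k, k), matvec=lambda v: Z.T @ (H @ (Z @ v)))
    x_Z, info = cg(ZHZ, -Z.T @ (H @ x_p + c), maxiter=1000)
    assert info == 0                             # CG converged
    return x_p + Z @ x_Z

rng = np.random.default_rng(1)
n, m = 6, 2
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)     # positive definite, so plain CG applies
c = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = reduced_cg_qp(H, c, A, b)
assert np.allclose(A @ x, b, atol=1e-8)   # feasible to rounding error
```

Note that feasibility holds regardless of how accurately CG solves the reduced system, since every step stays in x_p + range(Z).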

666 |
Numerical Methods for Least Squares Problems
- Björck
- 1996
Citation Context: ...since choosing a good value of this parameter can be difficult, we consider here only (4.5). In the case which concerns us most, when ||g+|| converges to zero while ||v+|| is bounded, an error analysis [4] shows that ||g+ − g+_c|| / ||g+|| ≲ ε_M (1 + κ(A)) ||v+|| / ||g+||. It is interesting to compare this bound with (4.3). We see that the ratio (4.4) again plays a crucial role in the analysis, and that the au... |

573 | Direct Methods for Sparse Matrices - Duff, Erisman, et al. - 1989 |

542 |
Iterative Solution Methods
- Axelsson
- 1994
Citation Context: ...[20]). Let us now consider the practical application of the CG method to the reduced system (2.4). It is well known that preconditioning can improve the rate of convergence of the CG iteration (cf. [2]), and we therefore assume that a preconditioner W_ZZ is given. W_ZZ is a symmetric, positive definite matrix of dimension n − m, which might be chosen to reduce the span of, and to cluster, the eigenv... |
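The preconditioning remark can be illustrated with SciPy's CG. Below, a diagonal (Jacobi) stand-in for W_ZZ is applied to a synthetic, ill-conditioned stand-in for the reduced Hessian; only the qualitative effect, fewer CG iterations once the spectrum is clustered, is the point.

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

class Counter:
    """Counts CG iterations via the callback hook."""
    def __init__(self):
        self.n = 0
    def __call__(self, xk):
        self.n += 1

# Synthetic stand-in for the reduced Hessian Z^T H Z: ill-conditioned,
# but strongly diagonal, so a Jacobi W_ZZ clusters its eigenvalues.
rng = np.random.default_rng(2)
k = 100
R = 0.01 * rng.standard_normal((k, k))
Hzz = np.diag(np.logspace(0, 3, k)) + (R + R.T)
rhs = rng.standard_normal(k)

plain, prec = Counter(), Counter()
x_plain, info_plain = cg(Hzz, rhs, maxiter=5000, callback=plain)

# Preconditioned run: CG needs the action of W_ZZ^{-1} on a vector.
d = np.diag(Hzz)
Winv = LinearOperator((k, k), matvec=lambda v: v / d)
x_prec, info_prec = cg(Hzz, rhs, M=Winv, maxiter=5000, callback=prec)

assert info_plain == 0 and info_prec == 0
assert prec.n <= plain.n   # the preconditioner should not slow CG down here
```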

384 | SNOPT: An SQP algorithm for large-scale constrained optimization - Gill, Murray, et al. |

237 | The multifrontal solution of indefinite sparse symmetric linear systems - Duff, Reid - 1983 |

140 |
The conjugate gradient method and trust regions in large scale optimization
- Steihaug
- 1983
Citation Context: ...eeded in trust region methods, but our discussion will also be valid in that context because trust region methods normally terminate the CG iteration as soon as negative curvature is encountered (see [42, 44], and, by contrast, [24]). The quadratic program (1.1)–(1.2) can be solved by computing a basis Z for the null space of A, using this basis to eliminate the constraints, and then applying the CG metho... |

133 |
Some stable methods for calculating inertia and solving symmetric linear systems
- Bunch, Kaufman
- 1977
Citation Context: ...hich the preconditioned residual g+ = Pr+ is computed by solving the augmented system [ I A^T ; A 0 ] [ g+ ; v+ ] = [ r+ ; 0 ] (4.5) using a direct method. There are a number of such methods, the strategies of Bunch and Kaufman [6] and Duff and Reid [16] being the best known examples for dense and sparse matrices, respectively. Both form the LDL^T factorization of the augmented matrix (i.e. the matrix appearing on the left hand s... |
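System (4.5) encodes the projection g+ = Pr+ onto null(A). The sketch below assembles the augmented matrix densely and uses a generic solver in place of the Bunch-Kaufman or Duff-Reid LDL^T factorizations the excerpt names, then checks g+ against the explicit orthogonal projector; the function name is ours.

```python
import numpy as np

def project_via_augmented_system(A, r):
    """Compute g = P r (projection of r onto null(A)) by solving
        [ I   A^T ] [ g ]   [ r ]
        [ A    0  ] [ v ] = [ 0 ],
    i.e. system (4.5). A dense solver stands in for the LDL^T
    factorizations (Bunch-Kaufman, Duff-Reid) named in the excerpt."""
    m, n = A.shape
    K = np.block([[np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([r, np.zeros(m)]))
    return sol[:n], sol[n:]   # projected residual g and multipliers v

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 5))
r = rng.standard_normal(5)
g, v = project_via_augmented_system(A, r)

# Check against the explicit projector P = I - A^T (A A^T)^{-1} A.
P = np.eye(5) - A.T @ np.linalg.solve(A @ A.T, A)
assert np.allclose(g, P @ r, atol=1e-10)
assert np.allclose(A @ g, 0.0, atol=1e-10)   # g lies in null(A)
```

Eliminating v from the first block row recovers exactly v = (AA^T)^{-1}Ar and g = r − A^T v = Pr, which is why the augmented solve and the explicit projector agree.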

132 |
CUTE: Constrained and unconstrained testing environment
- Toint
- 1995
Citation Context: ...will choose G = I, which, as we have mentioned, arises in trust region optimization methods without preconditioning. Example 2. We applied Algorithm II to solve problem CVXEQP3 from the CUTE collection [5], with n = 1000 and m = 750. We used both the normal equations (3.5)–(3.6) and augmented system (3.8) approaches to compute the projection, and define G = I. The results are given in Figure 1, which pl... |

86 |
Solving sparse linear systems with sparse backward error
- Arioli, Demmel, et al.
- 1989
Citation Context: ...nt manifold if and only if the cosine (3.15) is zero, and thus it is reasonable to ask that the cosine for a computed approximation to g should be small. The general analysis of Arioli, Demmel and Duff [1] indicates that, with care, it is possible to ensure that the backward error max_i |a_i^T g+| / (|A| |g+|)_i is small (this definition needs to be modified if |A||g+| is (close to) zero; see [1] for details)... |

81 | Constraint preconditioning for indefinite linear systems - Keller, Gould, et al. |

78 | An interior point algorithm for large scale nonlinear programming
- Byrd, Hribar, et al.
Citation Context: ...l Science Foundation grant CDA-9726385 and by Department of Energy grant DE-FG02-87ER25047-A004. 1. Introduction. A variety of algorithms for linearly and nonlinearly constrained optimization (e.g., [8, 13, 14, 35, 36]) use the conjugate gradient (CG) method [28] to solve subproblems of the form: minimize_x q(x) = (1/2) x^T H x + c^T x (1.1), subject to Ax = b (1.2). In nonlinear optimization, the n-vector c usually repr... |

54 | Towards an efficient sparsity exploiting Newton method for minimization - Toint - 1981 |

51 | Trust-Region Interior-Point Algorithms for Minimization Problems with Simple Bounds
- Dennis, Vicente
- 1996
Citation Context: ...of the form −μ Σ_{i=1..n} (log(x_i − l_i) + log(u_i − x_i)) to the objective function, for some positive barrier parameter μ. The choice G = I arises in several trust region methods for constrained optimization [8, 14, 15, 27, 35, 39, 46]. These methods include a trust region constraint of the form ||Z x_Z|| ≤ Δ in the subproblem (2.3). In order to transform it into a spherical constraint, we introduce the change of variables x_Z ← (Z^T Z)^{−1/2}... |

48 | Indefinitely preconditioned inexact Newton method for large sparse equality constrained nonlinear programming problems. Numerical Linear Algebra with Applications - Lukšan, Vlček - 1998 |

47 | A global convergence theory for general trust-region-based algorithms for equality constrained optimization
- Dennis, El-Alem, et al.
- 1997
Citation Context: ...l Science Foundation grant CDA-9726385 and by Department of Energy grant DE-FG02-87ER25047-A004. 1. Introduction. A variety of algorithms for linearly and nonlinearly constrained optimization (e.g., [8, 13, 14, 35, 36]) use the conjugate gradient (CG) method [28] to solve subproblems of the form: minimize_x q(x) = (1/2) x^T H x + c^T x (1.1), subject to Ax = b (1.2). In nonlinear optimization, the n-vector c usually repr... |

42 | On the implementation of an algorithm for large-scale equality constrained optimization
- Lalee, Nocedal, et al.
- 1998
Citation Context: ...l Science Foundation grant CDA-9726385 and by Department of Energy grant DE-FG02-87ER25047-A004. 1. Introduction. A variety of algorithms for linearly and nonlinearly constrained optimization (e.g., [8, 13, 14, 35, 36]) use the conjugate gradient (CG) method [28] to solve subproblems of the form: minimize_x q(x) = (1/2) x^T H x + c^T x (1.1), subject to Ax = b (1.2). In nonlinear optimization, the n-vector c usually repr... |

42 |
The conjugate gradient method in extremal problems
- Polyak
- 1969
Citation Context: ...= H, but other choices for G are also possible; all that is required is that z^T G z > 0 for all nonzero z for which Az = 0. The idea of using the projection (3.3) in the CG method dates back to at least [41]; the alternative (3.11), and its special case (3.8), are proposed in [9], although [9] unnecessarily requires that G be positive definite. A more recent study on preconditioning the projected CG metho... |

38 |
Maintaining LU factors of a general sparse matrix. Linear Algebra and its Applications
- Gill, Murray, et al.
- 1987
Citation Context: ...ond reason for not wanting to compute Z is that it sometimes gives rise to unnecessary ill-conditioning [10, 11, 18, 26, 40, 43]. Although the carefully constructed null-space basis provided by LUSOL [19] is largely successful in avoiding this potential defect [21], it requires two LU factorizations to compute Z. We thus contend that it can be very useful for general-purpose optimization codes to pro... |

37 | Trust-Region Interior-Point SQP Algorithms for a Class of Nonlinear Programming Problems
- Dennis, Heinkenschloss, et al.
- 1998
Citation Context: ...of the form −μ Σ_{i=1..n} (log(x_i − l_i) + log(u_i − x_i)) to the objective function, for some positive barrier parameter μ. The choice G = I arises in several trust region methods for constrained optimization [8, 14, 15, 27, 35, 39, 46]. These methods include a trust region constraint of the form ||Z x_Z|| ≤ Δ in the subproblem (2.3). In order to transform it into a spherical constraint, we introduce the change of variables x_Z ← (Z^T Z)^{−1/2}... |

34 |
Solving the trust-region subproblem using the Lanczos method
- Toint
- 1999
Citation Context: ...s, but our discussion will also be valid in that context because trust region methods normally terminate the CG iteration as soon as negative curvature is encountered (see [42, 44], and, by contrast, [24]). The quadratic program (1.1)–(1.2) can be solved by computing a basis Z for the null space of A, using this basis to eliminate the constraints, and then applying the CG method to the reduced problem... |

26 | The design of MA48: a code for the direct solution of sparse unsymmetric linear systems of equations - Duff, Reid - 1996 |

23 |
A primal-dual trust-region algorithm for non-convex nonlinear programming
- Toint

20 |
The multifrontal solution of indefinite sparse symmetric linear systems
- Duff, Reid
- 1983
Citation Context: ...ned residual g+ = Pr+ is computed by solving the augmented system [ I A^T ; A 0 ] [ g+ ; v+ ] = [ r+ ; 0 ] (4.5) using a direct method. There are a number of such methods, the strategies of Bunch and Kaufman [6] and Duff and Reid [16] being the best known examples for dense and sparse matrices, respectively. Both form the LDL^T factorization of the augmented matrix (i.e. the matrix appearing on the left hand side of (4.5)), where ... |

16 | A preconditioned conjugate gradient approach to linear equality constrained minimization
- Coleman, Verma
- 1998
Citation Context: ... (2.9); β+ = (r_Z+)^T g_Z+ / (r_Z^T g_Z) (2.10); p_Z+ ← −g_Z+ + β+ p_Z (2.11); g_Z ← g_Z+ and r_Z ← r_Z+ (2.12). This iteration may be terminated, for example, when r_Z^T (Z^T G Z)^{−1} r_Z is sufficiently small. Coleman and Verma [12] and Nash and Sofer [37] have proposed strategies for defining the preconditioner Z^T G Z which make use of products involving the null-space basis Z and its transpose. Once an approximate solution is o... |
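The recurrences (2.10)–(2.12) quoted here are those of a projected CG iteration. Below is a dense-algebra reconstruction for the case G = I, with each residual projected onto null(A) through an augmented system of the form (4.5); this is a sketch of the idea under those assumptions, not the paper's own code.

```python
import numpy as np

def projected_cg(H, A, b, c, tol=1e-12, maxiter=200):
    """Projected CG for min 0.5 x^T H x + c^T x  s.t.  A x = b, with G = I.
    Each residual r is replaced by its projection g = P r onto null(A),
    obtained by solving an augmented system like (4.5)."""
    m, n = A.shape
    K = np.block([[np.eye(n), A.T], [A, np.zeros((m, m))]])
    def project(r):
        return np.linalg.solve(K, np.concatenate([r, np.zeros(m)]))[:n]
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # a feasible starting point
    r = H @ x + c
    g = project(r)
    p = -g
    for _ in range(maxiter):
        rg = r @ g
        if rg < tol:
            break
        Hp = H @ p
        alpha = rg / (p @ Hp)
        x = x + alpha * p                      # steps stay in null(A): A p = 0
        r_new = r + alpha * Hp
        g_new = project(r_new)
        beta = (r_new @ g_new) / rg            # (2.10)
        p = -g_new + beta * p                  # (2.11)
        r, g = r_new, g_new                    # (2.12)
    return x

rng = np.random.default_rng(4)
n, m = 8, 3
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)                    # positive definite Hessian
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)
x = projected_cg(H, A, b, c)
grad = H @ x + c
P = np.eye(n) - A.T @ np.linalg.solve(A @ A.T, A)
assert np.allclose(A @ x, b, atol=1e-8)        # feasibility is preserved
assert np.linalg.norm(P @ grad) < 1e-4         # reduced gradient (nearly) zero
```

Because every search direction lies in null(A), the iterates never leave the constraint manifold, which is the property the surrounding excerpts emphasize.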

16 | QR Factorization of large sparse overdetermined and square matrices using a multifrontal method in a multiprocessor environment - Puglisi - 1993 |

14 |
The null space problem II: algorithms
- Coleman, Pothen
- 1987
Citation Context: ...near system of equations and significantly reduce the cost of the optimization iteration. The second reason for not wanting to compute Z is that it sometimes gives rise to unnecessary ill-conditioning [10, 11, 18, 26, 40, 43]. Although the carefully constructed null-space basis provided by LUSOL [19] is largely successful in avoiding this potential defect [21], it requires two LU factorizations to compute Z. We thus cont... |

14 |
Sparse orthogonal schemes for structural optimization using the force method
- Heath, Plemmons, et al.
- 1984
Citation Context: ...near system of equations and significantly reduce the cost of the optimization iteration. The second reason for not wanting to compute Z is that it sometimes gives rise to unnecessary ill-conditioning [10, 11, 18, 26, 40, 43]. Although the carefully constructed null-space basis provided by LUSOL [19] is largely successful in avoiding this potential defect [21], it requires two LU factorizations to compute Z. We thus cont... |

13 |
The null space problem I: complexity
- Coleman, Pothen
- 1986
Citation Context: ...near system of equations and significantly reduce the cost of the optimization iteration. The second reason for not wanting to compute Z is that it sometimes gives rise to unnecessary ill-conditioning [10, 11, 18, 26, 40, 43]. Although the carefully constructed null-space basis provided by LUSOL [19] is largely successful in avoiding this potential defect [21], it requires two LU factorizations to compute Z. We thus cont... |

12 | Linearly constrained optimization and projected preconditioned conjugate gradients - Coleman - 1994 |

11 |
private communication
- Gill, Saunders
- 1999
Citation Context: ...ives rise to unnecessary ill-conditioning [10, 11, 18, 26, 40, 43]. Although the carefully constructed null-space basis provided by LUSOL [19] is largely successful in avoiding this potential defect [21], it requires two LU factorizations to compute Z. We thus contend that it can be very useful for general-purpose optimization codes to provide the option of not computing with a null-space basis, and ... |

11 | Analysis of inexact trust-region interior-point SQP algorithms
- Heinkenschloss, Vicente
- 1995
Citation Context: ...of the form −μ Σ_{i=1..n} (log(x_i − l_i) + log(u_i − x_i)) to the objective function, for some positive barrier parameter μ. The choice G = I arises in several trust region methods for constrained optimization [8, 14, 15, 27, 35, 39, 46]. These methods include a trust region constraint of the form ||Z x_Z|| ≤ Δ in the subproblem (2.3). In order to transform it into a spherical constraint, we introduce the change of variables x_Z ← (Z^T Z)^{−1/2}... |

11 |
A trust region method for nonlinear programming based on primal interior-point techniques
- Plantenga
- 1998

10 | Preconditioning of reduced matrices
- Nash, Sofer
- 1993
Citation Context: ...β+ = (r_Z+)^T g_Z+ / (r_Z^T g_Z) (2.10); p_Z+ ← −g_Z+ + β+ p_Z (2.11); g_Z ← g_Z+ and r_Z ← r_Z+ (2.12). This iteration may be terminated, for example, when r_Z^T (Z^T G Z)^{−1} r_Z is sufficiently small. Coleman and Verma [12] and Nash and Sofer [37] have proposed strategies for defining the preconditioner Z^T G Z which make use of products involving the null-space basis Z and its transpose. Once an approximate solution is obtained using Algorithm ... |

10 | Multifrontal computation with the orthogonal factors of sparse matrices - Lu, Barlow - 1996 |

9 | Iterative refinement for linear systems and LAPACK - Higham - 1997 |

6 | Iterative methods for ill-conditioned linear systems from optimization, in Nonlinear Optimization and Related Topics
- Gould
- 1999
Citation Context: ...ited form of iterative refinement in which the computed v+, but not the computed g+ (which is discarded), is used to refine the solution. This "iterative semi-refinement" has been used in other contexts [7, 23]. For the problem given in Example 1, the resulting g+ gives a cosine of 9.6E−21. There is another interesting interpretation of the reset r ← r − A^T y performed at the start of Algorithm III. In the parlan... |

6 |
Nested dissection for sparse nullspace bases
- Stern, Vavasis
- 1993

6 |
On large-scale nonlinear network optimization
- Toint, Tuyttens
- 1990
Citation Context: ...this basis to eliminate the constraints, and then applying the CG method to the reduced problem. This approach has been successfully implemented in various algorithms for large scale optimization (cf. [17, 32, 45]). In this paper we study how to apply the preconditioned CG method to (1.1)–(1.2) without computing a null-space basis Z. There are two reasons for this. Several optimization algorithms require the so... |

5 | Large-scale constrained optimization - Hribar - 1996 |

4 |
Pivoting and stability in augmented systems
- Björck
- 1992
Citation Context: ...lower triangular and D is block diagonal with 1×1 or 2×2 blocks. This approach is usually (but not always) more stable than the normal equations approach. To improve the stability of the method, Björck [3] suggests replacing the upper-left block of (4.5) by a multiple of the identity, but since choosing a good value of this parameter can be difficult, we consider here only (4.5). In the case which conc... |

4 |
Implicit nullspace iterative methods for constrained least squares problems
- James
- 1992
Citation Context: ...this basis to eliminate the constraints, and then applying the CG method to the reduced problem. This approach has been successfully implemented in various algorithms for large scale optimization (cf. [17, 32, 45]). In this paper we study how to apply the preconditioned CG method to (1.1)–(1.2) without computing a null-space basis Z. There are two reasons for this. Several optimization algorithms require the so... |

4 |
Substructuring methods for computing the nullspace of equilibrium matrices
- Plemmons, White
- 1990

4 |
Towards an efficient sparsity exploiting Newton method for minimization
- Toint
- 1981
Citation Context: ...eeded in trust region methods, but our discussion will also be valid in that context because trust region methods normally terminate the CG iteration as soon as negative curvature is encountered (see [42, 44], and, by contrast, [24]). The quadratic program (1.1)–(1.2) can be solved by computing a basis Z for the null space of A, using this basis to eliminate the constraints, and then applying the CG metho... |

3 |
Computing a sparse basis for the null-space
- Gilbert, Heath
- 1987

2 |
Second-order multiplier update calculations for optimal control problems and related large scale nonlinear programs
- Dunn
- 1993
Citation Context: ...this basis to eliminate the constraints, and then applying the CG method to the reduced problem. This approach has been successfully implemented in various algorithms for large scale optimization (cf. [17, 32, 45]). In this paper we study how to apply the preconditioned CG method to (1.1)–(1.2) without computing a null-space basis Z. There are two reasons for this. Several optimization algorithms require the so... |