SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
"... Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first deriv ..."
Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss
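SNOPT itself is a commercial code, but the problem class the abstract describes (smooth objective, general linear and nonlinear inequality constraints, first derivatives available) can be illustrated with SciPy's SLSQP solver, a small open-source SQP implementation. The objective and constraints below are illustrative example data, not from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Smooth objective with general linear inequality constraints and bounds,
# solved by an SQP-type method (SLSQP, not SNOPT). SciPy's "ineq"
# convention is fun(x) >= 0.
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
constraints = [
    {"type": "ineq", "fun": lambda x:  x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
]
res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=constraints)
# The minimizer lies on the first constraint boundary, at x = (1.4, 1.7).
```

Each SQP iteration builds a quadratic model of the Lagrangian subject to linearized constraints; the snippet only exercises the solver interface, not its internals.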
The Ellipsoid Method: A Survey
Operations Research
, 1981
"... ... method for linear programming can be implemented in polynomial time. This result has caused great excitement and stimulated a flood of technical papers. Ordinarily there would be no need for a survey of work so recent, but the current circumstances are obviously exceptional. Word of Khachiyan&ap ..."
Cited by 93 (2 self)
... method for linear programming can be implemented in polynomial time. This result has caused great excitement and stimulated a flood of technical papers. Ordinarily there would be no need for a survey of work so recent, but the current circumstances are obviously exceptional. Word of Khachiyan's result has spread extraordinarily fast, much faster than comprehension of its significance. A variety of issues have, in general, not been well understood, including the exact character of the ellipsoid method and of Khachiyan's result on polynomiality, its practical significance in linear programming, its implementation, its potential applicability to problems outside of the domain of linear programming, and its relationship to earlier work. Our aim is to help clarify these important issues in the context of a survey of the ellipsoid method, its historical antecedents, recent developments, and current research.
Sparse Matrix Methods in Optimization
, 1984
"... Optimization algorithms typically require the solution of many systems of linear equations Bkyk b,. When large numbers of variables or constraints are present, these linear systems could account for much of the total computation time. Both direct and iterative equation solvers are needed in practi ..."
Cited by 19 (4 self)
Optimization algorithms typically require the solution of many systems of linear equations B_k y_k = b_k. When large numbers of variables or constraints are present, these linear systems could account for much of the total computation time. Both direct and iterative equation solvers are needed in practice. Unfortunately, most of the off-the-shelf solvers are designed for single systems, whereas optimization problems give rise to hundreds or thousands of systems. To avoid refactorization, or to speed the convergence of an iterative method, it is essential to note that B_k is related to B_{k-1}. We review various sparse matrices that arise in optimization, and discuss compromises that are currently being made in dealing with them. Since significant advances continue to be made with single-system solvers, we give special attention to methods that allow such solvers to be used repeatedly on a sequence of modified systems (e.g., the product-form update; use of the Schur complement). The speed of factorizing a matrix then becomes relatively less important than the efficiency of subsequent solves with very many right-hand sides. At the same time, we hope that future improvements to linear-equation software will be oriented more specifically to the case of related matrices B_k.
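The update idea the abstract alludes to, reusing one factorization across a sequence of modified systems instead of refactorizing, can be sketched with a rank-one update via the Sherman-Morrison formula. This is a minimal illustration of the general principle, not the paper's specific product-form or Schur-complement machinery; function and variable names are ours.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def sherman_morrison_solve(lu_and_piv, u, v, rhs):
    """Solve (B + u v^T) x = rhs, reusing an existing LU factorization of B.

    Instead of refactorizing the modified matrix, two extra triangular
    solves with the old factors suffice:
        x = B^{-1} rhs - B^{-1} u * (v^T B^{-1} rhs) / (1 + v^T B^{-1} u)
    """
    y = lu_solve(lu_and_piv, rhs)  # B^{-1} rhs
    w = lu_solve(lu_and_piv, u)    # B^{-1} u
    return y - w * (v @ y) / (1.0 + v @ w)

rng = np.random.default_rng(1)
n = 6
B = rng.random((n, n)) + n * np.eye(n)  # well-conditioned example matrix
factors = lu_factor(B)                  # factorize once
u, v, rhs = rng.random(n), rng.random(n), rng.random(n)
x = sherman_morrison_solve(factors, u, v, rhs)
```

In simplex-type methods the modification is a column replacement (a special rank-one change of the basis matrix), so each pivot costs only a few extra solves with the existing factors.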
GAMS/MINOS: A Solver for Large-Scale Nonlinear Optimization Problems
GAMS Development Corporation
, 2002
"... ..."
Symbiosis between Linear Algebra and Optimization
, 1999
"... The efficiency and effectiveness of most optimization algorithms hinges on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. ..."
Cited by 2 (0 self)
The efficiency and effectiveness of most optimization algorithms hinge on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay will highlight contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a symbiotic relationship.
The Simplex Method is Not Always Well Behaved
"... This paper deals with the roundingerror analysis of the simplex method for solving linearprogramming problems. We prove that in general any simplextype algorithm is not well behaved, which means that the computed solution cannot be considered as an exact solution to a slightly perturbed problem. W ..."
Cited by 1 (0 self)
This paper deals with the rounding-error analysis of the simplex method for solving linear-programming problems. We prove that in general any simplex-type algorithm is not well behaved, which means that the computed solution cannot be considered as an exact solution to a slightly perturbed problem. We also point out that simplex algorithms with well-behaved updating techniques (such as the Bartels-Golub algorithm) are numerically stable whenever proper tolerances are introduced into the optimality criteria. This means that the error in the computed solution is of a similar order to the sensitivity of the optimal solution to slight data perturbations.
Commentary on Selected Papers by Gene Golub on Matrix Factorizations and Applications
, 2006
"... One of the fundamental tenets of numerical linear algebra is to exploit matrix factorizations. Doing so has numerous benefits, ranging from allowing clearer analysis and deeper understanding to simplifying the efficient implementation of algorithms. Textbooks in numerical analysis and matrix analysi ..."
One of the fundamental tenets of numerical linear algebra is to exploit matrix factorizations. Doing so has numerous benefits, ranging from allowing clearer analysis and deeper understanding to simplifying the efficient implementation of algorithms. Textbooks in numerical analysis and matrix analysis nowadays maximize the use of matrix factorizations, but this was not so in the first half of the 20th century. Golub has done as much as anyone to promulgate the benefits of matrix factorization, particularly the QR factorization and the singular value decomposition, and especially through his book Matrix Computations with Van Loan [28]. The five papers in this section illustrate several different facets of the matrix factorization paradigm.

On direct methods for solving Poisson's equations, by Buzbee, Golub, and Nielson [9]

Cyclic reduction is a recurring topic in numerical analysis. In the context of solving a tridiagonal linear system of order 2^n − 1, the idea is to eliminate the odd-numbered unknowns, thus halving the size of the system, and to continue this procedure recursively until a single equation remains. One unknown can now be solved for and the rest are
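The odd-even elimination described above can be sketched in a few lines. The following is an illustrative NumPy implementation for a tridiagonal system of order 2^k − 1, written for clarity rather than performance; it is our sketch of the idea, not code from the Buzbee-Golub-Nielson paper.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system of order n = 2**k - 1 by cyclic reduction.

    Row i reads: a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],
    with a[0] and c[-1] ignored (no neighbors outside the system).
    """
    a, b, c, d = (np.array(v, dtype=float) for v in (a, b, c, d))
    n = len(b)
    a[0] = 0.0
    c[-1] = 0.0
    # Forward phase: at stride s, eliminate each kept row's neighbors at
    # distance s, halving the number of coupled equations per level.
    stride = 1
    while 2 * stride <= n:
        for i in range(2 * stride - 1, n - stride, 2 * stride):
            alpha = a[i] / b[i - stride]
            beta = c[i] / b[i + stride]
            b[i] -= alpha * c[i - stride] + beta * a[i + stride]
            d[i] -= alpha * d[i - stride] + beta * d[i + stride]
            a[i] = -alpha * a[i - stride]  # now couples to x[i - 2*stride]
            c[i] = -beta * c[i + stride]   # now couples to x[i + 2*stride]
        stride *= 2
    # Back-substitution: solve the single remaining (middle) equation,
    # then recover the eliminated unknowns at decreasing strides.
    x = np.zeros(n)
    while stride >= 1:
        for i in range(stride - 1, n, 2 * stride):
            xl = x[i - stride] if i - stride >= 0 else 0.0
            xr = x[i + stride] if i + stride < n else 0.0
            x[i] = (d[i] - a[i] * xl - c[i] * xr) / b[i]
        stride //= 2
    return x
```

Each level touches half as many equations as the previous one, and the independent eliminations within a level are what make the method attractive for parallel and vector machines.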