Results 1–10 of 10
LOQO: An interior point code for quadratic programming
, 1994
Abstract
Cited by 156 (9 self)
This paper describes a software package, called LOQO, which implements a primal-dual interior-point method for general nonlinear programming. We focus in this paper mainly on the algorithm as it applies to linear and quadratic programming, with only brief mention of the extensions to convex and general nonlinear programming, since a detailed paper describing these extensions was published recently elsewhere. In particular, we emphasize the importance of establishing and maintaining symmetric quasidefiniteness of the reduced KKT system. We show that problems in the industry-standard MPS format can be formulated in such a way as to provide quasidefiniteness. Computational results are included for a variety of linear and quadratic programming problems.
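The quasidefiniteness property emphasized above can be sketched as follows (notation assumed here, not taken verbatim from the paper). The reduced KKT system solved at each interior-point iteration has the symmetric form

\[
K = \begin{pmatrix} -(H + D) & A^{\top} \\ A & E \end{pmatrix},
\]

where \(H\) is the Hessian of the Lagrangian, \(A\) the constraint Jacobian, and \(D\), \(E\) positive diagonal matrices arising from the barrier and slack terms. When \(H + D\) is positive definite and \(E\) is positive definite, \(K\) is symmetric quasidefinite: its upper-left block is negative definite and its lower-right block positive definite, which guarantees that an \(LDL^{\top}\) factorization exists for every symmetric permutation of \(K\). For linear programs \(H = 0\), so positivity of \(D\) and \(E\) must come from the problem formulation, which is the role the MPS-format discussion above plays.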
An Interior-Point Algorithm for Nonconvex Nonlinear Programming
 Computational Optimization and Applications
, 1997
Abstract
Cited by 144 (13 self)
The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction, which together ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient and has the promise of greatly reducing solution times on at least some classes of models.
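A merit function of the kind referred to above can be sketched generically (the paper's exact choice may differ): for the problem \(\min f(x)\) subject to \(h(x) = 0\), a standard form is

\[
\phi_\beta(x) = f(x) + \beta \, \lVert h(x) \rVert ,
\]

with penalty parameter \(\beta > 0\). The "altered search direction" then refers to modifying the interior-point step \(\Delta x\) so that the directional derivative \(D\phi_\beta(x;\Delta x)\) is negative, guaranteeing that a line search on \(\phi_\beta\) can enforce progress toward points that are both nearly feasible and nearly optimal.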
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
 Computational Optimization and Applications
, 2002
Abstract
Cited by 84 (7 self)
In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb's strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to have all of the following properties simultaneously while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of dual iterates; (2) local convergence does not require the Linear Independence Constraint Qualification; (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions; and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
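The ℓ1 relaxation described above can be sketched generically (symbols assumed, not taken from the paper): a problem \(\min f(x)\) subject to \(g(x) \ge 0\) is replaced by the penalty problem

\[
\min_{x,\,\xi} \; f(x) + \beta\, e^{\top} \xi
\quad \text{s.t.} \quad g(x) + \xi \ge 0, \;\; \xi \ge 0,
\]

where \(e\) is the all-ones vector and \(\beta > 0\) the penalty parameter. Strictly feasible points always exist for the relaxed problem (take \(\xi\) large enough), which supplies the regularization, and the multipliers of the relaxed constraints satisfy \(0 \le \lambda \le \beta e\), which is how the penalty bounds the dual iterates without a boundedness assumption.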
Formulating and Solving Nonlinear Programs as Mixed Complementarity Problems
 Optimization. Lecture Notes in Economics and Mathematical Systems
, 2000
Abstract
Cited by 3 (0 self)
We consider a primal-dual approach to solving nonlinear programming problems within the AMPL modeling language, via a mixed complementarity formulation. The modeling language supplies the first-order and second-order derivative information of the Lagrangian function of the nonlinear problem using automatic differentiation. The PATH solver finds the solution of the first-order conditions, which are generated automatically from this derivative information. In addition, the link incorporates the objective function into a new merit function for the PATH solver, improving the capability of the complementarity algorithm to find optimal solutions of the nonlinear program. We test the new solver on various test suites from the literature and compare it with other available nonlinear programming solvers. Keywords: complementarity problems, nonlinear programs, automatic differentiation, modeling languages.
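The mixed complementarity formulation mentioned above amounts, generically, to posing the first-order (KKT) conditions as a complementarity system (a sketch; the actual AMPL/PATH link also handles variable bounds and equality constraints): for \(\min f(x)\) subject to \(g(x) \ge 0\),

\[
\nabla f(x) - \nabla g(x)^{\top} \lambda = 0,
\qquad 0 \le \lambda \perp g(x) \ge 0,
\]

where \(\perp\) means \(\lambda^{\top} g(x) = 0\). PATH solves systems of exactly this form, with the derivatives \(\nabla f\), \(\nabla g\), and the Hessian of the Lagrangian supplied by the modeling language's automatic differentiation.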
Symbiosis between Linear Algebra and Optimization
, 1999
Abstract
Cited by 2 (0 self)
The efficiency and effectiveness of most optimization algorithms hinge on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay highlights contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a symbiotic relationship.

1 Introduction. The work in any continuous optimization algorithm partitions neatly into two pieces: the work in acquiring information through evaluation of the function and perhaps its derivatives, and the overhead involved in generating points approximating an optimal point. More often than not, this second part of the work is dominated by linear algebra, usually in the form of the solution of a linear system or least-squares problem and the updating of matrix information. Thus, members of the optim...
Smooth Exact Penalty and Barrier Functions for Nonsmooth Optimization
Abstract
For constrained nonsmooth optimization problems, continuously differentiable penalty functions and barrier functions are given. They are proved exact in the sense that, under some nondegeneracy assumption, local optimizers of a nonlinear program are also optimizers of the associated penalty or barrier function. This is achieved by augmenting the dimension of the program with a variable that controls the regularization of the nonsmooth terms and the weight of the penalty or barrier terms.
Design Issues in Algorithms for Large Scale Nonlinear Programming
, 1999
Abstract
Design Issues in Algorithms for Large Scale Nonlinear Programming. Guanghui Liu. Ph.D. Supervisor: Jorge Nocedal. This dissertation studies a wide range of issues in the design of interior-point methods for large-scale nonlinear programming. Strategies for computing high-quality steps and for rapidly decreasing barrier parameters are introduced. Preconditioning and scaling techniques are developed. To extend the range of applicability of interior methods, feasible variants are proposed, quasi-Newton updating methods are introduced, and strategies for the nonmonotone decrease of the merit function are explored. A product of this dissertation is a software package, called NITRO, that implements an interior-point method using the new algorithmic features presented in this dissertation. The performance of this code is assessed by comparing it with established software for large-scale optimization (SNOPT, filterSQP, and LOQO).
Interior-Point Methods for Nonlinear, Second-Order Cone, and . . .
, 2001
Abstract
Interior-point methods have been a re-emerging field in optimization since the mid-1980s. We present here ways of improving the performance of these algorithms for nonlinear optimization and of extending them to different classes of problems and application areas. At each iteration, an interior-point algorithm computes a direction in which to proceed, and then must decide how long a step to take. The traditional approach to choosing a step length is to use a merit function, which balances the goals of improving the objective function and satisfying the constraints. Recently, Fletcher and Leyffer reported success with a filter method, where improvement in either the objective function or the constraint infeasibility is sufficient. We have combined these two approaches and applied them to interior-point methods for the first time, with good results. Another issue in nonlinear optimization is the emergence of several popular problem classes and their specialized solution algorithms. Two such classes are Second-Order Cone Programming (SOCP) and Semidefinite Programming (SDP). In the second part of this dissertation, we show that problems from both of these classes can be reformulated as smooth convex optimization problems and solved using a general-purpose interior-point algorithm for nonlinear optimization.
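The filter idea mentioned above can be sketched in a few lines (a generic sketch, not Fletcher and Leyffer's exact rule or the dissertation's implementation): each iterate is summarized by a pair (objective value f, constraint infeasibility h), and a trial point is accepted if it is not dominated by any pair already stored in the filter.

```python
def acceptable(filter_pairs, f_new, h_new, gamma=1e-5):
    """Accept the trial pair (f_new, h_new) if, against EVERY stored pair,
    it improves either the objective f or the infeasibility h by a small
    margin gamma (margin choice is illustrative)."""
    return all(f_new <= f - gamma * h or h_new <= (1 - gamma) * h
               for f, h in filter_pairs)

def update_filter(filter_pairs, f_new, h_new):
    """Add an accepted pair and drop any stored pairs it dominates
    (dominated = worse or equal in both f and h)."""
    kept = [(f, h) for f, h in filter_pairs
            if not (f_new <= f and h_new <= h)]
    kept.append((f_new, h_new))
    return kept

# Example: a point that improves the objective is acceptable even if it
# is less feasible than the stored entry.
filt = [(10.0, 1.0)]
print(acceptable(filt, 5.0, 2.0))   # improves f -> accepted
print(acceptable(filt, 10.5, 1.2))  # worse in both -> rejected
```

This contrasts with a merit function, which forces a single weighted combination of f and h to decrease; the filter accepts progress in either measure.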
Switching Stepsize Strategies for SQP
, 2010
Abstract
An SQP algorithm is presented for solving constrained nonlinear programming problems. The algorithm uses three step-size strategies in order to achieve global and superlinear convergence. Switching rules are implemented that combine the merits and avoid the drawbacks of the three strategies. A penalty parameter is determined using an adaptive strategy that aims to achieve sufficient decrease of the activated merit function. Global convergence is established, and it is also shown that, locally, unit step sizes are accepted, so superlinear convergence is not impeded under standard assumptions. Global convergence and convergence of the step sizes are demonstrated on test problems from the Hock and Schittkowski collection.
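The "sufficient decrease of the activated merit function" enforced above can be illustrated with a minimal backtracking sketch (all function names and the ℓ1 merit choice are illustrative assumptions, not the paper's actual strategy): the step along the SQP direction is halved until the merit function decreases.

```python
import numpy as np

def merit(f, c, x, beta):
    """l1 exact merit function phi(x) = f(x) + beta * ||c(x)||_1 for
    min f(x) s.t. c(x) = 0 (a generic choice of merit function)."""
    return f(x) + beta * np.sum(np.abs(c(x)))

def backtrack(f, c, x, dx, beta, t=1.0, rho=0.5, max_iter=30):
    """Halve the trial step t until the merit function decreases.
    A real SQP code would use an Armijo-type sufficient-decrease test;
    plain decrease keeps the sketch short."""
    phi0 = merit(f, c, x, beta)
    for _ in range(max_iter):
        if merit(f, c, x + t * dx, beta) < phi0:
            return t
        t *= rho
    return t

# Example: min x^2 s.t. x = 1, starting at x = 2 with direction dx = -1.
f = lambda x: float(x[0] ** 2)
c = lambda x: np.array([x[0] - 1.0])
t = backtrack(f, c, np.array([2.0]), np.array([-1.0]), beta=1.0)
print(t)  # the full step already decreases the merit function
```

The superlinear-convergence claim above corresponds to this loop accepting t = 1 near the solution, so the Newton-like SQP step is never truncated.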