Results 1–10 of 110
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
Abstract
Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss …
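The core SQP step the abstract refers to can be sketched in a few lines: each iteration solves a KKT linear system for the quadratic programming subproblem. The problem, starting point, and exact-Hessian shortcut below are illustrative assumptions, not SNOPT's actual sparse, limited-memory machinery.

```python
import numpy as np

# Illustrative equality-constrained problem (assumed, not from the paper):
#   minimize x1^2 + x2^2   subject to   x1 + x2 = 1
def f_grad(x):  return 2.0 * x                        # objective gradient
def c(x):       return np.array([x[0] + x[1] - 1.0])  # constraint residual
def c_jac(x):   return np.array([[1.0, 1.0]])         # constraint Jacobian

x = np.array([2.0, -3.0])      # arbitrary starting point
B = 2.0 * np.eye(2)            # Hessian of the Lagrangian (exact here)
for _ in range(10):
    g, A, cv = f_grad(x), c_jac(x), c(x)
    # KKT system of the QP subproblem: [B Aᵀ; A 0][d; λ] = [-g; -c]
    K = np.block([[B, A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.concatenate([-g, -cv]))
    d = sol[:2]
    x = x + d
    if np.linalg.norm(d) < 1e-10:
        break
# x converges to the solution (0.5, 0.5)
```

For this quadratic problem a single step reaches the solution; nonlinear problems repeat the loop with updated gradients and Jacobians.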
On the Implementation of an Interior-Point Filter Line-Search Algorithm for Large-Scale Nonlinear Programming
, 2004
Abstract
Cited by 294 (6 self)
We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
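One ingredient named in the abstract, inertia correction of the KKT matrix, can be illustrated with a common heuristic: shift the Hessian block by δI, increasing δ until a Cholesky factorization succeeds. This is a generic sketch of the idea with assumed names and parameters; IPOPT's actual rule, which tracks the inertia of the full KKT matrix and adapts δ between iterations, is more refined.

```python
import numpy as np

# Hedged sketch: make an indefinite Hessian block positive definite by
# adding delta * I, growing delta geometrically until Cholesky succeeds.
def make_positive_definite(H, delta0=1e-4, growth=10.0, max_tries=60):
    delta = 0.0
    for _ in range(max_tries):
        try:
            np.linalg.cholesky(H + delta * np.eye(H.shape[0]))
            return H + delta * np.eye(H.shape[0]), delta
        except np.linalg.LinAlgError:       # not positive definite yet
            delta = delta0 if delta == 0.0 else delta * growth
    raise RuntimeError("could not correct inertia")

H = np.array([[1.0, 0.0], [0.0, -2.0]])     # indefinite: eigenvalues 1, -2
H_corr, delta = make_positive_definite(H)   # first successful shift is 10.0
```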
Nonlinear Programming without a penalty function
 Mathematical Programming
, 2000
Abstract
Cited by 252 (32 self)
In this paper the solution of nonlinear programming problems by a Sequential Quadratic Programming (SQP) trust-region algorithm is considered. The aim of the present work is to promote global convergence without the need to use a penalty function. Instead, a new concept of a "filter" is introduced which allows a step to be accepted if it reduces either the objective function or the constraint violation function. Numerical tests on a wide range of test problems are very encouraging and the new algorithm compares favourably with LANCELOT and an implementation of Sℓ₁QP.
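The filter acceptance rule described here is easy to state in code: keep a set of (objective, constraint-violation) pairs and accept a trial step only if no stored pair dominates it. The sketch below omits the small margins and envelope terms real filter methods add; function names are illustrative.

```python
# A trial point (f, h) is acceptable if no stored pair is at least as good
# in both the objective f and the constraint violation h.
def acceptable(filter_pairs, f, h):
    return all(not (fi <= f and hi <= h) for (fi, hi) in filter_pairs)

def add_to_filter(filter_pairs, f, h):
    # drop entries the new pair dominates, then insert it
    kept = [(fi, hi) for (fi, hi) in filter_pairs if not (f <= fi and h <= hi)]
    kept.append((f, h))
    return kept

filt = []
filt = add_to_filter(filt, 10.0, 2.0)
# better objective but worse violation: not dominated, so acceptable
assert acceptable(filt, 9.0, 3.0)
# worse in both measures: dominated, so rejected
assert not acceptable(filt, 11.0, 2.5)
```

This "accept if either measure improves" logic is exactly what lets the method dispense with a penalty parameter.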
An Interior-Point Algorithm for Nonconvex Nonlinear Programming
 Computational Optimization and Applications
, 1997
Abstract
Cited by 199 (14 self)
The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient, and has the promise of greatly reducing solution times on at least some classes of models.
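The interior-point idea behind such methods can be seen on a one-variable example: replace an inequality x ≥ 0 by a logarithmic barrier term and follow the barrier minimizers as μ → 0. The specific problem below is an assumed illustration, not the paper's algorithm.

```python
import math

# Barrier illustration (assumed problem): minimize (x+1)^2 subject to x >= 0
# via the barrier function (x+1)^2 - mu*log(x). Setting its derivative to
# zero gives 2x^2 + 2x - mu = 0, solvable in closed form for this 1-D case.
def barrier_minimizer(mu):
    return (-1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

# As mu decreases, the interior iterates approach the constrained
# minimizer x* = 0 from the feasible side.
path = [barrier_minimizer(mu) for mu in (1.0, 0.1, 0.01, 0.001)]
```

The constrained minimizer sits on the boundary, while the unconstrained one (x = −1) is infeasible; the barrier path connects them from the interior.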
Efficient Synthesis of Physically Valid Human Motion
, 2003
Abstract
Cited by 117 (3 self)
Optimization is a promising way to generate new animations from a minimal amount of input data. Physically based optimization techniques, however, are difficult to scale to complex animated characters, in part because evaluating and differentiating physical quantities becomes prohibitively slow. Traditional approaches often require optimizing or constraining parameters involving joint torques; obtaining first derivatives for these parameters is generally an O(D²) process, where D is the number of degrees of freedom of the character. In this paper, we describe a set of objective functions and constraints that lead to linear time analytical first derivatives. The surprising finding is that this set includes constraints on physical validity, such as ground contact constraints. Considering only constraints and objective functions that lead to linear time first derivatives results in fast per-iteration computation times and an optimization problem that appears to scale well to more complex characters. We show that qualities such as squash-and-stretch that are expected from physically based optimization result from our approach. Our animation system is particularly useful for synthesizing highly dynamic motions, and we show examples of swinging and leaping motions for characters having from 7 to 22 degrees of freedom.
A New Trust Region Algorithm for Equality Constrained Optimization
, 1995
Abstract
Cited by 72 (7 self)
We present a new trust region algorithm for solving nonlinear equality constrained optimization problems. At each iterate a change of variables is performed to improve the ability of the algorithm to follow the constraint level sets. The algorithm employs L₂ penalty functions for obtaining global convergence. Under certain assumptions we prove that this algorithm globally converges to a point satisfying the second order necessary optimality conditions; the local convergence rate is quadratic. Results of preliminary numerical experiments are presented.

1. Introduction. We consider the equality constrained optimization problem

    minimize f(x) subject to c(x) = 0        (1.1)

where x ∈ ℝⁿ, f : ℝⁿ → ℝ, and c : ℝⁿ → ℝᵐ are smooth nonlinear functions. Problem (1.1) is often solved by successive quadratic programming (SQP) methods. At a current point xₖ ∈ ℝⁿ, SQP methods determine a search direction dₖ by solving a quadratic programming problem

    minimize ∇f(xₖ)ᵀd + ½ …
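Trust region algorithms of this kind share a standard acceptance and radius-update step: compare the actual reduction in the merit function with the reduction predicted by the model, then shrink or expand the radius accordingly. The thresholds below are conventional illustrative values, not the paper's specific rules.

```python
# Generic trust-region update sketch (illustrative thresholds):
# rho = actual reduction / predicted reduction decides acceptance and radius.
def update_radius(radius, actual_red, predicted_red,
                  eta_low=0.25, eta_high=0.75, shrink=0.5, expand=2.0):
    rho = actual_red / predicted_red
    if rho < eta_low:
        return radius * shrink, False   # poor model fit: reject step, shrink
    if rho > eta_high:
        return radius * expand, True    # very good fit: accept, expand
    return radius, True                 # acceptable fit: keep the radius

assert update_radius(1.0, 0.9, 1.0) == (2.0, True)
assert update_radius(1.0, 0.1, 1.0) == (0.5, False)
```

In the algorithm above the merit function whose reduction is measured is the L₂ penalty function.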
A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems
 SIAM JOURNAL ON SCIENTIFIC COMPUTING
, 1999
Abstract
Cited by 66 (2 self)
A subspace adaptation of the Coleman-Li trust region and interior method is proposed for solving large-scale bound-constrained minimization problems. This method can be implemented with either sparse Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergence properties of this subspace trust region method are as strong as those of its full-space version. Computational …
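The conjugate gradient option mentioned in the abstract refers to solving the symmetric positive definite model system H d = −g without factoring H, which is what makes matrix-free large-scale implementations possible. A minimal unpreconditioned CG sketch (toy data assumed):

```python
import numpy as np

# Conjugate gradient for H x = b with H symmetric positive definite.
def conjugate_gradient(H, b, tol=1e-10, max_iter=100):
    x = np.zeros_like(b)
    r = b - H @ x              # initial residual (x = 0, so r = b)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hp = H @ p
        alpha = rs / (p @ Hp)
        x = x + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < tol ** 2:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD example
g = np.array([-1.0, -2.0])
d = conjugate_gradient(H, -g)            # search direction solving H d = -g
```

In the large-scale setting H is never stored; only matrix-vector products H @ p are needed, which is the point of the CG alternative to sparse Cholesky.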
L-BFGS-B: Fortran Subroutines for Large-Scale Bound-Constrained Optimization
, 1994
Abstract
Cited by 58 (3 self)
L-BFGS-B is a limited memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. It is intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems. L-BFGS-B can also be used for unconstrained problems, and in this case performs similarly to its predecessor, algorithm L-BFGS (Harwell routine VA15). The algorithm is implemented in Fortran 77.
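The limited-memory part of L-BFGS(-B) is the classic two-loop recursion, which applies the inverse-Hessian approximation built from the m most recent (s, y) correction pairs without ever forming a matrix. A minimal numpy sketch of that recursion (bound handling and the Fortran interface omitted; names are illustrative):

```python
import numpy as np

# Two-loop recursion: given the gradient and stored pairs s_i = x_{i+1}-x_i,
# y_i = g_{i+1}-g_i, compute H_approx^{-1} @ grad in O(m*n) work.
def two_loop(grad, s_list, y_list):
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:                     # standard initial scaling gamma = s'y/y'y
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q += (a - b) * s
    return q   # the quasi-Newton search direction is then -q

# with no stored pairs the recursion degenerates to steepest descent
assert np.allclose(two_loop(np.array([3.0, 1.0]), [], []), [3.0, 1.0])
```

With exact curvature pairs from a quadratic, the recursion reproduces the true inverse-Hessian action on the subspace the pairs span.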
NITSOL: A Newton Iterative Solver for Nonlinear Systems
, 1998
Abstract
Cited by 52 (8 self)
We introduce a well-developed Newton iterative (truncated Newton) algorithm for solving large-scale nonlinear systems. The framework is an inexact Newton method globalized by backtracking. Trial steps are obtained using one of several Krylov subspace methods. The algorithm is implemented in a Fortran solver called NITSOL that is robust yet easy to use and provides a number of useful options and features. The structure offers the user great flexibility in addressing problem specificity through preconditioning and other means and allows easy adaptation to parallel environments. Features and capabilities are illustrated in numerical experiments.
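The framework described, inexact Newton globalized by backtracking, reduces to a short loop: compute a Newton step for F(x) = 0, then halve the step until the residual norm decreases sufficiently. The toy 2×2 system and direct linear solve below stand in for NITSOL's Krylov solvers; everything here is an illustrative assumption.

```python
import numpy as np

# Assumed test system: F(x) = 0 with solution x = (sqrt(2), sqrt(2)).
def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])

def J(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])

x = np.array([3.0, 1.0])
for _ in range(50):
    r = F(x)
    if np.linalg.norm(r) < 1e-12:
        break
    step = np.linalg.solve(J(x), -r)     # a Krylov method would go here
    t = 1.0
    # backtrack: halve t until ||F|| shows sufficient decrease
    while np.linalg.norm(F(x + t * step)) > (1 - 1e-4 * t) * np.linalg.norm(r):
        t *= 0.5
        if t < 1e-8:
            break
    x = x + t * step
# x approaches (sqrt(2), sqrt(2))
```

Replacing the direct solve with GMRES or another Krylov method, stopped at a loose tolerance, gives the "inexact" variant the abstract describes.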
Integrating SQP and Branch-and-Bound for Mixed Integer Nonlinear Programming
 Computational Optimization and Applications
, 1998
Abstract
Cited by 45 (1 self)
This paper considers the solution of Mixed Integer Nonlinear Programming (MINLP) problems. Classical methods for the solution of MINLP problems decompose the problem by separating the nonlinear part from the integer part. This approach is largely due to the existence of packaged software for solving Nonlinear Programming (NLP) and Mixed Integer Linear Programming problems. In contrast, an integrated approach to solving MINLP problems is considered here. This new algorithm is based on branch-and-bound, but does not require the NLP problem at each node to be solved to optimality. Instead, branching is allowed after each iteration of the NLP solver. In this way, the nonlinear part of the MINLP problem is solved whilst searching the tree. The nonlinear solver that is considered in this paper is a Sequential Quadratic Programming solver. A numerical comparison of the new method with nonlinear branch-and-bound is presented and a factor of about 3 improvement over branch-and-bound is observed...
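A toy version of NLP-based branch-and-bound makes the interplay concrete: solve a continuous relaxation at each node, prune by bound, and branch on a fractional variable. In this assumed example each relaxation is solved exactly by clamping (which the separable convex objective permits), rather than by interleaved SQP iterations as in the paper.

```python
import math

# Assumed toy MINLP: min (x-1.3)^2 + (y-2.6)^2, x and y integer in [0, 3].
TARGET = (1.3, 2.6)

def solve_relaxation(bounds):
    # continuous minimizer over the box: clamp each target into its interval
    point = [min(max(t, lb), ub) for t, (lb, ub) in zip(TARGET, bounds)]
    value = sum((p - t) ** 2 for p, t in zip(point, TARGET))
    return point, value

def branch_and_bound(bounds):
    best_val, best_pt = math.inf, None
    stack = [bounds]
    while stack:
        node = stack.pop()
        point, value = solve_relaxation(node)
        if value >= best_val:
            continue                   # prune: bound no better than incumbent
        frac = [i for i, p in enumerate(point) if abs(p - round(p)) > 1e-9]
        if not frac:
            best_val, best_pt = value, [round(p) for p in point]
            continue                   # integer-feasible: new incumbent
        i = frac[0]                    # branch on the first fractional variable
        lb, ub = node[i]
        lo, hi = math.floor(point[i]), math.ceil(point[i])
        left, right = list(node), list(node)
        left[i], right[i] = (lb, lo), (hi, ub)
        stack.extend([left, right])
    return best_pt, best_val

best_pt, best_val = branch_and_bound([(0, 3), (0, 3)])
# optimum: point [1, 3] with value 0.25
```

The paper's contribution is precisely to avoid solving each node's NLP to optimality, branching early instead; the skeleton of the tree search, however, is the same.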