Results 1–10 of 18
An interior point algorithm for large-scale nonlinear programming
 SIAM Journal on Optimization
, 1999
Abstract

Cited by 74 (17 self)
The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming, trust region method.
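The barrier approach described here replaces inequality constraints with a logarithmic penalty and solves a sequence of smooth subproblems for decreasing barrier parameters. A minimal sketch of that idea only, solving each subproblem by plain gradient descent rather than the paper's SQP/trust-region machinery (all names and parameter values here are illustrative, not the paper's):

```python
import numpy as np

def barrier_solve(grad_f, x0, mu0=1.0, shrink=0.1, tol=1e-8,
                  inner_iters=200, lr=1e-2):
    """Minimize f(x) subject to x > 0 via a log-barrier.

    Illustrative sketch only: each subproblem
    min f(x) - mu * sum(log x_i) is solved crudely by gradient
    descent, then the barrier parameter mu is reduced.
    """
    x = np.asarray(x0, dtype=float)
    mu = mu0
    while mu > tol:
        for _ in range(inner_iters):
            g = grad_f(x) - mu / x       # gradient of the barrier subproblem
            step = lr * g
            x_new = x - step
            while np.any(x_new <= 0):    # damp the step to stay strictly feasible
                step *= 0.5
                x_new = x - step
            x = x_new
        mu *= shrink                     # tighten the barrier and re-solve
    return x
```

For example, with f(x) = (x - 2)^2 and the constraint x > 0, the iterates approach the minimizer x = 2 as mu shrinks toward zero.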
Automatic Determination Of An Initial Trust Region In Nonlinear Programming
 Department of
, 1995
Abstract

Cited by 18 (1 self)
This paper presents a simple but efficient way to find a good initial trust region radius in trust region methods for nonlinear optimization. The method consists of monitoring the agreement between the model and the objective function along the steepest descent direction, computed at the starting point. Further improvements to the starting point are also derived from the information gleaned during this initialization phase. Numerical results on a large set of problems show the impact the initial trust region radius may have on the behaviour of trust region methods and the usefulness of the proposed strategy. Key Words. Nonlinear optimization, trust region methods, initial trust region, numerical results. 1. Introduction. Trust region methods for unconstrained optimization were first introduced by Powell in [14]. Since then, these methods have enjoyed a good reputation on the basis of their remarkable numerical reliability in conjunction with a sound and complete convergence theory. They have...
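The monitoring idea in this abstract can be sketched as follows: compare the actual decrease of f with the decrease predicted by the quadratic model along the steepest-descent direction at the starting point, shrinking a trial radius until the model is trustworthy. This is a hypothetical simplification for illustration, not the paper's actual update rules; the function name and thresholds are ours:

```python
import numpy as np

def initial_radius(f, g, B, x0, delta_max=10.0, eta=0.25, max_halvings=30):
    """Pick an initial trust-region radius by testing model/function
    agreement along the steepest-descent direction at x0.

    g and B are the gradient and (approximate) Hessian of f at x0.
    Sketch of the idea only, not the paper's proposed strategy.
    """
    d = -g / np.linalg.norm(g)                # unit steepest-descent direction
    f0 = f(x0)
    delta = delta_max
    for _ in range(max_halvings):
        s = delta * d
        pred = -(g @ s + 0.5 * s @ (B @ s))   # predicted decrease of the model
        ared = f0 - f(x0 + s)                 # actual decrease of f
        if pred > 0 and ared / pred >= eta:
            return delta                      # model agrees well enough: accept
        delta *= 0.5                          # poor agreement: shrink and retest
    return delta
```

For f(x) = exp(x) at x0 = 0 (so g = 1, B = 1), the trial radius halves from 10 until the model predicts a genuine decrease that matches f.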
Global Optimization For Constrained Nonlinear Programming
, 2001
Abstract

Cited by 12 (2 self)
In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_dn) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_dn) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_dn, leading to the first-order necessary and sufficient condition for CLM_dn. To find...
Two-Step Algorithms for Nonlinear Optimization with Structured Applications
 SIAM Journal on Optimization
, 1999
Abstract

Cited by 10 (6 self)
In this paper we propose extensions to trust-region algorithms in which the classical step is augmented with a second step that we insist yields a decrease in the value of the objective function. The classical convergence theory for trust-region algorithms is adapted to this class of two-step algorithms. The algorithms can be applied to any problem with variable(s) whose contribution to the objective function has a known functional form. In the nonlinear programming package LANCELOT, they have been applied to update slack variables and variables introduced to solve minimax problems, leading to enhanced optimization efficiency. Extensive numerical results are presented to show the effectiveness of these techniques. Keywords. Trust regions, line searches, two-step algorithms, spacer steps, slack variables, LANCELOT, minimax problems, expensive function evaluations, circuit optimization. AMS subject classifications. 49M37, 90C06, 90C30. 1 Introduction. In nonlinear optimization proble...
Optimal Anytime Search For Constrained Nonlinear Programming
, 2001
Abstract

Cited by 6 (2 self)
In this thesis, we study optimal anytime stochastic search algorithms (SSAs) for solving general constrained nonlinear programming problems (NLPs) in discrete, continuous and mixed-integer space. The algorithms are general in the sense that they do not assume differentiability or convexity of functions. Based on the search algorithms, we develop the theory of SSAs and propose optimal SSAs with iterative deepening in order to minimize their expected search time. Based on the optimal SSAs, we then develop optimal anytime SSAs that generate improved solutions as more search time is allowed. Our SSAs...
The Theory And Applications Of Discrete Constrained Optimization Using Lagrange Multipliers
, 2000
Abstract

Cited by 4 (0 self)
In this thesis, we present a new theory of discrete constrained optimization using Lagrange multipliers and an associated first-order search procedure (DLM) to solve general constrained optimization problems in discrete, continuous and mixed-integer space. The constrained problems are general in the sense that they do not assume the differentiability or convexity of functions. Our proposed theory and methods are targeted at discrete problems and can be extended to continuous and mixed-integer problems by coding continuous variables using a floating-point representation (discretization). We have characterized the errors incurred due to such discretization and have proved that there exist upper bounds on the errors. Hence, continuous and mixed-integer constrained problems, as well as discrete ones, can be handled by DLM in a unified way with bounded errors.
Symbolic-Algebraic Computations in a Modeling Language for Mathematical Programming
, 2000
Abstract

Cited by 2 (0 self)
This paper was written for the proceedings of a seminar on "Symbolic-algebraic ...
Improving Constrained Nonlinear Search Algorithms Through Constraint Relaxation
, 2001
Abstract

Cited by 1 (0 self)
In this thesis we study constraint relaxations of various nonlinear programming (NLP) algorithms in order to improve their performance. For both stochastic and deterministic algorithms, we study the relationship between the expected time to find a feasible solution and the constraint relaxation level, build an exponential model based on this relationship, and develop a constraint relaxation schedule such that the total time spent finding a feasible solution over all relaxation levels is of the same order of magnitude as the time spent finding a solution of similar quality using the last relaxation level alone.
Local Analysis of a New Multipliers Method
 European Journal of Operational Research (special volume on Continuous Optimization)
Abstract

Cited by 1 (0 self)
In this paper we introduce a penalty function and a corresponding multipliers method for the solution of a class of nonlinear programming problems where the equality constraints have a particular structure. The class models optimal control and engineering design problems with bounds on the state and control variables and has wide applicability. The multipliers method updates multipliers corresponding to inequality constraints (maintaining their nonnegativity) instead of dealing with multipliers associated with equality constraints. The basic local convergence properties of the method are proved and a dual framework is introduced. We also analyze the properties of the penalized problem associated with the penalty function. Keywords. Nonlinear programming, optimal control, state constraints, penalty function, multipliers method, augmented Lagrangian. AMS subject classifications. 49M37, 90C06, 90C30.
An active set Newton's algorithm for large-scale nonlinear programs with box constraints
 SIAM J. Optim
, 1995
Abstract

Cited by 1 (0 self)
A new algorithm for large-scale nonlinear programs with box constraints is introduced. The algorithm is based on an efficient technique for identifying the active set at the solution and on a nonmonotone stabilization technique. It possesses global and superlinear convergence properties under standard, mild assumptions. A new technique for generating test problems with known characteristics is also introduced. The implementation of the method is described along with computational results for large-scale problems. 1 Introduction. In this paper we consider the solution of the box-constrained nonlinear programming problem min_{x ∈ K} f(x) (1), where K = { x ∈ R^n : l_i ≤ x_i ≤ u_i, i = 1, ..., n } (2) is a nonempty set. We assume that the lower and upper bounds may be finite or infinite and that f is a twice continuously differentiable function in an open set containing K. A vector x̄ ∈ K is said to be a stationary point for Problem (1) if it satisfies l_i = x̄_i ⟹ ...
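The componentwise stationarity conditions for a box-constrained problem are equivalent to a fixed-point condition under projection onto the box, which gives a convenient numerical stationarity measure. A small sketch of that standard measure (not the active-set identification technique of the paper), with illustrative names:

```python
import numpy as np

def box_stationarity(x, grad, l, u):
    """Stationarity measure for min f(x) subject to l <= x <= u:
    || x - P_K(x - grad f(x)) ||, where P_K projects onto the box K.
    The measure is zero exactly at stationary points of the problem.
    """
    proj = np.clip(x - grad, l, u)   # projected gradient step
    return float(np.linalg.norm(x - proj))
```

A point sitting at a lower bound with a nonnegative gradient component scores zero (stationary), while an interior point with a nonzero gradient scores the gradient's norm.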