Results 1-10 of 257
An Efficient Constraint Handling Method for Genetic Algorithms
Computer Methods in Applied Mechanics and Engineering, 1998
Cited by 225 (15 self)
Abstract:
Many real-world search and optimization problems involve inequality and/or equality constraints and are thus posed as constrained optimization problems. In trying to solve constrained optimization problems using genetic algorithms (GAs) or classical optimization methods, penalty function methods have been the most popular approach because of their simplicity and ease of implementation. However, since the penalty function approach is generic and applicable to any type of constraint (linear or nonlinear), its performance is not always satisfactory. Thus, researchers have developed sophisticated penalty functions specific to the problem at hand and the search algorithm used for optimization. The most difficult aspect of the penalty function approach, however, is finding appropriate penalty parameters to guide the search towards the constrained optimum. In this paper, the GA's population-based approach and ability to make pairwise comparisons in the tournament selection operator are explo...
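The abstract is cut off before the method is stated; the parameter-free pairwise comparison it alludes to is usually given as three feasibility rules. A minimal sketch under that reading (function names and the `g(x) <= 0` constraint convention are illustrative, not quoted from the paper):

```python
def total_violation(x, constraints):
    """Sum of violations for inequality constraints g(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def tournament_winner(a, b, f, constraints):
    """Pairwise comparison without penalty parameters:
    - a feasible individual beats an infeasible one;
    - between two infeasible individuals, the smaller violation wins;
    - between two feasible individuals, the better objective wins."""
    va = total_violation(a, constraints)
    vb = total_violation(b, constraints)
    if va == 0.0 and vb == 0.0:
        return a if f(a) <= f(b) else b
    if va == 0.0 or vb == 0.0:
        return a if va == 0.0 else b
    return a if va <= vb else b
```

Because winners are decided by rank rather than by a weighted sum, no penalty coefficient has to be tuned.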
Stochastic Ranking for Constrained Evolutionary Optimization
2000
Cited by 199 (11 self)
Abstract:
Penalty functions are often used in constrained optimization. However, it is very difficult to strike the right balance between objective and penalty functions. This paper introduces a novel approach to balancing objective and penalty functions stochastically, i.e., stochastic ranking, and presents a new view on penalty function methods in terms of the dominance of penalty and objective functions. Some of the pitfalls of naive penalty methods are discussed in these terms. The new ranking method is tested using a (µ, λ) evolution strategy on 13 benchmark problems. Our results show that suitable ranking alone (i.e., selection), without the introduction of complicated and specialized variation operators, is capable of improving search performance significantly.
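The ranking idea can be sketched as a bubble-sort-style sweep in which adjacent individuals are compared by objective value with some probability even when the pair is not entirely feasible, and by constraint violation otherwise. This is a simplified reading of the procedure; the default probability 0.45 and the argument names are assumptions:

```python
import random

def stochastic_rank(f, viol, pf=0.45):
    """Return population indices sorted by stochastic ranking.
    f[i] is the objective value and viol[i] the total constraint
    violation of individual i; pf is the probability of comparing
    by objective when at least one of a pair is infeasible."""
    idx = list(range(len(f)))
    for _ in range(len(idx)):
        swapped = False
        for i in range(len(idx) - 1):
            a, b = idx[i], idx[i + 1]
            both_feasible = viol[a] == 0.0 and viol[b] == 0.0
            key = f if both_feasible or random.random() < pf else viol
            if key[b] < key[a]:
                idx[i], idx[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```

Setting pf = 0 recovers pure violation-first ordering; pf near 1 ignores constraints, and intermediate values trade the two off without any penalty coefficient.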
An Interior-Point Algorithm for Nonconvex Nonlinear Programming
Computational Optimization and Applications, 1997
Cited by 193 (14 self)
Abstract:
The paper describes an interior-point algorithm for nonconvex nonlinear programming which is a direct extension of interior-point methods for linear and quadratic programming. Major modifications include a merit function and an altered search direction to ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient and has the promise of greatly reducing solution times on at least some classes of models.
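The abstract does not give the merit function itself; one common shape for interior-point merit functions combines the objective, a log-barrier on slack variables, and a penalty on constraint infeasibility. The exact form and the constants below are assumptions for illustration only:

```python
import math

def barrier_merit(f, ceq, x, s, mu=0.1, rho=10.0):
    """Illustrative merit function: objective, minus a log-barrier that
    keeps slacks s strictly positive, plus a penalty rho*||c_eq(x)||
    rewarding progress toward feasibility. mu and rho are assumed values."""
    barrier = -mu * sum(math.log(si) for si in s)
    infeasibility = rho * math.sqrt(sum(c * c for c in ceq(x)))
    return f(x) + barrier + infeasibility
```

A line search that decreases such a merit function cannot stall at the boundary, since the barrier term blows up as any slack approaches zero.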
CUTE: Constrained and unconstrained testing environment
1993
Cited by 185 (3 self)
Abstract:
The purpose of this paper is to discuss the scope and functionality of a versatile environment for testing small- and large-scale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we believe that they will be useful in their own right and should be available to researchers for their development of optimization software. The tools are available by anonymous ftp from a number of sources and may, in many cases, be installed automatically. The scope of a major collection of test problems written in the standard input format (SIF) used by the LANCELOT software package is described. Recognising that most software was not written with the SIF in mind, we provide tools to assist in building an interface between this input format and other optimization packages. These tools already provide a link between the SIF and a number of existing packages, including MINOS and OSL. In ad...
On the Use of Non-Stationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GA's
1994
Cited by 139 (7 self)
Abstract:
In this paper we discuss the use of non-stationary penalty functions to solve general nonlinear programming problems (NP) using real-valued GAs. The non-stationary penalty is a function of the generation number; as the number of generations increases, so does the penalty. Therefore, as the penalty increases it puts more and more selective pressure on the GA to find a feasible solution. The ideas presented in this paper come from two basic areas: calculus-based nonlinear programming and simulated annealing. The non-stationary penalty methods are tested on four NP test cases, and the effectiveness of these methods is reported.

1 Introduction

Constrained function optimization is an extremely important tool used in almost every facet of engineering, operations research, and mathematics. Constrained optimization can be represented as a nonlinear programming problem. The general nonlinear programming problem is defined as follows: (NP) minimize f(X) subject to (nonlinear and linear)...
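A generation-dependent penalty of the kind described can be sketched as follows. The specific form, a weight (C·t)^α multiplying a power-law sum of violations, is one common choice in the dynamic-penalty literature; the constants here are illustrative rather than the paper's:

```python
def nonstationary_fitness(f, x, constraints, t, C=0.5, alpha=2.0, beta=2.0):
    """Penalized objective whose penalty weight (C*t)**alpha grows with
    the generation number t, so selective pressure toward feasibility
    tightens as the run proceeds. Constraints use the g(x) <= 0
    convention; C, alpha, beta are assumed example constants."""
    violation = sum(max(0.0, g(x)) ** beta for g in constraints)
    return f(x) + (C * t) ** alpha * violation
```

Early in the run, mildly infeasible solutions can still compete (helping exploration); late in the run the same violation is heavily punished, mimicking an annealing schedule.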
Comparative Studies Of Metamodeling Techniques Under Multiple Modeling Criteria
Structural and Multidisciplinary Optimization, 2000
Cited by 126 (7 self)
Abstract:
Despite the advances in computer capacity, the enormous computational cost of complex engineering simulations makes it impractical to rely exclusively on simulation for the purpose of design optimization. To cut down the cost, surrogate models, also known as metamodels, are constructed from and then used in lieu of the actual simulation models. In the paper, we systematically compare four popular metamodeling techniques (polynomial regression, multivariate adaptive regression splines, radial basis functions, and kriging) based on multiple performance criteria, using fourteen test problems representing different classes of problems. Our objective in this study is to investigate the advantages and disadvantages of these four metamodeling techniques using multiple modeling criteria and multiple test problems rather than a single measure of merit and a single test problem.

1 Introduction

Simulation-based analysis tools are finding increased use during preliminary design to explore desi...
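Of the four techniques compared, polynomial regression is the simplest to illustrate: fit low-order polynomial coefficients to sampled simulation outputs by least squares, then evaluate the cheap fit in place of the expensive simulation. A one-dimensional quadratic sketch (NumPy; the function name and setup are illustrative):

```python
import numpy as np

def fit_quadratic_metamodel(x, y):
    """Least-squares fit of y ≈ c0 + c1*x + c2*x**2 to sampled data,
    returning a cheap surrogate callable for the expensive model."""
    A = np.column_stack([np.ones_like(x), x, x ** 2])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda t: c[0] + c[1] * t + c[2] * t ** 2
```

An optimizer can then query the surrogate thousands of times at negligible cost, reserving true simulations for validating the promising candidates.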
Interior-point methods for nonconvex nonlinear programming: Filter methods and merit functions
Computational Optimization and Applications, 2002
Cited by 119 (8 self)
Abstract:
In this paper, we present global and local convergence results for an interior-point method for nonlinear programming and analyze the computational performance of its implementation. The algorithm uses an ℓ1 penalty approach to relax all constraints, to provide regularization, and to bound the Lagrange multipliers. The penalty problems are solved using a simplified version of Chen and Goldfarb’s strictly feasible interior-point method [12]. The global convergence of the algorithm is proved under mild assumptions, and local analysis shows that it converges Q-quadratically for a large class of problems. The proposed approach is the first to simultaneously have all of the following properties while solving a general nonconvex nonlinear programming problem: (1) the convergence analysis does not assume boundedness of dual iterates, (2) local convergence does not require the Linear Independence Constraint Qualification, (3) the solution of the penalty problem is shown to locally converge to optima that may not satisfy the Karush-Kuhn-Tucker conditions, and (4) the algorithm is applicable to mathematical programs with equilibrium constraints. Numerical testing on a set of general nonlinear programming problems, including degenerate and infeasible problems, confirms the theoretical results. We also provide comparisons to a highly efficient nonlinear solver and thoroughly analyze the effects of enforcing theoretical convergence guarantees on the computational performance of the algorithm.
On the formulation and theory of Newton interior-point methods for nonlinear programming
Journal of Optimization Theory and Applications, 1996
Cited by 113 (5 self)
Abstract:
In this work, we first study in detail the formulation of the primal-dual interior-point method for linear programming. We show that, contrary to popular belief, it cannot be viewed as a damped Newton method applied to the Karush-Kuhn-Tucker conditions for the logarithmic barrier function problem. Next, we extend the formulation to general nonlinear programming, and then validate this extension by demonstrating that this algorithm can be implemented so that it is locally and Q-quadratically convergent under only the standard Newton method assumptions. We also establish a global convergence theory for this algorithm and include promising numerical experimentation. Key words: interior-point methods, primal-dual methods, nonlinear programming, superlinear and quadratic convergence, global convergence.
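For the linear-programming case the abstract discusses, the primal-dual iteration applies Newton's method to the perturbed KKT system Ax = b, Aᵀy + s = c, x_i·s_i = µ. A minimal step computation, as a dense NumPy sketch of the standard system (a real solver would exploit sparsity and eliminate blocks):

```python
import numpy as np

def primal_dual_step(A, b, c, x, y, s, mu):
    """One Newton step on the perturbed KKT conditions of
    min c'x  s.t.  Ax = b, x >= 0.  Unknowns ordered (dx, dy, ds)."""
    m, n = A.shape
    J = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],  # A dx = -(Ax - b)
        [np.zeros((n, n)), A.T,              np.eye(n)],         # A'dy + ds = -(A'y + s - c)
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],        # S dx + X ds = -(x*s - mu)
    ])
    rhs = -np.concatenate([A @ x - b, A.T @ y + s - c, x * s - mu])
    d = np.linalg.solve(J, rhs)
    return d[:n], d[n:n + m], d[n + m:]
```

Damping the step to keep x and s strictly positive, and driving µ toward zero, yields the path-following iteration the abstract contrasts with the pure barrier viewpoint.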
A Semismooth Equation Approach to the Solution of Nonlinear Complementarity Problems
1995
Cited by 105 (12 self)
Abstract:
In this paper we present a new algorithm for the solution of nonlinear complementarity problems. The algorithm is based on a semismooth equation reformulation of the complementarity problem. We exploit the recent extension of Newton's method to semismooth systems of equations and the fact that the natural merit function associated with the equation reformulation is continuously differentiable to develop an algorithm whose global and quadratic convergence properties can be established under very mild assumptions. Other interesting features of the new algorithm are its extreme simplicity and its low computational burden per iteration. We include numerical tests which show the viability of the approach.
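A standard reformulation of this kind (the abstract does not name the specific function, so this is an assumed instance) uses the Fischer-Burmeister function φ(a, b) = √(a² + b²) − a − b, which vanishes exactly when a ≥ 0, b ≥ 0, and ab = 0. A scalar sketch, using a finite-difference slope where the paper's algorithm would use a generalized derivative:

```python
import math

def fischer_burmeister(a, b):
    """phi(a, b) = 0  iff  a >= 0, b >= 0 and a*b = 0."""
    return math.hypot(a, b) - a - b

def solve_scalar_ncp(F, x0=0.0, tol=1e-10, h=1e-7):
    """Newton iteration on phi(x, F(x)) = 0 for the scalar NCP
    x >= 0, F(x) >= 0, x*F(x) = 0 (finite-difference slope for
    brevity; illustrative, not the paper's algorithm)."""
    x = x0
    for _ in range(100):
        r = fischer_burmeister(x, F(x))
        if abs(r) < tol:
            break
        slope = (fischer_burmeister(x + h, F(x + h)) - r) / h
        x -= r / slope
    return x
```

Rewriting the complementarity conditions as a single equation system is what lets the semismooth Newton machinery, and its differentiable natural merit function, apply.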
A survey of constraint handling techniques in evolutionary computation methods
Proceedings of the 4th Annual Conference on Evolutionary Programming, 1995
Cited by 102 (5 self)
Abstract:
One of the major components of any evolutionary system is the evaluation function. Evaluation functions are used to assign a quality measure to individuals in a population. Whereas evolutionary computation techniques assume the existence of an "efficient" evaluation function for feasible individuals, there is no uniform methodology for handling (i.e., evaluating) unfeasible ones. The simplest approach, incorporated by evolution strategies and a version of evolutionary programming (for numerical optimization problems), is to reject unfeasible solutions. But several other methods for handling unfeasible individuals have emerged recently. This paper reviews such methods, using a domain of nonlinear programming problems, and discusses their merits and drawbacks.
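The "simplest approach" mentioned, rejecting unfeasible individuals outright, amounts to resampling until a feasible candidate appears. A sketch (names illustrative):

```python
import random

def sample_feasible(sample, is_feasible, max_tries=10000):
    """Reject-and-resample handling of unfeasible individuals:
    keep drawing candidates until one satisfies the constraints.
    Wasteful when the feasible region is small, which is one reason
    the survey reviews alternative handling methods."""
    for _ in range(max_tries):
        x = sample()
        if is_feasible(x):
            return x
    raise RuntimeError("no feasible candidate within max_tries")
```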