Results 1–10 of 32
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization, 1997
Cited by 328 (18 self)
Abstract: Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse.
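As a hedged illustration of the SQP approach this abstract describes (not the SNOPT code itself), SciPy's SLSQP method is a small SQP solver that handles exactly this setting: a smooth objective with linear and nonlinear inequality constraints. The toy problem below is chosen only for illustration.

```python
# Illustrative SQP solve (not SNOPT): SciPy's SLSQP method on a smooth
# objective with three linear inequality constraints and simple bounds.
import numpy as np
from scipy.optimize import minimize

fun = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
cons = (
    {"type": "ineq", "fun": lambda x: x[0] - 2 * x[1] + 2},
    {"type": "ineq", "fun": lambda x: -x[0] - 2 * x[1] + 6},
    {"type": "ineq", "fun": lambda x: -x[0] + 2 * x[1] + 2},
)
bnds = ((0, None), (0, None))
res = minimize(fun, (2, 0), method="SLSQP", bounds=bnds, constraints=cons)
print(res.x)  # optimum is approximately [1.4, 1.7]
```

At the solution the first constraint is active, so the SQP iterates work against a linearization of it, which is the mechanism the abstract's full-scale setting exploits on sparse constraint gradients.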
Sequential Quadratic Programming, 1995
Cited by 113 (2 self)
Abstract: "... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Constraint-handling in genetic algorithms through the use of dominance-based tournament selection, Advanced Engineering Informatics, 2002
Cited by 19 (1 self)
Abstract: In this paper, we propose a dominance-based selection scheme to incorporate constraints into the fitness function of a genetic algorithm used for global optimization. The approach does not require the use of a penalty function and, unlike traditional evolutionary multi-objective optimization techniques, it does not require niching (or any other similar approach) to maintain diversity in the population. We validated the algorithm using several test functions taken from the specialized literature on evolutionary optimization. The results obtained indicate that the approach is a viable alternative to the traditional penalty function, mainly in engineering optimization problems.
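A simplified tournament-comparison rule in the spirit of the paper's dominance-based constraint handling can be sketched as follows (the scheme in the paper itself is more elaborate; this is the common "constraint-dominance" reduction): no penalty function is needed because feasible beats infeasible, two feasible candidates compare by objective, and two infeasible ones compare by total constraint violation.

```python
# Penalty-free tournament selection for constrained minimization (sketch).
def violation(g_values):
    """Sum of positive parts of constraints g_i(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def tournament_winner(a, b):
    """Each candidate is a dict with 'f' (objective) and 'g' (constraint values)."""
    va, vb = violation(a["g"]), violation(b["g"])
    if va == 0.0 and vb == 0.0:          # both feasible: lower objective wins
        return a if a["f"] <= b["f"] else b
    if va == 0.0 or vb == 0.0:           # only one feasible: it wins
        return a if va == 0.0 else b
    return a if va <= vb else b          # both infeasible: less violation wins

feasible = {"f": 3.0, "g": [-1.0]}
infeasible = {"f": 0.5, "g": [2.0]}
print(tournament_winner(feasible, infeasible))  # the feasible candidate wins
```

Note that the infeasible candidate has the better objective value but still loses, which is exactly how the selection pressure replaces a penalty term.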
On the realization of the Wolfe conditions in reduced quasi-Newton methods for equality constrained optimization, SIAM Journal on Optimization, 1997
Cited by 5 (0 self)
Abstract: This paper describes a reduced quasi-Newton method for solving equality constrained optimization problems. A major difficulty encountered by this type of algorithm is the design of a consistent technique for maintaining the positive definiteness of the matrices approximating the reduced Hessian of the Lagrangian. A new approach is proposed in this paper. The idea is to search for the next iterate along a piecewise linear path. The path is designed so that some generalized Wolfe conditions can be satisfied. These conditions allow the algorithm to sustain the positive definiteness of the matrices from iteration to iteration by a mechanism that has turned out to be efficient in unconstrained optimization.
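The unconstrained mechanism the abstract alludes to can be sketched briefly (this is the standard BFGS curvature argument, not the paper's reduced, constrained variant): the Wolfe conditions guarantee the curvature condition s'y > 0, under which the BFGS update preserves positive definiteness from iteration to iteration.

```python
# BFGS update preserving positive definiteness when s.T @ y > 0 (sketch).
import numpy as np

def bfgs_update(B, s, y, tol=1e-12):
    """Return the BFGS-updated matrix; skip the update if curvature fails."""
    sy = float(s @ y)
    if sy <= tol:                 # the Wolfe conditions rule this case out
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / float(s @ Bs) + np.outer(y, y) / sy

B = np.eye(2)
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.5])          # s @ y = 2 > 0, curvature condition holds
B_new = bfgs_update(B, s, y)
print(np.linalg.eigvalsh(B_new))  # both eigenvalues positive
```

When a line search enforces the Wolfe conditions, the skip branch never triggers, which is the point of realizing those conditions in the constrained setting as well.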
Hybrid differential evolution with multiplier updating method for nonlinear constrained optimization, Congress on Evolutionary Computation 2002, Piscataway, 2002
Cited by 5 (0 self)
Abstract: In this paper, we introduce hybrid differential evolution (HDE) with a multiplier updating method to solve constrained optimization problems. The multiplier updating method is used to solve the min-max problem: in the minimization phase, HDE is used to minimize the augmented Lagrangian function with the multipliers fixed; in the maximization phase, the Lagrange multipliers are updated to ascend the dual function toward the maximum of the dual problem. An adaptive scheme for penalty parameters is involved in the proposed algorithm, so that smaller penalty parameters can be used without affecting the final search results. In order to obtain global convergence, a self-adaptation scheme for penalty parameters is introduced in the algorithm. Computational examples reveal that nearly identical minimum solutions can be obtained using the proposed algorithm even under wide variation of the initial penalty parameters.
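The min-max structure the abstract describes can be sketched with a toy one-dimensional problem, a plain random search standing in for HDE, and a fixed penalty parameter (the paper's adaptive scheme is omitted): minimize the augmented Lagrangian with the multiplier fixed, then ascend the dual by updating the multiplier.

```python
# Augmented Lagrangian with multiplier (dual ascent) updating, sketch:
# minimize f(x) = (x - 2)^2 subject to h(x) = x - 1 = 0; solution x = 1.
import numpy as np

rng = np.random.default_rng(0)

def f(x): return (x - 2.0) ** 2        # objective
def h(x): return x - 1.0               # equality constraint h(x) = 0

def aug_lagrangian(x, lam, mu):
    return f(x) + lam * h(x) + 0.5 * mu * h(x) ** 2

lam, mu = 0.0, 10.0
for _ in range(20):
    # minimization phase (HDE in the paper; random search here)
    cand = rng.uniform(-5, 5, size=4000)
    x = cand[np.argmin(aug_lagrangian(cand, lam, mu))]
    lam += mu * h(x)                   # maximization phase: multiplier update
print(x, lam)  # x near 1, lam near the true multiplier 2
```

The multiplier converges toward the value that makes the constrained stationarity condition hold, so the penalty parameter mu need not grow without bound, which is the advantage over a pure penalty method.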
Analysis and Restructuring of a Method for the Direct Solution of Optimal Control Problems (The Theory of MUSCOD in a Nutshell), 1995
Cited by 4 (0 self)
Abstract: MUSCOD (MUltiple Shooting COde for Direct Optimal Control) is the implementation of an algorithm for the direct solution of optimal control problems. The method is based on multiple shooting combined with a sequential quadratic programming (SQP) technique; its original version was developed in the early 1980s by Plitt under the supervision of Bock [Plitt81, Bock84]. The following report is intended to describe the basic aspects of the underlying theory in a concise but readable form. Such a description is not yet available: the paper by Bock and Plitt [Bock84] gives a good overview of the method, but it leaves out too many important details to be a complete reference, while the diploma thesis by Plitt [Plitt81], on the other hand, presents a fairly complete description, but is rather difficult to read. Throughout the present document, emphasis is given to a clear presentation of the concepts upon which MUSCOD is based. An effort has been made to properly reflect the structure of the a...
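The multiple-shooting idea underlying MUSCOD can be sketched on a hypothetical toy ODE (this is not the MUSCOD code): the time horizon is split into intervals, each interval gets its own initial value, and matching ("defect") constraints glue the pieces together; an NLP solver (SQP in MUSCOD) would then drive the defects to zero while optimizing the controls.

```python
# Multiple-shooting defects for the toy ODE x' = -x with exact solution
# x(t) = exp(-t); the node guesses here happen to be near-consistent, so
# the defects are small integration residuals.
import numpy as np

def integrate(x0, t0, t1, n=100):
    """Forward Euler for x' = -x on [t0, t1]."""
    x, dt = x0, (t1 - t0) / n
    for _ in range(n):
        x = x + dt * (-x)
    return x

nodes = np.linspace(0.0, 2.0, 5)       # 4 shooting intervals
s = np.exp(-nodes)                     # guessed state values at the nodes
defects = [integrate(s[i], nodes[i], nodes[i + 1]) - s[i + 1]
           for i in range(len(nodes) - 1)]
print(defects)  # small residuals; SQP iterations would reduce them further
```

Because each interval is integrated independently, the defect constraints give the NLP a sparse, structured Jacobian, which is what makes the combination with SQP attractive.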
A Stochastic approximation method for inference in probabilistic graphical models
Cited by 2 (0 self)
Abstract: We describe a new algorithmic framework for inference in probabilistic models, and apply it to inference for latent Dirichlet allocation (LDA). Our framework adopts the methodology of variational inference, but unlike existing variational methods such as mean field and expectation propagation it is not restricted to tractable classes of approximating distributions. Our approach can also be viewed as a "population-based" sequential Monte Carlo (SMC) method, but unlike existing SMC methods there is no need to design the artificial sequence of distributions. Significantly, our framework offers a principled means to exchange the variance of an importance sampling estimate for the bias incurred through variational approximation. We conduct experiments on a difficult inference problem in population genetics, a problem that is related to inference for LDA. The results of these experiments suggest that our method can offer improvements in stability and accuracy over existing methods, and at a comparable cost.
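The importance-sampling ingredient the abstract trades off against variational bias can be sketched in its simplest form (a self-normalized estimator with Gaussians chosen purely for illustration; the paper's targets are far less tractable): samples from a tractable proposal q are reweighted to estimate an expectation under the target p.

```python
# Self-normalized importance sampling: estimate E_p[x] for an unnormalized
# target p = N(2, 1) using samples from a wider proposal q = N(0, 3^2).
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):                  # unnormalized log-density of N(2, 1)
    return -0.5 * (x - 2.0) ** 2

def log_q(x):                  # unnormalized log-density of N(0, 9)
    return -0.5 * (x / 3.0) ** 2

x = rng.normal(0.0, 3.0, size=200_000)      # draw from q
logw = log_p(x) - log_q(x)
w = np.exp(logw - logw.max())               # subtract max for stability
w /= w.sum()                                # self-normalize the weights
est = np.sum(w * x)                         # estimate of E_p[x] = 2
ess = 1.0 / np.sum(w ** 2)                  # effective sample size
print(est, ess)
```

The effective sample size is the usual diagnostic here: a poor proposal concentrates the weights on few samples and inflates the variance, which is exactly the variance the paper proposes to exchange for variational bias.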
Nonlinear modal analysis of mechanical systems with frictionless contact interfaces, 2009
Cited by 2 (0 self)
Abstract: This paper investigates the nonlinear analysis, in the form of nonlinear modes, of mechanical systems undergoing unilateral frictionless contact. The nonlinear eigenproblem is introduced by a Rayleigh quotient minimization with inequality constraints formulated in the frequency domain. An augmented Lagrangian procedure is used for the calculation of the nonlinear contact forces. The efficiency of the proposed method for large-scale mechanical systems involving nonsmooth nonlinear terms is shown. An industrial application consisting of a compressor blade in contact with a rigid casing is proposed. Sensitivity of the nonlinear modal parameters to contact is illustrated.
Enlarging the Region of Convergence of Newton's Method for Constrained Optimization, 1982
Cited by 1 (0 self)
Abstract: In this paper, we consider Newton's method for solving the system of necessary optimality conditions of optimization problems with equality and inequality constraints. The principal drawbacks of the method are the need for a good starting point, the inability to distinguish between local maxima and local minima, and, when inequality constraints are present, the necessity to solve a quadratic programming problem at each iteration. We show that all these drawbacks can be overcome to a great extent without sacrificing the superlinear convergence rate by making use of exact differentiable penalty functions introduced by Di Pillo and Grippo (Ref. 1). We also show that there is a close relationship between the class of penalty functions of Di Pillo and Grippo and the class of Fletcher (Ref. 2), and that the region of convergence of a variation of Newton's method can be enlarged by making use of one of Fletcher's penalty functions.
SQP and PDIP algorithms for Nonlinear Programming, 2007
Abstract: Penalty and barrier methods are indirect ways of solving constrained optimization problems, using techniques developed in the unconstrained optimization realm. In what follows we shall give the foundation of two more direct ways of solving constrained optimization problems, namely Sequential Quadratic Programming (SQP) methods and Primal-Dual Interior Point (PDIP) methods. For the derivation of the Sequential Quadratic Programming method we shall use the equality constrained problem
$$\min_x \; f(x) \quad \text{subject to} \quad h(x) = 0, \tag{ECP}$$
where $f: \mathbb{R}^n \to \mathbb{R}$ and $h: \mathbb{R}^n \to \mathbb{R}^m$ are smooth functions. An understanding of this problem is essential in the design of SQP methods for general nonlinear programming problems. The KKT conditions for this problem are given by
$$\nabla f(x) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x) = 0, \tag{1a}$$
$$h(x) = 0, \tag{1b}$$
where $\lambda \in \mathbb{R}^m$ are the Lagrange multipliers associated with the equality constraints. If we use the Lagrangian
$$L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x), \tag{2}$$
we can write the KKT conditions (1) more compactly as
$$\nabla_x L(x, \lambda) = 0, \quad \nabla_\lambda L(x, \lambda) = 0. \tag{EQKKT}$$
As with Newton's method for unconstrained optimization, the main idea behind SQP is to model problem (ECP) at a given point $x^{(k)}$ by a quadratic programming subproblem and then use the solution of this subproblem to construct a more accurate approximation $x^{(k+1)}$. If we perform a Taylor series expansion of system (EQKKT) about $(x^{(k)}, \lambda^{(k)})$ we obtain ...
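The Newton-on-KKT step that this derivation sets up can be made concrete on a toy equality-constrained problem (my own illustrative data, not from the text): each iteration solves the linearized KKT system for a step in both the primal variables and the multipliers.

```python
# One-or-more Newton steps on the KKT system of
#   minimize f(x) = x1^2 + x2^2  subject to  h(x) = x1 + x2 - 1 = 0,
# whose solution is x = (0.5, 0.5) with multiplier lam = -1.
import numpy as np

def grad_f(x): return 2.0 * x
def hess_L(x, lam): return 2.0 * np.eye(2)   # h is linear, so no lam term
def h(x): return np.array([x[0] + x[1] - 1.0])
def jac_h(x): return np.array([[1.0, 1.0]])

x, lam = np.array([2.0, 0.0]), np.array([0.0])
for _ in range(5):
    A = jac_h(x)
    KKT = np.block([[hess_L(x, lam), A.T],
                    [A, np.zeros((1, 1))]])
    rhs = -np.concatenate([grad_f(x) + A.T @ lam, h(x)])
    step = np.linalg.solve(KKT, rhs)
    x, lam = x + step[:2], lam + step[2:]
print(x, lam)  # x near (0.5, 0.5), lam near -1
```

Because the objective is quadratic and the constraint linear, a single Newton step already lands on the solution here; on a genuinely nonlinear problem these same linear systems are the QP subproblems the text goes on to derive.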