Results 1–10 of 59
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
Abstract
Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss
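The generic SQP step the abstract refers to can be illustrated on an equality-constrained toy problem (a minimal sketch of the textbook method, not the SNOPT algorithm itself):

```python
import numpy as np

# Minimal SQP sketch for: minimize f(x) subject to c(x) = 0.
# Toy problem: f(x) = x1^2 + x2^2, c(x) = x1 + x2 - 1 (solution x* = (0.5, 0.5)).

def sqp(x, n_iter=10):
    for _ in range(n_iter):
        g = 2.0 * x                        # gradient of f
        H = 2.0 * np.eye(2)                # Hessian of the Lagrangian (exact here)
        A = np.array([[1.0, 1.0]])         # constraint Jacobian
        c = np.array([x[0] + x[1] - 1.0])
        # QP subproblem: min 0.5 p'Hp + g'p  s.t.  A p + c = 0.
        # With equality constraints only, its KKT conditions are linear:
        K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        sol = np.linalg.solve(K, -np.concatenate([g, c]))
        x = x + sol[:2]                    # full SQP step
    return x

x_star = sqp(np.array([3.0, -2.0]))
print(x_star)  # -> [0.5 0.5]
```

Because the toy problem is itself a QP with a linear constraint, a single step lands on the solution; on general nonlinear problems the iteration repeats with updated derivatives.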
Sequential Quadratic Programming
, 1995
Abstract
Cited by 166 (4 self)
In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
Constraint-handling in genetic algorithms through the use of dominance-based tournament selection,
 Advanced Engineering Informatics
, 2002
Abstract
Cited by 38 (2 self)
In this paper, we propose a dominancebased selection scheme to incorporate constraints into the fitness function of a genetic algorithm used for global optimization. The approach does not require the use of a penalty function and, unlike traditional evolutionary multiobjective optimization techniques, it does not require niching (or any other similar approach) to maintain diversity in the population. We validated the algorithm using several test functions taken from the specialized literature on evolutionary optimization. The results obtained indicate that the approach is a viable alternative to the traditional penalty function, mainly in engineering optimization problems.
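A penalty-free comparison of candidates, in the spirit this abstract describes, can be sketched with the common "feasibility rules" tournament (a simplified variant for illustration, not the paper's exact dominance-based scheme):

```python
# Penalty-free tournament selection for constrained optimization:
# candidates are compared directly instead of through a penalty function.
# (Simplified "feasibility rules" variant, not the paper's exact scheme.)

def violation(constraints):
    """Total violation for constraints of the form g(x) <= 0."""
    return sum(max(0.0, g) for g in constraints)

def tournament_winner(a, b):
    """Each candidate is a (objective_value, constraint_values) pair."""
    va, vb = violation(a[1]), violation(b[1])
    if va == 0.0 and vb == 0.0:      # both feasible: lower objective wins
        return a if a[0] <= b[0] else b
    if va == 0.0:                    # feasible beats infeasible
        return a
    if vb == 0.0:
        return b
    return a if va <= vb else b      # both infeasible: smaller violation wins

feasible = (5.0, [-1.0, -0.5])
infeasible = (1.0, [2.0, -0.5])
print(tournament_winner(feasible, infeasible)[0])  # -> 5.0 (feasibility wins)
```

Note that the infeasible candidate's better objective value (1.0 < 5.0) does not help it; only among feasible candidates does the objective decide.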
A new approach to optimization of chemical processes
 AIChE Journal
, 1980
Abstract
Cited by 9 (2 self)
A new approach to optimization of chemical processes
A Stochastic approximation method for inference in probabilistic graphical models
Abstract
Cited by 9 (0 self)
We describe a new algorithmic framework for inference in probabilistic models, and apply it to inference for latent Dirichlet allocation (LDA). Our framework adopts the methodology of variational inference, but unlike existing variational methods such as mean field and expectation propagation it is not restricted to tractable classes of approximating distributions. Our approach can also be viewed as a “population-based” sequential Monte Carlo (SMC) method, but unlike existing SMC methods there is no need to design the artificial sequence of distributions. Significantly, our framework offers a principled means to exchange the variance of an importance sampling estimate for the bias incurred through variational approximation. We conduct experiments on a difficult inference problem in population genetics, a problem that is related to inference for LDA. The results of these experiments suggest that our method can offer improvements in stability and accuracy over existing methods, and at a comparable cost.
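The variance/bias exchange this abstract mentions rests on importance sampling; a generic self-normalized importance sampling estimator (an ingredient of such methods, not the paper's actual framework) looks like this:

```python
import math
import random

# Self-normalized importance sampling: estimate E_p[f(X)] by sampling from a
# tractable proposal q and reweighting by p/q.  Heavier-tailed proposals give
# higher-variance weights; variational methods trade that variance for bias.

random.seed(0)

def snis(f, log_p, sample_q, log_q, n=20000):
    xs = [sample_q() for _ in range(n)]
    logw = [log_p(x) - log_q(x) for x in xs]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]      # stabilized, unnormalized weights
    return sum(wi * f(x) for wi, x in zip(w, xs)) / sum(w)

# Target p = N(1, 1), proposal q = N(0, 2); log-densities up to constants.
log_p = lambda x: -0.5 * (x - 1.0) ** 2
log_q = lambda x: -0.5 * (x / 2.0) ** 2
est = snis(lambda x: x, log_p, lambda: random.gauss(0.0, 2.0), log_q)
print(round(est, 2))  # close to the true mean E_p[X] = 1
```

Self-normalization lets both densities be known only up to a constant, which is the usual situation in posterior inference.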
Block structured quadratic programming for the direct multiple shooting method for optimal control
 Optimization Methods and Software
, 2011
Abstract
Cited by 6 (4 self)
Abstract. In this contribution we address the efficient solution of optimal control problems of dynamic processes with many controls. Such problems arise, e.g., from the outer convexification of integer control decisions. We treat this optimal control problem class using the direct multiple shooting method to discretize the optimal control problem. The resulting nonlinear problems are solved using sequential quadratic programming methods. We review the classical condensing algorithm that preprocesses the large but sparse quadratic programs to obtain small but dense ones. We show that this approach leaves room for improvement when applied in conjunction with outer convexification. To this end, we present a new complementary condensing algorithm for quadratic programs with many controls. This algorithm is based on a hybrid null-space range-space approach to exploit the block sparse structure of the quadratic programs that is due to direct multiple shooting. An assessment of the theoretical run time complexity reveals significant advantages of the proposed algorithm. We give a detailed account of the required number of floating point operations, depending on the process dimensions. Finally we demonstrate the merit of the new complementary condensing approach by comparing the behavior of both methods for a vehicle control problem in which the integer gear decision is convexified.
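The classical condensing step this abstract reviews can be sketched for linear time-invariant dynamics: the states are eliminated from the sparse QP by expressing the whole trajectory as an affine function of the initial state and the controls (a toy illustration, not the paper's complementary condensing algorithm):

```python
import numpy as np

# Condensing for multiple shooting: given x_{k+1} = A x_k + B u_k, eliminate
# the states x_1..x_N from the QP by writing the stacked trajectory as
# [x_1; ...; x_N] = G u + d, leaving a small dense QP in the controls only.

def condense(A, B, x0, N):
    n, m = B.shape
    G = np.zeros((N * n, N * m))
    d = np.zeros(N * n)
    Apow = np.eye(n)
    for k in range(N):
        Apow = Apow @ A                               # A^(k+1)
        d[k * n:(k + 1) * n] = Apow @ x0              # free response
        for j in range(k + 1):                        # forced response
            G[k * n:(k + 1) * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, k - j) @ B)
    return G, d

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double integrator, step 0.1
B = np.array([[0.0], [0.1]])
x0 = np.array([1.0, 0.0])
G, d = condense(A, B, x0, N=3)

# Cross-check against a forward simulation of the dynamics:
u = np.array([0.5, -0.2, 0.1])
x, traj = x0, []
for uk in u:
    x = A @ x + B @ np.array([uk])
    traj.append(x)
print(np.allclose(G @ u + d, np.concatenate(traj)))  # -> True
```

The dense matrix `G` is what makes the condensed QP expensive when the number of controls is large, which is the regime the abstract's complementary approach targets.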
Hybrid differential evolution with multiplier updating method for nonlinear constrained optimization
 Congress on Evolutionary Computation 2002, Piscataway
, 2002
Abstract
Cited by 6 (0 self)
Abstract. In this paper, we introduce hybrid differential evolution (HDE) including a multiplier updating method to solve constrained optimization problems. The multiplier updating method is used to solve the min-max problem: in the minimization phase, HDE is used to minimize the augmented Lagrange function with the multipliers fixed; in the maximization phase, the Lagrange multipliers are updated to ascend the dual function toward obtaining the maximum of the dual problem. In order to obtain global convergence, a self-adaptation scheme for the penalty parameters is introduced in the algorithm, so that smaller penalty parameters can be used without affecting the final search results. Computational examples reveal that nearly identical minimum solutions can be obtained using the proposed algorithm even under wide variation of the initial penalty parameters.
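The multiplier-updating scheme this abstract describes alternates a primal minimization of the augmented Lagrangian with a dual ascent step on the multipliers. A minimal sketch, with plain gradient descent standing in for differential evolution as the inner solver:

```python
# Augmented Lagrangian with multiplier updating, on the toy problem
#   minimize x^2  subject to  x - 1 = 0   (solution x* = 1, lam* = -2).
# The inner minimizer here is gradient descent, not differential evolution;
# the alternating primal/dual structure is the point being illustrated.

def solve(rho=10.0, outer=20, inner=200, step=0.01):
    x, lam = 0.0, 0.0
    for _ in range(outer):
        for _ in range(inner):                 # minimization phase (lam fixed)
            c = x - 1.0
            grad = 2.0 * x + lam + rho * c     # d/dx [x^2 + lam*c + (rho/2)*c^2]
            x -= step * grad
        lam += rho * (x - 1.0)                 # maximization phase: dual ascent
    return x, lam

x, lam = solve()
print(round(x, 3), round(lam, 3))  # -> 1.0 -2.0
```

Because the multipliers absorb the constraint over the outer iterations, the penalty parameter `rho` can stay moderate, which matches the abstract's motivation for avoiding very large penalties.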
Analysis and Restructuring of a Method for the Direct Solution of Optimal Control Problems (The Theory of MUSCOD in a Nutshell)
, 1995
Abstract
Cited by 6 (0 self)
MUSCOD (MUltiple Shooting COde for Direct Optimal Control) is the implementation of an algorithm for the direct solution of optimal control problems. The method is based on multiple shooting combined with a sequential quadratic programming (SQP) technique; its original version was developed in the early 1980s by Plitt under the supervision of Bock [Plitt81, Bock84]. The following report is intended to describe the basic aspects of the underlying theory in a concise but readable form. Such a description is not yet available: the paper by Bock and Plitt [Bock84] gives a good overview of the method, but it leaves out too many important details to be a complete reference, while the diploma thesis by Plitt [Plitt81], on the other hand, presents a fairly complete description, but is rather difficult to read. Throughout the present document, emphasis is given to a clear presentation of the concepts upon which MUSCOD is based. An effort has been made to properly reflect the structure of the a...
On the realization of the Wolfe conditions in reduced quasi-Newton methods for equality constrained optimization
 SIAM Journal on Optimization
, 1997
Abstract
Cited by 6 (0 self)
Abstract. This paper describes a reduced quasi-Newton method for solving equality constrained optimization problems. A major difficulty encountered by this type of algorithm is the design of a consistent technique for maintaining the positive definiteness of the matrices approximating the reduced Hessian of the Lagrangian. A new approach is proposed in this paper. The idea is to search for the next iterate along a piecewise linear path. The path is designed so that some generalized Wolfe conditions can be satisfied. These conditions allow the algorithm to sustain the positive definiteness of the matrices from iteration to iteration by a mechanism that has turned out to be efficient in unconstrained optimization.
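The generalized Wolfe conditions this abstract mentions extend the standard line-search conditions from unconstrained optimization. As a point of reference, a checker for the standard (weak) conditions on a scalar problem:

```python
# Weak Wolfe conditions for a step alpha along a descent direction p:
#   (1) sufficient decrease:  f(x + a*p) <= f(x) + c1*a*g(x)*p
#   (2) curvature:            g(x + a*p)*p >= c2*g(x)*p
# Scalar illustration; the paper generalizes these to the constrained setting.

def wolfe_ok(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    sufficient_decrease = f(x + alpha * p) <= f(x) + c1 * alpha * grad(x) * p
    curvature = grad(x + alpha * p) * p >= c2 * grad(x) * p
    return sufficient_decrease and curvature

f = lambda x: x * x
grad = lambda x: 2.0 * x
print(wolfe_ok(f, grad, x=1.0, p=-1.0, alpha=1.0))   # -> True  (good step)
print(wolfe_ok(f, grad, x=1.0, p=-1.0, alpha=0.01))  # -> False (curvature fails)
```

The curvature condition is what rules out overly short steps, and it is precisely the ingredient that lets quasi-Newton updates preserve positive definiteness, which is the property the abstract's piecewise linear path is designed to secure.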
Enlarging the Region of Convergence of Newton's Method for Constrained Optimization
, 1982
Abstract
Cited by 4 (0 self)
In this paper, we consider Newton's method for solving the system of necessary optimality conditions of optimization problems with equality and inequality constraints. The principal drawbacks of the method are the need for a good starting point, the inability to distinguish between local maxima and local minima, and, when inequality constraints are present, the necessity to solve a quadratic programming problem at each iteration. We show that all these drawbacks can be overcome to a great extent without sacrificing the superlinear convergence rate by making use of exact differentiable penalty functions introduced by Di Pillo and Grippo (Ref. 1). We also show that there is a close relationship between the class of penalty functions of Di Pillo and Grippo and the class of Fletcher (Ref. 2), and that the region of convergence of a variation of Newton's method can be enlarged by making use of one of Fletcher's penalty functions.