Results 1–8 of 8
SNOPT: An SQP Algorithm for Large-Scale Constrained Optimization
, 2002
Abstract

Cited by 597 (24 self)
Sequential quadratic programming (SQP) methods have proved highly effective for solving constrained optimization problems with smooth nonlinear functions in the objective and constraints. Here we consider problems with general inequality constraints (linear and nonlinear). We assume that first derivatives are available, and that the constraint gradients are sparse. We discuss
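SNOPT itself is a commercial code, but the SQP idea this abstract describes can be sketched with SciPy's SLSQP solver, which is also an SQP method; the problem below is purely illustrative and not taken from the paper:

```python
# Minimal SQP sketch (illustrative; not SNOPT itself). SLSQP solves, at each
# iterate, a QP built from a quadratic model of the objective and
# linearizations of the constraints.
import numpy as np
from scipy.optimize import minimize

objective = lambda x: x[0] ** 2 + x[1] ** 2            # smooth objective
grad = lambda x: np.array([2.0 * x[0], 2.0 * x[1]])    # first derivatives supplied

constraints = [
    # general inequalities, written in g(x) >= 0 form
    {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0},            # linear
    {"type": "ineq", "fun": lambda x: 4.0 - x[0] ** 2 - x[1] ** 2},  # nonlinear
]

res = minimize(objective, x0=[2.0, 0.0], jac=grad,
               constraints=constraints, method="SLSQP")
# the optimum sits on the linear constraint, at x = (0.5, 0.5)
```

SLSQP is dense and small-scale; SNOPT's contribution is making this scheme work when the constraint gradients are large and sparse.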
Quasi-Newton methods on Grassmannians and multilinear approximations of tensors
, 2009
Abstract

Cited by 18 (4 self)
Abstract. In this paper we proposed quasi-Newton and limited memory quasi-Newton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically we defined BFGS and L-BFGS updates in local and global coordinates on Grassmann manifolds or a product of these. We proved that, when local coordinates are used, our BFGS updates on Grassmann manifolds share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems, and which work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims. Key words. Grassmann manifold, Grassmannian, product of Grassmannians, Grassmann quasi-Newton, Grassmann BFGS, Grassmann L-BFGS, multilinear rank, symmetric multilinear rank, tensor, symmetric tensor, approximations
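For reference, the Euclidean BFGS update that the paper generalizes to Grassmann manifolds fits in a few lines of NumPy; this is the standard textbook update, not the authors' Grassmannian code:

```python
import numpy as np

def bfgs_update(H, s, y):
    """One BFGS update of an inverse-Hessian approximation H, given the
    step s = x_new - x_old and gradient change y = g_new - g_old.
    Assumes the curvature condition s @ y > 0 holds."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# The update enforces the secant condition H_new @ y == s exactly.
s = np.array([1.0, 0.0])
y = np.array([2.0, 1.0])
H_new = bfgs_update(np.eye(2), s, y)
```

The paper's "same optimality property" refers to this update's variational characterization on Euclidean spaces, carried over to local coordinates on the manifold.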
Reduced-Hessian Quasi-Newton Methods for Unconstrained Optimization
 SIAM J. Optim.
, 1999
Abstract

Cited by 11 (2 self)
Quasi-Newton methods are reliable and efficient on a wide range of problems, but they can require many iterations if the problem is ill-conditioned or if a poor initial estimate of the Hessian is used. In this paper, we discuss methods designed to be more efficient in these situations. All the methods to be considered exploit the fact that quasi-Newton methods accumulate approximate second-derivative information in a sequence of expanding subspaces. Associated with each of these subspaces is a certain reduced approximate Hessian that provides a complete but compact representation of the second derivative information approximated up to that point. Algorithms that compute an explicit reduced Hessian approximation have two important advantages over conventional quasi-Newton methods. First, the amount of computation for each iteration is significantly less during the early stages. This advantage is increased by forcing the iterates to linger on a manifold whose dimension can be significantly sma...
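The expanding-subspace property this abstract relies on can be illustrated by growing an orthonormal basis from successive gradients; the reduced Hessian then lives in that basis. The sketch below is schematic and not the paper's algorithm:

```python
import numpy as np

def expand_basis(Z, g, tol=1e-10):
    """Grow an orthonormal basis Z (n x k) by the component of the new
    gradient g lying outside span(Z); quasi-Newton curvature accumulates
    in exactly this expanding subspace."""
    w = g - Z @ (Z.T @ g)        # Gram-Schmidt projection residual
    norm = np.linalg.norm(w)
    if norm <= tol:
        return Z                 # g already in span(Z): dimension unchanged
    return np.column_stack([Z, w / norm])

# Three gradients, but the third lies in the span of the first two, so the
# subspace (and the reduced Hessian Z.T @ B @ Z) stays 2-dimensional.
Z = np.zeros((3, 0))
for g in ([1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 2.0, 0.0]):
    Z = expand_basis(Z, np.array(g))
```

Working with the k-by-k matrix `Z.T @ B @ Z` instead of the full n-by-n approximation is what makes early iterations cheap.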
Unconstrained optimization of real functions in complex variables
, 2011
Abstract

Cited by 10 (2 self)
Nonlinear optimization problems in complex variables are frequently encountered in applied mathematics and engineering applications such as control theory, signal processing and electrical engineering. Optimization of these problems often requires a first- or second-order approximation of the objective function to generate a new step or descent direction. However, such methods cannot be applied to real functions in complex variables because they are necessarily non-analytic in their argument, i.e., the Taylor series expansion in their argument alone does not exist. To overcome this problem, the objective function is often redefined as a function of the real and imaginary parts of its complex argument so that standard optimization methods can be applied. We show that real functions in complex variables do have a Taylor series expansion in complex variables, which we then use to generalize existing optimization methods for both general nonlinear optimization problems and nonlinear least squares problems. We then apply these methods to a number of case studies which show that complex Taylor expansions can lead to greater insight into the structure of the problem and that this structure can often be exploited to improve computational complexity and storage cost.
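The standard workaround this abstract describes, re-parametrizing the objective over the real and imaginary parts of its argument, can be sketched on a toy problem (illustrative only; the paper's complex Taylor expansions go beyond this):

```python
import numpy as np
from scipy.optimize import minimize

c = 1.0 + 2.0j
f = lambda z: abs(z - c) ** 2    # real-valued, hence non-analytic in z

# Re-parametrize over (Re z, Im z) so a standard real optimizer applies.
def f_real(v):
    return f(v[0] + 1j * v[1])

res = minimize(f_real, x0=[0.0, 0.0], method="BFGS")
z_opt = res.x[0] + 1j * res.x[1]   # recovers z = c
```

The paper's point is that this doubling of real variables obscures structure that a Taylor expansion kept in the complex variables would expose.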
Limited-memory reduced-Hessian methods for unconstrained optimization, Numerical Analysis
 SIAM J. Optim.
, 1997
Abstract

Cited by 9 (1 self)
Abstract. Limited-memory BFGS quasi-Newton methods approximate the Hessian matrix of second derivatives by the sum of a diagonal matrix and a fixed number of rank-one matrices. These methods are particularly effective for large problems in which the approximate Hessian cannot be stored explicitly. It can be shown that the conventional BFGS method accumulates approximate curvature in a sequence of expanding subspaces. This allows an approximate Hessian to be represented using a smaller reduced matrix that increases in dimension at each iteration. When the number of variables is large, this feature may be used to define limited-memory reduced-Hessian methods in which the dimension of the reduced Hessian is limited to save storage. Limited-memory reduced-Hessian methods have the benefit of requiring half the storage of conventional limited-memory methods. In this paper, we propose a particular reduced-Hessian method with substantial computational advantages compared to previous reduced-Hessian methods. Numerical results from a set of unconstrained problems in the CUTE test collection indicate that our implementation is competitive with the limited-memory codes L-BFGS and L-BFGS-B. Key words. Unconstrained optimization, quasi-Newton methods, BFGS method, reduced-Hessian methods, conjugate-direction methods AMS subject classifications. 65K05, 90C30
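The conventional limited-memory representation this abstract contrasts with (a diagonal matrix plus stored rank-one pairs) is usually applied through the L-BFGS two-loop recursion. Below is a compact sketch of that standard recursion, not this paper's reduced-Hessian variant:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Apply the limited-memory inverse-Hessian approximation to g via the
    standard two-loop recursion: a scaled diagonal H0 corrected by the
    stored rank-one pairs (s_i, y_i)."""
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if s_list:                       # H0 = gamma * I scaling from latest pair
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q                         # = H_k @ g

# With a single stored pair, the result satisfies the secant condition:
s = np.array([1.0, 0.0])
y = np.array([2.0, 0.0])
d = lbfgs_direction(y, [s], [y])     # H @ y == s
```

Storing m pairs (s_i, y_i) costs 2mn numbers; the reduced-Hessian scheme above halves that by keeping only the basis and the small reduced matrix.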
Digital Object Identifier (DOI) 10.1007/s101070000155 Math. Program., Ser. B 87: 209–213 (2000)
Abstract
Abstract. This short note traces the events that led to the unsymmetric rank-one formula known as the “good Broyden” update [5,6], which is widely used within derivative-free mathematical software for solving a system of nonlinear equations. I left University in 1956 with an indifferent degree in Physics and took up a post with the English Electric Company in Leicester. The company was involved in the design and construction of nuclear reactors and I was employed as a computer programmer. One of the problems with which I was concerned was the solution of systems of differential equations. These equations were used in the performance calculations of the reactors and modelled their behaviour as time passed, and some of the favoured methods of solution were the “predictor-corrector” methods. They worked roughly as follows. Given a temperature distribution at time t they would calculate the equivalent distribution at time t + Δt, where Δt was a small time increment. This calculation would then be repeated many times in the hope that the computed temperature would subside before the Magnox cladding of the fuel rods melted. The new temperature distribution would be obtained by getting an approximate