Results 1-10 of 43
Representations of Quasi-Newton Matrices and Their Use in Limited Memory Methods
, 1994
"... We derive compact representations of BFGS and symmetric rankone matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto ..."
Abstract

Cited by 112 (8 self)
 Add to MetaCart
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations. Key words: quasi-Newton method, constrained optimization, limited memory method, large-scale optimization. Abbreviated title: Representation of quasi-Newton matrices. 1. Introduction. Limited memory quasi-Newton methods are known to be effective techniques for solving certain classes of large-scale unconstrained optimization problems (Buckley and Le Nir (1983), Liu and Nocedal (1989), Gilbert and Lemaréchal (1989)). They make simple approximations of Hessian matrices, which are often good enough to provide a fast rate of linear convergence, and re...
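The compact representation the abstract refers to can be sketched for the BFGS case. The block formula below follows the well-known Byrd-Nocedal-Schnabel form with B0 = gamma*I; the function and variable names are ours, and the dense recursive update is included only to check the identity:

```python
import numpy as np

def bfgs_dense_update(B, s, y):
    """One classical dense BFGS update of the Hessian approximation B."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

def bfgs_compact(gamma, S, Y):
    """Compact form B_k = B0 - W M^{-1} W^T with B0 = gamma * I.

    S, Y hold the correction pairs s_i, y_i as columns; L is the strictly
    lower-triangular part of S^T Y and D its diagonal."""
    n, _ = S.shape
    StY = S.T @ Y
    L = np.tril(StY, -1)
    D = np.diag(np.diag(StY))
    W = np.hstack([gamma * S, Y])            # [B0 S, Y]
    M = np.block([[gamma * (S.T @ S), L],
                  [L.T, -D]])
    return gamma * np.eye(n) - W @ np.linalg.solve(M, W.T)
```

Starting from B0 = I and applying m dense updates reproduces the compact matrix exactly; this is what makes limited memory implementations cheap, since only S, Y, and small m-by-m blocks need to be stored.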
On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds
, 1992
"... . We consider a new algorithm, a reflective Newton method, for the problem of minimizing a smooth nonlinear function of many variables, subject to upper and/or lower bounds on some of the variables. This approach generates strictly feasible iterates by following piecewise linear paths ("reflect ..."
Abstract

Cited by 84 (5 self)
 Add to MetaCart
We consider a new algorithm, a reflective Newton method, for the problem of minimizing a smooth nonlinear function of many variables, subject to upper and/or lower bounds on some of the variables. This approach generates strictly feasible iterates by following piecewise linear paths ("reflection" paths) to generate improved iterates. The reflective Newton approach does not require identification of an "activity set". In this report we establish that the reflective Newton approach is globally and quadratically convergent. Moreover, we develop a specific example of this general reflective path approach suitable for large-scale and sparse problems. 1 Research partially supported by the Applied Mathematical Sciences Research Program (KC-04-02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000, and in part by NSF, AFOSR, and ONR through grant DMS-8920550, and by the Cornell Theory Center, which receives major funding from the National Sci...
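As an illustration of the "reflection" idea (a sketch of ours, not the paper's algorithm): when a trial point leaves the box [a, b], each violating component is mirrored across the bound it crossed, so iterates stay strictly inside the feasible region.

```python
import numpy as np

def reflect_into_box(y, a, b):
    """Mirror each out-of-bounds component of y across the violated bound,
    repeating until it lies in [a, b] (assumes a < b component-wise)."""
    y = np.asarray(y, dtype=float).copy()
    for i in range(y.size):
        while y[i] < a[i] or y[i] > b[i]:
            y[i] = 2 * a[i] - y[i] if y[i] < a[i] else 2 * b[i] - y[i]
    return y
```

A full reflective path applies this mirroring to the search direction as a function of the step length, producing the piecewise linear paths the abstract describes.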
Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity
 SIAM Journal on Optimization
, 2006
"... Abstract. Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of ..."
Abstract

Cited by 81 (25 self)
 Add to MetaCart
(Show Context)
Abstract. Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite programming (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations. Key words.
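A minimal sketch of the correlative sparsity pattern graph construction (our own illustration, with hypothetical names): variables become nodes, and two variables are joined whenever they occur together in some term of the objective or a constraint.

```python
from itertools import combinations

def csp_graph(terms):
    """Build the correlative-sparsity-pattern graph from an iterable of
    variable-index sets, one per monomial/constraint term."""
    nodes, edges = set(), set()
    for term_vars in terms:
        nodes.update(term_vars)
        edges.update(combinations(sorted(term_vars), 2))
    return nodes, edges
```

Maximal cliques of (a chordal extension of) this graph then give the support sets on which the SOS polynomials in the relaxation are allowed to live, which is what keeps the resulting SDP blocks small.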
Global Convergence of a Class of Trust Region Algorithms for Optimization Using Inexact Projections on Convex Constraints
, 1995
"... A class of trust region based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analysis of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not r ..."
Abstract

Cited by 60 (4 self)
 Add to MetaCart
A class of trust-region based algorithms is presented for the solution of nonlinear optimization problems with a convex feasible set. At variance with previously published analysis of this type, the theory presented allows for the use of general norms. Furthermore, the proposed algorithms do not require the explicit computation of the projected gradient, and can therefore be adapted to cases where the projection onto the feasible domain may be expensive to calculate. Strong global convergence results are derived for the class. It is also shown that the set of linear and nonlinear constraints that are binding at the solution is identified by the algorithms of the class in a finite number of iterations.
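For context, the generic trust-region mechanism such a class builds on can be sketched as follows (a textbook acceptance/radius rule with illustrative constants, not the paper's specific algorithms):

```python
def trust_region_radius(rho, delta, step_norm, eta1=0.25, eta2=0.75):
    """Update the trust-region radius from rho = (actual reduction) /
    (predicted reduction): shrink on poor agreement, expand when a very
    successful step hit the boundary, otherwise keep delta."""
    if rho < eta1:
        return 0.5 * step_norm          # model was poor: shrink
    if rho > eta2 and abs(step_norm - delta) < 1e-12:
        return 2.0 * delta              # step hit the boundary: expand
    return delta
```

The paper's contribution is in how the trial step is computed (inexact projections, general norms), while this acceptance logic is the common outer loop.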
Trust-Region Interior-Point Algorithms for Minimization Problems with Simple Bounds
 SIAM J. Control and Optimization
, 1995
"... . Two trustregion interiorpoint algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model a ..."
Abstract

Cited by 51 (19 self)
 Add to MetaCart
(Show Context)
Two trust-region interior-point algorithms for the solution of minimization problems with simple bounds are analyzed and tested. The algorithms scale the local model in a way similar to Coleman and Li [1]. The first algorithm is more usual in that the trust region and the local quadratic model are consistently scaled. The second algorithm proposed here uses an unscaled trust region. A global convergence result for these algorithms is given, and dogleg and conjugate-gradient algorithms to compute trial steps are introduced. Some numerical examples that show the advantages of the second algorithm are presented. Keywords: trust-region methods, interior-point algorithms, Dikin-Karmarkar ellipsoid, Coleman and Li affine scaling, simple bounds. AMS subject classification: 49M37, 90C20, 90C30. 1. Introduction. In this note we consider the box-constrained minimization problem: minimize f(x) subject to a ≤ x ≤ b, (1) where x ∈ ℝⁿ, a ∈ (ℝ ∪ {−∞})ⁿ, b ∈ (ℝ ∪ {+∞})ⁿ, and...
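The Coleman-Li affine scaling mentioned above is built from a distance-to-bound vector v(x); a sketch following the usual definition (our own code, with D(x) = diag(|v(x)|)^(1/2) as the scaling matrix):

```python
import numpy as np

def coleman_li_v(x, g, a, b):
    """Distance-to-the-relevant-bound vector v(x) for gradient g, lower
    bounds a, upper bounds b (use +/- np.inf for absent bounds)."""
    v = np.empty_like(x)
    for i in range(x.size):
        if g[i] < 0:
            # descent pushes x_i up: measure distance to the upper bound
            v[i] = x[i] - b[i] if np.isfinite(b[i]) else -1.0
        else:
            # descent pushes x_i down: measure distance to the lower bound
            v[i] = x[i] - a[i] if np.isfinite(a[i]) else 1.0
    return v
```

Scaling the model by D(x) = diag(|v(x)|)^(1/2) shrinks steps toward bounds the gradient is pushing against, which is what keeps the interior-point iterates strictly feasible.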
A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems
 SIAM JOURNAL ON SCIENTIFIC COMPUTING
, 1999
"... A subspace adaptation of the ColemanLi trust region and interior method is proposed for solving largescale boundconstrained minimization problems. This method can be implemented with either sparse Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergenc ..."
Abstract

Cited by 46 (1 self)
 Add to MetaCart
(Show Context)
A subspace adaptation of the Coleman-Li trust-region and interior method is proposed for solving large-scale bound-constrained minimization problems. This method can be implemented with either sparse Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergence properties of this subspace trust-region method are as strong as those of its full-space version. Computational...
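The conjugate gradient option can be sketched with the standard Steihaug truncated-CG inner solver for min gᵀp + ½ pᵀBp subject to ||p|| ≤ Δ (a generic trust-region building block, given here for illustration rather than as the paper's implementation):

```python
import numpy as np

def _to_boundary(p, d, delta):
    """Positive tau with ||p + tau * d|| = delta."""
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta ** 2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def steihaug_cg(B, g, delta, tol=1e-10, max_iter=200):
    """Truncated CG for the trust-region subproblem: stop at the boundary
    on negative curvature or when the CG iterate leaves the region."""
    p = np.zeros_like(g)
    r = -g.astype(float)          # residual of B p = -g
    d = r.copy()
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:                              # negative curvature
            return p + _to_boundary(p, d, delta) * d
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:       # left the trust region
            return p + _to_boundary(p, d, delta) * d
        r_next = r - alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        p, r, d = p_next, r_next, r_next + beta * d
    return p
```

Because only products B @ d are needed, the same loop works matrix-free, which is what makes the CG variant attractive at large scale.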
On Iterative Algorithms for Linear Least Squares Problems With Bound Constraints
, 1995
"... Three new iterative methods for the solution of the linear least squares problem with bound constraints are presented and their performance analyzed. The first is a modification of a method proposed by Lotstedt, while the two others are characterized by a technique allowing for fast active set chang ..."
Abstract

Cited by 18 (2 self)
 Add to MetaCart
Three new iterative methods for the solution of the linear least squares problem with bound constraints are presented and their performance analyzed. The first is a modification of a method proposed by Lötstedt, while the other two are characterized by a technique allowing for fast active set changes, resulting in noticeable improvements in the speed at which constraints active at the solution are identified. The numerical efficiency of these algorithms is studied experimentally, with particular emphasis on the dependence on the choice of starting point and on the use of preconditioning for ill-conditioned problems.
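As a baseline for comparison (a generic sketch, not one of the paper's three methods): projected gradient descent on min ½||Ax − b||² subject to l ≤ x ≤ u simply clips each gradient step back into the box.

```python
import numpy as np

def projected_gradient_bls(A, b, l, u, iters=500):
    """Projected gradient for min 0.5 * ||Ax - b||^2 s.t. l <= x <= u,
    with fixed step 1/L, where L = ||A||_2^2 bounds the gradient's
    Lipschitz constant."""
    x = np.clip(np.zeros(A.shape[1]), l, u)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        x = np.clip(x - step * g, l, u)
    return x
```

Methods like the paper's improve on this baseline precisely because plain projection can change the active set very slowly on ill-conditioned problems.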
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
 SIAM Journal On Optimization
, 2008
"... This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small ” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxa ..."
Abstract

Cited by 17 (3 self)
 Add to MetaCart
(Show Context)
This paper discusses how to find the global minimum of functions that are summations of small polynomials (“small” means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization. Key words: polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial system, nonlinear differential equations, sensor network localization
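In the fully separable special case, where the small polynomials share no variables, the global minimum is simply the sum of the minima of the pieces; a sketch for univariate pieces (elementary calculus, not the paper's SOS relaxations):

```python
import numpy as np

def min_univariate_poly(coeffs):
    """Global minimum of a coercive univariate polynomial (coefficients
    low-to-high, even degree, positive leading coefficient), found by
    evaluating the polynomial at the real roots of its derivative."""
    p = np.polynomial.Polynomial(coeffs)
    crit = p.deriv().roots()
    real = crit.real[np.abs(crit.imag) < 1e-9]
    return min(p(t) for t in real)
```

For example, f(x, y) = (x² − 1)² + (y² − 4)² has global minimum 0 + 0 = 0, computed one small polynomial at a time; the sparse SOS machinery generalizes this decomposition to overlapping variable groups.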
A Trust Region Strategy For Minimization On Arbitrary Domains
"... . We present a trust region method for minimizing a general differentiable function restricted to an arbitrary closed set. We prove a global convergence theorem. The trust region method defines difficult subproblems that are solvable in some particular cases. We analyze in detail the case where the ..."
Abstract

Cited by 17 (6 self)
 Add to MetaCart
We present a trust region method for minimizing a general differentiable function restricted to an arbitrary closed set. We prove a global convergence theorem. The trust region method defines difficult subproblems that are solvable in some particular cases. We analyze in detail the case where the domain is a Euclidean ball. For this case we present numerical experiments where we consider different Hessian approximations. Key words: Nonlinear programming, trust-region methods, global convergence. Abbreviated title: Trust region on arbitrary domains. April 14, 1994. Work partially supported by FAPESP (Grants 9037246 and 9124413), FINEP, CNPq and FAEP-UNICAMP. This paper was published in Mathematical Programming 68 (1995) 267-302. Department of Applied Mathematics, State University of Campinas, IMECC-UNICAMP, CP 6065, 13081 Campinas SP, Brazil. E-mail: MARTINEZ@CCVAX.UNICAMP.BR. 1. Introduction. The problem considered in this paper is the minimization of a differentiable...
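For the Euclidean-ball domain analyzed in detail above, the projection such subproblems need has a closed form; a minimal sketch (the standard formula, not the paper's code):

```python
import numpy as np

def project_onto_ball(x, center, radius):
    """Euclidean projection of x onto the closed ball ||y - center|| <= radius:
    interior points are unchanged, exterior points are scaled radially."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()
    return center + (radius / dist) * d
```

Having such a cheap exact projection is what makes the ball one of the "particular cases" where the method's subproblems are tractable.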
An overview of unconstrained optimization
[Online]. Available: citeseer.ist.psu.edu/fletcher93overview.html
, 1993
"... bundle filter method for nonsmooth nonlinear ..."