Results 1–10 of 53
Sums of Squares and Semidefinite Programming Relaxations for Polynomial Optimization Problems with Structured Sparsity
 SIAM Journal on Optimization
, 2006
Cited by 71 (24 self)
Abstract. Unconstrained and inequality constrained sparse polynomial optimization problems (POPs) are considered. A correlative sparsity pattern graph is defined to find a certain sparse structure in the objective and constraint polynomials of a POP. Based on this graph, sets of supports for sums of squares (SOS) polynomials that lead to efficient SOS and semidefinite programming (SDP) relaxations are obtained. Numerical results from various test problems are included to show the improved performance of the SOS and SDP relaxations.
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
 Mathematical Programming
, 1993
Cited by 62 (6 self)
A smooth approximation p(x; α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^{-αx}), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method.
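Integrating the sigmoid gives the smoothing in closed form: p(x; α) = x + ln(1 + e^{−αx})/α, which tends to max{x, 0} as α grows. A minimal sketch (the function name is ours, not the paper's):

```python
import math

def smooth_plus(x, alpha):
    """Smooth approximation to max(x, 0), obtained by integrating the
    sigmoid 1/(1 + e^(-alpha*x)):  p(x; alpha) = x + ln(1 + e^(-alpha*x)) / alpha.
    Written in a numerically stable form so exp never overflows."""
    z = -alpha * x
    # log(1 + e^z) computed stably as max(z, 0) + log1p(exp(-|z|))
    softplus = max(z, 0.0) + math.log1p(math.exp(-abs(z)))
    return x + softplus / alpha
```

At x = 0 the gap to the plus function is exactly ln(2)/α, so the approximation tightens uniformly as α increases, which is what makes "ff sufficiently large" work in the converted minimization problems.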
Automatic preconditioning by limited memory quasi-Newton updating
 SIAM J. Optim
Cited by 31 (2 self)
The paper proposes a preconditioner for the conjugate gradient method (CG) that is designed for solving systems of equations Ax = b_i with different right-hand side vectors, or for solving a sequence of slowly varying systems A_k x = b_k. The preconditioner has the form of a limited-memory quasi-Newton matrix and is generated using information from the CG iteration. The automatic preconditioner does not require explicit knowledge of the coefficient matrix A and is therefore suitable for problems where only products of A times a vector can be computed. Numerical experiments indicate that the preconditioner has most to offer when these matrix-vector products are expensive to compute, and when low accuracy in the solution is required. The effectiveness of the preconditioner is tested within a Hessian-free Newton method for optimization, and by solving certain linear systems arising in finite element models.
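A limited-memory quasi-Newton matrix of the kind described is typically applied through the standard L-BFGS two-loop recursion over stored curvature pairs (s, y); in this setting the pairs would be harvested from CG iterates. A sketch under our own naming, not the paper's code:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lbfgs_apply(q, s_list, y_list):
    """Apply the inverse limited-memory BFGS matrix built from curvature
    pairs (s_i, y_i) to the vector q, via the two-loop recursion.
    Used as a preconditioner, the pairs come from CG iterations."""
    rhos = [1.0 / dot(y, s) for s, y in zip(s_list, y_list)]
    q = list(q)
    alphas = []
    # First loop: newest pair to oldest.
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    # Initial matrix: scaling gamma = s.y / y.y from the most recent pair.
    s, y = s_list[-1], y_list[-1]
    r = [dot(s, y) / dot(y, y) * qi for qi in q]
    # Second loop: oldest pair to newest.
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * dot(y, r)
        r = [ri + (a - b) * si for ri, si in zip(r, s)]
    return r
```

With a single exact pair y = As from a quadratic system, the recursion reproduces A^{-1}q for q in the span of s, which is why accumulating such pairs along CG iterates yields a useful approximation of A^{-1} as a preconditioner.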
Hooking Your Solver to AMPL
, 1997
Cited by 28 (5 self)
This report tells how to make solvers work with AMPL's solve command. It describes an interface library, amplsolver.a, whose source is available from netlib. Examples include programs for listing LPs, automatic conversion to the LP dual (a shell script as solver), solvers for various nonlinear problems (with first and sometimes second derivatives computed by automatic differentiation), and getting C or Fortran 77 for nonlinear constraints, objectives and their first derivatives. Drivers for various well-known linear, mixed-integer, and nonlinear solvers provide more examples.
Solving the trust-region subproblem using the Lanczos method
, 1997
Cited by 21 (1 self)
The approximate minimization of a quadratic function within an ellipsoidal trust region is an important subproblem for many nonlinear programming methods. When the number of variables is large, the most widely used strategy is to trace the path of conjugate gradient iterates either to convergence or until it reaches the trust-region boundary. In this paper, we investigate ways of continuing the process once the boundary has been encountered. The key is to observe that the trust-region problem within the currently generated Krylov subspace has very special structure which enables it to be solved very efficiently. We compare the new strategy with existing methods. The resulting software package is available as HSL VF05 within the Harwell Subroutine Library.
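The baseline strategy that this paper extends is the classical Steihaug–Toint truncated CG: follow the conjugate gradient path for the model and stop where it crosses the trust-region boundary or where negative curvature appears. A sketch under our own naming (this is the point of departure, not the HSL VF05 implementation):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_boundary(p, d, delta):
    """Largest tau >= 0 with ||p + tau*d|| = delta, for p inside the ball."""
    dd, pd, pp = dot(d, d), dot(p, d), dot(p, p)
    return (-pd + math.sqrt(pd * pd - dd * (pp - delta * delta))) / dd

def steihaug_cg(hess_vec, g, delta, tol=1e-10, max_iter=200):
    """Approximately minimize the model m(p) = g.p + 0.5 p.Hp subject to
    ||p|| <= delta.  hess_vec(v) returns H @ v; H is never formed."""
    n = len(g)
    p = [0.0] * n
    r = list(g)                    # model gradient at p = 0
    d = [-ri for ri in r]
    for _ in range(max_iter):
        Hd = hess_vec(d)
        dHd = dot(d, Hd)
        if dHd <= 0.0:             # negative curvature: follow d to the boundary
            tau = to_boundary(p, d, delta)
            return [pi + tau * di for pi, di in zip(p, d)]
        alpha = dot(r, r) / dHd
        p_new = [pi + alpha * di for pi, di in zip(p, d)]
        if math.sqrt(dot(p_new, p_new)) >= delta:   # step leaves the region
            tau = to_boundary(p, d, delta)
            return [pi + tau * di for pi, di in zip(p, d)]
        r_new = [ri + alpha * hdi for ri, hdi in zip(r, Hd)]
        if math.sqrt(dot(r_new, r_new)) < tol:
            return p_new           # interior solution: model minimizer reached
        beta = dot(r_new, r_new) / dot(r, r)
        d = [-ri + beta * di for ri, di in zip(r_new, d)]
        p, r = p_new, r_new
    return p
```

The paper's observation is that stopping here discards information: restricted to the Krylov subspace the CG iterates span, the trust-region problem is cheap to solve exactly, so the process can usefully continue past the boundary.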
Where is the Nearest Non-Regular Pencil?
 To appear in Linear Algebra and its Applications
, 1997
Cited by 18 (6 self)
This is a first step toward the goal of finding a way to calculate a smallest norm deregularizing perturbation of a given square matrix pencil. Minimal deregularizing perturbations have geometric characterizations that include a variable projection linear least squares problem and a minimax characterization reminiscent of the Courant–Fischer theorem. The characterizations lead to new, computationally attractive upper and lower bounds. We give a brief survey and illustrate strengths and weaknesses of several upper and lower bounds, some of which are well-known and some of which are new. The ultimate goal remains elusive.
More AD of Nonlinear AMPL Models: Computing Hessian Information and Exploiting Partial Separability
 in Computational Differentiation: Applications, Techniques, and
, 1996
Cited by 16 (10 self)
We describe computational experience with automatic differentiation of mathematical programming problems expressed in the modeling language AMPL. Nonlinear expressions are translated to loop-free code, which makes it easy to compute gradients and Jacobians by backward automatic differentiation. The nonlinear expressions may be interpreted or, to gain some evaluation speed at the cost of increased preparation time, converted to Fortran or C. We have extended the interpretive scheme to evaluate Hessian (of Lagrangian) times vector. Detecting partially separable structure (sums of terms, each depending, perhaps after a linear transformation, on only a few variables) is of independent interest, as some solvers exploit this structure. It can be detected automatically by suitable "tree walks". Exploiting this structure permits an AD computation of the entire Hessian matrix by accumulating Hessian times vector computations for each term, and can lead to a much faster computation...
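The Hessian-times-vector primitive discussed here (which AMPL obtains exactly by automatic differentiation) is often approximated in Hessian-free solvers by a forward difference of gradients, Hv ≈ (∇f(x + εv) − ∇f(x))/ε. A sketch of that cheaper stand-in, with our own names, for contrast with the exact AD evaluation:

```python
def hess_vec_fd(grad, x, v, eps=1e-6):
    """Approximate the Hessian-vector product H(x) @ v by a forward
    difference of gradients.  AD (as in AMPL) evaluates it exactly instead,
    with no truncation error and no choice of eps."""
    gx = grad(x)
    g_shift = grad([xi + eps * vi for xi, vi in zip(x, v)])
    return [(a - b) / eps for a, b in zip(g_shift, gx)]
```

For f(x) = x0² + 3·x1², with gradient (2·x0, 6·x1) and Hessian diag(2, 6), the product with v = (1, 0) comes out close to (2, 0); exact Hv via AD avoids the difference error entirely.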
Curvilinear Stabilization Techniques for Truncated Newton Methods in Large Scale Unconstrained Optimization: the . . .
 SIAM J. Optim
, 1998
Cited by 15 (4 self)
The aim of this paper is to define a new class of minimization algorithms for solving large-scale unconstrained problems. In particular we describe a stabilization framework, based on a curvilinear line search, which uses a combination of a Newton-type direction and a negative curvature direction. The motivation for using a negative curvature direction is to take into account local nonconvexity of the objective function. On the basis of this framework, we propose an algorithm which uses the Lanczos method for determining at each iteration both a Newton-type direction and an effective negative curvature direction. The results of extensive numerical testing are reported together with a comparison with the LANCELOT package. These results show that the algorithm is very competitive, which seems to indicate that the proposed approach is promising.
Sparse SOS relaxations for minimizing functions that are summations of small polynomials
 SIAM Journal On Optimization
, 2008
Cited by 15 (2 self)
This paper discusses how to find the global minimum of functions that are summations of small polynomials ("small" means involving a small number of variables). Some sparse sum of squares (SOS) techniques are proposed. We compare their computational complexity and lower bounds with prior SOS relaxations. Under certain conditions, we also discuss how to extract the global minimizers from these sparse relaxations. The proposed methods are especially useful in solving sparse polynomial systems and nonlinear least squares problems. Numerical experiments are presented, which show that the proposed methods significantly improve the computational performance of prior methods for solving these problems. Lastly, we present applications of this sparsity technique in solving polynomial systems derived from nonlinear differential equations and sensor network localization. Key words: polynomials, sum of squares (SOS), sparsity, nonlinear least squares, polynomial system, nonlinear differential equations, sensor network localization
Numerical experience with limited-memory quasi-Newton methods and truncated Newton methods
 SIAM J. Optimization
, 1992
Cited by 13 (9 self)
Abstract. Computational experience with several limited-memory quasi-Newton and truncated Newton methods for unconstrained nonlinear optimization is described. Comparative tests were conducted on a well-known test library [J. J. Moré, B. S. Garbow, and K. E. Hillstrom, ACM Trans. Math. Software, 7 (1981), pp. 17–41], on several synthetic problems allowing control of the clustering of eigenvalues in the Hessian spectrum, and on some large-scale problems in oceanography and meteorology. The results indicate that among the tested limited-memory quasi-Newton methods, the L-BFGS method [D. C. Liu and J. Nocedal, Math. Programming, 45 (1989), pp. 503–528] has the best overall performance for the problems examined. The numerical performance of two truncated Newton methods, differing in the inner-loop solution for the search vector, is competitive with that of L-BFGS. Key words: limited-memory quasi-Newton methods, truncated Newton methods, synthetic cluster functions, large-scale unconstrained minimization. AMS subject classifications. 90C30, 93C20, 93C75, 65K10, 76C20