Results 1 - 10 of 24
Newton's Method For Large Bound-Constrained Optimization Problems
 SIAM JOURNAL ON OPTIMIZATION
, 1998
Cited by 76 (4 self)
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly-constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
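The geometric viewpoint in this abstract rests on projecting steps back onto the feasible set. A minimal first-order sketch of that projection idea for a box-constrained problem (this is a plain projected-gradient iteration, not the paper's trust-region Newton method, and the objective and bounds are invented for illustration):

```python
import numpy as np

def project(x, l, u):
    """Euclidean projection onto the box l <= x <= u."""
    return np.clip(x, l, u)

def projected_gradient_method(grad, x0, l, u, step=0.1, iters=200):
    """Fixed-step projected-gradient iteration for min f(x) s.t. l <= x <= u."""
    x = project(x0, l, u)
    for _ in range(iters):
        x = project(x - step * grad(x), l, u)
    return x

# Toy problem: min (x0 - 2)^2 + (x1 + 1)^2 on the box [0, 1]^2.
# The unconstrained minimizer (2, -1) lies outside the box, so both
# bounds become active and the constrained solution is (1, 0).
grad = lambda x: 2.0 * (x - np.array([2.0, -1.0]))
x_star = projected_gradient_method(grad, np.zeros(2), np.zeros(2), np.ones(2))
```

Working with the projection onto the feasible set, rather than with an explicit list of active constraints, is what lets the convergence theory avoid strict complementarity and linear-independence assumptions.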
Complete search in continuous global optimization and constraint satisfaction, Acta Numerica 13
, 2004
"... A chapter for ..."
A Survey of Condition Number Estimation for Triangular Matrices
 SIAM Review
, 1987
Cited by 55 (7 self)
Abstract. We survey and compare a wide variety of techniques for estimating the condition number of a triangular matrix, and make recommendations concerning the use of the estimates in applications. Each of the methods is shown to bound the condition number; the bounds can broadly be categorised as upper bounds from matrix theory and lower bounds from heuristic or probabilistic algorithms. For each bound we examine by how much, at worst, it can overestimate or underestimate the condition number. Numerical experiments are presented in order to illustrate and compare the practical performance of the condition estimators. Key words: matrix condition number, triangular matrix, LINPACK, QR decomposition, rank estimation. AMS(MOS) subject classification: 65F35. 1. Introduction. Let C^(m×n) (R^(m×n)) denote the set of all m × n matrices with complex (real) elements. Given a nonsingular matrix A ∈ C^(n×n) and a matrix norm ||·|| on C^(n×n), the condition number of A with respect to inversion is defined by κ(A) = ||A|| ||A⁻¹||.
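As a concrete instance of the upper/lower-bound dichotomy the survey describes, the diagonal of a triangular matrix already yields a cheap lower bound on κ(A): since ||T||∞ >= max|t_ii| and ||T⁻¹||∞ >= 1/min|t_ii|, the diagonal ratio never overestimates the condition number. A small sketch (the matrix is a random example, not one from the survey):

```python
import numpy as np

# Random upper-triangular test matrix with a well-separated diagonal.
rng = np.random.default_rng(0)
T = np.triu(rng.standard_normal((5, 5))) + 5.0 * np.eye(5)

# Exact infinity-norm condition number kappa(T) = ||T|| ||T^-1||.
kappa_exact = np.linalg.cond(T, np.inf)

# Cheap lower bound from the diagonal alone: for triangular T,
# (T^-1)_ii = 1/t_ii, so kappa(T) >= max|t_ii| / min|t_ii|.
d = np.abs(np.diag(T))
kappa_lower = d.max() / d.min()
```

The bound costs O(n) work versus O(n³) for forming T⁻¹, which is the trade-off the survey's estimators navigate.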
Global Methods For Nonlinear Complementarity Problems
 MATH. OPER. RES
, 1994
Cited by 28 (1 self)
Global methods for nonlinear complementarity problems either formulate the problem as a system of nonsmooth nonlinear equations, or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration only requires the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The convergence rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the subproblems.
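A hedged sketch of the least-squares reformulation idea on a tiny linear complementarity example (the componentwise Fischer-Burmeister residual used here is one standard choice, not necessarily the formulation analyzed in the paper, and the data M, q are made up):

```python
import numpy as np
from scipy.optimize import least_squares

def F(x):
    """A small linear complementarity example: F(x) = M x + q."""
    M = np.array([[2.0, 1.0], [1.0, 2.0]])
    q = np.array([-1.0, -1.0])
    return M @ x + q

def residual(x):
    """Componentwise Fischer-Burmeister function: zero iff
    x >= 0, F(x) >= 0, and x_i * F_i(x) = 0 for every i."""
    Fx = F(x)
    return np.sqrt(x**2 + Fx**2) - x - Fx

# Bound-constrained nonlinear least squares, started from a nonnegative
# point; for this M, q the solution is x = (1/3, 1/3) with F(x) = 0.
sol = least_squares(residual, x0=np.ones(2), bounds=(0.0, np.inf))
```

Each `least_squares` iteration solves only linear systems (a Gauss-Newton-type step), mirroring the per-iteration cost the abstract highlights.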
Hooking Your Solver to AMPL
, 1997
Cited by 28 (5 self)
This report tells how to make solvers work with AMPL's solve command. It describes an interface library, amplsolver.a, whose source is available from netlib. Examples include programs for listing LPs, automatic conversion to the LP dual (shell script as solver), solvers for various nonlinear problems (with first and sometimes second derivatives computed by automatic differentiation), and getting C or Fortran 77 for nonlinear constraints, objectives, and their first derivatives. Drivers for various well-known linear, mixed-integer, and nonlinear solvers provide more examples.
Numerical optimization using computer experiments
 Institute for Computer
, 1997
Cited by 28 (9 self)
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
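The surrogate-guided loop described above can be sketched as follows; a Gaussian radial-basis interpolant stands in for kriging, and the objective, grid, and initial design are invented for illustration:

```python
import numpy as np

def expensive_f(x):
    """Stand-in for an expensive, derivative-free objective on [0, 1]."""
    return (x - 0.3) ** 2

def rbf_fit(X, y, eps=5.0):
    """Interpolation weights for a Gaussian radial-basis surrogate."""
    K = np.exp(-eps * (X[:, None] - X[None, :]) ** 2)
    return np.linalg.solve(K + 1e-10 * np.eye(len(X)), y)

def rbf_predict(X, w, grid, eps=5.0):
    """Surrogate predictions at the grid points."""
    K = np.exp(-eps * (grid[:, None] - X[None, :]) ** 2)
    return K @ w

grid = np.linspace(0.0, 1.0, 101)
X = np.array([0.0, 0.5, 1.0])           # initial design
y = expensive_f(X)
for _ in range(5):
    w = rbf_fit(X, y)                   # refit surrogate to all evaluations
    cand = grid[np.argmin(rbf_predict(X, w, grid))]
    if np.any(np.isclose(cand, X)):     # surrogate minimizer already sampled
        break
    X = np.append(X, cand)              # one new expensive evaluation
    y = np.append(y, expensive_f(cand))

best = X[np.argmin(y)]                  # best evaluated point so far
```

The point of the design is that the expensive function is evaluated only at the few points the surrogate recommends, while the cheap surrogate is minimized exhaustively over the grid.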
A Grid Algorithm for Bound Constrained Optimization of Noisy Functions
 IMA J. of Numerical Analysis
, 1995
"... noisy functions ..."
Exposing Constraints
, 1992
Cited by 25 (1 self)
The development of algorithms and software for the solution of large-scale optimization problems has been the main motivation behind the research on the identification properties of optimization algorithms. The aim of an identification result for a linearly constrained problem is to show that if the sequence generated by an optimization algorithm converges to a stationary point, then there is a nontrivial face F of the feasible set such that after a finite number of iterations, the iterates enter and remain in the face F. This paper develops the identification properties of linearly constrained optimization algorithms without any nondegeneracy or linear independence assumptions. The main result shows that the projected gradient converges to zero if and only if the iterates enter and remain in the face exposed by the negative gradient. This result generalizes results of Burke and Moré obtained for nondegenerate cases.
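For box constraints the projected gradient in the main result has a simple closed form, which makes the stationarity test easy to evaluate numerically: its components agree with the gradient on inactive coordinates and keep only the infeasible part of the gradient at active bounds. A small sketch (the objective and test point are made up):

```python
import numpy as np

def projected_gradient(x, g, l, u, tol=1e-12):
    """Projected gradient for l <= x <= u: g_i on inactive coordinates,
    min(g_i, 0) at a lower bound, max(g_i, 0) at an upper bound.
    It vanishes exactly at stationary points."""
    pg = g.astype(float).copy()
    at_l = x <= l + tol
    at_u = x >= u - tol
    pg[at_l] = np.minimum(g[at_l], 0.0)
    pg[at_u] = np.maximum(g[at_u], 0.0)
    return pg

# For f(x) = (x0 - 2)^2 + (x1 + 1)^2 on [0, 1]^2, the point x = (1, 0)
# has gradient (-2, 2), which pushes outward at both active bounds,
# so the projected gradient is zero and x is stationary.
x = np.array([1.0, 0.0])
g = 2.0 * (x - np.array([2.0, -1.0]))
pg = projected_gradient(x, g, np.zeros(2), np.ones(2))
```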
Distance Matrix Completion by Numerical Optimization
 Comput. Optim. Appl
, 1997
Cited by 23 (5 self)
Consider the problem of determining whether or not a partial dissimilarity matrix can be completed to a Euclidean distance matrix. The dimension of the distance matrix may be restricted and the known dissimilarities may be permitted to vary subject to bound constraints. This problem can be formulated as an optimization problem for which the global minimum is zero if and only if completion is possible. The optimization problem is derived in a very natural way from an embedding theorem in classical distance geometry and from the classical approach to multidimensional scaling. It belongs to a general family of problems studied by Trosset [13] and can be formulated as a nonlinear programming problem with simple bound constraints. Thus, this approach provides a constructive technique for obtaining approximate solutions to a general class of distance matrix completion problems. Key words: Euclidean distance matrices, positive semidefinite matrices, distance geometry, multidimensional scaling ...
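The "global minimum is zero iff completion is possible" criterion can be sketched with a stress-like objective over point configurations; the example data (the four sides of a unit square, with the two diagonal entries unspecified) and the particular objective are invented for illustration and are only in the spirit of the paper's formulation:

```python
import numpy as np
from scipy.optimize import minimize

def stress(X_flat, known, n, p):
    """Sum of squared deviations between configuration distances and
    the known dissimilarities; zero iff the configuration realizes them."""
    X = X_flat.reshape(n, p)
    return sum((np.linalg.norm(X[i] - X[j]) - delta) ** 2
               for i, j, delta in known)

# Four points in the plane (p = 2) with the four side lengths of a unit
# square specified and the two diagonal dissimilarities left unknown.
n, p = 4, 2
known = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]

x0 = np.random.default_rng(1).standard_normal(n * p)
res = minimize(stress, x0, args=(known, n, p), method="BFGS")
# A minimum value near zero certifies that a Euclidean completion exists.
```

Restricting p restricts the embedding dimension of the completed distance matrix, matching the dimension constraint mentioned in the abstract.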
Strong Duality in Nonconvex Quadratic Optimization with Two Quadratic Constraints
 SIAM Journal on Optimization
Cited by 18 (10 self)
Abstract. We consider the problem of minimizing an indefinite quadratic function subject to two quadratic inequality constraints. When the problem is defined over the complex plane we show that strong duality holds and obtain necessary and sufficient optimality conditions. We then develop a connection between the images of the real and complex spaces under a quadratic mapping, which together with the results in the complex case leads to a condition that ensures strong duality in the real setting. Preliminary numerical simulations suggest that for random instances of the extended trust region subproblem, the sufficient condition is satisfied with high probability. Furthermore, we show that the sufficient condition is always satisfied in two classes of nonconvex quadratic problems. Finally, we discuss an application of our results to robust least squares problems.