Results 1–10 of 11
A Trust Region Framework For Managing The Use Of Approximation Models In Optimization
 STRUCTURAL OPTIMIZATION
, 1998
"... This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of various fidelity in optimization. By robust global behavior we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary ini ..."
Abstract

Cited by 84 (9 self)
 Add to MetaCart
This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of various fidelity in optimization. By robust global behavior we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary initial iterate, will converge to a stationary point or local optimizer for the original problem. The approach we present is based on the trust-region idea from nonlinear programming and is shown to be provably convergent to a solution of the original high-fidelity problem. The proposed method for managing approximations in engineering optimization suggests ways to decide when the fidelity, and thus the cost, of the approximations might be fruitfully increased or decreased in the course of the optimization iterations. The approach is quite general. We make no assumptions on the structure of the original problem, in particular, no assumptions of convexity and separability, and place only mild ...
Analysis of Inexact Trust-Region Interior-Point SQP Algorithms
, 1995
"... In this paper we analyze inexact trustregion interiorpoint (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applicati ..."
Abstract

Cited by 11 (7 self)
 Add to MetaCart
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solutions of linear systems and the derivatives are computed inexactly, yielding nonzero residuals. This paper analyzes the effect of this inexactness on the convergence of TRIP SQP and gives practical rules to control the size of the residuals of these inexact calculations. It is shown that if the size of the residuals is of the order of both the size of the constraints and the trust-region radius, t...
Methods for nonlinear constraints in optimization calculations
 THE STATE OF THE ART IN NUMERICAL ANALYSIS
, 1996
"... ..."
SQP methods for large-scale nonlinear programming
, 1999
"... We compare and contrast a number of recent sequential quadratic programming (SQP) methods that have been proposed for the solution of largescale nonlinear programming problems. Both linesearch and trustregion approaches are considered, as are the implications of interiorpoint and quadratic progr ..."
Abstract

Cited by 9 (0 self)
 Add to MetaCart
We compare and contrast a number of recent sequential quadratic programming (SQP) methods that have been proposed for the solution of large-scale nonlinear programming problems. Both line-search and trust-region approaches are considered, as are the implications of interior-point and quadratic programming methods.
On Some Properties of Quadratic Programs With a Convex Quadratic Constraint
, 1996
"... In this paper we consider the problem of minimizing a (possibly nonconvex) quadratic function with a quadratic constraint. We point out some new properties of the problem. In particular, in the first part of the paper, we show that (i) given a KKT point that is not a global minimizer, it is easy to ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
In this paper we consider the problem of minimizing a (possibly nonconvex) quadratic function with a quadratic constraint. We point out some new properties of the problem. In particular, in the first part of the paper, we show that (i) given a KKT point that is not a global minimizer, it is easy to find a "better" feasible point; (ii) strict complementarity holds at the local-nonglobal minimizer. In the second part, we show that the original constrained problem is equivalent to the unconstrained minimization of a piecewise quartic merit function. Using the unconstrained formulation we give, in the nonconvex case, a new second-order necessary condition for global minimizers. In the third part, algorithmic applications of the preceding results are briefly outlined and some preliminary numerical experiments are reported.
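A quick numeric illustration of the problem class (my own toy check, not the paper's algorithm): in the special case of a unit-ball constraint, the global minimum of $x^T A x$ over $\|x\| \le 1$ equals $\min(0, \lambda_{\min}(A))$ and, for indefinite $A$, is attained at a unit eigenvector for the smallest eigenvalue.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, -3.0]])        # symmetric, indefinite
w, V = np.linalg.eigh(A)           # eigenvalues in ascending order
x_star = V[:, 0]                   # unit eigenvector for lambda_min
val = x_star @ A @ x_star          # candidate global minimum value

# random feasible points never beat the eigenvector direction
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    x /= max(1.0, np.linalg.norm(x))   # project into the unit ball
    assert x @ A @ x >= val - 1e-12
```

For indefinite `A` every such KKT point other than this one can indeed be improved, which is the flavor of property (i) above.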
Relaxing Convergence Conditions To Improve The Convergence Rate
, 1999
"... Standard global convergence proofs are examined to determine why some algorithms perform better than other algorithms. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the dista ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
Standard global convergence proofs are examined to determine why some algorithms perform better than others. We show that relaxing the conditions required to prove global convergence can improve an algorithm's performance. Further analysis indicates that minimizing an estimate of the distance to the minimum relaxes the convergence conditions in such a way as to improve an algorithm's convergence rate. A new line-search algorithm based on these ideas is presented that does not force a reduction in the objective function at each iteration, yet it allows the objective function to increase during an iteration only if this will result in faster convergence. Unlike the nonmonotone algorithms in the literature, the new algorithm dynamically adjusts to account for changes in the relative influence of curvature and descent. The result is an optimal algorithm in the sense that an estimate of the distance to the minimum is minimized at each iteration. The algorithm is shown to be well defi...
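For context, the nonmonotone algorithms alluded to above are typified by the Grippo-Lampariello-Lucidi scheme, in which the Armijo test compares against the maximum of the last few objective values rather than the current one, so the objective may increase on some iterations. The sketch below shows that standard scheme (names and defaults are my assumptions; this is not the specific algorithm of the paper).

```python
import numpy as np

def nonmonotone_armijo(f, grad, x0, memory=5, c1=1e-4, tol=1e-8, max_iter=500):
    """GLL-style nonmonotone line search on steepest-descent directions:
    accept a step once f(x + t*d) <= max(recent f values) + c1*t*g@d."""
    x = np.asarray(x0, float)
    history = [f(x)]
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g                               # steepest-descent direction
        t, f_ref = 1.0, max(history[-memory:])
        while f(x + t * d) > f_ref + c1 * t * (g @ d):
            t *= 0.5                         # backtrack
        x = x + t * d
        history.append(f(x))
    return x
```

Because the reference value is a running maximum rather than the current objective, a full step can be accepted even when it temporarily worsens `f`, which is exactly the freedom the paper above exploits.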
A Note on Quadratic Forms
 Research Report, Institute of Computational Mathematics and Scientific/ Engineering Computing, Chinese Academy of Sciences
, 1997
"... We extend an interesting theorem of Yuan [12] for two quadratic forms to three matrices. Let C 1 ; C 2 ; C 3 be three symmetric matrices in ! n\Thetan , if maxfx T C 1 x; x T C 2 x; x T C 3 xg 0 for all x 2 ! n , it is proved that there exists t i 0(i = 1; 2; 3) such that P 3 i=1 t ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
We extend an interesting theorem of Yuan [12] for two quadratic forms to three matrices. Let $C_1, C_2, C_3$ be three symmetric matrices in $\mathbb{R}^{n\times n}$. If $\max\{x^T C_1 x,\ x^T C_2 x,\ x^T C_3 x\} \ge 0$ for all $x \in \mathbb{R}^n$, it is proved that there exist $t_i \ge 0$ $(i=1,2,3)$ such that $\sum_{i=1}^{3} t_i = 1$ and $\sum_{i=1}^{3} t_i C_i$ has at most one negative eigenvalue. Keywords: quadratic forms, convex combination, matrix perturbation. This work was supported by Chinese NSF grants 19525101 and 19731010.
1. Introduction. A very interesting result about two quadratic forms was given by Yuan [12]. It reads as follows:
Theorem 1.1. Let $C_1, C_2 \in \mathbb{R}^{n\times n}$ be two symmetric matrices and $A$ and $B$ be two closed sets in $\mathbb{R}^n$ such that $A \cup B = \mathbb{R}^n$. (1.1) If we have $x^T C_1 x \ge 0$ for $x \in A$ and $x^T C_2 x \ge 0$ for $x \in B$, (1.2) then there exists a $t \in [0,1]$ such that the matrix $tC_1 + (1-t)C_2$ (1.3) is positive semidefinite.
The above theorem is very useful in the study of optimal...
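A concrete instance of the two-matrix case of Theorem 1.1 (an illustration I constructed, not the three-matrix extension proved in the note): with $C_1 = \mathrm{diag}(1,-1)$ and $C_2 = \mathrm{diag}(-1,1)$ we have $\max\{x^T C_1 x, x^T C_2 x\} = |x_1^2 - x_2^2| \ge 0$ everywhere, with $A = \{|x_1| \ge |x_2|\}$ and $B = \{|x_2| \ge |x_1|\}$ covering $\mathbb{R}^2$, and $t = 1/2$ gives a positive semidefinite combination.

```python
import numpy as np

C1 = np.diag([1.0, -1.0])
C2 = np.diag([-1.0, 1.0])
t = 0.5
M = t * C1 + (1 - t) * C2        # the zero matrix, trivially PSD
eigs = np.linalg.eigvalsh(M)
```

Note that neither $C_1$ nor $C_2$ is positive semidefinite on its own; only the convex combination is, which is the content of the theorem.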
AN ELLIPSOIDAL BRANCH AND BOUND ALGORITHM FOR GLOBAL OPTIMIZATION ∗
"... Abstract. A branch and bound algorithm is developed for global optimization. Branching in the algorithm is accomplished by subdividing the feasible set using ellipses. Lower bounds are obtained by replacing the concave part of the objective function by an affine underestimate. A ball approximation a ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Abstract. A branch and bound algorithm is developed for global optimization. Branching in the algorithm is accomplished by subdividing the feasible set using ellipses. Lower bounds are obtained by replacing the concave part of the objective function by an affine underestimate. A ball approximation algorithm, obtained by generalizing a scheme of Lin and Han, is used to solve the convex relaxation of the original problem. The ball approximation algorithm is compared to SEDUMI as well as to gradient projection algorithms using randomly generated test problems with a quadratic objective and ellipsoidal constraints.
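The lower-bounding idea can be seen in one dimension (a toy construction of mine, not the paper's algorithm): split $f(x) = x^2 - x^4$ into a convex part $x^2$ and a concave part $-x^4$, and replace the concave part by its secant over the current interval; the secant underestimates any concave function on that interval, so minimizing the relaxation gives a valid lower bound for branch and bound.

```python
import numpy as np

a, b = -0.5, 0.5
concave = lambda x: -x**4                    # concave part of the objective
# secant (chord) through the interval endpoints: affine underestimate
chord = lambda x: concave(a) + (concave(b) - concave(a)) * (x - a) / (b - a)

xs = np.linspace(a, b, 1001)
f = xs**2 + concave(xs)                      # original objective on [a, b]
lower = xs**2 + chord(xs)                    # convex relaxation
assert np.all(lower <= f + 1e-12)            # valid underestimate everywhere
bound = lower.min()                          # lower bound for this subregion
true_min = f.min()
```

Subdividing the interval tightens the secant, which is why the bounds improve as the branch and bound tree deepens.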
Optimality Conditions for CDT Subproblem
, 1997
"... : In this paper, we give necessary and sufficient optimality conditions which are easy verified for the local solution of CelisDennisTapia subproblem (CDT subproblem) where the Hessian at this local solution has one negative eigenvalue. If CDT subproblem has no global solution with Hessian of Lagr ..."
Abstract
 Add to MetaCart
In this paper, we give necessary and sufficient optimality conditions, which are easily verified, for a local solution of the Celis-Dennis-Tapia subproblem (CDT subproblem) at which the Hessian has one negative eigenvalue. If the CDT subproblem has no global solution with a positive semidefinite Hessian of the Lagrangian, then the Hessian of the Lagrangian has at least one negative eigenvalue. It is very important to investigate all the stationary points of the Lagrangian dual function and to characterize the local solutions. We also discuss the gap between these two conditions. Key Words: CDT subproblem, optimality condition.
1 INTRODUCTION The CDT subproblem was proposed by Celis, Dennis and Tapia (1985) in order to overcome the difficulty of inconsistency that arises when one applies the sequential quadratic programming method with trust regions to constrained optimization; see, for example, Powell and Yuan (1991). The CDT subproblem has the following form: $\min_{d \in \mathbb{R}^n} \Phi(d) = \frac{1}{2}\, d^T B d\,\dots$
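For reference, the truncated formula above gives only the quadratic objective; as commonly stated in the literature (the symbol names below follow that convention and are not taken from this abstract), the CDT subproblem minimizes a quadratic model subject to two ball-type constraints:

$$\min_{d \in \mathbb{R}^n} \; \Phi(d) = \tfrac{1}{2}\, d^T B d + g^T d \quad \text{s.t.} \quad \|A^T d + c\| \le \xi, \qquad \|d\| \le \Delta.$$

The first constraint is a trust-region relaxation of the linearized equality constraints; the second is the usual trust-region bound, and their possible interaction is what makes the optimality theory harder than for a single ball constraint.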
Analysis of the Issue of Consistency in Identification for Robust Control
, 2001
"... Given measured data generated by a discretetime linear system we propose a model consisting of a linear, timeinvariant system affected by normbounded perturbation. Under mild assumptions, the plants belonging to the resulting uncertain family form a convex set. The approach depends on two key par ..."
Abstract
 Add to MetaCart
Given measured data generated by a discrete-time linear system, we propose a model consisting of a linear, time-invariant system affected by a norm-bounded perturbation. Under mild assumptions, the plants belonging to the resulting uncertain family form a convex set. The approach depends on two key parameters: an a priori given bound on the perturbation, and the input used to generate the data. It turns out that the size of the uncertain family can be reduced by intersecting the model families obtained by making use of different inputs. Two model validation problems in this identification scheme are analyzed, namely the worst and the best invalidation problems. It turns out that while the former is a max-min optimization problem subject to a spherical constraint, the latter is a quadratic optimization problem with a quadratic and a convex constraint. For a given energy level,...