Results 1-10 of 13
Trust-Region Interior-Point SQP Algorithms for a Class of Nonlinear Programming Problems
SIAM J. Control Optim., 1997
Abstract

Cited by 37 (8 self)
In this paper a family of trust-region interior-point SQP algorithms for the solution of a class of minimization problems with nonlinear equality constraints and simple bounds on some of the variables is described and analyzed. Such nonlinear programs arise, e.g., from the discretization of optimal control problems. The algorithms treat states and controls as independent variables. They are designed to take advantage of the structure of the problem. In particular, they do not rely on matrix factorizations of the linearized constraints, but use solutions of the linearized state equation and the adjoint equation. They are well suited for large-scale problems arising from optimal control problems governed by partial differential equations. The algorithms keep strict feasibility with respect to the bound constraints by using an affine-scaling method proposed for a different class of problems by Coleman and Li, and they exploit trust-region techniques for equality-constrained optimization ...
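The adjoint-based structure this abstract alludes to (state solves and adjoint solves instead of factoring the linearized constraints) can be sketched in a few lines of NumPy. The linear state equation A y = B u and the tracking-type objective below are illustrative assumptions for the sketch, not the problem class treated in the paper:

```python
import numpy as np

# Toy reduced-gradient computation via the adjoint equation.  Assumed setup
# (not from the paper): linear state equation A y = B u and objective
# f(y, u) = 0.5*||y - yd||^2 + 0.5*||u||^2.
rng = np.random.default_rng(2)
n, m = 5, 2
A = rng.standard_normal((n, n)) + 10 * np.eye(n)  # nonsingular state operator
B = rng.standard_normal((n, m))
yd = rng.standard_normal(n)

def fhat(u):
    y = np.linalg.solve(A, B @ u)                 # state solve
    return 0.5 * np.sum((y - yd) ** 2) + 0.5 * np.sum(u ** 2)

def reduced_grad(u):
    y = np.linalg.solve(A, B @ u)                 # state solve
    p = np.linalg.solve(A.T, y - yd)              # adjoint solve
    return B.T @ p + u                            # no factorization of [A, -B]

# Finite-difference check of the adjoint gradient.
u = rng.standard_normal(m)
g = reduced_grad(u)
h = 1e-6
fd = np.array([(fhat(u + h * e) - fhat(u)) / h for e in np.eye(m)])
print(np.abs(g - fd).max())   # tiny: forward-difference error only
```

Note that each gradient costs exactly one state solve and one adjoint solve, which is the structural advantage the abstract emphasizes for PDE-governed problems.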
Complete Orthogonal Decomposition for Weighted Least Squares
SIAM J. Matrix Anal. Appl., 1995
Abstract

Cited by 14 (4 self)
Consider a full-rank weighted least-squares problem in which the weight matrix is highly ill-conditioned. Because of the ill-conditioning, standard methods for solving least-squares problems, QR factorization and the null-space method for example, break down. G. W. Stewart established a norm bound for such a system of equations, indicating that it may be possible to find an algorithm that gives an accurate solution. S. A. Vavasis proposed a new definition of stability that is based on this result. He also defined the NSH algorithm for solving this least-squares problem and showed that it satisfies his definition of stability. In this paper, we propose a complete orthogonal decomposition algorithm to solve this problem and show that it is also stable. This new algorithm is simpler and more efficient than the NSH method.
1 Introduction
We consider solving the problem

    min_{y in R^n} || D^{-1/2} (A y - b) ||    (1)

for y, where D is a symmetric positive definite m x m matrix, A is an ...
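As a concrete instance of problem (1), here is a small NumPy sketch that builds an ill-conditioned SPD weight matrix D and solves the row-scaled problem with a plain QR factorization. This is the kind of baseline approach the paper improves on, not the complete orthogonal decomposition or NSH algorithm themselves, and the dimensions and conditioning are made-up test values:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Ill-conditioned SPD weight matrix D = Q diag(d) Q^T, condition ~1e8.
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
d = np.logspace(0, 8, m)
W = Q @ np.diag(d ** -0.5) @ Q.T          # W = D^{-1/2}

# Solve min ||W (A y - b)|| via QR of the scaled matrix.
Qr, R = np.linalg.qr(W @ A)
y = np.linalg.solve(R, Qr.T @ (W @ b))

# Optimality check: the weighted residual is orthogonal to range(W A).
r = W @ (A @ y - b)
print(np.linalg.norm((W @ A).T @ r))      # roundoff level
```

Stewart's and Vavasis's point is precisely that such direct scaling can lose accuracy as the spread of the weights grows, which motivates the stable decomposition proposed in the paper.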
Tits. Newton-KKT interior-point methods for indefinite quadratic programming
 Comput. Optim. Appl
Abstract

Cited by 8 (2 self)
Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the "primal" variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) ...
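The "Newton-on-KKT" core is easy to see on a toy equality-constrained QP (the interior-point handling of inequality multipliers is omitted here): with an indefinite Hessian whose reduced Hessian is positive definite, a single Newton step on the KKT conditions produces the local solution. The numbers below are made up for illustration:

```python
import numpy as np

# Toy equality-constrained QP with an indefinite Hessian:
#   minimize 0.5 x^T H x + c^T x   s.t.   A x = b.
# The reduced Hessian on null(A) = span(e1, e2) is positive definite, so
# the KKT system is nonsingular and one Newton step lands on the solution.
H = np.diag([2.0, 2.0, -1.0])               # indefinite
c = np.array([-1.0, -1.0, 0.0])
A = np.array([[0.0, 0.0, 1.0]])             # pins the indefinite direction
b = np.array([1.0])

n, m = H.shape[0], A.shape[0]
K = np.block([[H, A.T], [A, np.zeros((m, m))]])   # KKT matrix
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x, lam = sol[:n], sol[n:]
print(x, lam)   # x = [0.5, 0.5, 1.0], lam = [1.0]
```

For the indefinite case treated in the paper, the interest lies in steering such Newton steps toward local minimizers rather than arbitrary KKT points.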
On Interior-Point Newton Algorithms for Discretized Optimal Control Problems with State Constraints
Optim. Methods Softw., 1998
Abstract

Cited by 7 (2 self)
In this paper we consider a class of nonlinear programming problems that arise from the discretization of optimal control problems with bounds on both the state and the control variables. For this class of problems, we analyze constraint qualifications and optimality conditions in detail. We derive an affine-scaling and two primal-dual interior-point Newton algorithms by applying, in an interior-point way, Newton's method to equivalent forms of the first-order optimality conditions. Under appropriate assumptions, the interior-point Newton algorithms are shown to be locally well-defined with a q-quadratic rate of local convergence. By using the structure of the problem, the linear algebra of these algorithms can be reduced to the null space of the Jacobian of the equality constraints. The similarities between the three algorithms are pointed out, and their corresponding versions for the general nonlinear programming problem are discussed.
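A drastically simplified version of the affine-scaling idea can be sketched for a bound-constrained quadratic with lower bounds x >= 0 (not the state-and-control structure of the paper); the scaling choice and step rule here are illustrative simplifications in the spirit of Coleman-Li, not the paper's algorithms:

```python
import numpy as np

# Simplified affine-scaling iteration for min 0.5*||x - z||^2 s.t. x >= 0.
# Components whose gradient pushes into the bound are scaled by their
# distance to it; iterates stay strictly interior, and x2 approaches its
# bound without ever touching it.
z = np.array([2.0, -1.0])       # unconstrained minimizer; x2 must hit 0
x = np.ones(2)                  # strictly feasible start
for _ in range(200):
    g = x - z                   # gradient of the objective
    d = np.where(g > 0, x, 1.0) # affine scaling toward the lower bound
    step = -d * g               # scaled descent direction
    t = 1.0
    while np.any(x + t * step <= 0):
        t *= 0.5                # back off to stay strictly interior
    x = x + 0.99 * t * step
print(x)   # close to [2, 0]: the bound becomes active only in the limit
```

The scaling shrinks steps automatically near active bounds, which is what lets such methods keep strict feasibility without an explicit active-set mechanism.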
Copositive optimization – recent developments and applications
 European Journal of Operational Research
, 2012
Abstract

Cited by 4 (1 self)
Due to its versatility, copositive optimization has received increasing interest in the Operational Research community and is a rapidly expanding and fertile field of research. It is a special case of conic optimization, which consists of minimizing a linear function over a cone subject to linear constraints. The diversity of copositive formulations in different domains of optimization is impressive, since problem classes in both the continuous and the discrete world, as well as both deterministic and stochastic models, are covered. Copositivity appears in local and global optimality conditions for quadratic optimization, but can also yield tighter bounds for NP-hard combinatorial optimization problems. Here some of the recent success stories are told, along with principles, algorithms and applications.
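Copositivity itself (x^T M x >= 0 for all x >= 0) is concrete enough to probe numerically. The randomized simplex search below can certify non-copositivity by exhibiting a violating point, though finding none proves nothing; checking copositivity exactly is co-NP-complete. The two test matrices are made-up examples:

```python
import numpy as np

def simplex_certificate(M, samples=20000, seed=0):
    """Randomized necessary check for copositivity: search the standard
    simplex for x >= 0 with x^T M x < 0.  Returns a violating x, or None
    if none was found (which does NOT prove copositivity in general)."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    X = rng.dirichlet(np.ones(n), size=samples)   # points on the simplex
    vals = np.einsum('ij,jk,ik->i', X, M, X)      # x^T M x per sample
    i = np.argmin(vals)
    return X[i] if vals[i] < 0 else None

M_pos = np.array([[1.0, -1.0], [-1.0, 1.0]])   # PSD, hence copositive
M_neg = np.array([[1.0, -2.0], [-2.0, 1.0]])   # x = (1,1)/2 gives x^T M x < 0

print(simplex_certificate(M_pos))   # None
print(simplex_certificate(M_neg))   # a violating point near (0.5, 0.5)
```

The gap between this cheap necessary check and a true membership oracle for the copositive cone is exactly why the tractable approximation hierarchies surveyed in the paper matter.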
Stable Computation of Search Directions for NearDegenerate Linear Programming Problems
, 1997
An Interior-Point Method for General Large-Scale Quadratic Programming Problems
Annals of Operations Research, 1996
Abstract

Cited by 1 (0 self)
In this paper we present an interior-point algorithm for solving both convex and nonconvex quadratic programs. The method, which is an extension of our interior-point work on linear programming problems, efficiently solves a wide class of large-scale problems and forms the basis for a sequential quadratic programming (SQP) solver for general large-scale nonlinear programs. The key to the algorithm is a 3-dimensional cost-improvement subproblem, which is solved at every iteration. We have developed an approximate recentering procedure and a novel, adaptive big-M Phase I procedure that are essential to its success. We describe the basic method along with the recentering and big-M Phase I procedures. Details of the implementation and computational results are also presented.
Keywords: big-M Phase I procedure, convex quadratic programming, interior-point methods, linear programming, method of centers, multidirectional search direction, nonconvex quadratic programming, recentering.
The Use of Optimization Techniques in the Solution of Partial Differential Equations from ... (thesis, Rice University)
, 1996
Abstract
Acknowledgments This thesis is a very important milestone in a journey I began more than ten years ago. People too numerous to mention have helped me along the way; a few are singled out here. When I was an undergraduate at the University of Maryland, Baltimore County, the Mathematics faculty, in particular Professors James Greenberg, Søren Jensen, and Marc Teboulle, taught me to love applied mathematics; their patience with me was endless and I will always be grateful to them.
Recovery of Blocky Images in Electrical Impedance Tomography
Abstract
In this paper we describe some aspects of the application of total variation minimization techniques to the linearized EIT problem. In Section 2, we formulate a reconstruction problem, describing one approach to total variation regularization by constrained minimization. In Section 3, a stabilization strategy for the constraints is described. Section 4 discusses some results on characterizing conductivity images which can be completely recovered, under the assumption that the instability in the problem is restricted to a limited range of frequency components in the image. Section 5 motivates other conditions which are favorable for recovering images when the instabilities are not band-limited. In Section 6 we describe a very simple minimization scheme for the constrained regularized problem. Finally, some representative numerical results are presented in Section 7.
2 EIT and minimal total variation regularization
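As a one-dimensional toy version of the "blocky image" recovery idea, here is a smoothed total-variation denoiser driven by plain gradient descent; the smoothing parameter eps, the step size, and the denoising (rather than tomographic) setting are illustrative assumptions, not the constrained scheme described in the paper:

```python
import numpy as np

def tv_denoise_1d(y, lam=0.5, eps=1e-2, iters=5000, step=0.05):
    """Minimize 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
    by gradient descent -- a smoothed stand-in for TV regularization."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        w = d / np.sqrt(d * d + eps)       # smoothed sign of each jump
        div = np.concatenate(([0.0], w)) - np.concatenate((w, [0.0]))
        x = x - step * ((x - y) + lam * div)
    return x

# A blocky signal plus noise: TV keeps the jump but flattens the noise.
rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(20), np.ones(20)])
noisy = truth + 0.1 * rng.standard_normal(40)
x = tv_denoise_1d(noisy)
tv = lambda v: np.abs(np.diff(v)).sum()
print(tv(noisy), tv(x))   # the total variation drops sharply
```

The point of TV over quadratic regularization, for EIT as in this toy, is that the l1-type penalty on jumps preserves sharp edges while suppressing oscillation.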
An Affine-Scaling Trust-Region Algorithm for Nonlinear Programming
, 2000
Abstract
A monotonic-decrease minimization algorithm can be desirable for nonconvex minimization, since there may be more than one local minimizer. A typical interior-point algorithm for a convex programming problem does not yield monotonic improvement of the objective function value. In this paper, a monotonic affine-scaling trust-region algorithm is proposed for nonconvex programming. The proposed affine-scaling trust-region algorithm is described in the context of minimizing the exact l1 penalty function. Affine-scaling Newton steps are derived directly from the complementarity conditions. A primal trust-region subproblem is proposed for globalization. A dual subproblem is formulated to facilitate dual-variable updates; its solution yields decrease of the l1 function. Global convergence of the proposed algorithm is established.
1. Introduction. Considerable recent research has been devoted to efforts to generalize the successful interior-point methods for convex programming to nonconvex ...
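The exactness of the l1 penalty, the property such penalty-based globalization relies on, is visible in one dimension. For min x^2 subject to x - 1 = 0, whose multiplier satisfies |lambda*| = 2, any penalty weight mu > 2 makes the unconstrained minimizer of the penalty coincide with the constrained solution; below this is checked by brute-force grid search (a made-up illustrative example, not the paper's algorithm):

```python
import numpy as np

# Exact l1 penalty for:  min x^2  s.t.  x - 1 = 0  (here |lambda*| = 2).
# For mu > 2 the unconstrained minimizer of P equals the constrained
# solution x* = 1; for mu < 2 it is pulled off the constraint.
def P(x, mu):
    return x ** 2 + mu * np.abs(x - 1.0)

xs = np.linspace(-1.0, 3.0, 400001)        # grid with spacing 1e-5
for mu in (1.0, 3.0):
    xmin = xs[np.argmin(P(xs, mu))]
    print(mu, xmin)   # mu=1.0 -> 0.5 (infeasible), mu=3.0 -> 1.0 (exact)
```

Because the penalty is nonsmooth at the feasible set, minimizing it needs the kind of complementarity-based affine-scaling machinery the abstract describes rather than plain Newton steps.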