Results 1–10 of 12
On the limited memory BFGS method for large scale optimization
 Mathematical Programming
, 1989
Theory of Algorithms for Unconstrained Optimization
, 1992
Abstract

Cited by 84 (1 self)
In this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavior of the most widely used techniques. The question ...
Impact of Partial Separability on Large-Scale Optimization
 COMP. OPTIM. APPL
, 1997
Abstract

Cited by 10 (4 self)
ELSO is an environment for the solution of large-scale optimization problems. With ELSO the user is required to provide only code for the evaluation of a partially separable function. ELSO exploits the partial separability structure of the function to compute the gradient efficiently using automatic differentiation. We demonstrate ELSO's efficiency by comparing the various options available in ELSO. Our conclusion is that the hybrid option in ELSO provides performance comparable to the hand-coded option, while having the significant advantage of not requiring a hand-coded gradient or the sparsity pattern of the partially separable function. In our test problems, which have carefully coded gradients, the computing time for the hybrid AD option is within a factor of two of the hand-coded option.
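The payoff of partial separability described in this abstract can be illustrated with a small sketch. ELSO itself uses automatic differentiation; the finite-difference version below only shows why the structure helps (each element gradient touches a handful of variables, so its cost is independent of n). The `psep_gradient` helper and the `(function, index)` element interface are hypothetical, not ELSO's actual API.

```python
import numpy as np

def psep_gradient(elements, x, h=1e-7):
    """Forward-difference gradient of a partially separable function
    f(x) = sum_i f_i(x[idx_i]). Each element gradient needs only as
    many extra evaluations as the element has variables, not n."""
    g = np.zeros_like(x, dtype=float)
    for fi, idx in elements:
        xi = x[list(idx)]
        fx = fi(xi)
        for k in range(len(idx)):
            xp = xi.copy()
            xp[k] += h
            g[idx[k]] += (fi(xp) - fx) / h   # accumulate element gradient
    return g

# Rosenbrock-style elements, each involving two overlapping variables.
elems = [(lambda z: (z[1] - z[0]**2)**2 + (1 - z[0])**2, (i, i + 1))
         for i in range(3)]
grad = psep_gradient(elems, np.ones(4))   # x = (1,...,1) is the minimizer
```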
Algorithms for Solving Nonlinear Systems of Equations
, 1994
Abstract

Cited by 6 (1 self)
In this paper we survey numerical methods for solving nonlinear systems of equations F(x) = 0, where F : R^n → R^n. We are especially interested in large problems. We describe modern implementations of the main local algorithms, as well as their globally convergent counterparts.

1. INTRODUCTION. Nonlinear systems of equations appear in many real-life problems. Moré [1989] has reported a collection of practical examples which include: Aircraft Stability problems, Inverse Elastic Rod problems, Equations of Radiative Transfer, Elliptic Boundary Value problems, etc. We have also worked with Power Flow problems, Distribution of Water on a Pipeline, Discretization of Evolution problems using Implicit Schemes, Chemical Plant Equilibrium problems, and others. The scope of applications becomes even greater if we include the family of Nonlinear Programming problems, since the first-order optimality conditions of these problems are nonlinear systems. Given F : R^n → R^n, F = (...
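The basic local algorithm such surveys start from, Newton's method, fits in a few lines. A minimal sketch; `newton_system` and the sample system are illustrative, not code from the paper.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Local Newton iteration for F(x) = 0 with user-supplied Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Solve the Newton system J(x) s = -F(x) and take the full step.
        x = x + np.linalg.solve(J(x), -Fx)
    return x

# Example: intersect the unit circle with the line x0 = x1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```

Globally convergent counterparts wrap this local iteration in a line search or trust region that monitors a merit function such as ||F(x)||².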
Solving Nonlinear Systems of Equations by Means of Quasi-Newton Methods with a Nonmonotone Strategy
, 1997
Abstract

Cited by 5 (2 self)
A nonmonotone strategy for solving nonlinear systems of equations is introduced. The idea consists of combining efficient local methods with an algorithm that reduces monotonically the squared norm of the system in a proper way. The local methods used are Newton's method and two quasi-Newton algorithms. Global iterations are based on recently introduced box-constrained minimization algorithms. We present numerical experiments.

1 INTRODUCTION. Given F : R^n → R^n, F = (f_1, ..., f_n)^T, our aim is to find solutions of F(x) = 0. (1) We assume that F is well defined and has continuous partial derivatives on an open set of R^n. J(x) denotes the Jacobian matrix of partial derivatives of F(x). We are mostly interested in problems where n is large and J(x) is structurally sparse. This means that most entries of J(x) are zero for all x in the domain of F. The package Nightingale has been developed at the Department of Applied Mathematics of the University of Campinas for...
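The essential ingredient of a nonmonotone strategy, in the style popularized by Grippo, Lampariello, and Lucidi, is the acceptance test: a trial merit value is compared against the maximum over a window of recent values rather than against the current one, so locally fast steps that temporarily increase the merit function are not discarded. A minimal sketch; the helper name and the Armijo constant `gamma` are assumptions, not taken from this paper.

```python
import numpy as np

def nonmonotone_accept(f_new, f_hist, g, s, gamma=1e-4):
    """Nonmonotone Armijo-type acceptance: compare the trial merit value
    f_new against the MAX of a window of recent merit values (f_hist),
    plus the usual sufficient-decrease term along the step s."""
    return f_new <= max(f_hist) + gamma * float(np.dot(g, s))

# A trial value above the current merit (1.0) but below the window
# maximum (1.2) is accepted; a monotone test would reject it.
ok = nonmonotone_accept(1.1, [1.2, 0.9, 1.0],
                        np.array([1.0]), np.array([-0.5]))
```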
Graph Coloring and the Estimation of Sparse Jacobian Matrices with Segmented Columns
, 1992
Abstract

Cited by 5 (4 self)
It is well known that a sparse Jacobian matrix can be estimated by fewer function evaluations than the number of columns by using the CPR technique. An example shows that if the rows of the matrix are partitioned into two blocks then fewer function evaluations are needed. In this paper we show the relationship between estimating the Jacobian matrix by grouping together both rows and columns and the graph coloring problem. We give an easy implementation of the element isolation principle.
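The CPR technique the abstract starts from can be made concrete: columns with disjoint sparsity patterns are grouped (a graph coloring), and a single perturbed evaluation of F recovers every column in a group. A minimal sketch of column grouping only, without the row segmentation this paper adds; the helper names are illustrative.

```python
import numpy as np

def cpr_groups(pattern):
    """Greedy CPR grouping: columns in one group share no nonzero row,
    so one perturbed evaluation of F recovers all of them."""
    groups = []                        # each entry: (columns, union of rows)
    for j in range(pattern.shape[1]):
        rows = set(np.nonzero(pattern[:, j])[0])
        for cols, used in groups:
            if not rows & used:        # structurally orthogonal: reuse group
                cols.append(j)
                used |= rows
                break
        else:
            groups.append(([j], rows))
    return [cols for cols, _ in groups]

def estimate_jacobian(F, x, pattern, h=1e-7):
    """Forward-difference Jacobian estimate: one extra F-evaluation per
    CPR group instead of one per column."""
    m, n = pattern.shape
    Fx, Jest = F(x), np.zeros((m, n))
    for cols in cpr_groups(pattern):
        d = np.zeros(n)
        d[cols] = h
        diff = (F(x + d) - Fx) / h     # sum of the grouped columns
        for j in cols:
            rows = np.nonzero(pattern[:, j])[0]
            Jest[rows, j] = diff[rows]
    return Jest

# Tridiagonal 5x5 example: 5 columns compress into 3 groups.
pattern = np.abs(np.subtract.outer(np.arange(5), np.arange(5))) <= 1
A = np.diag([2.0]*5) + np.diag([-1.0]*4, 1) + np.diag([-1.0]*4, -1)
Jest = estimate_jacobian(lambda x: A @ x, np.ones(5), pattern)
```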
Convergence Properties of Minimization Algorithms for Convex Constraints Using a Structured Trust Region
, 1992
Abstract

Cited by 4 (0 self)
We present in this paper a class of trust region algorithms in which the structure of the problem is explicitly used in the very definition of the trust region itself. This development is intended to reflect the possibility that some parts of the problem may be more "trusted" than others, a commonly occurring situation in large-scale nonlinear applications. After describing the structured trust region mechanism, we prove global convergence for all algorithms in our class. We also prove that, when convex constraints are present, the correct set of such constraints active at the problem's solution is identified by these algorithms after a finite number of iterations.
Adaptive cubic overestimation methods for unconstrained optimization
Abstract

Cited by 3 (1 self)
An Adaptive Cubic Overestimation (ACO) algorithm for unconstrained optimization, generalizing a method due to Nesterov & Polyak (Math. Programming 108, 2006, pp. 177–205), is proposed. At each iteration of Nesterov & Polyak’s approach, the global minimizer of a local cubic overestimator of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is Lipschitz continuous and its Lipschitz constant is available. The twin requirements of global model optimality and the availability of Lipschitz constants somewhat limit the applicability of such an approach, particularly for large-scale problems. However the promised powerful worst-case theoretical guarantees prompt us to investigate variants in which estimates of the required Lipschitz constant are refined and in which computationally viable approximations to the global model minimizer are sought. We show that the excellent global and local convergence properties and worst-case iteration complexity bounds obtained by Nesterov & Polyak are retained, and sometimes extended to a wider class of problems, by our ACO approach. Numerical experiments with small-scale test problems from the CUTEr set show superior performance of the ACO algorithm when compared to a trust-region implementation.
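In one dimension the cubic model minimizer has a closed form, which makes the adaptive mechanism easy to illustrate. The following is a toy sketch of the idea, not the ACO algorithm of the paper: the function names, constants, and the sigma update rule are all assumptions.

```python
import numpy as np

def cubic_model_min_1d(g, H, sigma):
    """Global minimizer of the scalar cubic model
    m(s) = g*s + H*s**2/2 + sigma*|s|**3/3.
    The minimizer has sign -sign(g); writing t = |s|, stationarity gives
    sigma*t**2 + H*t - |g| = 0, whose positive root is taken."""
    if g == 0.0:
        return 0.0
    t = (-H + np.sqrt(H*H + 4.0*sigma*abs(g))) / (2.0*sigma)
    return -np.sign(g) * t

def aco_1d(f, df, d2f, x, sigma=1.0, tol=1e-8, max_iter=100):
    """Toy adaptive loop: accept the step and relax sigma when the model
    predicts the decrease well, otherwise increase the regularization."""
    for _ in range(max_iter):
        g = df(x)
        if abs(g) < tol:
            break
        s = cubic_model_min_1d(g, d2f(x), sigma)
        pred = -(g*s + 0.5*d2f(x)*s*s + sigma*abs(s)**3/3.0)
        rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.1:                        # successful step
            x, sigma = x + s, max(0.5*sigma, 1e-8)
        else:                                # model too optimistic
            sigma *= 2.0
    return x

# Minimize (x - 2)^2 starting from 0; no Lipschitz constant is supplied.
x_star = aco_1d(lambda x: (x - 2.0)**2, lambda x: 2.0*(x - 2.0),
                lambda x: 2.0, 0.0)
```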
Duality for convex partially separable optimization problems
 Mong. Math. J
Abstract

Cited by 2 (0 self)
This paper aims to extend duality investigations to convex partially separable optimization problems. By using the results in [15] we formulate three dual problems for the optimization problem with convex inequality and affine equality constraints, which includes the convex partially separable one. For these duals we give a constraint qualification which guarantees strong duality. Optimality conditions for the convex partially separable optimization problem and some particular cases are also obtained.
Large Scale Portfolio Optimization with Piecewise Linear
, 2006
Abstract

Cited by 1 (0 self)
We consider the fundamental problem of computing an optimal portfolio based on a quadratic mean-variance model of the objective function and a given polyhedral representation of the constraints. The main departure from the classical quadratic programming formulation is the inclusion in the objective function of piecewise linear, separable functions representing the transaction costs. We handle the nonsmoothness in the objective function by using spline approximations. The problem is then solved using a primal-dual interior-point method with a crossover to an active set method. Our numerical tests show that we can solve large-scale problems efficiently and accurately.
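The smoothing device mentioned here can be illustrated on the simplest piecewise linear cost, a proportional cost c·|t|: replace the kink at zero by a quadratic spline that matches |t| in value and slope at the break points, leaving the cost exact elsewhere. The half-width `tau` and the helper name are illustrative choices, not from the paper.

```python
import numpy as np

def smooth_abs(t, tau=1e-3):
    """C^1 quadratic-spline smoothing of |t|: on [-tau, tau] use
    t**2/(2*tau) + tau/2, which matches |t| in value and slope at +-tau."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= tau, t*t/(2.0*tau) + tau/2.0, np.abs(t))

# Transaction cost of a vector of trades: differentiable everywhere,
# exact for any trade larger than tau in magnitude.
trades = np.array([-0.4, 0.0, 2e-4, 0.5])
costs = smooth_abs(trades)
```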