Results 1–10 of 16
L-BFGS-B: Fortran Subroutines for Large-Scale Bound Constrained Optimization
, 1994
Abstract

Cited by 38 (2 self)
L-BFGS-B is a limited memory algorithm for solving large nonlinear optimization problems subject to simple bounds on the variables. It is intended for problems in which information on the Hessian matrix is difficult to obtain, or for large dense problems. L-BFGS-B can also be used for unconstrained problems, and in this case performs similarly to its predecessor, algorithm L-BFGS (Harwell routine VA15). The algorithm is implemented in Fortran 77.
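These Fortran subroutines survive today as the backend of SciPy's bound-constrained optimizer, so the routine can be exercised without touching Fortran. A minimal sketch (assuming SciPy is installed; the objective and bounds are illustrative, not from the paper):

```python
from scipy.optimize import minimize

# Toy problem: minimize f(x, y) = (x - 3)^2 + (y + 1)^2 over the box
# 0 <= x <= 2, 0 <= y <= 5. The unconstrained minimizer (3, -1) lies
# outside the box, so both bounds are active at the solution x* = (2, 0).
def f(v):
    x, y = v
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

def grad(v):
    x, y = v
    return [2.0 * (x - 3.0), 2.0 * (y + 1.0)]

res = minimize(f, x0=[1.0, 1.0], jac=grad, method="L-BFGS-B",
               bounds=[(0.0, 2.0), (0.0, 5.0)])
print(res.x)
```

Supplying the gradient via `jac` avoids finite-difference estimates, which matches the algorithm's design: it needs only function and gradient values, never the Hessian.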
A Subspace, Interior, and Conjugate Gradient Method for Large-Scale Bound-Constrained Minimization Problems
 SIAM Journal on Scientific Computing
, 1999
Abstract

Cited by 35 (1 self)
A subspace adaptation of the Coleman-Li trust region and interior method is proposed for solving large-scale bound-constrained minimization problems. This method can be implemented with either sparse Cholesky factorization or conjugate gradient computation. Under reasonable conditions the convergence properties of this subspace trust region method are as strong as those of its full-space version.
Large-Scale Nonlinear Constrained Optimization: A Current Survey
, 1994
Abstract

Cited by 9 (0 self)
Much progress has been made in constrained nonlinear optimization in the past ten years, but most large-scale problems still represent a considerable obstacle. In this survey paper we will attempt to give an overview of the current approaches, including interior and exterior methods and algorithms based upon trust regions and line searches. In addition, the importance of software, numerical linear algebra and testing will be addressed. We will try to explain why the difficulties arise, how attempts are being made to overcome them and some of the problems that still remain. Although there will be some emphasis on the LANCELOT and CUTE projects, the intention is to give a broad picture of the state-of-the-art.

1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA
2 Parallel Algorithms Team, CERFACS, 42 Ave. G. Coriolis, 31057 Toulouse Cedex, France
3 Central Computing Department, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England ...
A New Strategy for Solving Variational Inequalities in Bounded Polytopes
, 1995
Abstract

Cited by 8 (4 self)
We consider variational inequality problems where the convex set under consideration is a bounded polytope. We define an associated box constrained minimization problem and we prove that, under a general condition on the Jacobian, the stationary points of the minimization problems are solutions of the variational inequality problem. The condition includes the case where the operator is monotone. Based on this result we develop an algorithm that can solve large scale problems. We present numerical experiments.

Key words. Variational inequality problems, bound constrained minimization, optimality conditions, stationary points, global minimizers.

AMS (MOS) subject classification. 49M15, 65K05, 90C33.

1 Introduction

In this paper we consider the Variational Inequality Problem (VIP): given F : ℝⁿ → ℝⁿ and a convex set C ⊂ ℝⁿ, find x ∈ C such that

    ⟨F(x), z − x⟩ ≥ 0  for all z ∈ C.   (1)

We are interested in the case where C is a bounded polytope. In this case, witho...
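To make problem (1) concrete on the simplest bounded polytope, a box: the classical projection iteration (an illustrative textbook sketch, not the reformulation proposed in this paper) solves the VIP when F is strongly monotone and Lipschitz, using the componentwise projection onto the box:

```python
# Illustrative sketch (not the paper's algorithm): for a strongly monotone,
# Lipschitz operator F on a box C, the projected iteration
#     x_{k+1} = P_C(x_k - t * F(x_k))
# with a small fixed step t converges to the unique x* satisfying
#     <F(x*), z - x*> >= 0  for all z in C.
def project_box(x, lo, hi):
    """Componentwise projection onto the box [lo, hi]."""
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def solve_box_vip(F, x0, lo, hi, t=0.1, iters=2000):
    x = list(x0)
    for _ in range(iters):
        fx = F(x)
        x = project_box([xi - t * fi for xi, fi in zip(x, fx)], lo, hi)
    return x

# Affine strongly monotone operator F(x) = (x1 - 3, x2 + 1) on [0,2] x [0,5].
# The solution is x* = (2, 0): there F = (-1, 1), and <F(x*), z - x*> =
# -(z1 - 2) + z2 >= 0 for every z in the box.
x = solve_box_vip(lambda v: [v[0] - 3.0, v[1] + 1.0],
                  [1.0, 1.0], [0.0, 0.0], [2.0, 5.0])
print(x)
```

The function names are hypothetical; the point is only that on a box the projection P_C is trivial to evaluate, which is what makes box-constrained reformulations attractive for large-scale problems.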
A globally convergent Lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds
 Math. of Computation
, 1997
Abstract

Cited by 5 (1 self)
Abstract. We consider the global and local convergence properties of a class of Lagrangian barrier methods for solving nonlinear programming problems. In such methods, simple bound constraints may be treated separately from more general constraints. The objective and general constraint functions are combined in a Lagrangian barrier function. A sequence of such functions is approximately minimized within the domain defined by the simple bounds. Global convergence of the sequence of generated iterates to a first-order stationary point for the original problem is established. Furthermore, possible numerical difficulties associated with barrier function methods are avoided as it is shown that a potentially troublesome penalty parameter is bounded away from zero. This paper is a companion to previous work of ours on augmented Lagrangian methods.
A Limited Memory Algorithm for Bound Constrained Optimization
 SIAM Journal on Scientific Computing
, 1994
Abstract

Cited by 5 (0 self)
An algorithm for solving large nonlinear optimization problems with simple bounds is described. It is based on the gradient projection method and uses a limited memory BFGS matrix to approximate the Hessian of the objective function. It is shown how to take advantage of the form of the limited memory approximation to implement the algorithm efficiently. The results of numerical tests on a set of large problems are reported.
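The gradient projection framework the abstract refers to can be sketched in a few lines. This is a hedged illustration of the projection-plus-backtracking skeleton only; the paper's actual contribution, the limited memory BFGS model of the Hessian and its efficient handling, is deliberately omitted here:

```python
# Bare-bones gradient projection for min f(x) s.t. lo <= x <= hi.
# Sketch only: a full method would replace the raw gradient step with a
# quadratic model built from a limited memory BFGS approximation.
def clip(x, lo, hi):
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def grad_proj(f, grad, x, lo, hi, iters=200):
    for _ in range(iters):
        g = grad(x)
        fx = f(x)
        t = 1.0
        # Backtrack along the projected-gradient path until f decreases
        # (or the step becomes negligible, i.e. x is already stationary).
        while True:
            trial = clip([xi - t * gi for xi, gi in zip(x, g)], lo, hi)
            if f(trial) < fx or t < 1e-12:
                break
            t *= 0.5
        x = trial
    return x

# Toy problem: f(x, y) = (x - 3)^2 + (y + 1)^2 on [0, 2] x [-0.5, 5];
# both bounds are active at the solution x* = (2, -0.5).
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
g = lambda v: [2.0 * (v[0] - 3.0), 2.0 * (v[1] + 1.0)]
x = grad_proj(f, g, [1.0, 1.0], [0.0, -0.5], [2.0, 5.0])
print(x)
```

Note how the projection identifies the active bounds in a single step on this quadratic; exploiting that identification behavior is a recurring theme across the papers in this list.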
Convergence Properties of Minimization Algorithms for Convex Constraints Using a Structured Trust Region
, 1992
Abstract

Cited by 4 (0 self)
We present in this paper a class of trust region algorithms in which the structure of the problem is explicitly used in the very definition of the trust region itself. This development is intended to reflect the possibility that some parts of the problem may be more "trusted" than others, a commonly occurring situation in large-scale nonlinear applications. After describing the structured trust region mechanism, we prove global convergence for all algorithms in our class. We also prove that, when convex constraints are present, the correct set of such constraints active at the problem's solution is identified by these algorithms after a finite number of iterations.
Augmented Lagrangian method with nonmonotone penalty parameters for constrained optimization
, 2010
Abstract

Cited by 2 (0 self)
At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem with some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in usual convergence theories. However, in practice, one might not be able to solve the subproblem up to the required precision. This may be due to different reasons. One of them is that the presence of an excessively large penalty parameter could impair the performance of the box-constrained optimization solver. In this paper a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem will be discussed. As a result, an improved Augmented Lagrangian method is presented, which takes into account numerical difficulties in a satisfactory way, preserving suitable convergence theory. Numerical experiments are presented.
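The outer/inner loop structure the abstract describes can be sketched on a toy problem. This is a hedged illustration of the *standard* augmented Lagrangian scheme (first-order multiplier update, penalty increased when feasibility stalls); the paper's refinement, allowing the penalty parameter to decrease when the subproblem becomes too hard, is not reproduced here, and the inner solve is plain gradient descent rather than a box-constrained solver:

```python
# Toy problem: minimize x1^2 + x2^2  s.t.  x1 + x2 = 1.
# Solution: x* = (0.5, 0.5), multiplier lam* = -1.
f_grad = lambda x: [2.0 * x[0], 2.0 * x[1]]
c = lambda x: x[0] + x[1] - 1.0          # single equality constraint

x, lam, rho = [0.0, 0.0], 0.0, 10.0
feas_prev = abs(c(x))
for _ in range(15):                      # outer iterations
    step = 1.0 / (2.0 + 2.0 * rho)       # 1/L for this quadratic subproblem
    for _ in range(300):                 # inner: minimize the augmented
        mult = lam + rho * c(x)          # Lagrangian f + lam*c + (rho/2)*c^2
        g = [gi + mult for gi in f_grad(x)]
        x = [xi - step * gi for xi, gi in zip(x, g)]
    lam += rho * c(x)                    # first-order multiplier update
    if abs(c(x)) > 0.5 * feas_prev:      # feasibility stalled: tighten penalty
        rho *= 10.0
    feas_prev = abs(c(x))
print(x, lam)
```

The `rho *= 10.0` branch is where classical methods only ever push the penalty up; the paper's observation is that an excessively large rho can cripple the inner solver, motivating a safeguarded decrease.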
On The Number Of Inner Iterations Per Outer Iteration Of A Globally Convergent Algorithm For Optimization With General Nonlinear Inequality Constraints And Simple Bounds
, 1992
Abstract

Cited by 1 (0 self)
This paper considers the number of inner iterations required per outer iteration for the algorithm proposed by Conn et al. (1992b). We show that asymptotically, under suitable reasonable assumptions, a single inner iteration suffices.

1 IBM T.J. Watson Research Center, Yorktown Heights, USA
2 Rutherford Appleton Laboratory, Chilton, Oxfordshire, England
3 Department of Mathematics, Facultés Universitaires Notre-Dame de la Paix, Namur, Belgium

Keywords: Nonlinear optimization, inequality constraints, barrier methods, complexity.

1 Introduction

In this paper, we consider the nonlinear programming problem

    minimize f(x) over x ∈ ℝⁿ   (1.1)

subject to the general constraints

    c_i(x) ≥ 0,  i = 1, …, m,   (1.2)

and the specific simple bounds

    l ≤ x ≤ u.   (1.3)

We assume that the region B = {x ∈ ℝⁿ : l ≤ x ≤ u} is nonempty and may be infinite. We do not rule out the possibility that further simple bounds on the variables are included amongst the general constraints (1.2) if that is deemed appropr...
An active set Newton's algorithm for large-scale nonlinear programs with box constraints
 SIAM J. Optim
, 1995
Abstract

Cited by 1 (0 self)
A new algorithm for large-scale nonlinear programs with box constraints is introduced. The algorithm is based on an efficient identification technique of the active set at the solution and on a nonmonotone stabilization technique. It possesses global and superlinear convergence properties under standard, mild assumptions. A new technique for generating test problems with known characteristics is also introduced. The implementation of the method is described along with computational results for large-scale problems.

1 Introduction

In this paper we consider the solution of the box constrained nonlinear programming problem

    min f(x) over x ∈ K   (1)

where

    K = {x ∈ ℝⁿ : l_i ≤ x_i ≤ u_i, i = 1, …, n}   (2)

is a nonempty set. We assume that the lower and upper bounds may be finite or infinite and that f is a twice continuously differentiable function in an open set containing K. A vector x̄ ∈ K is said to be a stationary point for Problem (1) if it satisfies

    l_i = x̄_i ⟹ ...
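The display truncated above is the standard componentwise stationarity test for box constraints, which can be spelled out and checked directly (the helper name is hypothetical; tolerances handle floating point):

```python
# Standard first-order stationarity conditions for min f(x) on the box
# l <= x <= u: a point xbar is stationary when, for each component i,
#   l_i = xbar_i          =>  df/dx_i(xbar) >= 0
#   l_i < xbar_i < u_i    =>  df/dx_i(xbar)  = 0
#   xbar_i = u_i          =>  df/dx_i(xbar) <= 0
def is_box_stationary(xbar, g, lo, hi, tol=1e-8):
    for xi, gi, li, ui in zip(xbar, g, lo, hi):
        if abs(xi - li) <= tol:
            if gi < -tol:
                return False   # f decreases by moving off the lower bound
        elif abs(xi - ui) <= tol:
            if gi > tol:
                return False   # f decreases by moving off the upper bound
        elif abs(gi) > tol:
            return False       # interior component with nonzero gradient
    return True

# f(x) = (x - 3)^2 on [0, 2]: the gradient at x = 2 is -2 <= 0 at the
# upper bound, so x = 2 is stationary; the interior point x = 1 is not.
g = lambda x: [2.0 * (x[0] - 3.0)]
print(is_box_stationary([2.0], g([2.0]), [0.0], [2.0]))   # True
print(is_box_stationary([1.0], g([1.0]), [0.0], [2.0]))   # False
```

Deciding which of these three cases each component falls into is exactly the active-set identification problem that the algorithm in this entry is built around.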