Results 1–10 of 38
CUTE: Constrained and unconstrained testing environment
, 1993
"... The purpose of this paper is to discuss the scope and functionality of a versatile environment for testing small and largescale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we belie ..."
Abstract

Cited by 152 (3 self)
 Add to MetaCart
The purpose of this paper is to discuss the scope and functionality of a versatile environment for testing small and large-scale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we believe that they will be useful in their own right and should be available to researchers for their development of optimization software. The tools are available by anonymous ftp from a number of sources and may, in many cases, be installed automatically. The scope of a major collection of test problems written in the standard input format (SIF) used by the LANCELOT software package is described. Recognising that most software was not written with the SIF in mind, we provide tools to assist in building an interface between this input format and other optimization packages. These tools already provide a link between the SIF and a number of existing packages, including MINOS and OSL. In ad...
Newton's Method for Large Bound-Constrained Optimization Problems
 SIAM JOURNAL ON OPTIMIZATION
, 1998
"... We analyze a trust region version of Newton's method for boundconstrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearlyconstrained problems, and yields global and superlinea ..."
Abstract

Cited by 74 (4 self)
 Add to MetaCart
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
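The active-set-by-projection idea above can be sketched in a few lines for the special case of a bound-constrained quadratic. This is a minimal illustration, not the paper's algorithm: the trust-region globalization and convergence safeguards are omitted, and `projected_newton` is a hypothetical helper name.

```python
import numpy as np

def projected_newton(A, b, lo, hi, x0, tol=1e-10, max_iter=50):
    """Sketch: projected Newton for min 0.5*x'Ax - b'x, lo <= x <= hi.
    Illustrative only; the paper's trust-region safeguards are omitted."""
    x = np.clip(x0, lo, hi)
    for _ in range(max_iter):
        g = A @ x - b                              # gradient of the quadratic
        # Stop when the projected gradient vanishes (stationarity measure).
        if np.linalg.norm(x - np.clip(x - g, lo, hi)) < tol:
            break
        # A variable at a bound whose gradient pushes outward is held fixed.
        active = ((x <= lo) & (g > 0)) | ((x >= hi) & (g < 0))
        free = ~active
        step = np.zeros_like(x)
        if free.any():
            step[free] = np.linalg.solve(A[np.ix_(free, free)], g[free])
        x = np.clip(x - step, lo, hi)              # Newton step on the free set
    return x
```

For a quadratic whose unconstrained minimizer violates a bound, the iteration settles on the face identified by the projection and solves exactly on the remaining free variables.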
The ADIFOR 2.0 System for the Automatic Differentiation of Fortran 77 Programs
 RICE UNIVERSITY
, 1994
"... Automatic Differentiation is a technique for augmenting computer programs with statements for the computation of derivatives based on the chain rule of differential calculus. The ADIFOR 2.0 system provides automatic differentiation of Fortran 77 programs for firstorder derivatives. The ADIFOR 2.0 s ..."
Abstract

Cited by 55 (17 self)
 Add to MetaCart
Automatic Differentiation is a technique for augmenting computer programs with statements for the computation of derivatives based on the chain rule of differential calculus. The ADIFOR 2.0 system provides automatic differentiation of Fortran 77 programs for first-order derivatives. The ADIFOR 2.0 system consists of three main components: the ADIFOR 2.0 preprocessor, the ADIntrinsics Fortran 77 exception-handling system, and the SparsLinC library. The combination of these tools provides the ability to deal with arbitrary Fortran 77 syntax, to handle codes containing single- and double-precision real or complex-valued data, to fully support and easily customize the translation of Fortran 77 intrinsics, and to transparently exploit sparsity in derivative computations. ADIFOR 2.0 has been successfully applied to a 60,000-line code, which we believe to be a new record in automatic differentiation.
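The chain-rule augmentation that ADIFOR performs at the Fortran 77 source level can be illustrated with a toy forward-mode differentiator, propagating (value, derivative) pairs through arithmetic. This is an analogy only: ADIFOR is a source-transformation tool, and the `Dual` class below is a hypothetical Python stand-in, not part of it.

```python
class Dual:
    """Toy forward-mode AD value: carries f(x) and f'(x) together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)   # sum rule
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    """Differentiate f at x by seeding dx/dx = 1 and running f."""
    return f(Dual(x, 1.0)).dot
```

For example, `derivative(lambda x: x*x + 3*x, 2.0)` evaluates d/dx(x² + 3x) = 2x + 3 at x = 2, giving 7.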
Computing Large Sparse Jacobian Matrices Using Automatic Differentiation
 SIAM Journal on Scientific Computing
, 1993
"... The computation of large sparse Jacobian matrices is required in many important largescale scientific problems. We consider three approaches to computing such matrices: handcoding, difference approximations, and automatic differentiation using the ADIFOR (Automatic Differentiation in Fortran) tool ..."
Abstract

Cited by 47 (25 self)
 Add to MetaCart
The computation of large sparse Jacobian matrices is required in many important large-scale scientific problems. We consider three approaches to computing such matrices: hand-coding, difference approximations, and automatic differentiation using the ADIFOR (Automatic Differentiation in Fortran) tool. We compare the numerical reliability and computational efficiency of these approaches on applications from the MINPACK-2 test problem collection. Our conclusion is that automatic differentiation is the method of choice, leading to results that are as accurate as hand-coded derivatives, while at the same time outperforming difference approximations in both accuracy and speed. (Brett M. Averick, Jorge J. Moré, Christian H. Bischof, Alan Carle, and Andreas Griewank) 1 Introduction: The solution of large-scale nonlinear problems often requires the computation of the Jacobian matrix f′(x) of a mapping f : ℝ ...
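The seeding idea that makes compressed sparse-Jacobian computation cheap can be sketched as follows: structurally orthogonal columns (columns that share no nonzero row) are grouped, and each group is estimated with a single extra evaluation. The sketch uses forward differences for brevity; with automatic differentiation the same seed vectors yield columns free of truncation error. Function names are hypothetical.

```python
import numpy as np

def color_columns(pattern):
    """Greedy grouping of structurally orthogonal columns (in the spirit
    of Curtis-Powell-Reid compression). `pattern` is a boolean matrix."""
    groups = []
    for j in range(pattern.shape[1]):
        for g in groups:
            if not any(np.any(pattern[:, j] & pattern[:, k]) for k in g):
                g.append(j)
                break
        else:
            groups.append([j])
    return groups

def sparse_jacobian(f, x, pattern, h=1e-6):
    """Estimate a sparse Jacobian with one extra f-evaluation per group."""
    fx = f(x)
    J = np.zeros(pattern.shape)
    for g in color_columns(pattern):
        d = np.zeros_like(x)
        d[g] = h                      # seed every column in the group at once
        col = (f(x + d) - fx) / h
        for j in g:
            rows = pattern[:, j]      # unscatter: each row belongs to one column
            J[rows, j] = col[rows]
    return J
```

For a tridiagonal-like pattern, a handful of groups suffice regardless of the dimension, which is the source of the large savings over one evaluation per column.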
Efficient Management of Parallelism in Object-Oriented Numerical Software Libraries
 Modern Software Tools in Scientific Computing
, 1997
"... Parallel numerical software based on the messagepassing model is enormously complicated. This paper introduces a set of techniques to manage the complexity, while maintaining high efficiency and ease of use. The PETSc 2.0 package uses objectoriented programming to conceal the details of the messag ..."
Abstract

Cited by 32 (0 self)
 Add to MetaCart
Parallel numerical software based on the message-passing model is enormously complicated. This paper introduces a set of techniques to manage the complexity, while maintaining high efficiency and ease of use. The PETSc 2.0 package uses object-oriented programming to conceal the details of the message passing, without concealing the parallelism, in a high-quality set of numerical software libraries. In fact, the programming model used by PETSc is also the most appropriate for NUMA shared-memory machines, since they require the same careful attention to memory hierarchies as do distributed-memory machines. Thus, the concepts discussed are appropriate for all scalable computing systems. The PETSc libraries provide many of the data structures and numerical kernels required for the scalable solution of PDEs, offering performance portability. 1 Introduction: Currently the only general-purpose, efficient, scalable approach to programming distributed-memory parallel systems is the message-pass...
Incomplete Cholesky Factorizations With Limited Memory
 SIAM J. SCI. COMPUT
, 1999
"... We propose an incomplete Cholesky factorization for the solution of largescale trust region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies the amount of additional memory (in multiples of n, the dimension of the problem) tha ..."
Abstract

Cited by 27 (5 self)
 Add to MetaCart
We propose an incomplete Cholesky factorization for the solution of large-scale trust region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies the amount of additional memory (in multiples of n, the dimension of the problem) that is available; there is no need to specify a drop tolerance. Our numerical results show that the number of conjugate gradient iterations and the computing time are reduced dramatically for small values of p. We also show that, in contrast with drop-tolerance strategies, the new approach is more stable in terms of the number of iterations and memory requirements.
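The zero-extra-memory member of this family is the classical no-fill incomplete factorization, in which L inherits the sparsity pattern of A; the paper's limited-memory variant instead allows p additional entries per column. A minimal sketch of the no-fill case (dense storage, no diagonal shift, so it assumes the factorization does not break down; the function name is hypothetical):

```python
import numpy as np

def icf0(A):
    """No-fill incomplete Cholesky sketch: updates only entries that are
    already nonzero in the lower triangle of A. For a dense pattern this
    reduces to the exact Cholesky factorization."""
    n = A.shape[0]
    L = np.tril(A).astype(float)
    for k in range(n):
        L[k, k] = np.sqrt(L[k, k])          # pivot (assumed positive here)
        for i in range(k + 1, n):
            if L[i, k] != 0.0:
                L[i, k] /= L[k, k]
        for j in range(k + 1, n):           # right-looking update
            for i in range(j, n):
                if L[i, j] != 0.0:          # "incomplete": skip fill-in
                    L[i, j] -= L[i, k] * L[j, k]
    return L
```

The factor L serves as a preconditioner: conjugate gradients is applied to the system preconditioned by (L Lᵀ)⁻¹, and the quality of L, hence the iteration count, improves as more memory is allowed.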
Benchmarking Optimization Software with COPS
, 2000
"... 1 Introduction 1 Testing Methods 2 1 Largest Small Polygon 3 2 Distribution of Electrons on a Sphere 5 3 Hanging Chain 7 4 Shape Optimization of a Cam 9 5 Isometrization of ffpinene 11 6 Marine Population Dynamics 13 7 Flow in a Channel 16 8 Robot Arm 18 9 Particle Steering 21 10 Goddard Rocket 23 ..."
Abstract

Cited by 23 (0 self)
 Add to MetaCart
Contents: Introduction; Testing Methods; 1 Largest Small Polygon; 2 Distribution of Electrons on a Sphere; 3 Hanging Chain; 4 Shape Optimization of a Cam; 5 Isometrization of α-pinene; 6 Marine Population Dynamics; 7 Flow in a Channel; 8 Robot Arm; 9 Particle Steering; 10 Goddard Rocket; 11 Hang Glider; 12 Catalytic Cracking of Gas Oil; 13 Methanol to Hydrocarbons; 14 Catalyst Mixing; 15 Elastic-Plastic Torsion; 16 Journal Bearing; 17 Minimal Surface with Obstacle; Acknowledgments; References. By Elizabeth D. Dolan and Jorge J. Moré. We describe version 2.0 of the COPS set of nonlinearly constrained optimization problems. We have added new problems, as well as streamlined and improved most of the problems. We also provide a comparison of the LANCELOT, LOQO, MINOS, and SNOPT solvers on these problems. Introduction: The COPS [5] test set provides a modest selection of difficult nonlinearly constrai...
Computing Gradients in Large-Scale Optimization Using Automatic Differentiation
 PREPRINT MCS-P488-0195, MATHEMATICS AND COMPUTER SCIENCE DIVISION, ARGONNE NATIONAL LABORATORY
, 1995
"... The accurate and efficient computation of gradients for partially separable functions is central to the solution of largescale optimization problems, since these functions are ubiquitous in largescale problems. We describe two approaches for computing gradients of partially separable functions via ..."
Abstract

Cited by 23 (9 self)
 Add to MetaCart
The accurate and efficient computation of gradients for partially separable functions is central to the solution of large-scale optimization problems, since these functions are ubiquitous in large-scale problems. We describe two approaches for computing gradients of partially separable functions via automatic differentiation. In our experiments we employ the ADIFOR (Automatic Differentiation of Fortran) tool and the SparsLinC (Sparse Linear Combination) library. We use applications from the MINPACK-2 test problem collection to compare the numerical reliability and computational efficiency of these approaches with hand-coded derivatives and approximations based on differences of function values. Our conclusion is that automatic differentiation is the method of choice, providing code for the efficient computation of the gradient without the need for tedious hand-coding.
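The structural idea behind partial separability can be sketched directly: if f(x) = Σᵢ fᵢ(x[Sᵢ]), where each element function fᵢ depends on only a few variables Sᵢ, the full gradient is assembled by scattering small dense element gradients. The element derivatives below are hand-written for illustration; tools like ADIFOR and SparsLinC automate that step.

```python
import numpy as np

def partially_separable_gradient(elements, x):
    """Sketch: assemble grad f for f(x) = sum_i f_i(x[S_i]).
    `elements` is a list of (indices S_i, grad_fi) pairs, where grad_fi
    maps the small vector x[S_i] to the gradient of f_i at that point."""
    g = np.zeros_like(x)
    for idx, grad_fi in elements:
        g[idx] += grad_fi(x[idx])   # scatter the small dense gradient
    return g

# Hypothetical example: f(x) = (x0 - x1)^2 + x1*x2 has two elements.
elements = [
    (np.array([0, 1]),
     lambda v: np.array([2 * (v[0] - v[1]), -2 * (v[0] - v[1])])),
    (np.array([1, 2]),
     lambda v: np.array([v[1], v[0]])),
]
```

Each element gradient is dense but tiny, so no sparse data structures are needed for the elements themselves; sparsity enters only through the index sets Sᵢ.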
A Limited Memory Variable Metric Method in Subspaces and Bound Constrained Optimization Problems
, 2001
"... We describe an algorithm for solving nonlinear optimization problems with lower and upper bounds that constrain the variables. The algorithm uses projected gradients to construct a limited memory BFGS matrix and determine a step direction. The algorithm has been implemented and distributed as par ..."
Abstract

Cited by 21 (0 self)
 Add to MetaCart
We describe an algorithm for solving nonlinear optimization problems with lower and upper bounds that constrain the variables. The algorithm uses projected gradients to construct a limited memory BFGS matrix and determine a step direction. The algorithm has been implemented and distributed as part of the Toolkit for Advanced Optimization (TAO). We include numerical results that demonstrate its effectiveness on a set of large test problems and its scalability to multiple processors.
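The projected-gradient building block can be illustrated on its own. This sketch uses the projected gradient as the step itself with a fixed step size, whereas the TAO algorithm uses it to identify active bounds and build a limited-memory BFGS model; the function name is hypothetical.

```python
import numpy as np

def projected_gradient_descent(grad, lo, hi, x0, step=0.1, iters=500):
    """Sketch of gradient projection for min f(x), lo <= x <= hi:
    take a gradient step, then project back onto the box."""
    x = np.clip(x0, lo, hi)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lo, hi)   # step, then project
    return x
```

On min (x − 3)² subject to 0 ≤ x ≤ 2, the iterates are drawn toward the unconstrained minimizer 3 and settle on the bound x = 2, the constrained solution.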
COPS: Large-Scale Nonlinearly Constrained Optimization Problems
"... 1 1 Introduction 1 2 Largest Small Polygon (Gay [8]) 3 3 Distribution of Electrons on a Sphere (Vanderbei [13]) 5 4 Sawpath Tracking (Vanderbei [13]) 7 5 Hanging Chain (H. Mittelmann, private communication) 10 6 Shape Optimization of a Cam (Anitescu and Serban [1]) 12 7 Isometrization of ffpinene ( ..."
Abstract

Cited by 18 (1 self)
 Add to MetaCart
Contents: 1 Introduction; 2 Largest Small Polygon (Gay [8]); 3 Distribution of Electrons on a Sphere (Vanderbei [13]); 4 Sawpath Tracking (Vanderbei [13]); 5 Hanging Chain (H. Mittelmann, private communication); 6 Shape Optimization of a Cam (Anitescu and Serban [1]); 7 Isometrization of α-pinene (MINPACK-2 test problems [3]); 8 Marine Population Dynamics (Rothschild et al. [11]); 9 Flow in a Channel (MINPACK-2 test problems [3]); 10 Noninertial Robot Arm (Vanderbei [13]); 11 Linear Tangent Steering (Betts, Eldersveld, and Huffman [4]); 12 Goddard Rocket (Betts, Eldersveld, and Huffman [4]); 13 Hang Glider (Betts, Eldersveld, and Huffman [4]); 14 Implementation of COPS in C; References. By Alexander S. Bondarenko, David M. Bortz, and Jorge J. Moré. We have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization ProblemS. The primary purpose of this col...