Results 1–10 of 186
On the limited memory BFGS method for large scale optimization
 MATHEMATICAL PROGRAMMING
, 1989
CUTE: Constrained and unconstrained testing environment
, 1993
Abstract

Cited by 161 (3 self)
The purpose of this paper is to discuss the scope and functionality of a versatile environment for testing small- and large-scale nonlinear optimization algorithms. Although many of these facilities were originally produced by the authors in conjunction with the software package LANCELOT, we believe that they will be useful in their own right and should be available to researchers for their development of optimization software. The tools are available by anonymous ftp from a number of sources and may, in many cases, be installed automatically. The scope of a major collection of test problems written in the standard input format (SIF) used by the LANCELOT software package is described. Recognising that most software was not written with the SIF in mind, we provide tools to assist in building an interface between this input format and other optimization packages. These tools already provide a link between the SIF and a number of existing packages, including MINOS and OSL. In ad...
Optimization by Direct Search: New Perspectives on Some Classical and Modern Methods
 SIAM REVIEW VOL. 45, NO. 3, PP. 385–482
, 2003
Abstract

Cited by 143 (14 self)
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
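The class of methods surveyed above can be illustrated by compass (coordinate) search, one of the simplest direct search algorithms: try steps of size h along each ± coordinate direction, accept any trial point that decreases f, and halve h when no trial point improves. A minimal sketch, with all names illustrative and not tied to the review's unifying framework:

```python
# Minimal compass (coordinate) search: a classic derivative-free direct
# search method. Illustrative sketch only; names are not from the paper.

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Minimize f over R^n using +/- coordinate steps of size `step`.

    Accept the first trial point that improves f; otherwise halve the step.
    """
    x = list(x0)
    fx = f(x)
    n = len(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = list(x)
                trial[i] += sign * step
                ft = f(trial)
                if ft < fx:          # simple decrease suffices for convergence
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5              # no improving direction: contract
    return x, fx
```

For example, minimizing (x − 1)² + (y + 2)² from the origin drives the iterate to (1, −2) using function values only, which is the defining property of the methods this review analyzes.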
Direct Search Methods On Parallel Machines
 SIAM Journal on Optimization
, 1991
Abstract

Cited by 116 (21 self)
This paper describes an approach to constructing derivative-free algorithms for unconstrained optimization that are easy to implement on parallel machines. A special feature of this approach is the ease with which algorithms can be generated to take advantage of any number of processors and to adapt to any cost ratio of communication to function evaluation. Numerical tests show speedups on two fronts. The cost of synchronization being minimal, the speedup is almost linear with the addition of more processors, i.e., given a problem and a search strategy, the decrease in execution time is proportional to the number of processors added. Even more encouraging, however, is that different search strategies, devised to take advantage of additional (or more powerful) processors, may actually lead to dramatic improvements in the performance of the basic algorithm. Thus search strategies intended for many processors actually may generate algorithms that are better even when implemented seque...
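The parallel idea the abstract describes — generate a batch of trial points each iteration and farm the function evaluations out to workers — can be sketched as follows. The structure and names are illustrative, not the paper's actual implementation:

```python
# Illustrative sketch of parallel direct search: all 2n coordinate trial
# points are evaluated concurrently, then the best improving one is taken.
from concurrent.futures import ThreadPoolExecutor

def parallel_compass_step(f, x, step, executor):
    """One iteration: build 2n trial points, evaluate them in parallel."""
    trials = []
    for i in range(len(x)):
        for delta in (+step, -step):
            t = list(x)
            t[i] += delta
            trials.append(t)
    values = list(executor.map(f, trials))   # one f-evaluation per worker task
    best = min(range(len(trials)), key=values.__getitem__)
    return trials[best], values[best]

def parallel_compass(f, x0, step=1.0, tol=1e-6, workers=4):
    x, fx = list(x0), f(x0)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        while step >= tol:
            cand, fc = parallel_compass_step(f, x, step, ex)
            if fc < fx:
                x, fx = cand, fc
            else:
                step *= 0.5                  # no improvement: contract
    return x, fx
```

When f is expensive relative to communication, each iteration costs roughly one function evaluation of wall-clock time regardless of n, which is the near-linear speedup the abstract reports.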
Complete search in continuous global optimization and constraint satisfaction
 Acta Numerica 13
, 2004
"... A chapter for ..."
Some tests of generalized bisection
 ACM Trans. Math. Software
, 1987
Abstract

Cited by 52 (2 self)
This paper addresses the task of reliably finding approximations to all solutions to a system of nonlinear equations within a region defined by bounds on each of the individual coordinates. Various forms of generalized bisection were proposed some time ago for this task. This paper systematically compares such generalized bisection algorithms with one another, to continuation methods, and to hybrid steepest descent/quasi-Newton methods. A specific algorithm containing novel “expansion” and “exclusion” steps is fully described, and the effectiveness of these steps is evaluated. A test problem consisting of a small, high-degree polynomial system that is appropriate for generalized bisection, but very difficult for continuation methods, is presented. This problem forms part of a set of 17 test problems from published literature on the methods being compared; this test set is fully described here.
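The exclusion step can be illustrated in one dimension: given an interval extension F of f (rigorous bounds on f over a whole subinterval), any subinterval whose bounds exclude zero is discarded, and the remainder is bisected until small. A sketch under those assumptions; the paper's actual algorithm works in n dimensions and adds an expansion step:

```python
# One-dimensional sketch of exclusion + bisection for finding all roots.
# The caller supplies an interval extension F: F(a, b) returns (lo, hi)
# with lo <= f(x) <= hi for every x in [a, b]. Illustrative names only.

def bisect_all_roots(F, a, b, tol=1e-6):
    """Return small intervals that may contain roots of f on [a, b]."""
    boxes, roots = [(a, b)], []
    while boxes:
        lo_x, hi_x = boxes.pop()
        lo_f, hi_f = F(lo_x, hi_x)
        if lo_f > 0.0 or hi_f < 0.0:
            continue                      # exclusion: no root possible here
        if hi_x - lo_x < tol:
            roots.append((lo_x, hi_x))    # small enough: report the box
        else:
            mid = 0.5 * (lo_x + hi_x)
            boxes += [(lo_x, mid), (mid, hi_x)]
    return roots
```

With an exact interval extension of f(x) = x² − 1 on [−2, 2], the surviving boxes cluster around both roots ±1; no root can be missed, because a box is discarded only when the bounds prove f cannot vanish on it.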
Solving Systems of Nonlinear Equations Using the Nonzero Value of the Topological Degree; CHABIS: A Mathematical Software Package for Locating and Evaluating Roots of Systems of Nonlinear Equations
 ACM Trans. Math. Software
, 1988
Abstract

Cited by 42 (21 self)
Two algorithms are described here for the numerical solution of a system of nonlinear equations F(X) = 0, where 0 = (0, 0, …, 0) ∈ ℝⁿ, and F is a given continuous mapping of a region D in ℝⁿ into ℝⁿ. The first algorithm locates at least one root of the system within an n-dimensional polyhedron, using the nonzero value of the topological degree of F at 0 relative to the polyhedron; the second algorithm applies a new generalized bisection method in order to compute an approximate solution of the system. The size of the original n-dimensional polyhedron is arbitrary, and the method is globally convergent in a residual sense. These algorithms, in the various function evaluations, only make use of the algebraic sign of F and do not require computations of the topological degree. Moreover, they can be applied to nondifferentiable continuous functions F and do not involve derivatives of F or approximations of such derivatives.
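In two dimensions the topological degree of F at 0 relative to a box equals the winding number of F along the box's boundary, which suggests a simple numerical illustration: sample the boundary, accumulate the wrapped change in the angle of F, and divide by 2π. The algorithms above deliberately avoid any such degree computation and use only signs of F; this sketch is only meant to make the degree concept concrete, and all names are illustrative:

```python
# Approximate the topological degree of F: R^2 -> R^2 at 0, relative to a
# box on whose boundary F is nonzero, as the winding number of F along
# the boundary. Non-rigorous illustration only.
import math

def degree_2d(F, xmin, xmax, ymin, ymax, samples_per_side=200):
    """Winding number of F along the box boundary, walked counterclockwise."""
    pts = []
    n = samples_per_side
    for t in range(n):
        s = t / n
        pts.append((xmin + s * (xmax - xmin), ymin))   # bottom edge
    for t in range(n):
        s = t / n
        pts.append((xmax, ymin + s * (ymax - ymin)))   # right edge
    for t in range(n):
        s = t / n
        pts.append((xmax - s * (xmax - xmin), ymax))   # top edge
    for t in range(n):
        s = t / n
        pts.append((xmin, ymax - s * (ymax - ymin)))   # left edge
    total = 0.0
    u, v = F(pts[-1])
    prev = math.atan2(v, u)
    for p in pts:
        u, v = F(p)
        ang = math.atan2(v, u)
        d = ang - prev
        if d <= -math.pi:                  # wrap increment into (-pi, pi]
            d += 2.0 * math.pi
        elif d > math.pi:
            d -= 2.0 * math.pi
        total += d
        prev = ang
    return round(total / (2.0 * math.pi))
```

The identity map has degree 1 around the origin, F(x, y) = (x² − y², 2xy) has degree 2, and a map whose zero lies outside the box has degree 0, so a nonzero value certifies a root inside, which is exactly the existence guarantee the first algorithm exploits.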
The Island Model Genetic Algorithm: On Separability, Population Size and Convergence
 Journal of Computing and Information Technology
, 1998
Abstract

Cited by 40 (0 self)
Parallel Genetic Algorithms have often been reported to yield better performance than Genetic Algorithms which use a single large panmictic population. In the case of the Island Model genetic algorithm, it has been informally argued that having multiple subpopulations helps to preserve genetic diversity, since each island can potentially follow a different search trajectory through the search space. It is also possible that, since linearly separable problems are often used to test Genetic Algorithms, Island Models may simply be particularly well suited to exploiting the separable nature of the test problems. We explore this possibility by using the infinite population models of simple genetic algorithms to study how Island Models can track multiple search trajectories. We also introduce a simple model for better understanding when Island Model genetic algorithms may have an advantage when processing some test problems. We provide empirical results for both linearly separa...
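A minimal island-model sketch on the OneMax problem (maximize the number of 1-bits) shows the structure being studied: several subpopulations evolve independently and periodically exchange their best individual around a ring. All parameter choices here are illustrative, not the paper's experimental setup:

```python
# Minimal island-model GA on OneMax. Each island runs an independent GA;
# every few generations the best individual migrates to the next island.
import random

def onemax(bits):
    return sum(bits)

def evolve(pop, mut_rate=0.02):
    """One generation: tournament selection, uniform crossover, mutation."""
    def pick():
        a, b = random.sample(pop, 2)
        return max(a, b, key=onemax)
    nxt = []
    for _ in range(len(pop)):
        p, q = pick(), pick()
        child = [pi if random.random() < 0.5 else qi for pi, qi in zip(p, q)]
        child = [1 - c if random.random() < mut_rate else c for c in child]
        nxt.append(child)
    return nxt

def island_ga(n_islands=4, pop_size=20, n_bits=32, gens=60, migrate_every=10):
    random.seed(0)                           # deterministic illustration
    islands = [[[random.randint(0, 1) for _ in range(n_bits)]
                for _ in range(pop_size)] for _ in range(n_islands)]
    for g in range(1, gens + 1):
        islands = [evolve(p) for p in islands]
        if g % migrate_every == 0:           # ring migration of each best
            bests = [max(p, key=onemax) for p in islands]
            for i, p in enumerate(islands):
                p[random.randrange(pop_size)] = bests[i - 1]
    return max(onemax(ind) for p in islands for ind in p)
```

Because OneMax is linearly separable, islands that each fix different bit positions can combine their progress after migration, which is the effect the abstract's separability argument is about.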
Limited-Memory Matrix Methods with Applications
, 1997
Abstract

Cited by 30 (6 self)
Abstract. The focus of this dissertation is on matrix decompositions that use a limited amount of computer memory, thereby allowing problems with a very large number of variables to be solved. Specifically, we will focus on two application areas: optimization and information retrieval. We introduce a general algebraic form for the matrix update in limited-memory quasi-Newton methods. Many well-known methods such as limited-memory Broyden Family methods satisfy the general form. We are able to prove several results about methods which satisfy the general form. In particular, we show that the only limited-memory Broyden Family method (using exact line searches) that is guaranteed to terminate within n iterations on an n-dimensional strictly convex quadratic is the limited-memory BFGS method. Furthermore, we are able to introduce several new variations on the limited-memory BFGS method that retain the quadratic termination property. We also have a new result that shows that full-memory Broyden Family methods (using exact line searches) that skip p updates to the quasi-Newton matrix will terminate in no more than n+p steps on an n-dimensional strictly convex quadratic. We propose several new variations on the limited-memory BFGS method
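The limited-memory BFGS method discussed throughout stores only the most recent pairs s_k = x_{k+1} − x_k and y_k = ∇f_{k+1} − ∇f_k and reconstructs the search direction with the standard two-loop recursion. A plain-Python sketch of that recursion (not the dissertation's code):

```python
# Standard L-BFGS two-loop recursion: compute d = -H*g, where H is the
# limited-memory inverse-Hessian approximation built from the stored
# (s, y) pairs, oldest first. Plain-Python illustration.

def lbfgs_direction(grad, s_list, y_list):
    """Return the L-BFGS search direction -H*grad."""
    q = list(grad)
    alphas, rhos = [], []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
        rho = 1.0 / sum(yi * si for yi, si in zip(y, s))
        alpha = rho * sum(si * qi for si, qi in zip(s, q))
        q = [qi - alpha * yi for qi, yi in zip(q, y)]
        rhos.append(rho)
        alphas.append(alpha)
    if s_list:                                             # scale H0 = gamma*I
        s, y = s_list[-1], y_list[-1]
        gamma = sum(si * yi for si, yi in zip(s, y)) / sum(yi * yi for yi in y)
    else:
        gamma = 1.0
    r = [gamma * qi for qi in q]
    for (s, y), rho, alpha in zip(zip(s_list, y_list),     # oldest first
                                  reversed(rhos), reversed(alphas)):
        beta = rho * sum(yi * ri for yi, ri in zip(y, r))
        r = [ri + (alpha - beta) * si for ri, si in zip(r, s)]
    return [-ri for ri in r]
```

With no stored pairs this reduces to steepest descent; on a quadratic with Hessian I, where s = y for every pair, it returns exactly −grad, reflecting that H reproduces the true inverse Hessian on the stored subspace.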