Results 1–10 of 253
On the limited memory BFGS method for large scale optimization
Mathematical Programming, 1989
On the Convergence of Pattern Search Algorithms
Cited by 149 (14 self)
We introduce an abstract definition of pattern search methods for solving nonlinear unconstrained optimization problems. Our definition unifies an important collection of optimization methods that neither compute nor explicitly approximate derivatives. We exploit our characterization of pattern search methods to establish a global convergence theory that does not enforce a notion of sufficient decrease. Our analysis is possible because the iterates of a pattern search method lie on a scaled, translated integer lattice. This allows us to relax the classical requirements on the acceptance of the step, at the expense of stronger conditions on the form of the step, and still guarantee global convergence. Key words: unconstrained optimization, convergence analysis, direct search methods, globalization strategies, alternating variable search, axial relaxation, local variation, coordinate search, evolutionary operation, pattern search, multidirectional search, downhill simplex search.
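As a concrete illustration of the class this abstract describes, here is a minimal compass (coordinate) search: it polls the 2n coordinate directions on the current mesh, accepts any simple decrease (no sufficient-decrease test), and contracts the mesh only when a full poll fails. This is one standard member of the pattern search family, not the paper's abstract framework itself; function and parameter names are our own.

```python
import numpy as np

def compass_search(f, x0, delta=1.0, tol=1e-6, max_iter=10000):
    """Compass (coordinate) pattern search: poll +/- each coordinate
    direction scaled by delta; accept simple decrease, else contract."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if delta < tol:
            break
        improved = False
        for i in range(len(x)):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * delta
                ft = f(trial)
                if ft < fx:            # simple decrease suffices
                    x, fx = trial, ft
                    improved = True
        if not improved:
            delta *= 0.5               # contract the mesh
    return x, fx
```

Because the iterates stay on a scaled, translated lattice, convergence theory goes through even with this weak acceptance rule.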
The tradeoffs of large scale learning
In: Advances in Neural Information Processing Systems 20, 2008
Cited by 138 (4 self)
This contribution develops a theoretical framework that takes into account the effect of approximate optimization on learning algorithms. The analysis shows distinct tradeoffs for the case of small-scale and large-scale learning problems. Small-scale learning problems are subject to the usual approximation–estimation tradeoff. Large-scale learning problems are subject to a qualitatively different tradeoff involving the computational complexity of the underlying optimization algorithms in nontrivial ways.
Optimization by direct search: New perspectives on some classical and modern methods
SIAM Review, 2003
Cited by 126 (14 self)
Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
Dynamic Simulation of Nonpenetrating Flexible Bodies
Computer Graphics, 1992
Cited by 123 (4 self)
A model for the dynamic simulation of flexible bodies subject to nonpenetration constraints is presented. Flexible bodies are described in terms of global deformations of a rest shape. The dynamical behavior of these bodies that most closely matches the behavior of ideal continuum bodies is derived, and subsumes the results of earlier Lagrangian dynamics-based models. The dynamics derived for the flexible-body model allows the unification of previous work on flexible body simulation and previous work on nonpenetrating rigid body simulation. The nonpenetration constraints for a system of bodies that contact at multiple points are maintained by analytically calculated contact forces. An implementation for first- and second-order polynomially deformable bodies is described. The simulation of second-order or higher deformations currently involves a polyhedral boundary approximation for collision detection purposes.
Sequential Quadratic Programming
1995
Cited by 114 (2 self)
In this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ...
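To make the SQP idea concrete, the sketch below applies the basic local SQP iteration to an equality-constrained problem: each iteration solves the KKT linear system of the local quadratic model. This is a textbook sketch (no line search, trust region, or inequality handling), not the paper's framework; all names are illustrative.

```python
import numpy as np

def sqp_equality(f_grad, f_hess, c, c_jac, x0, iters=20, tol=1e-10):
    """Basic local SQP for min f(x) s.t. c(x) = 0: each iteration solves
    the KKT system  [H A^T; A 0][p; dlam] = -[grad_L; c]  of the
    quadratic model (no globalization)."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(c(x)))
    for _ in range(iters):
        g, H = f_grad(x), f_hess(x)
        cv, A = c(x), c_jac(x)
        n, m = len(x), len(cv)
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = -np.concatenate([g + A.T @ lam, cv])
        step = np.linalg.solve(K, rhs)
        x += step[:n]
        lam += step[n:]
        if np.linalg.norm(step) < tol:
            break
    return x, lam
```

For a quadratic objective with a linear constraint the iteration reaches the solution in a single step, mirroring Newton's method applied to the KKT conditions.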
Representations of Quasi-Newton Matrices and Their Use in Limited Memory Methods
1994
Cited by 103 (8 self)
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations. Key words: quasi-Newton method, constrained optimization, limited memory method, large-scale optimization. Abbreviated title: Representation of quasi-Newton matrices. 1. Introduction. Limited memory quasi-Newton methods are known to be effective techniques for solving certain classes of large-scale unconstrained optimization problems (Buckley and Le Nir (1983), Liu and Nocedal (1989), Gilbert and Lemaréchal (1989)). They make simple approximations of Hessian matrices, which are often good enough to provide a fast rate of linear convergence, and re...
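The compact representation in this paper is a matrix form of the same curvature information that the widely used L-BFGS two-loop recursion exploits. As a related (but distinct) illustration, here is that recursion: it applies the implicit inverse-Hessian approximation, built from the stored (s, y) pairs, to a gradient without ever forming an n-by-n matrix. Names and the choice of initial scaling are ours.

```python
import numpy as np

def lbfgs_two_loop(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns an approximation to
    H^{-1} @ grad using only the stored curvature pairs (s_k, y_k)."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    # first loop: newest pair to oldest
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # initial scaling H0 = gamma * I from the most recent pair
    s, y = s_list[-1], y_list[-1]
    r = ((s @ y) / (y @ y)) * q
    # second loop: oldest pair to newest
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ r)
        r += (a - b) * s
    return r
```

Storage and work are O(mn) for m stored pairs, which is what makes limited memory methods viable at large scale.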
Choosing the Forcing Terms in an Inexact Newton Method
SIAM J. Sci. Comput., 1994
Cited by 94 (2 self)
An inexact Newton method is a generalization of Newton's method for solving F(x) = 0, F: Rⁿ → Rⁿ, in which, at the kth iteration, the step s_k from the current approximate solution x_k is required to satisfy a condition ‖F(x_k) + F′(x_k) s_k‖ ≤ η_k ‖F(x_k)‖ for a "forcing term" η_k ∈ [0, 1). In typical applications, the choice of the forcing terms is critical to the efficiency of the method and can affect robustness as well. Promising choices of the forcing terms are given, their local convergence properties are analyzed, and their practical performance is shown on a representative set of test problems.
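The forcing-term condition is easy to exhibit in code. The sketch below runs an inexact Newton iteration in which the inner linear solve is stopped as soon as ‖F(x_k) + F′(x_k)s_k‖ ≤ η‖F(x_k)‖. The inner solver here is a simple gradient iteration on the normal equations, a stand-in for the Krylov solvers used in practice, and the fixed η is a placeholder for the adaptive choices the paper analyzes.

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_outer=100):
    """Inexact Newton: the step s_k need only reduce the linear residual
    to ||F + J s|| <= eta * ||F|| (the forcing-term condition)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        Fx = F(x)
        nF = np.linalg.norm(Fx)
        if nF < tol:
            break
        Jx = J(x)
        omega = 1.0 / np.linalg.norm(Jx.T @ Jx, 2)  # safe step size
        s = np.zeros_like(x)
        r = -Fx - Jx @ s
        # inner solve: iterate only until the forcing term is satisfied
        while np.linalg.norm(r) > eta * nF:
            s += omega * (Jx.T @ r)  # descent step on ||Jx s + Fx||^2
            r = -Fx - Jx @ s
        x = x + s
    return x
```

Smaller η means more inner work but faster outer convergence; that tradeoff is exactly what adaptive forcing-term choices aim to balance.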
Theory of Algorithms for Unconstrained Optimization
1992
Cited by 84 (1 self)
In this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavior of the most widely used techniques. The question ...
A Trust Region Framework For Managing The Use Of Approximation Models In Optimization
Structural Optimization, 1998
Cited by 84 (9 self)
This paper presents an analytically robust, globally convergent approach to managing the use of approximation models of various fidelity in optimization. By robust global behavior we mean the mathematical assurance that the iterates produced by the optimization algorithm, started at an arbitrary initial iterate, will converge to a stationary point or local optimizer for the original problem. The approach we present is based on the trust region idea from nonlinear programming and is shown to be provably convergent to a solution of the original high-fidelity problem. The proposed method for managing approximations in engineering optimization suggests ways to decide when the fidelity, and thus the cost, of the approximations might be fruitfully increased or decreased in the course of the optimization iterations. The approach is quite general. We make no assumptions on the structure of the original problem, in particular, no assumptions of convexity and separability, and place only mild ...
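A generic trust-region loop of the kind this paper builds on can be sketched briefly: a model is minimized within the region (here via the textbook Cauchy point), and the region is resized by comparing actual to predicted reduction. This is the standard mechanism, not the paper's model-management scheme; all parameter values are illustrative.

```python
import numpy as np

def trust_region(f, grad, hess, x0, delta=1.0, delta_max=10.0,
                 eta=0.1, tol=1e-8, max_iter=200):
    """Generic trust-region loop with a Cauchy-point step: the quadratic
    model stands in for the approximation being managed; the radius is
    resized by the ratio of actual to predicted reduction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy point: minimize the model along -g within the region
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm ** 3 / (delta * gBg))
        s = -tau * (delta / gnorm) * g
        pred = -(g @ s + 0.5 * s @ B @ s)   # model decrease
        rho = (f(x) - f(x + s)) / pred      # actual / predicted
        if rho < 0.25:
            delta *= 0.25                   # shrink: model was poor
        elif rho > 0.75 and np.isclose(np.linalg.norm(s), delta):
            delta = min(2 * delta, delta_max)  # expand: model trusted
        if rho > eta:
            x = x + s                       # accept the step
    return x
```

The ratio test is the point of contact with the paper: it is what lets the framework decide when a cheap, low-fidelity model is still trustworthy and when it must be corrected or replaced.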