Results 1–10 of 132
The PATH Solver: A Non-Monotone Stabilization Scheme for Mixed Complementarity Problems
 OPTIMIZATION METHODS AND SOFTWARE
, 1995
"... The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a pathgeneration procedure which is used to construct a piecewiselinear path from the current point to the Newton point; a step length acceptan ..."
Abstract

Cited by 149 (33 self)
 Add to MetaCart
The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step-length acceptance criterion and a non-monotone path-search are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.
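The mixed complementarity setting the abstract describes can be illustrated on its simplest special case, the linear complementarity problem. The sketch below is not PATH's normal-map path-search scheme; it is a plain semismooth Newton method on the Fischer-Burmeister reformulation, with made-up data M and q, shown only to make the problem class concrete.

```python
import math

# Toy LCP: find x >= 0 with F(x) = M x + q >= 0 and x . F(x) = 0.
# M and q are made-up data; this is NOT the PATH algorithm itself.
M = [[2.0, 1.0], [1.0, 2.0]]
q = [-1.0, -1.0]

def F(x):
    return [M[i][0] * x[0] + M[i][1] * x[1] + q[i] for i in range(2)]

def fb(a, b):
    # Fischer-Burmeister: fb(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0
    return math.hypot(a, b) - a - b

def solve2(J, r):
    # solve the 2x2 system J d = r by Cramer's rule
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(J[1][1] * r[0] - J[0][1] * r[1]) / det,
            (J[0][0] * r[1] - J[1][0] * r[0]) / det]

x = [1.0, 1.0]
for _ in range(30):
    Fx = F(x)
    phi = [fb(x[i], Fx[i]) for i in range(2)]
    if max(abs(p) for p in phi) < 1e-10:
        break
    J = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        nrm = math.hypot(x[i], Fx[i]) or 1.0
        da = x[i] / nrm - 1.0       # d phi / d a
        db = Fx[i] / nrm - 1.0      # d phi / d b, chained through dF = M
        for j in range(2):
            J[i][j] = (da if i == j else 0.0) + db * M[i][j]
    d = solve2(J, [-p for p in phi])
    x = [x[i] + d[i] for i in range(2)]
```

At convergence x satisfies x >= 0, Mx + q >= 0, and x'(Mx + q) = 0; for this data the solution is x = (1/3, 1/3).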
Optimization by direct search: New perspectives on some classical and modern methods
 SIAM Review
, 2003
"... Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because t ..."
Abstract

Cited by 126 (14 self)
 Add to MetaCart
Abstract. Direct search methods are best known as unconstrained optimization techniques that do not explicitly use derivatives. Direct search methods were formally proposed and widely applied in the 1960s but fell out of favor with the mathematical optimization community by the early 1970s because they lacked coherent mathematical analysis. Nonetheless, users remained loyal to these methods, most of which were easy to program, some of which were reliable. In the past fifteen years, these methods have seen a revival due, in part, to the appearance of mathematical analysis, as well as to interest in parallel and distributed computing. This review begins by briefly summarizing the history of direct search methods and considering the special properties of problems for which they are well suited. Our focus then turns to a broad class of methods for which we provide a unifying framework that lends itself to a variety of convergence results. The underlying principles allow generalization to handle bound constraints and linear constraints. We also discuss extensions to problems with nonlinear constraints.
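The simplest member of this family is compass search: probe the objective along the plus/minus coordinate directions, move to an improving point, and halve the step when none exists. The sketch below is a generic illustration on a made-up objective, not any specific method from the survey.

```python
def compass_search(f, x, h=1.0, tol=1e-6):
    # classic compass search: probe +/- h along each axis, move to the
    # first improving point, halve h when no direction improves
    fx = f(x)
    while h > tol:
        improved = False
        for i in range(len(x)):
            for s in (h, -h):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:
            h *= 0.5
    return x

f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2   # made-up smooth objective
x = compass_search(f, [0.0, 0.0])
```

No derivatives are evaluated anywhere; when the loop exits, no probe of size 2*tol improved the point, which for a smooth objective pins each coordinate to within about tol of a stationary point.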
Direct search methods: then and now
, 2000
"... We discuss direct search methods for unconstrained optimization. We give a modern perspective on this classical family of derivativefree algorithms, focusing on the development of direct search methods during their golden age from 1960 to 1971. We discuss how direct search methods are characterized ..."
Abstract

Cited by 66 (4 self)
 Add to MetaCart
We discuss direct search methods for unconstrained optimization. We give a modern perspective on this classical family of derivative-free algorithms, focusing on the development of direct search methods during their golden age from 1960 to 1971. We discuss how direct search methods are characterized by the absence of the construction of a model of the objective. We then consider a number of the classical direct search methods and discuss what research in the intervening years has uncovered about these algorithms. In particular, while the original direct search methods were consciously based on straightforward heuristics, more recent analysis has shown that in most — but not all — cases these heuristics actually
On the convergence of reflective Newton methods for large-scale nonlinear minimization subject to bounds
, 1992
"... . We consider a new algorithm, a reflective Newton method, for the problem of minimizing a smooth nonlinear function of many variables, subject to upper and/or lower bounds on some of the variables. This approach generates strictly feasible iterates by following piecewise linear paths ("reflection" ..."
Abstract

Cited by 60 (4 self)
 Add to MetaCart
We consider a new algorithm, a reflective Newton method, for the problem of minimizing a smooth nonlinear function of many variables, subject to upper and/or lower bounds on some of the variables. This approach generates strictly feasible iterates by following piecewise linear paths ("reflection" paths) to generate improved iterates. The reflective Newton approach does not require identification of an "activity set". In this report we establish that the reflective Newton approach is globally and quadratically convergent. Moreover, we develop a specific example of this general reflective path approach suitable for large-scale and sparse problems.

Research partially supported by the Applied Mathematical Sciences Research Program (KC04 02) of the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-86ER25013.A000, and in part by NSF, AFOSR, and ONR through grant DMS-8920550, and by the Cornell Theory Center which receives major funding from the National Sci...
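The reflection idea can be made concrete by the coordinate-wise folding map that carries an unconstrained trial point back into the box [l, u]: a path that crosses a bound continues inward, mirror fashion. This is only one way to realize the path primitive and not the authors' full algorithm; the bounds and trial points below are made up.

```python
# Fold a scalar trial point p back into the interval [l, u] (assumes l < u):
# walking past a bound continues the path inward, mirror-fashion. Applied
# per coordinate, this traces a piecewise-linear "reflection" path.
def reflect(p, l, u):
    span = u - l
    w = (p - l) % (2.0 * span)         # position within one reflection period
    return l + min(w, 2.0 * span - w)  # fold the second half back

# e.g. in [0, 1]: a trial step landing at 1.3 reflects off u = 1 back to 0.7
points = [reflect(p, 0.0, 1.0) for p in (0.3, 1.3, -0.3, 2.4)]
```

Because the fold is periodic, arbitrarily long steps remain feasible, which is what lets the method stay strictly inside the box without tracking an active set.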
Solving Algebraic Riccati Equations on Parallel Computers Using Newton's Method with Exact Line Search
, 1999
"... We investigate the numerical solution of continuoustime algebraic Riccati equations via Newton's method on serial and parallel computers with distributed memory. We apply and extend the available theory for Newton's method endowed with exact line search to accelerate convergence. We also discuss a ..."
Abstract

Cited by 52 (7 self)
 Add to MetaCart
We investigate the numerical solution of continuous-time algebraic Riccati equations via Newton's method on serial and parallel computers with distributed memory. We apply and extend the available theory for Newton's method endowed with exact line search to accelerate convergence. We also discuss a new stopping criterion based on recent observations regarding condition and error estimates. In each iteration step of Newton's method a stable Lyapunov equation has to be solved. We propose to solve these Lyapunov equations using iterative schemes for computing the matrix sign function. This approach can be efficiently implemented on parallel computers using ScaLAPACK. Numerical experiments on an IBM SP2 multicomputer demonstrate the accuracy, scalability, and speedup of the implemented algorithms.
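Newton's method for the continuous-time algebraic Riccati equation A'X + XA - XBB'X + Q = 0 reduces, in Newton-Kleinman form, to one Lyapunov solve per step. The sketch below solves each Lyapunov equation by the dense Kronecker/vec identity rather than the matrix sign function iteration the paper parallelizes; the double-integrator data and the stabilizing initial guess are made up.

```python
import numpy as np

# CARE data (made-up double integrator): A' X + X A - X B B' X + Q = 0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
G = B @ B.T

def lyap(Ak, C):
    # solve Ak' X + X Ak = C via (I kron Ak' + Ak' kron I) vec(X) = vec(C)
    n = Ak.shape[0]
    K = np.kron(np.eye(n), Ak.T) + np.kron(Ak.T, np.eye(n))
    return np.linalg.solve(K, C.reshape(-1, order="F")).reshape((n, n), order="F")

X = np.array([[2.0, 1.0], [1.0, 2.0]])   # stabilizing initial guess (made up)
for _ in range(20):
    # Newton-Kleinman step: one stable Lyapunov equation per iteration
    Xn = lyap(A - G @ X, -Q - X @ G @ X)
    if np.max(np.abs(Xn - X)) < 1e-12:
        X = Xn
        break
    X = Xn
```

For this data the iterates converge quadratically to X = [[sqrt(3), 1], [1, sqrt(3)]], whose CARE residual vanishes; in the paper the same outer iteration is combined with an exact line search and a sign-function Lyapunov solver.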
Algorithms For Complementarity Problems And Generalized Equations
, 1995
"... Recent improvements in the capabilities of complementarity solvers have led to an increased interest in using the complementarity problem framework to address practical problems arising in mathematical programming, economics, engineering, and the sciences. As a result, increasingly more difficult pr ..."
Abstract

Cited by 41 (5 self)
 Add to MetaCart
Recent improvements in the capabilities of complementarity solvers have led to an increased interest in using the complementarity problem framework to address practical problems arising in mathematical programming, economics, engineering, and the sciences. As a result, increasingly difficult problems are being proposed that exceed the capabilities of even the best algorithms currently available. There is, therefore, an immediate need to improve the capabilities of complementarity solvers. This thesis addresses this need in two significant ways. First, the thesis proposes and develops a proximal perturbation strategy that enhances the robustness of Newton-based complementarity solvers. This strategy enables algorithms to reliably find solutions even for problems whose natural merit functions have strict local minima that are not solutions. Based upon this strategy, three new algorithms are proposed for solving nonlinear mixed complementarity problems that represent a significant improvement in robustness over previous algorithms. These algorithms have local Q-quadratic convergence behavior, yet depend only on a pseudomonotonicity assumption to achieve global convergence from arbitrary starting points. Using the MCPLIB and GAMSLIB test libraries, we perform extensive computational tests that demonstrate the effectiveness of these algorithms on realistic problems. Second, the thesis extends some previously existing algorithms to solve more general problem classes. Specifically, the NE/SQP method of Pang & Gabriel (1993), the semismooth equations approach of De Luca, Facchinei & Kanz...
Structure learning in random fields for heart motion abnormality detection
 In CVPR
, 2008
"... Coronary Heart Disease can be diagnosed by assessing the regional motion of the heart walls in ultrasound images of the left ventricle. Even for experts, ultrasound images are difficult to interpret leading to high intraobserver variability. Previous work indicates that in order to approach this pr ..."
Abstract

Cited by 40 (5 self)
 Add to MetaCart
Coronary Heart Disease can be diagnosed by assessing the regional motion of the heart walls in ultrasound images of the left ventricle. Even for experts, ultrasound images are difficult to interpret, leading to high intra-observer variability. Previous work indicates that in order to approach this problem, the interactions between the different heart regions and their overall influence on the clinical condition of the heart need to be considered. To do this, we propose a method for jointly learning the structure and parameters of conditional random fields, formulating these tasks as a convex optimization problem. We consider block-L1 regularization for each set of features associated with an edge, and formalize an efficient projection method to find the globally optimal penalized maximum likelihood solution. We perform extensive numerical experiments comparing the presented method with related methods that approach the structure learning problem differently. We verify the robustness of our method on echocardiograms collected in routine clinical practice at one hospital.
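Block-L1 regularization penalizes the Euclidean norm of each edge's feature block, so entire blocks (and hence edges) are driven to zero together. The basic primitive behind such penalties is the group soft-threshold (proximal) operator sketched below; this is a generic sketch of the regularizer's effect, not the projection method of the paper.

```python
import math

def group_soft_threshold(v, lam):
    # prox of lam * ||v||_2: shrink the whole block toward 0, or prune it
    norm = math.sqrt(sum(c * c for c in v))
    if norm <= lam:
        return [0.0] * len(v)        # the whole block (edge) is removed
    scale = 1.0 - lam / norm
    return [scale * c for c in v]

kept = group_soft_threshold([3.0, 4.0], 1.0)     # norm 5 > 1: block survives, shrunk
pruned = group_soft_threshold([0.1, 0.2], 1.0)   # norm < 1: whole block zeroed
```

Because the threshold acts on the block norm rather than on individual coefficients, sparsity appears at the level of edges, which is what makes the penalty a structure-learning device.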
Parallel Variable Distribution
 SIAM Journal on Optimization
, 1994
"... We present an approach for solving optimization problems in which the variables are distributed among p processors. Each processor has primary responsibility for updating its own block of variables in parallel while allowing the remaining variables to change in a restricted fashion (e. g. along a st ..."
Abstract

Cited by 34 (5 self)
 Add to MetaCart
We present an approach for solving optimization problems in which the variables are distributed among p processors. Each processor has primary responsibility for updating its own block of variables in parallel while allowing the remaining variables to change in a restricted fashion (e.g., along a steepest-descent, quasi-Newton, or any arbitrary direction). This "forget-me-not" approach is a distinctive feature of our algorithm which has not been analyzed before. The parallelization step is followed by a fast synchronization step wherein the affine hull of the points computed by the parallel processors and the current point is searched for an optimal point. Convergence to a stationary point under continuous differentiability is established for the unconstrained case, as well as a linear convergence rate under the additional assumption of a Lipschitzian gradient and strong convexity. For problems constrained to lie in the Cartesian product of closed convex sets, convergence is establish...
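A serial caricature of the scheme may help: each "processor" improves only its own block (the other blocks are simply held fixed here, the most restricted form of the "forget-me-not" freedom), and the synchronization step keeps the best of the block updates and the current point rather than searching the full affine hull. The objective, blocks, and step size below are all made up.

```python
# Serial caricature of parallel variable distribution on a separable
# quadratic: f(x) = sum_i (x_i - i)^2, minimized at x_i = i.
def f(x):
    return sum((x[i] - i) ** 2 for i in range(len(x)))

def grad(x):
    return [2.0 * (x[i] - i) for i in range(len(x))]

blocks = [[0, 1], [2, 3]]            # variables owned by each "processor"
x = [10.0] * 4
for _ in range(100):
    g = grad(x)
    candidates = []
    for blk in blocks:               # these updates could run in parallel
        y = list(x)
        for i in blk:
            y[i] -= 0.25 * g[i]      # gradient step on the owned block only
        candidates.append(y)
    x = min(candidates + [x], key=f) # trivial synchronization step
```

In the actual algorithm the synchronization searches the affine hull of all candidate points, which is what yields the stated convergence guarantees; picking the best single candidate is only the crudest admissible choice.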
Simplicial Decomposition with Disaggregated Representation for the Traffic Assignment Problem
 Transportation Science
, 1991
"... The class of simplicial decomposition (SD) schemes have shown to provide efficient tools for nonlinear network flows. When applied to the traffic assignment problem, shortest route subproblems are solved in order to generate extreme points of the polyhedron of feasible flows, and, alternately, maste ..."
Abstract

Cited by 32 (20 self)
 Add to MetaCart
The class of simplicial decomposition (SD) schemes has been shown to provide efficient tools for nonlinear network flows. When applied to the traffic assignment problem, shortest route subproblems are solved in order to generate extreme points of the polyhedron of feasible flows, and, alternately, master problems are solved over the convex hull of the generated extreme points. We review the development of simplicial decomposition and the closely related column generation methods for the traffic assignment problem; we then present a modified, disaggregated representation of feasible solutions in SD algorithms for convex problems over Cartesian product sets, with application to the symmetric traffic assignment problem. The new algorithm, which is referred to as disaggregate simplicial decomposition (DSD), is given along with a specialized solution method for the disaggregate master problem. Numerical results for several well-known test problems and a new one are presented. These experimenta...
On the convergence of a sequential quadratic programming method with an augmented Lagrangian line search function
 Math. Operationsforschung und Statistik, Ser. Optimization
, 1983
"... Sequential quadratic programming (SQP) methods are widely used for solving practical optimization problems, especially in structural mechanics. The general structure of SQP methods is briefly introduced and it is shown how these methods can be adapted to distributed computing. However, SQP methods a ..."
Abstract

Cited by 32 (0 self)
 Add to MetaCart
Sequential quadratic programming (SQP) methods are widely used for solving practical optimization problems, especially in structural mechanics. The general structure of SQP methods is briefly introduced and it is shown how these methods can be adapted to distributed computing. However, SQP methods are sensitive to errors in function and gradient evaluations. Typically, they break down with an error message reporting that the line search cannot be terminated successfully. In these cases, a new non-monotone line search is activated. In the case of noisy function values, a drastic improvement in performance is achieved compared to the version with a monotone line search. Numerical results are presented for a set of more than 300 standard test examples.
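A non-monotone Armijo line search of the Grippo-Lampariello-Lucidi type accepts a step when the trial value improves on the maximum of the last M objective values rather than on the current value alone, which is what lets it ride out noisy evaluations. The sketch below is a generic illustration on gradient descent with a made-up quadratic, not the specific line search of the paper.

```python
def f(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2           # made-up smooth objective

def grad(x):
    return [2.0 * x[0], 4.0 * x[1]]

x, M, c = [1.0, 0.9], 5, 1e-4                    # memory length M, Armijo constant c
history = [f(x)]
for _ in range(100):
    g = grad(x)
    d = [-gi for gi in g]                        # steepest-descent direction
    slope = sum(gi * di for gi, di in zip(g, d))
    ref = max(history[-M:])                      # non-monotone reference value
    alpha = 1.0
    # backtrack until the trial point beats the WORST of the last M values
    while f([x[i] + alpha * d[i] for i in range(2)]) > ref + c * alpha * slope:
        alpha *= 0.5
    x = [x[i] + alpha * d[i] for i in range(2)]
    history.append(f(x))
```

Setting M = 1 recovers the classical monotone Armijo rule; larger M permits intermediate increases of the objective while the windowed maximum still decreases, which is the behavior the abstract credits for the robustness gain under noise.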