Results 1–10 of 18
Sequential Quadratic Programming
, 1995
"... this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can ..."
Abstract

Cited by 114 (2 self)
 Add to MetaCart
this paper we examine the underlying ideas of the SQP method and the theory that establishes it as a framework from which effective algorithms can
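The abstract names SQP as a framework rather than a single algorithm. As a minimal sketch of the core iteration (not taken from the paper), each SQP step solves the KKT system of a quadratic subproblem; the toy objective, constraint, and function names below are all hypothetical:

```python
import numpy as np

# Hypothetical example problem (not from the paper):
# minimize f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
def grad_f(x):
    return 2.0 * x

def c(x):
    return np.array([x[0] + x[1] - 1.0])

def jac_c(x):
    return np.array([[1.0, 1.0]])

def sqp(x, iters=10):
    """Basic SQP loop: each iteration solves the KKT system of the
    quadratic subproblem  min g^T d + 0.5 d^T H d  s.t.  c + J d = 0."""
    lam = np.zeros(1)
    for _ in range(iters):
        g, J, cv = grad_f(x), jac_c(x), c(x)
        H = 2.0 * np.eye(2)          # exact Lagrangian Hessian (constraint is linear)
        m = J.shape[0]
        KKT = np.block([[H, J.T], [J, np.zeros((m, m))]])
        sol = np.linalg.solve(KKT, np.concatenate([-g, -cv]))
        d, lam = sol[:2], sol[2:]
        x = x + d
        if np.linalg.norm(d) < 1e-12:  # step is zero: KKT point reached
            break
    return x, lam

x_opt, lam_opt = sqp(np.array([2.0, -1.0]))
# x_opt is [0.5, 0.5], the constrained minimizer
```

Because the toy problem has a quadratic objective and a linear constraint, a single step with the exact Hessian already lands on the solution; for general problems H would be a quasi-Newton approximation and a line search or trust region would guard the step.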
Theory of Algorithms for Unconstrained Optimization
, 1992
"... this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavio ..."
Abstract

Cited by 84 (1 self)
 Add to MetaCart
this article I will attempt to review the most recent advances in the theory of unconstrained optimization, and will also describe some important open questions. Before doing so, I should point out that the value of the theory of optimization is not limited to its capacity for explaining the behavior of the most widely used techniques. The question
Evaluation of optimization methods for nonrigid medical image registration using mutual information and B-splines
, 2007
"... Abstract—A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic Bsplines. The coordinate mapping that relates the two images is found using an iterative optimization pro ..."
Abstract

Cited by 14 (1 self)
 Add to MetaCart
Abstract—A popular technique for nonrigid registration of medical images is based on the maximization of their mutual information, in combination with a deformation field parameterized by cubic Bsplines. The coordinate mapping that relates the two images is found using an iterative optimization procedure. This work compares the performance of eight optimization methods: gradient descent (with two different step size selection algorithms), quasiNewton, nonlinear conjugate gradient, Kiefer–Wolfowitz, simultaneous perturbation, Robbins–Monro, and evolution strategy. Special attention is paid to computation time reduction by using fewer voxels to calculate the cost function and its derivatives. The optimization methods are tested on manually deformed CT images of the heart, on followup CT chest scans, and on MR scans of the prostate acquired using a BFFE, T1, and T2 protocol. Registration accuracy is assessed by computing the overlap of segmented edges. Precision and convergence properties are studied by comparing deformation fields. The results show that the Robbins–Monro method is the best choice in most applications. With this approach, the computation time per iteration can be lowered approximately 500 times without affecting the rate of convergence by using a small subset of the image, randomly selected in every iteration, to compute the derivative of the mutual information. From the other methods the quasiNewton and the nonlinear conjugate gradient method achieve a slightly higher precision, at the price of larger computation times. Index Terms—Bsplines, mutual information, nonrigid image registration, optimization, subsampling. I.
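The subsampling idea the abstract credits for the 500-fold speedup can be sketched in isolation: a Robbins–Monro loop that estimates the derivative from a small random subset of "voxels" each iteration. A toy least-squares cost stands in for mutual information here; the variable names and the cost itself are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 10_000
a = rng.normal(size=n_voxels)                       # fixed-image samples
b = 3.0 * a + rng.normal(scale=0.1, size=n_voxels)  # moving-image samples

def grad_subset(theta, idx):
    """Derivative of 0.5*mean((theta*a - b)^2), estimated on a voxel subset."""
    return np.mean((theta * a[idx] - b[idx]) * a[idx])

theta = 0.0
for k in range(1000):
    idx = rng.choice(n_voxels, size=50, replace=False)  # 0.5% of the voxels
    gain = 2.0 / (k + 10)         # Robbins-Monro decaying step size
    theta -= gain * grad_subset(theta, idx)
# theta approaches 3.0, the full-data minimizer, despite each iteration
# touching only a tiny, freshly drawn subset of the data
```

The decaying gain sequence is what lets the iteration average out the noise introduced by redrawing the subset every step, which is the convergence property the paper exploits.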
Review of the Space Mapping Approach to Engineering Optimization and Modeling
 OPTIMIZATION AND ENGINEERING
, 2000
"... We review the Space Mapping (SM) concept and its applications in engineering optimization and modeling. The aim of SM is to avoid computationally expensive calculations encountered in simulating an engineering system. The existence of less accurate but fast physicallybased models is exploited. SM d ..."
Abstract

Cited by 10 (4 self)
 Add to MetaCart
We review the Space Mapping (SM) concept and its applications in engineering optimization and modeling. The aim of SM is to avoid computationally expensive calculations encountered in simulating an engineering system. The existence of less accurate but fast physicallybased models is exploited. SM drives the optimization iterates of the timeintensive model using the fast model. Several algorithms have been developed for SM optimization, including the original SM algorithm, Aggressive Space Mapping (ASM), Trust Region Aggressive Space Mapping (TRASM) and Hybrid Aggressive Space Mapping (HASM). An essential subproblem of any SM based optimization algorithm is parameter extraction. The uniqueness of this optimization subproblem has been crucial to the success of SM optimization. Different approaches to enhance the uniqueness are reviewed. We also discuss new developments in Space Mappingbased Modeling (SMM). These include Space Derivative Mapping (SDM), Generalized Space Mapping (GSM) and Space Mappingbased Neuromodeling (SMN). Finally, we address open points for research and future development.
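A scalar caricature of ASM and its parameter-extraction subproblem can make the abstract's vocabulary concrete. Everything below is hypothetical: the "fine" model is just the "coarse" model with a shifted design parameter, and extraction is a brute-force grid search rather than a real optimizer:

```python
import numpy as np

ws = np.linspace(0.0, 6.0, 25)       # response evaluation sweep

def coarse(z, w):
    return (w - z) ** 2              # cheap model: response minimum at w = z

def fine(x, w):
    return (w - x - 0.2) ** 2        # "expensive" model (assumed shifted copy)

def extract(x):
    """Parameter extraction: the coarse parameter whose response best
    matches the fine response at design x (grid search for simplicity)."""
    zs = np.linspace(0.0, 6.0, 6001)
    errs = [np.sum((coarse(z, ws) - fine(x, ws)) ** 2) for z in zs]
    return zs[np.argmin(errs)]

z_star = 3.0                         # coarse-model optimal design (minimum at w = 3)
x, B = z_star, 1.0                   # ASM starts at the coarse optimum
e = extract(x) - z_star
for _ in range(10):
    if abs(e) < 1e-3:
        break
    dx = -e / B                      # quasi-Newton step on extract(x) = z_star
    e_new = extract(x + dx) - z_star
    B = (e_new - e) / dx             # Broyden update (scalar form)
    x, e = x + dx, e_new
# x is about 2.8, the fine-model optimum, reached using only one fine
# evaluation per iteration
```

Because the extraction here has a unique answer over the sweep, ASM converges immediately; the abstract's point is that in practice extraction can be non-unique, which is why uniqueness-enhancing formulations matter.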
A survey of some nonsmooth equations and smoothing Newton methods
 Progress in Optimization, volume 30 of Applied Optimization
, 1999
"... In this article we review and summarize recent developments on nonsmooth equations and smoothing Newton methods. Several new suggestions are presented. 1 ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
In this article we review and summarize recent developments on nonsmooth equations and smoothing Newton methods. Several new suggestions are presented. 1
A chart of backward errors for singly and doubly structured eigenvalue problems
 SIAM J. MATRIX ANAL. APPL
, 2003
"... We present a chart of structured backward errors for approximate eigenpairs of singly and doubly structured eigenvalue problems. We aim to give, wherever possible, formulae that are inexpensive to compute so that they can be used routinely in practice. We identify a number of problems for which the ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
We present a chart of structured backward errors for approximate eigenpairs of singly and doubly structured eigenvalue problems. We aim to give, wherever possible, formulae that are inexpensive to compute so that they can be used routinely in practice. We identify a number of problems for which the structured backward error is within a factor √ 2 of the unstructured backward error. This paper collects, unifies, and extends existing work on this subject.
Structured mapping problems for matrices associated with scalar products. Part I: Lie …
, 2006
"... Given a class of structured matrices S, we identify pairs of vectors x, b for which there exists a matrix A ∈ S such that Ax = b, and also characterize the set of all matrices A ∈ S mapping x to b. The structured classes we consider are the Lie and Jordan algebras associated with orthosymmetric sc ..."
Abstract

Cited by 6 (3 self)
 Add to MetaCart
Given a class of structured matrices S, we identify pairs of vectors x, b for which there exists a matrix A ∈ S such that Ax = b, and also characterize the set of all matrices A ∈ S mapping x to b. The structured classes we consider are the Lie and Jordan algebras associated with orthosymmetric scalar products. These include (skew)symmetric, (skew)Hamiltonian, pseudo (skew)Hermitian, persymmetric and perskewsymmetric matrices. Structured mappings with extremal properties are also investigated. In particular, structured mappings of minimal rank are identified and shown to be unique when rank one is achieved. The structured mapping of minimal Frobenius norm is always unique and explicit formulas for it and its norm are obtained. Finally the set of all structured mappings of minimal 2norm is characterized. Our results generalize and unify existing work, answer a number of open questions, and provide useful tools for structured backward error investigations.
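The simplest instance of this mapping problem is S = real symmetric matrices (the Jordan algebra of the standard inner product). A closed-form symmetric solution of Ax = b, quoted from the structured-mapping literature rather than from this abstract, can be verified numerically; its minimal-Frobenius-norm property is an assumption of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)
b = rng.normal(size=5)

# Candidate minimal-Frobenius-norm symmetric matrix mapping x to b:
#   A = (b x^T + x b^T)/(x^T x) - (x^T b) (x x^T)/(x^T x)^2
xtx = x @ x
A = (np.outer(b, x) + np.outer(x, b)) / xtx \
    - (x @ b) * np.outer(x, x) / xtx ** 2

# Both structural requirements hold by construction:
#   A @ x == b   (A maps x to b)
#   A == A.T     (A lies in the structured class)
```

Expanding A @ x gives b + x(bᵀx)/xᵀx − (xᵀb)x/xᵀx = b, so the mapping property is an identity, not a numerical accident.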
Nonsmooth Equations and Smoothing Newton Methods
 Applied Mathematics Report AMR 98/10, School of Mathematics, The University of New South Wales
, 1998
"... In this article we review and summarize recent developments on nonsmooth equations and smoothing Newton methods. Several new suggestions are presented. 1 Introduction Suppose that H : ! n ! ! n is locally Lipschitz but not necessarily continuously differentiable. To solve H(x) = 0 (1.1) has be ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
In this article we review and summarize recent developments on nonsmooth equations and smoothing Newton methods. Several new suggestions are presented. 1 Introduction Suppose that H : ! n ! ! n is locally Lipschitz but not necessarily continuously differentiable. To solve H(x) = 0 (1.1) has become one of most active research directions in mathematical programming. The early study of nonsmooth equations can be traced back to [Eav71, Man75, Man76]. The system of nonsmooth equations arises from many applications. Pang and Qi [PaQ93] reviewed eight problems in the studies of complementarity problems, variational inequality problems and optimization problems, which can be reformulated as systems of nonsmooth equations. In this paper, we review recent developments of algorithms for solving nonsmooth equations. Section 2 is devoted to semismooth Newton methods and Section 3 discusses smoothing Newton methods. We make several final remarks in Section 4. 2 Semismooth Newton methods 2.1 L...
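A one-dimensional sketch of the smoothing Newton idea the survey covers: reformulate a complementarity problem through the Fischer–Burmeister function, smooth it with a parameter μ, and apply Newton's method while driving μ to zero. The NCP instance and the particular smoothing are illustrative assumptions, not taken from the survey:

```python
import numpy as np

# Hypothetical 1-D NCP: find a >= 0 with F(a) >= 0 and a*F(a) = 0,
# where F(a) = a - 1 (solution: a = 1).
def F(a):
    return a - 1.0

def dF(a):
    return 1.0

def phi_mu(a, mu):
    """Smoothed Fischer-Burmeister reformulation: phi_mu(a) = 0 approximates
    the NCP, and the sqrt is differentiable everywhere for mu > 0."""
    return np.sqrt(a ** 2 + F(a) ** 2 + 2.0 * mu ** 2) - a - F(a)

def dphi_mu(a, mu):
    r = np.sqrt(a ** 2 + F(a) ** 2 + 2.0 * mu ** 2)
    return (a + F(a) * dF(a)) / r - 1.0 - dF(a)

a, mu = 5.0, 1.0
for _ in range(50):
    a = a - phi_mu(a, mu) / dphi_mu(a, mu)  # Newton step on the smoothed map
    mu *= 0.5                               # drive the smoothing parameter to 0
# a converges to 1.0, the solution of the complementarity problem
```

For this instance the smoothed derivative satisfies dphi_mu < 0 for every a and μ > 0, so each Newton step is well defined even as μ shrinks toward the nonsmooth limit.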
Symbiosis between Linear Algebra and Optimization
, 1999
"... The efficiency and effectiveness of most optimization algorithms hinges on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
The efficiency and effectiveness of most optimization algorithms hinges on the numerical linear algebra algorithms that they utilize. Effective linear algebra is crucial to their success, and because of this, optimization applications have motivated fundamental advances in numerical linear algebra. This essay will highlight contributions of numerical linear algebra to optimization, as well as some optimization problems encountered within linear algebra that contribute to a symbiotic relationship. 1 Introduction The work in any continuous optimization algorithm neatly partitions into two pieces: the work in acquiring information through evaluation of the function and perhaps its derivatives, and the overhead involved in generating points approximating an optimal point. More often than not, this second part of the work is dominated by linear algebra, usually in the form of solution of a linear system or least squares problem and updating of matrix information. Thus, members of the optim...
Globally Convergent Broyden-like Methods for Semismooth Equations and Applications to VIP, NCP and MCP
, 1999
"... In this paper, we propose a general smoothing Broydenlike quasiNewton method for solving a class of nonsmooth equations. Under appropriate conditions, the proposed method converges to a solution of the equation globally and superlinearly. In particular, the proposed method provides the possibility ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
In this paper, we propose a general smoothing Broydenlike quasiNewton method for solving a class of nonsmooth equations. Under appropriate conditions, the proposed method converges to a solution of the equation globally and superlinearly. In particular, the proposed method provides the possibility of developing a quasiNewton method that enjoys superlinear convergence even if strict complementarity fails to hld. We pay particular attention to semismooth equations arising from nonlinear complementarity problems, mixed complementarity problems and variational inequality problems. We show that under certain conditions, the related methods based on the perturbed FischerBurmeister function, ChenHarkerKanzowSmale smoothing function and GabrielMore class of smoothing functions converge globally and superlinearly.
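To illustrate the derivative-free ingredient of such a smoothing quasi-Newton method: fix a small smoothing parameter and drive the smoothed Fischer–Burmeister equation to zero with a Broyden-like update, which in one dimension collapses to the secant rule. The NCP instance is the same hypothetical toy problem as above, not one from the paper:

```python
import numpy as np

MU = 1e-8  # fixed small smoothing parameter

def phi(a):
    """Smoothed Fischer-Burmeister map for the 1-D NCP with F(a) = a - 1
    (solution a = 1); no derivative of phi is used below."""
    Fa = a - 1.0
    return np.sqrt(a ** 2 + Fa ** 2 + 2.0 * MU ** 2) - a - Fa

a_prev, a = 5.0, 4.0
h_prev, h = phi(a_prev), phi(a)
for _ in range(60):
    B = (h - h_prev) / (a - a_prev)  # Broyden update (secant slope in 1-D)
    a_prev, h_prev = a, h
    a = a - h / B                    # quasi-Newton step: phi' never evaluated
    h = phi(a)
    if abs(h) < 1e-12:
        break
# a converges to 1.0 using only function values of the smoothed map
```

Trading the exact derivative for the secant slope is exactly what makes Broyden-like variants attractive when, as the abstract notes, strict complementarity may fail and derivatives of the underlying map are awkward to form.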