Interior-Point Methodology for 3D PET Reconstruction
, 2000
Abstract

Cited by 14 (0 self)
Interior-point methods have been successfully applied to a wide variety of linear and nonlinear programming applications. This paper presents a class of algorithms, based on path-following interior-point methodology, for performing regularized maximum-likelihood (ML) reconstructions on three-dimensional (3D) emission tomography data. The algorithms solve a sequence of subproblems that converge to the regularized maximum-likelihood solution from the interior of the feasible region (the nonnegative orthant). We propose two methods: a primal method, which updates only the primal image variables, and a primal-dual method, which simultaneously updates the primal variables and the Lagrange multipliers. A parallel implementation permits the interior-point methods to scale to very large reconstruction problems. Termination is based on well-defined convergence measures, namely, the Karush-Kuhn-Tucker first-order necessary conditions for optimality. We demonstrate the rapid convergence of the path-following interior-point methods using both data from a small animal scanner and Monte Carlo simulated data. The proposed methods can readily be applied to solve the regularized, weighted least squares reconstruction problem.
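The path-following idea described above, driving a barrier parameter to zero while staying strictly inside the nonnegative orthant, can be illustrated on a toy problem. This is a generic log-barrier sketch on a small quadratic, not the paper's PET algorithm; all names and parameter values are illustrative.

```python
# Minimal log-barrier path-following sketch: minimize a convex quadratic
# f(x) = 0.5 x^T A x - b^T x subject to x >= 0, as a toy stand-in for a
# regularized ML objective. Names and constants are illustrative only.
import numpy as np

def barrier_solve(A, b, mu0=1.0, shrink=0.2, newton_steps=20, tol=1e-8):
    n = len(b)
    x = np.ones(n)                  # strictly interior starting point
    mu = mu0
    while mu > tol:
        # Newton iterations on the barrier subproblem f(x) - mu * sum(log x)
        for _ in range(newton_steps):
            g = A @ x - b - mu / x
            H = A + np.diag(mu / x**2)
            dx = np.linalg.solve(H, -g)
            alpha = 1.0
            while np.any(x + alpha * dx <= 0):   # keep x strictly positive
                alpha *= 0.5
            x = x + alpha * dx
            if np.linalg.norm(g) < tol:
                break
        mu *= shrink                # follow the central path toward mu -> 0
    return x
```

As mu shrinks, the subproblem minimizers trace the central path and converge to the constrained solution from the interior.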
Numerical experience with limited-memory quasi-Newton methods and truncated Newton methods
 SIAM J. Optimization
, 1992
Abstract

Cited by 13 (9 self)
Abstract. Computational experience with several limited-memory quasi-Newton and truncated Newton methods for unconstrained nonlinear optimization is described. Comparative tests were conducted on a well-known test library [J. J. Moré, B. S. Garbow, and K. E. Hillstrom, ACM Trans. Math. Software, 7 (1981), pp. 17-41], on several synthetic problems allowing control of the clustering of eigenvalues in the Hessian spectrum, and on some large-scale problems in oceanography and meteorology. The results indicate that among the tested limited-memory quasi-Newton methods, the L-BFGS method [D. C. Liu and J. Nocedal, Math. Programming, 45 (1989), pp. 503-528] has the best overall performance for the problems examined. The numerical performance of two truncated Newton methods, differing in the inner-loop solution for the search vector, is competitive with that of L-BFGS. Key words: limited-memory quasi-Newton methods, truncated Newton methods, synthetic cluster functions, large-scale unconstrained minimization. AMS subject classifications: 90C30, 93C20, 93C75, 65K10, 76C20. 1. Introduction. Limited-memory quasi-Newton (LMQN) and truncated Newton
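The low storage cost that distinguishes L-BFGS from full quasi-Newton methods comes from the two-loop recursion, which applies the inverse-Hessian approximation to a vector using only the m most recent correction pairs. The sketch below is the generic textbook form with illustrative names, not code from the paper.

```python
# L-BFGS two-loop recursion: computes H_k @ grad from stored (s, y) pairs
# without ever forming a matrix. s_list/y_list hold the most recent pairs,
# oldest first. Names are illustrative.
import numpy as np

def two_loop(grad, s_list, y_list):
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:
        # initial scaling gamma = s^T y / y^T y (a common Liu-Nocedal choice)
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)
    r = q
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        rho = 1.0 / (y @ s)
        b = rho * (y @ r)
        r += (a - b) * s
    return r   # approximates H_k @ grad; the search direction is -r
```

With a single stored pair, the result satisfies the secant condition H y = s exactly, which is a convenient sanity check.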
BFGS with update skipping and varying memory
 SIAM J. Optim
, 1998
Abstract

Cited by 13 (3 self)
Abstract. We give conditions under which limited-memory quasi-Newton methods with exact line searches will terminate in n steps when minimizing n-dimensional quadratic functions. We show that although all Broyden family methods terminate in n steps in their full-memory versions, only BFGS does so with limited memory. Additionally, we show that full-memory Broyden family methods with exact line searches terminate in at most n + p steps when p matrix updates are skipped. We introduce new limited-memory BFGS variants and test them on nonquadratic minimization problems.
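The n-step termination property for full-memory BFGS with exact line searches can be checked numerically on a small convex quadratic, where the exact step length has the closed form alpha = -(g.p)/(p.A.p). This is a generic sketch under those assumptions, with illustrative names.

```python
# Numerical check of n-step termination: full-memory BFGS with exact line
# searches on f(x) = 0.5 x^T A x - b^T x, A symmetric positive definite.
import numpy as np

def bfgs_quadratic(A, b, x0):
    n = len(b)
    x = x0.astype(float)
    H = np.eye(n)                        # inverse-Hessian approximation
    g = A @ x - b
    steps = 0
    while np.linalg.norm(g) > 1e-8 and steps < 2 * n:
        p = -H @ g
        alpha = -(g @ p) / (p @ A @ p)   # exact line search for a quadratic
        s = alpha * p
        x = x + s
        g_new = A @ x - b
        y = g_new - g
        rho = 1.0 / (y @ s)              # y.s = s^T A s > 0 here
        V = np.eye(n) - rho * np.outer(s, y)
        H = V @ H @ V.T + rho * np.outer(s, s)   # BFGS inverse update
        g = g_new
        steps += 1
    return x, steps

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)              # symmetric positive definite
b = rng.standard_normal(n)
x, steps = bfgs_quadratic(A, b, np.zeros(n))
```

In exact arithmetic the iteration terminates in at most n steps, mirroring the conjugacy of the search directions.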
Performance of 4D-Var with Different Strategies for the Use of Adjoint Physics with the FSU Global Spectral Model
, 2000
Abstract

Cited by 11 (2 self)
A set of four-dimensional variational data assimilation (4D-Var) experiments was conducted using both a standard method and an incremental method in an identical-twin framework. The full-physics adjoint model of the Florida State University global spectral model (FSUGSM) was used in the standard 4D-Var, while the adjoint of only a few selected physical parameterizations was used in the incremental method. The impact of physical processes on 4D-Var was examined in detail by comparing the results of these experiments. The inclusion of full physics turned out to be significantly beneficial, in terms of assimilation error, to the lower troposphere during the entire minimization process. The beneficial impact was found to be primarily related to boundary-layer physics. The precipitation physics in the adjoint model also tended to have a beneficial impact after an intermediate number (50) of minimization iterations. Experimental results confirmed that the forecast from assimilation analyses with the full-physics adjoint model displays a shorter precipitation spin-up period. The beneficial impact on precipitation spin-up did not result solely from the inclusion of the precipitation physics in the adjoint model, but rather from the combined impact of several physical processes. The inclusion of full physics in the adjoint model had a detrimental impact on the rate of convergence at an early stage of the minimization process, but did not affect the final convergence.
Preconditioning Reduced Matrices
, 1996
Abstract

Cited by 11 (1 self)
We study preconditioning strategies for linear systems with positive-definite matrices of the form Z^T G Z, where Z is rectangular and G is symmetric but not necessarily positive definite. The preconditioning strategies are designed to be used in the context of a conjugate-gradient iteration, and are suitable within algorithms for constrained optimization problems. The techniques have other uses, however, and are applied here to a class of problems in the calculus of variations. Numerical tests are also included.
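A preconditioned conjugate-gradient iteration on a reduced system of this form might look like the following sketch, which uses a simple Jacobi (diagonal) preconditioner rather than the strategies studied in the paper; for simplicity G is made positive definite in the test, although the setting above only requires the reduced matrix Z^T G Z to be.

```python
# Preconditioned CG on the reduced system (Z^T G Z) u = r with a Jacobi
# preconditioner built from the reduced matrix's diagonal. Illustrative only.
import numpy as np

def pcg_reduced(Z, G, r, tol=1e-10, maxit=500):
    H = Z.T @ G @ Z                    # reduced matrix (p x p, p small)
    M_inv = 1.0 / np.diag(H)           # Jacobi preconditioner
    u = np.zeros_like(r)
    res = r - H @ u
    z = M_inv * res
    p = z.copy()
    for _ in range(maxit):
        Hp = H @ p
        alpha = (res @ z) / (p @ Hp)
        u += alpha * p
        res_new = res - alpha * Hp
        if np.linalg.norm(res_new) < tol:
            break
        z_new = M_inv * res_new
        beta = (res_new @ z_new) / (res @ z)
        p = z_new + beta * p
        res, z = res_new, z_new
    return u
```

In practice one would apply Z, G, and the preconditioner as operators rather than form H explicitly; it is formed here only to keep the sketch short.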
Analysis of Inexact Trust-Region Interior-Point SQP Algorithms
, 1995
Abstract

Cited by 11 (7 self)
In this paper we analyze inexact trust-region interior-point (TRIP) sequential quadratic programming (SQP) algorithms for the solution of optimization problems with nonlinear equality constraints and simple bound constraints on some of the variables. Such problems arise in many engineering applications, in particular in optimal control problems with bounds on the control. The nonlinear constraints often come from the discretization of partial differential equations. In such cases the calculation of derivative information and the solution of linearized equations is expensive. Often, the solution of linear systems and derivatives are computed inexactly, yielding nonzero residuals. This paper analyzes the effect of the inexactness on the convergence of TRIP SQP and gives practical rules to control the size of the residuals of these inexact calculations. It is shown that if the size of the residuals is of the order of both the size of the constraints and the trust-region radius, t...
Enriched Methods for Large-Scale Unconstrained Optimization
 Computational Optimization and Applications
, 2000
Abstract

Cited by 10 (0 self)
This paper describes a class of optimization methods that interlace iterations of the limited-memory BFGS method (L-BFGS) and a Hessian-free Newton method (HFN) in such a way that the information collected by one type of iteration improves the performance of the other. Curvature information about the objective function is stored in the form of a limited-memory matrix, and plays the dual role of preconditioning the inner conjugate gradient iteration in the HFN method and of providing an initial matrix for L-BFGS iterations. The lengths of the L-BFGS and HFN cycles are adjusted dynamically during the course of the optimization. Numerical experiments indicate that the new algorithms are both effective and not sensitive to the choice of parameters. Key words: limited-memory method, Hessian-free Newton method, truncated Newton method, L-BFGS, conjugate gradient method, quasi-Newton preconditioning. Departamento de Matemáticas, Instituto Tecnológico Autónomo de México, Río Hon...
Efficient implementation of the truncated-Newton algorithm for large-scale chemistry applications
 SIAM J. OPTIM
, 1999
Abstract

Cited by 10 (6 self)
To efficiently implement the truncated-Newton (TN) optimization method for large-scale, highly nonlinear functions in chemistry, an unconventional modified Cholesky (UMC) factorization is proposed to avoid large modifications to a problem-derived preconditioner, used in the inner loop in approximating the TN search vector at each step. The main motivation is to reduce the computational time of the overall method: large changes in standard modified Cholesky factorizations are found to increase the number of total iterations, as well as computational time, significantly. Since the UMC may generate an indefinite, rather than a positive definite, effective preconditioner, we prove that directions of descent still result. Hence, convergence to a local minimum can be shown, as in classic TN methods, for our UMC-based algorithm. Our incorporation of the UMC also requires changes in the TN inner loop regarding the negative-curvature test (which we replace by a descent-direction test) and the choice of exit directions. Numerical experiments demonstrate that the unconventional use of an indefinite preconditioner works much better than the minimizer without preconditioning or other minimizers available in the molecular mechanics and dynamics package CHARMM. Good performance of the resulting TN method for large potential energy problems is also shown with respect to the limited-memory BFGS method, tested both with and without preconditioning.
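For contrast with the UMC described above, the familiar "shift the diagonal until the factorization succeeds" device is the kind of large modification the UMC is designed to avoid. The baseline it improves on can be sketched as follows (a generic device, not the paper's UMC; names and constants are illustrative).

```python
# Baseline modified Cholesky by diagonal shifting: add tau * I, growing tau
# until numpy's Cholesky succeeds. Large tau distorts the preconditioner,
# which is exactly the behavior the paper's UMC tries to avoid.
import numpy as np

def shifted_cholesky(B, beta=1e-3, grow=10.0):
    tau = 0.0 if np.all(np.diag(B) > 0) else beta
    while True:
        try:
            L = np.linalg.cholesky(B + tau * np.eye(B.shape[0]))
            return L, tau
        except np.linalg.LinAlgError:   # not positive definite yet
            tau = max(grow * tau, beta)
```

For a strongly indefinite matrix, tau must grow until it dominates the most negative eigenvalue, which is why the resulting modification can be large.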
Second Order Information in Data Assimilation
, 2000
Abstract

Cited by 8 (6 self)
In variational data assimilation (VDA) for meteorological and/or oceanic models, the assimilated fields are deduced by combining the model and the gradient of a cost functional measuring the discrepancy between model solution and observation, via a first-order optimality system. However, existence and uniqueness of the VDA problem, along with convergence of the algorithms for its implementation, depend on the convexity of the cost function. Properties of local convexity can be deduced by studying the Hessian of the cost function in the vicinity of the optimum; hence the necessity of second-order information to ensure a unique solution to the VDA problem. In this paper we present a comprehensive review of issues related to second-order analysis of the VDA problem, along with many important issues closely connected to it. In particular, we study issues of existence, uniqueness, and regularization through second-order properties. We then focus on second-order information related to statistical properties, on issues related to preconditioning and optimization methods, and on second-order VDA analysis. Predictability and its relation to the structure of the Hessian of the cost functional is then discussed, along with issues of sensitivity analysis in the presence of data being assimilated. Computational complexity issues are also addressed, including automatic differentiation issues related to second-order information and the computational complexity of deriving the second-order adjoint. Finally
Second-Order Information in Data Assimilation
, 2002
Abstract

Cited by 8 (2 self)
In variational data assimilation (VDA) for meteorological and/or oceanic models, the assimilated fields are deduced by combining the model and the gradient of a cost functional measuring discrepancy between model solution and observation, via a first-order optimality system. However, existence and uniqueness of the VDA problem along with convergence of the algorithms for its implementation depend on the convexity of the cost function. Properties of local convexity can be deduced by studying the Hessian of the cost function in the vicinity of the optimum. This shows the necessity of second-order information to ensure a unique solution to the VDA problem.
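As a concrete illustration of the second-order check these abstracts describe, one can estimate the Hessian of a cost function by finite differences of its gradient and inspect the smallest eigenvalue; a positive value indicates local convexity near the point. This is a toy sketch, not a method from either paper, and assumes the gradient is available as a function.

```python
# Probe local convexity: build a finite-difference Hessian from the gradient
# (central differences, column by column) and check its smallest eigenvalue.
import numpy as np

def fd_hessian(grad, x, h=1e-5):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        H[:, i] = (grad(x + e) - grad(x - e)) / (2 * h)
    return 0.5 * (H + H.T)             # symmetrize against roundoff

def is_locally_convex(grad, x):
    lam_min = np.linalg.eigvalsh(fd_hessian(grad, x)).min()
    return lam_min > 0
```

In real VDA settings the Hessian is never formed; instead Hessian-vector products from a second-order adjoint feed a Lanczos-type eigenvalue estimate, but the convexity criterion is the same.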