Results 1-10 of 40
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
Cited by 74 (11 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques, as in the revised simplex method, with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
A modified Cholesky algorithm based on a symmetric indefinite factorization
 SIAM J. Matrix Anal. Appl.
, 1998
Cited by 22 (2 self)
Given a symmetric and not necessarily positive definite matrix A, a modified Cholesky algorithm computes a Cholesky factorization P(A + E)P^T = R^T R, where P is a permutation matrix and E is a perturbation chosen to make A + E positive definite. The aims include producing a small-normed E and making A + E reasonably well conditioned. Modified Cholesky factorizations are widely used in optimization. We propose a new modified Cholesky algorithm based on a symmetric indefinite factorization computed using a new pivoting strategy of Ashcraft, Grimes, and Lewis. We analyze the effectiveness of the algorithm, both in theory and practice, showing that the algorithm is competitive with the existing algorithms of Gill, Murray, and Wright and of Schnabel and Eskow. Attractive features of the new algorithm include easy-to-interpret inequalities that explain the extent to which it satisfies its design goals, and the fact that it can be implemented in terms of existing software.

Key words: modified Cholesky factorization, optimization, Newton's method, symmetric indefinite factorization
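As a simple illustration of the idea behind modified Cholesky factorizations, one can shift an indefinite symmetric matrix by a diagonal perturbation E until it is positive definite and then factor it. The sketch below uses the plain spectral-shift variant, NOT the symmetric-indefinite algorithm of this paper; the function name and tolerance are ours:

```python
import numpy as np

def modified_cholesky_demo(A, delta=1e-8):
    """Illustrative modified Cholesky: shift an indefinite symmetric A by a
    diagonal E so that A + E is positive definite, then factor A + E = R^T R.
    This is the simple spectral-shift variant, not the paper's algorithm."""
    lam_min = np.linalg.eigvalsh(A)[0]       # smallest eigenvalue of A
    tau = 0.0 if lam_min > delta else delta - lam_min
    E = tau * np.eye(A.shape[0])             # diagonal perturbation
    R = np.linalg.cholesky(A + E).T          # upper triangular: A + E = R^T R
    return R, E

A = np.array([[1.0, 2.0],
              [2.0, 1.0]])                   # indefinite: eigenvalues 3 and -1
R, E = modified_cholesky_demo(A)             # A + E is now positive definite
```

Real modified Cholesky algorithms (Gill-Murray-Wright, Schnabel-Eskow, and the one proposed here) perturb during the factorization itself rather than computing eigenvalues up front, which is far cheaper for large matrices.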
On Computing Metric Upgrades of Projective Reconstructions Under The Rectangular Pixel Assumption
, 2000
Cited by 18 (7 self)
This paper shows how to upgrade the projective reconstruction of a scene to a metric one in the case where the only assumption made about the cameras observing that scene is that they have rectangular pixels (zero-skew cameras). The proposed approach is based on a simple characterization of zero-skew projection matrices in terms of line geometry, and it handles zero-skew cameras with arbitrary or known aspect ratios in a unified framework. The metric upgrade computation is decomposed into a sequence of linear operations, including linear least-squares parameter estimation and eigenvalue-based symmetric matrix factorization, followed by an optional nonlinear least-squares refinement step. A few classes of critical motions for which a unique solution cannot be found are spelled out. A MATLAB implementation has been constructed and preliminary experiments with real data are presented.
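The eigenvalue-based symmetric matrix factorization mentioned in the abstract can be sketched generically as follows. This is a hedged NumPy illustration on a made-up low-rank matrix; the function name and interface are ours, and the paper applies the idea inside the metric-upgrade computation (e.g. to a rank-3 absolute quadric), not to random data:

```python
import numpy as np

def symmetric_rank_factor(Q, rank):
    """Eigenvalue-based symmetric factorization Q ~ H H^T (generic sketch).
    Keeps the 'rank' largest eigencomponents of a symmetric matrix Q."""
    w, V = np.linalg.eigh(Q)                  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:rank]          # indices of the largest eigenvalues
    w_top = np.clip(w[idx], 0.0, None)        # guard against tiny negatives
    return V[:, idx] * np.sqrt(w_top)         # scale eigenvector columns

rng = np.random.default_rng(0)
H0 = rng.standard_normal((4, 3))
Q = H0 @ H0.T                                 # symmetric, rank 3
H = symmetric_rank_factor(Q, 3)               # factor with H @ H.T equal to Q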
Adaptive Use of Iterative Methods in Predictor-Corrector Interior Point Methods for Linear Programming
 NUMERICAL ALGORITHMS
, 1999
Adaptive Use Of Iterative Methods In Interior Point Methods For Linear Programming
, 1995
Cited by 15 (3 self)
In this work we devise efficient algorithms for finding the search directions for interior point methods applied to linear programming problems. There are two innovations. The first is the updating of preconditioners computed for previous barrier parameters. The second is an adaptive, automated procedure for determining whether to use a direct or iterative solver, whether to reinitialize or update the preconditioner, and how many updates to apply. These decisions are based on predictions of the cost of using the different solvers to determine the next search direction, given the costs of determining earlier directions. These ideas are tested by applying a modified version of the OB1R code of Lustig, Marsten, and Shanno to a variety of problems from the NETLIB and other collections. If a direct method is appropriate for the problem, then our procedure chooses it, but when an iterative procedure is helpful, substantial gains in efficiency can be obtained.
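A heavily simplified sketch of the direct-versus-iterative choice described above, with a plain conjugate-gradient solver standing in for the paper's preconditioned iterative method and a trivial rule standing in for its cost-prediction model (both names and the rule are ours):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=200):
    """Plain conjugate gradients for a symmetric positive definite A
    (an illustrative stand-in for a preconditioned iterative solver)."""
    x = np.zeros_like(b)
    r = b - A @ x                        # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p        # new conjugate direction
        rs = rs_new
    return x

def choose_solver(cost_direct, cost_iterative):
    """Hypothetical stand-in for the paper's cost-prediction rule: pick the
    solver whose predicted cost (from earlier directions) is smaller."""
    return "direct" if cost_direct <= cost_iterative else "iterative"
```

The paper's actual procedure also decides whether to reinitialize or update the preconditioner and how many updates to apply; the point here is only the shape of the cost-based switch.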
Computing a Search Direction for Large-Scale Linearly-Constrained Nonlinear Optimization Calculations
, 1993
Cited by 12 (8 self)
We consider the computation of Newton-like search directions that are appropriate when solving large-scale linearly-constrained nonlinear optimization problems. We investigate the use of both direct and iterative methods and consider efficient ways of modifying the Newton equations in order to ensure global convergence of the underlying optimization methods.

1. Parallel Algorithms Team, CERFACS, 42 Ave. G. Coriolis, 31057 Toulouse Cedex, France
2. IAN-CNR, c/o Dipartimento di Matematica, via Abbiategrasso 209, 27100 Pavia, Italy
3. Department of Mathematics, University of California, 405 Hilgard Avenue, Los Angeles, CA 90024-1555, USA
4. Central Computing Department, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England
5. Current reports available by anonymous ftp from the directory "pub/reports" on camelot.cc.rl.ac.uk (internet 130.246.8.61)

Keywords: large-scale problems, unconstrained optimization, linearly constrained optimization, direct methods, iterative...
Efficient implementation of the truncated-Newton algorithm for large-scale chemistry applications
 SIAM J. OPTIM.
, 1999
Cited by 10 (6 self)
To efficiently implement the truncated-Newton (TN) optimization method for large-scale, highly nonlinear functions in chemistry, an unconventional modified Cholesky (UMC) factorization is proposed to avoid large modifications to a problem-derived preconditioner, used in the inner loop in approximating the TN search vector at each step. The main motivation is to reduce the computational time of the overall method: large changes in standard modified Cholesky factorizations are found to increase the number of total iterations, as well as computational time, significantly. Since the UMC may generate an indefinite, rather than a positive definite, effective preconditioner, we prove that directions of descent still result. Hence, convergence to a local minimum can be shown, as in classic TN methods, for our UMC-based algorithm. Our incorporation of the UMC also requires changes in the TN inner loop regarding the negative-curvature test (which we replace by a descent direction test) and the choice of exit directions. Numerical experiments demonstrate that the unconventional use of an indefinite preconditioner works much better than the minimizer without preconditioning or other minimizers available in the molecular mechanics and dynamics package CHARMM. Good performance of the resulting TN method for large potential energy problems is also shown with respect to the limited-memory BFGS method, tested both with and without preconditioning.
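A hedged sketch of a truncated-Newton inner loop of the kind the abstract discusses, with a descent-direction test in place of the usual negative-curvature exit. This is our simplified reading of that idea, not the paper's exact UMC-based logic, and the preconditioner is passed as an explicit inverse only for clarity:

```python
import numpy as np

def tn_inner_loop(H, g, M_inv, max_cg=50, eta=1e-2):
    """Preconditioned CG on H p = -g, truncated early (illustrative).
    The usual negative-curvature exit is replaced by a descent-direction
    test, following the abstract's description."""
    p = np.zeros_like(g)
    r = -g.copy()                     # residual of H p = -g at p = 0
    z = M_inv @ r
    d = z.copy()
    rz = r @ z
    for _ in range(max_cg):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:                  # non-positive curvature along d
            # keep p if it already gives descent, else fall back to the
            # preconditioned steepest-descent direction
            return p if p @ g < 0 else -(M_inv @ g)
        alpha = rz / dHd
        p = p + alpha * d
        r = r - alpha * Hd
        if np.linalg.norm(r) < eta * np.linalg.norm(g):
            break                     # truncate: p is accurate enough
        z = M_inv @ r
        rz_new = r @ z
        d = z + (rz_new / rz) * d
        rz = rz_new
    return p

H = np.array([[4.0, 1.0], [1.0, 3.0]])   # toy SPD Hessian
g = np.array([1.0, -2.0])                # toy gradient
p = tn_inner_loop(H, g, np.eye(2))       # search direction with p @ g < 0
```

With an indefinite preconditioner such as the UMC produces, the z = M_inv @ r steps are where the unconventional behaviour enters; the paper proves descent directions still result in that setting.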
Large-Scale Nonlinear Constrained Optimization: A Current Survey
, 1994
Cited by 9 (0 self)
Much progress has been made in constrained nonlinear optimization in the past ten years, but most large-scale problems still represent a considerable obstacle. In this survey paper we will attempt to give an overview of the current approaches, including interior and exterior methods and algorithms based upon trust regions and line searches. In addition, the importance of software, numerical linear algebra and testing will be addressed. We will try to explain why the difficulties arise, how attempts are being made to overcome them, and some of the problems that still remain. Although there will be some emphasis on the LANCELOT and CUTE projects, the intention is to give a broad picture of the state-of-the-art.

1. IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA
2. Parallel Algorithms Team, CERFACS, 42 Ave. G. Coriolis, 31057 Toulouse Cedex, France
3. Central Computing Department, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England
Second Order Information in Data Assimilation
, 2000
Cited by 8 (6 self)
In variational data assimilation (VDA) for meteorological and/or oceanic models, the assimilated fields are deduced by combining the model and the gradient of a cost functional measuring the discrepancy between model solution and observations, via a first-order optimality system. However, existence and uniqueness of the VDA problem, along with convergence of the algorithms for its implementation, depend on the convexity of the cost function. Properties of local convexity can be deduced by studying the Hessian of the cost function in the vicinity of the optimum; hence the necessity of second-order information to ensure a unique solution to the VDA problem. In this paper we present a comprehensive review of issues related to second-order analysis of the VDA problem, along with many important issues closely connected to it. In particular, we study issues of existence, uniqueness, and regularization through second-order properties. We then focus on second-order information related to statistical properties, on issues related to preconditioning and optimization methods, and on second-order VDA analysis. Predictability and its relation to the structure of the Hessian of the cost functional is then discussed, along with issues of sensitivity analysis in the presence of data being assimilated. Computational complexity issues are also addressed, including automatic differentiation for second-order information and the cost of deriving the second-order adjoint.
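As a toy illustration of checking local convexity through the Hessian, the sketch below forms a finite-difference Hessian of a made-up quadratic cost and tests its smallest eigenvalue. Operational VDA systems obtain this information from second-order adjoints rather than finite differences; the cost functional and all names here are our own:

```python
import numpy as np

def fd_hessian(J, x, h=1e-4):
    """Central finite-difference Hessian of a scalar cost functional J
    (a generic stand-in for second-order adjoint computations)."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (J(x + ei + ej) - J(x + ei - ej)
                       - J(x - ei + ej) + J(x - ei - ej)) / (4.0 * h * h)
    return H

# Made-up quadratic cost: local convexity <=> smallest Hessian eigenvalue > 0
J = lambda x: x[0] ** 2 + 3.0 * x[1] ** 2
H = fd_hessian(J, np.array([0.5, -0.2]))
locally_convex = bool(np.linalg.eigvalsh(H)[0] > 0)
```

For a realistic model, forming the full Hessian this way is infeasible; one instead probes Hessian-vector products, which is also what Hessian-based preconditioning of the minimization relies on.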
Second-Order Information in Data Assimilation
, 2002
Cited by 8 (2 self)
In variational data assimilation (VDA) for meteorological and/or oceanic models, the assimilated fields are deduced by combining the model and the gradient of a cost functional measuring the discrepancy between model solution and observations, via a first-order optimality system. However, existence and uniqueness of the VDA problem, along with convergence of the algorithms for its implementation, depend on the convexity of the cost function. Properties of local convexity can be deduced by studying the Hessian of the cost function in the vicinity of the optimum. This shows the necessity of second-order information to ensure a unique solution to the VDA problem.