Results 1–10 of 54
Preconditioning techniques for large linear systems: A survey
J. Comput. Phys., 2002
Cited by 118 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
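The algebraic preconditioning idea the survey covers can be illustrated with preconditioned conjugate gradients using a Jacobi (diagonal) preconditioner, one of the simplest algebraic choices. This is only a minimal sketch; the matrix and names below are made up for illustration:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for SPD A.
    M_inv applies the inverse of the preconditioner to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Hypothetical SPD test system; the Jacobi preconditioner is just
# division by the diagonal of A.
n = 50
rng = np.random.default_rng(0)
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)
b = rng.standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
```

More sophisticated algebraic preconditioners (incomplete factorizations, sparse approximate inverses) drop into the same `M_inv` slot.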
Incomplete Cholesky Factorizations With Limited Memory
SIAM J. Sci. Comput., 1999
Cited by 28 (5 self)
We propose an incomplete Cholesky factorization for the solution of large-scale trust-region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies the amount of additional memory (in multiples of n, the dimension of the problem) that is available; there is no need to specify a drop tolerance. Our numerical results show that the number of conjugate gradient iterations and the computing time are reduced dramatically for small values of p. We also show that, in contrast with drop tolerance strategies, the new approach is more stable in terms of the number of iterations and memory requirements.
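The flavor of incomplete Cholesky can be conveyed by the simplest zero-fill variant, IC(0), which keeps only positions already nonzero in the lower triangle of A. This is an illustrative dense-storage sketch, not the limited-memory factorization of the paper:

```python
import numpy as np

def ic0(A):
    """Zero-fill incomplete Cholesky: the factor L keeps only positions
    that are nonzero in tril(A). For general sparsity patterns it may
    break down (sqrt of a non-positive pivot); robust variants guard
    against this."""
    n = A.shape[0]
    pattern = np.tril(A) != 0.0
    L = np.tril(A).astype(float)
    for j in range(n):
        L[j, j] = np.sqrt(L[j, j])
        L[j + 1:, j] = np.where(pattern[j + 1:, j], L[j + 1:, j] / L[j, j], 0.0)
        for k in range(j + 1, n):
            # rank-1 update of trailing columns, restricted to the pattern
            L[k:, k] = np.where(pattern[k:, k], L[k:, k] - L[k:, j] * L[k, j], 0.0)
    return L

# With a fully dense pattern, IC(0) reduces to the exact Cholesky factor.
rng = np.random.default_rng(1)
Q = rng.standard_normal((6, 6))
A = Q @ Q.T + 6 * np.eye(6)
L = ic0(A)
```

The paper's approach instead bounds fill per column by a memory parameter p rather than by the sparsity pattern or a drop tolerance.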
Solving the trust-region subproblem using the Lanczos method
1997
Cited by 25 (1 self)
The approximate minimization of a quadratic function within an ellipsoidal trust region is an important subproblem for many nonlinear programming methods. When the number of variables is large, the most widely used strategy is to trace the path of conjugate gradient iterates either to convergence or until it reaches the trust-region boundary. In this paper, we investigate ways of continuing the process once the boundary has been encountered. The key is to observe that the trust-region problem within the currently generated Krylov subspace has a very special structure which enables it to be solved very efficiently. We compare the new strategy with existing methods. The resulting software package is available as HSL VF05 within the Harwell Subroutine Library.
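The boundary-stopping strategy described here (before any continuation past the boundary) is the classic Steihaug–Toint truncated CG. A minimal sketch under assumed names, not the HSL VF05 algorithm, for min gᵀp + ½ pᵀBp subject to ||p|| ≤ Δ:

```python
import numpy as np

def boundary_tau(p, d, delta):
    """Positive root of ||p + tau*d|| = delta."""
    a = d @ d
    b = 2 * (p @ d)
    c = p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def steihaug_cg(B, g, delta, tol=1e-8, max_iter=100):
    """Trace the CG path for min g.p + 0.5 p.B.p within ||p|| <= delta;
    stop at the boundary or on negative curvature."""
    p = np.zeros_like(g)
    r = g.copy()          # model gradient at p = 0
    d = -r
    if np.linalg.norm(r) < tol:
        return p
    for _ in range(max_iter):
        Bd = B @ d
        dBd = d @ Bd
        if dBd <= 0:                        # negative curvature:
            return p + boundary_tau(p, d, delta) * d   # step to the boundary
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:            # boundary encountered
            return p + boundary_tau(p, d, delta) * d
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        d = -r_next + ((r_next @ r_next) / (r @ r)) * d
        p, r = p_next, r_next
    return p

B = np.eye(3)
g = np.array([1.0, 0.0, 0.0])
p_interior = steihaug_cg(B, g, delta=10.0)  # Newton point lies inside
p_boundary = steihaug_cg(B, g, delta=0.5)   # step truncated at the boundary
```

The paper's contribution begins where this sketch stops: continuing the minimization over the Krylov subspace after the boundary has been hit.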
A modified Cholesky algorithm based on a symmetric indefinite factorization
SIAM J. Matrix Anal. Appl., 1998
Cited by 24 (2 self)
Abstract. Given a symmetric and not necessarily positive definite matrix A, a modified Cholesky algorithm computes a Cholesky factorization P(A + E)P^T = R^T R, where P is a permutation matrix and E is a perturbation chosen to make A + E positive definite. The aims include producing a small-normed E and making A + E reasonably well conditioned. Modified Cholesky factorizations are widely used in optimization. We propose a new modified Cholesky algorithm based on a symmetric indefinite factorization computed using a new pivoting strategy of Ashcraft, Grimes, and Lewis. We analyze the effectiveness of the algorithm, both in theory and practice, showing that the algorithm is competitive with the existing algorithms of Gill, Murray, and Wright and of Schnabel and Eskow. Attractive features of the new algorithm include easy-to-interpret inequalities that explain the extent to which it satisfies its design goals, and the fact that it can be implemented in terms of existing software.
Key words: modified Cholesky factorization, optimization, Newton's method, symmetric indefinite factorization
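The contract of a modified Cholesky algorithm, producing E with A + E = RᵀR positive definite, can be illustrated with a naive eigenvalue-clamping perturbation. This sketches only the interface (and omits the permutation P); it is NOT the symmetric-indefinite-factorization algorithm of the paper:

```python
import numpy as np

def eigen_modified(A, delta=1e-8):
    """Illustrative modified Cholesky: make A + E positive definite by
    clamping eigenvalues below delta. Dense and O(n^3) with a full
    eigendecomposition; real modified Cholesky algorithms perturb
    during the factorization itself."""
    w, V = np.linalg.eigh(A)
    A_pd = V @ np.diag(np.maximum(w, delta)) @ V.T
    E = A_pd - A
    R = np.linalg.cholesky(A_pd).T   # upper-triangular R with A + E = R^T R
    return E, R

A = np.array([[2.0, 0.0],
              [0.0, -1.0]])          # indefinite test matrix
E, R = eigen_modified(A)
```

Note how E leaves the already-positive part of A untouched here; the algorithms compared in the paper aim for a similar small-normed E without the cost of an eigendecomposition.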
On Computing Metric Upgrades of Projective Reconstructions Under The Rectangular Pixel Assumption
2000
Cited by 19 (7 self)
This paper shows how to upgrade the projective reconstruction of a scene to a metric one in the case where the only assumption made about the cameras observing that scene is that they have rectangular pixels (zero-skew cameras). The proposed approach is based on a simple characterization of zero-skew projection matrices in terms of line geometry, and it handles zero-skew cameras with arbitrary or known aspect ratios in a unified framework. The metric upgrade computation is decomposed into a sequence of linear operations, including linear least-squares parameter estimation and eigenvalue-based symmetric matrix factorization, followed by an optional nonlinear least-squares refinement step. A few classes of critical motions for which a unique solution cannot be found are spelled out. A MATLAB implementation has been constructed and preliminary experiments with real data are presented.
A Robust Incomplete Factorization Preconditioner for Positive Definite Matrices
2001
Cited by 18 (3 self)
"... this paper we introduce a preconditioner that strikes a compromise between these two extremes ..."
An overview of unconstrained optimization
[Online]. Available: citeseer.ist.psu.edu/fletcher93overview.html
1993
"... bundle filter method for nonsmooth nonlinear ..."
Efficient implementation of the truncated-Newton algorithm for large-scale chemistry applications
SIAM J. Optim., 1999
Cited by 13 (6 self)
To efficiently implement the truncated-Newton (TN) optimization method for large-scale, highly nonlinear functions in chemistry, an unconventional modified Cholesky (UMC) factorization is proposed to avoid large modifications to a problem-derived preconditioner, used in the inner loop in approximating the TN search vector at each step. The main motivation is to reduce the computational time of the overall method: large changes in standard modified Cholesky factorizations are found to increase the number of total iterations, as well as computational time, significantly. Since the UMC may generate an indefinite, rather than a positive definite, effective preconditioner, we prove that directions of descent still result. Hence, convergence to a local minimum can be shown, as in classic TN methods, for our UMC-based algorithm. Our incorporation of the UMC also requires changes in the TN inner loop regarding the negative-curvature test (which we replace by a descent direction test) and the choice of exit directions. Numerical experiments demonstrate that the unconventional use of an indefinite preconditioner works much better than the minimizer without preconditioning or other minimizers available in the molecular mechanics and dynamics package CHARMM. Good performance of the resulting TN method for large potential energy problems is also shown with respect to the limited-memory BFGS method, tested both with and without preconditioning.
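The outer/inner structure of a truncated-Newton method, an inexact CG inner solve plus a descent safeguard, can be sketched as follows. This is a generic TN skeleton under assumed names, not the UMC-preconditioned algorithm of the paper:

```python
import numpy as np

def cg_inner(Hv, b, tol=1e-8, max_iter=20):
    """Approximately solve H x = b by CG, given only Hessian-vector
    products Hv; bail out on negative curvature."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    for _ in range(max_iter):
        Hp = Hv(p)
        pHp = p @ Hp
        if pHp <= 0:                  # negative curvature encountered
            return x if np.any(x) else b
        alpha = (r @ r) / pHp
        x += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

def truncated_newton(f, grad, hessvec, x0, tol=1e-6, max_outer=50):
    """TN outer loop: each step solves H d = -g only approximately."""
    x = x0.astype(float)
    for _ in range(max_outer):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = cg_inner(lambda v: hessvec(x, v), -g)
        if g @ d >= 0:                # descent safeguard
            d = -g
        t, fx = 1.0, f(x)             # simple Armijo backtracking
        while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x

# Hypothetical convex quadratic test: minimizer solves A x = b.
rng = np.random.default_rng(2)
Q = rng.standard_normal((5, 5))
A = Q @ Q.T + 5 * np.eye(5)
b = rng.standard_normal(5)
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
hessvec = lambda x, v: A @ v
x_star = truncated_newton(f, grad, hessvec, np.zeros(5))
```

A preconditioner would be applied inside `cg_inner`; the paper's point is that an indefinite but problem-derived preconditioner there can beat a heavily modified positive definite one.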
Nonmonotone curvilinear linesearch methods for unconstrained optimization
Computational Optimization and Applications, 1996
Cited by 13 (5 self)
Abstract. We present a new algorithmic framework for solving unconstrained minimization problems that incorporates a curvilinear linesearch. The search direction used in our framework is a combination of an approximate Newton direction and a direction of negative curvature. Global convergence to a stationary point where the Hessian matrix is positive semidefinite is exhibited for this class of algorithms by means of a nonmonotone stabilization strategy. An implementation using the Bunch-Parlett decomposition is shown to outperform several other techniques on a large class of test problems.
A Revised Modified Cholesky Factorization Algorithm
SIAM J. Optim., 1999
Cited by 13 (1 self)
A modified Cholesky factorization algorithm introduced originally by Gill and Murray, and refined by Gill, Murray, and Wright, is used extensively in optimization algorithms. Since its introduction in 1990, a different modified Cholesky factorization of Schnabel and Eskow has also gained widespread usage. Compared with the Gill-Murray-Wright algorithm, the Schnabel-Eskow algorithm has a smaller a priori bound on the perturbation added to ensure positive definiteness, and some computational advantages, especially for large problems. Users of the Schnabel-Eskow algorithm, however, have reported cases from two different contexts where it makes a far larger modification to the original matrix than is necessary and than is made by the Gill-Murray-Wright method. This paper reports a simple modification to the Schnabel-Eskow algorithm that appears to correct all the known computational difficulties with the method, without harming its theoretical properties, or its computational behavior ...