Results 1–10 of 42
Preconditioning techniques for large linear systems: A survey
 J. Comput. Phys.
, 2002
Abstract

Cited by 103 (5 self)
This article surveys preconditioning techniques for the iterative solution of large linear systems, with a focus on algebraic methods suitable for general sparse matrices. Covered topics include progress in incomplete factorization methods, sparse approximate inverses, reorderings, parallelization issues, and block and multilevel extensions. Some of the challenges ahead are also discussed. An extensive bibliography completes the paper.
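As an illustration of the kind of algebraic preconditioning the survey covers, the sketch below uses scipy's incomplete-LU factorization to precondition conjugate gradients on a small 1-D Laplacian. The matrix, tolerances, and the choice of ILU (a generic stand-in; for SPD systems an incomplete Cholesky would be the natural choice) are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# 1-D Laplacian: a simple sparse SPD test matrix
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

ilu = spla.spilu(A, drop_tol=1e-4)                   # incomplete factorization
M = spla.LinearOperator((n, n), matvec=ilu.solve)    # preconditioner action

iters = {"plain": 0, "prec": 0}
def count(key):
    def cb(xk):
        iters[key] += 1
    return cb

x1, _ = spla.cg(A, b, callback=count("plain"))
x2, _ = spla.cg(A, b, M=M, callback=count("prec"))
print(iters)  # the preconditioned run typically needs far fewer iterations
```

For this tridiagonal matrix the incomplete factorization incurs no dropped fill, so the preconditioned solver converges almost immediately, while plain CG needs on the order of n iterations.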
Incomplete Cholesky Factorizations With Limited Memory
 SIAM J. Sci. Comput.
, 1999
Abstract

Cited by 27 (5 self)
We propose an incomplete Cholesky factorization for the solution of large-scale trust-region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies the amount of additional memory (in multiples of n, the dimension of the problem) that is available; there is no need to specify a drop tolerance. Our numerical results show that the number of conjugate gradient iterations and the computing time are reduced dramatically for small values of p. We also show that, in contrast with drop-tolerance strategies, the new approach is more stable in terms of the number of iterations and memory requirements.
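The no-fill special case of such a factorization, where only positions already nonzero in A are kept (roughly the spirit of the paper's memory parameter at its smallest setting), can be sketched in a few lines of numpy. The routine below is an illustration of incomplete Cholesky in general, not the authors' code.

```python
import numpy as np

def ic0(A):
    """Zero-fill incomplete Cholesky: compute L with L @ L.T ~= A,
    keeping only entries where A itself is nonzero (all fill is dropped).
    A memory parameter like the paper's p would instead allow a limited
    amount of extra fill per column."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            if A[i, j] != 0.0:  # drop fill outside A's sparsity pattern
                L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```

For a tridiagonal SPD matrix the exact Cholesky factor already fits the pattern, so ic0 reproduces it exactly; for matrices with more fill it yields only an approximate factor, which is what makes it useful as a preconditioner.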
A modified Cholesky algorithm based on a symmetric indefinite factorization
 SIAM J. Matrix Anal. Appl
, 1998
Abstract

Cited by 22 (2 self)
Given a symmetric and not necessarily positive definite matrix A, a modified Cholesky algorithm computes a Cholesky factorization P(A + E)P^T = R^T R, where P is a permutation matrix and E is a perturbation chosen to make A + E positive definite. The aims include producing a small-normed E and making A + E reasonably well conditioned. Modified Cholesky factorizations are widely used in optimization. We propose a new modified Cholesky algorithm based on a symmetric indefinite factorization computed using a new pivoting strategy of Ashcraft, Grimes, and Lewis. We analyze the effectiveness of the algorithm, both in theory and in practice, showing that the algorithm is competitive with the existing algorithms of Gill, Murray, and Wright and of Schnabel and Eskow. Attractive features of the new algorithm include easy-to-interpret inequalities that explain the extent to which it satisfies its design goals, and the fact that it can be implemented in terms of existing software. Key words: modified Cholesky factorization, optimization, Newton's method, symmetric indefinite factorization.
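For contrast, the simplest member of this family, a diagonal-shift modified Cholesky that repeatedly adds tau*I until the factorization succeeds, can be sketched as follows. This is not the paper's symmetric-indefinite-based algorithm, but it produces the same kind of object: a factorization R^T R = A + E with A + E positive definite and E hopefully small.

```python
import numpy as np

def modified_cholesky(A, beta=1e-3):
    """Cholesky of A + E with E = tau*I, increasing tau until A + tau*I
    is positive definite. A classic diagonal-shift variant, much cruder
    than the symmetric-indefinite approach of the paper, shown only to
    illustrate the goal: R.T @ R == A + tau*I."""
    dmin = np.min(np.diag(A))
    tau = 0.0 if dmin > 0 else beta - dmin
    I = np.eye(A.shape[0])
    while True:
        try:
            R = np.linalg.cholesky(A + tau * I).T  # upper-triangular factor
            return R, tau
        except np.linalg.LinAlgError:              # not PD yet: grow the shift
            tau = max(2 * tau, beta)
```

A drawback the paper's approach avoids: the shift tau*I perturbs every diagonal entry equally, so E can be much larger in norm than necessary for matrices that are indefinite in only a few directions.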
Solving the trust-region subproblem using the Lanczos method
, 1997
Abstract

Cited by 21 (1 self)
The approximate minimization of a quadratic function within an ellipsoidal trust region is an important subproblem for many nonlinear programming methods. When the number of variables is large, the most widely used strategy is to trace the path of conjugate gradient iterates either to convergence or until it reaches the trust-region boundary. In this paper, we investigate ways of continuing the process once the boundary has been encountered. The key is to observe that the trust-region problem within the currently generated Krylov subspace has very special structure which enables it to be solved very efficiently. We compare the new strategy with existing methods. The resulting software package is available as HSL VF05 within the Harwell Subroutine Library.
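The baseline strategy that the paper extends, following conjugate-gradient iterates and stopping at the trust-region boundary (Steihaug-Toint truncated CG), can be sketched as follows. This is an illustration of that baseline, not the HSL VF05 code, and the function names are assumptions.

```python
import numpy as np

def steihaug_cg(H, g, delta, tol=1e-8, maxiter=100):
    """Truncated CG for min 0.5 p'Hp + g'p subject to ||p|| <= delta.
    Stops at the trust-region boundary when the CG path leaves it or
    negative curvature is detected; the paper's contribution is to
    continue usefully past this point via the Lanczos connection."""
    p = np.zeros_like(g)
    r, d = -g.copy(), -g.copy()
    for _ in range(maxiter):
        Hd = H @ d
        dHd = d @ Hd
        if dHd <= 0:  # negative curvature: follow d to the boundary
            return p + _to_boundary(p, d, delta)
        alpha = (r @ r) / dHd
        if np.linalg.norm(p + alpha * d) >= delta:
            return p + _to_boundary(p, d, delta)
        p = p + alpha * d
        r_new = r - alpha * Hd
        if np.linalg.norm(r_new) < tol:
            return p
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

def _to_boundary(p, d, delta):
    """Return sigma*d with sigma >= 0 chosen so ||p + sigma*d|| = delta."""
    a, b, c = d @ d, 2 * (p @ d), p @ p - delta**2
    sigma = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return sigma * d
```

When delta is large the iteration reduces to plain CG on the Newton system; when the boundary intervenes, the returned point is only a heuristic minimizer, which is exactly the gap the paper addresses.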
On Computing Metric Upgrades of Projective Reconstructions Under The Rectangular Pixel Assumption
, 2000
Abstract

Cited by 18 (7 self)
This paper shows how to upgrade the projective reconstruction of a scene to a metric one in the case where the only assumption made about the cameras observing that scene is that they have rectangular pixels (zero-skew cameras). The proposed approach is based on a simple characterization of zero-skew projection matrices in terms of line geometry, and it handles zero-skew cameras with arbitrary or known aspect ratios in a unified framework. The metric upgrade computation is decomposed into a sequence of linear operations, including linear least-squares parameter estimation and eigenvalue-based symmetric matrix factorization, followed by an optional nonlinear least-squares refinement step. A few classes of critical motions for which a unique solution cannot be found are spelled out. A MATLAB implementation has been constructed and preliminary experiments with real data are presented.
A Robust Incomplete Factorization Preconditioner for Positive Definite Matrices
, 2001
Abstract

Cited by 15 (3 self)
In this paper we introduce a preconditioner that strikes a compromise between these two extremes.
An overview of unconstrained optimization
 [Online]. Available: citeseer.ist.psu.edu/fletcher93overview.html
, 1993
"... bundle filter method for nonsmooth nonlinear ..."
Computing a Search Direction for Large-Scale Linearly-Constrained Nonlinear Optimization Calculations
, 1993
Abstract

Cited by 12 (8 self)
We consider the computation of Newton-like search directions that are appropriate when solving large-scale linearly-constrained nonlinear optimization problems. We investigate the use of both direct and iterative methods and consider efficient ways of modifying the Newton equations in order to ensure global convergence of the underlying optimization methods. Keywords: large-scale problems, unconstrained optimization, linearly constrained optimization, direct methods, iterative...
Nonmonotone Curvilinear Line Search Methods for Unconstrained Optimization
 Computational Optimization and Applications
, 1995
Abstract

Cited by 12 (4 self)
We present a new algorithmic framework for solving unconstrained minimization problems that incorporates a curvilinear line search. The search direction used in our framework is a combination of an approximate Newton direction and a direction of negative curvature. Global convergence to a stationary point where the Hessian matrix is positive semidefinite is exhibited for this class of algorithms by means of a nonmonotone stabilization strategy. An implementation using the Bunch-Parlett decomposition is shown to outperform several other techniques on a large class of test problems.

1 Introduction

In this work we consider the unconstrained minimization problem min_{x in R^n} f(x), where f is a real-valued function on R^n. We assume throughout that both the gradient g(x) := ∇f(x) and the Hessian matrix H(x) := ∇²f(x) of f exist and are continuous. Many iterative methods for solving this problem have been proposed; they are usually descent methods that generate a sequence {x_k} su...
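A minimal, monotone version of such a curvilinear step, x(alpha) = x + alpha^2 s + alpha d with s an approximate Newton direction and d a direction of negative curvature, can be sketched as follows. The paper obtains s and d from a Bunch-Parlett factorization and uses a nonmonotone acceptance rule; this sketch substitutes an eigendecomposition and a plain sufficient-decrease test, and all names and constants are illustrative.

```python
import numpy as np

def curvilinear_step(f, x, g, H, alpha0=1.0, c=1e-4, shrink=0.5):
    """One monotone curvilinear line-search step along
    x(alpha) = x + alpha**2 * s + alpha * d.
    s: modified Newton direction from an eigendecomposition of H
       (negative eigenvalues replaced by their absolute values);
    d: eigenvector of the most negative eigenvalue, oriented downhill."""
    w, V = np.linalg.eigh(H)                     # w ascending
    s = -(V * (1.0 / np.maximum(np.abs(w), 1e-8))) @ (V.T @ g)
    if w[0] < 0:                                 # negative curvature present
        d = V[:, 0] if V[:, 0] @ g <= 0 else -V[:, 0]
    else:
        d = np.zeros_like(g)
    fx = f(x)
    alpha = alpha0
    while alpha > 1e-12:
        xn = x + alpha**2 * s + alpha * d
        if f(xn) <= fx - c * alpha**2:           # simple decrease test
            return xn
        alpha *= shrink                          # backtrack
    return x
```

Near a saddle point the Newton-like direction s alone stalls, while the negative-curvature term alpha*d lets the iterate escape, which is the motivation for the curvilinear form.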
A Revised Modified Cholesky Factorization Algorithm
 SIAM J. Optim
, 1999
Abstract

Cited by 11 (1 self)
A modified Cholesky factorization algorithm, introduced originally by Gill and Murray and refined by Gill, Murray and Wright, is used extensively in optimization algorithms. Since its introduction in 1990, a different modified Cholesky factorization of Schnabel and Eskow has also gained widespread usage. Compared with the Gill-Murray-Wright algorithm, the Schnabel-Eskow algorithm has a smaller a priori bound on the perturbation added to ensure positive definiteness, and some computational advantages, especially for large problems. Users of the Schnabel-Eskow algorithm, however, have reported cases from two different contexts where it makes a far larger modification to the original matrix than is necessary and than is made by the Gill-Murray-Wright method. This paper reports a simple modification to the Schnabel-Eskow algorithm that appears to correct all the known computational difficulties with the method, without harming its theoretical properties or its computational behavior in any ot...