Results 1–10 of 28
Newton's Method for Large Bound-Constrained Optimization Problems
 SIAM JOURNAL ON OPTIMIZATION
, 1998
Abstract

Cited by 74 (4 self)
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
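The basic primitive behind bound-constrained methods of this kind is projection onto the feasible box. The sketch below is a toy illustration only (a plain projected gradient iteration, not the paper's trust region Newton method); all function names and the example objective are invented for illustration:

```python
def project_box(x, lo, hi):
    """Project a point componentwise onto the box {v : lo <= v <= hi}."""
    return [min(max(xi, l), u) for xi, l, u in zip(x, lo, hi)]

def projected_gradient_step(x, grad, alpha, lo, hi):
    """One projected-gradient step: move along -grad, then project back."""
    trial = [xi - alpha * gi for xi, gi in zip(x, grad)]
    return project_box(trial, lo, hi)

# Toy problem: minimize f(x) = (x0 - 2)^2 + (x1 + 1)^2 over the box [0,1] x [0,1].
# The unconstrained minimizer (2, -1) lies outside the box; the
# constrained minimizer is (1, 0), with both bounds active.
x = [0.5, 0.5]
lo, hi = [0.0, 0.0], [1.0, 1.0]
for _ in range(50):
    grad = [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 1.0)]  # gradient of f
    x = projected_gradient_step(x, grad, 0.1, lo, hi)
# x converges to the constrained minimizer (1, 0)
```

Note how the active set is identified by the projection itself, without any explicit list of constraints; this is the sense in which such methods work with the geometry of the feasible set rather than its representation.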
Krylov subspace methods on supercomputers
 SIAM J. SCI. STAT. COMPUT
, 1989
Abstract

Cited by 68 (4 self)
This paper presents a short survey of recent research on Krylov subspace methods, with emphasis on implementation on vector and parallel computers. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to deriving effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty with incomplete factorization preconditioners is the solution of the triangular systems at each step. We describe in detail a few approaches to implementing efficient forward and backward triangular solutions. Then we discuss polynomial preconditioning as an alternative to standard incomplete factorization techniques. Another efficient approach is to reorder the equations so as to improve the structure of the matrix and achieve better parallelism or vectorization. We give an overview of these ideas and others, and attempt to comment on their effectiveness or potential for different types of architectures.
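For reference, the conjugate gradient iteration that this survey builds on can be sketched in a few lines of plain Python (unpreconditioned, dense storage; a real supercomputer implementation would of course vectorize the matrix-vector product and inner products, which is exactly the concern of the survey):

```python
def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Unpreconditioned CG for a symmetric positive-definite system Ax = b."""
    n = len(b)
    x = [0.0] * n
    r = b[:]          # residual of the zero initial guess
    p = r[:]
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol ** 2:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# 1-D Poisson-style tridiagonal system: A = tridiag(-1, 2, -1), b = ones.
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n
x = conjugate_gradient(A, b)
```

The only non-trivial kernels are `matvec` and `dot`; on vector and parallel machines these parallelize well, which is why the hard part of the story is the preconditioner, not CG itself.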
Experimental Study of ILU Preconditioners for Indefinite Matrices
 J. COMPUT. APPL. MATH
, 1997
Abstract

Cited by 60 (8 self)
Incomplete LU factorization preconditioners have been surprisingly successful for many cases of general nonsymmetric and indefinite matrices. However, their failure rate is still too high for them to be useful as black-box library software for general matrices. Besides fatal breakdowns due to zero pivots, the major causes of failure are inaccuracy and instability of the triangular solves. Both problems can occur when there are small pivots, but they can also occur without them. Through examples from actual problems, this paper shows how these problems manifest themselves, how they can be detected, and how they can sometimes be circumvented through pivoting, reordering, scaling, perturbing diagonal elements, and preserving symmetric structure. The goal of this paper is to gain a better practical understanding of ILU preconditioners and to help improve their reliability.
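A minimal ILU(0) sketch with one of the safeguards mentioned above, perturbation of (near-)zero pivots, looks as follows. This is a generic dense-stored illustration of the technique, not the authors' code, and the perturbation rule is the simplest one imaginable:

```python
def ilu0(A, delta=1e-8):
    """ILU(0): incomplete LU keeping only the sparsity pattern of A
    (dense storage; zero entries of A mark positions outside the pattern).
    Pivots smaller than delta are replaced by +/-delta, a crude example of
    the diagonal perturbation safeguard."""
    n = len(A)
    LU = [row[:] for row in A]
    pattern = [[A[i][j] != 0.0 for j in range(n)] for i in range(n)]
    for i in range(1, n):
        for k in range(i):
            if not pattern[i][k]:
                continue
            pivot = LU[k][k]
            if abs(pivot) < delta:              # guard against (near-)zero pivots
                pivot = delta if pivot >= 0 else -delta
            LU[i][k] /= pivot
            for j in range(k + 1, n):
                if pattern[i][j]:               # drop all fill outside the pattern
                    LU[i][j] -= LU[i][k] * LU[k][j]
    return LU

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
LU = ilu0(A)   # a tridiagonal pattern incurs no fill, so this equals the exact LU
```

On indefinite matrices the dropped fill and perturbed pivots can make `LU` a poor approximation even when the factorization completes, which is precisely the failure mode the paper studies experimentally.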
Incomplete Cholesky Factorizations With Limited Memory
 SIAM J. SCI. COMPUT
, 1999
Abstract

Cited by 27 (5 self)
We propose an incomplete Cholesky factorization for the solution of large-scale trust region subproblems and positive definite systems of linear equations. This factorization depends on a parameter p that specifies the amount of additional memory (in multiples of n, the dimension of the problem) that is available; there is no need to specify a drop tolerance. Our numerical results show that the number of conjugate gradient iterations and the computing time are reduced dramatically for small values of p. We also show that, in contrast with drop-tolerance strategies, the new approach is more stable in terms of the number of iterations and memory requirements.
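The memory-limited idea can be sketched as follows: after each column of the factor is formed, keep only a fixed number of its largest-magnitude entries, with the budget tied to the original sparsity plus p extras. This is loosely modeled on the abstract's description, not the paper's actual algorithm, and the function name is invented:

```python
def limited_memory_ic(A, p):
    """Sketch of a memory-limited incomplete Cholesky (dense storage):
    column j of L keeps only its (nnz_j + p) largest-magnitude subdiagonal
    entries, where nnz_j counts the subdiagonal nonzeros of column j of A.
    No drop tolerance is needed; memory use is bounded a priori."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        d = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = d ** 0.5   # assumes d > 0; real codes shift the diagonal otherwise
        col = []
        for i in range(j + 1, n):
            v = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
            col.append((abs(v), i, v))
        keep = sum(1 for i in range(j + 1, n) if A[i][j] != 0.0) + p
        for _, i, v in sorted(col, reverse=True)[:keep]:
            L[i][j] = v      # everything outside the budget is dropped
    return L

A = [[4.0, 2.0, 0.0],
     [2.0, 5.0, 2.0],
     [0.0, 2.0, 5.0]]
L = limited_memory_ic(A, p=3)   # p >= n keeps everything: full Cholesky here
```

With p = 0 the factor costs no more memory than A itself; increasing p trades memory for preconditioner quality, which is the knob the paper's experiments turn.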
An Improved Incomplete Cholesky Factorization
 ACM Trans. Math. Software
, 1995
Abstract

Cited by 23 (0 self)
Incomplete factorization has been shown to be a good preconditioner for the conjugate gradient method on a wide variety of problems. It is well known that allowing some fill-in during the incomplete factorization can significantly reduce the number of iterations needed for convergence. Allowing fill-in, however, increases the time for the factorization and for the triangular system solves. In addition, it is difficult to predict a priori how much fill-in to allow and how to allow it. The unpredictability of the required storage and work, and the unknown benefit of the additional fill-in, make such strategies impractical to use in many situations. In this paper we motivate, and then present, two "black-box" strategies that significantly increase the effectiveness of incomplete Cholesky factorization as a preconditioner. These strategies require no parameters from the user and do not increase the cost of the triangular system solves. Efficient implementations for these algorithms are described ...
On the relations between ILUs and factored approximate inverses
, 2001
Abstract

Cited by 17 (5 self)
This paper discusses some relationships between incomplete LU (ILU) factorization techniques and factored sparse approximate inverse (AINV) techniques. While ILU factorizations compute approximate LU factors of the coefficient matrix A, AINV techniques aim at building triangular matrices Z and W such that W A Z is approximately diagonal. The paper shows that certain forms of approximate inverse techniques amount to approximately inverting the triangular factors obtained from some variants of incomplete LU factorization of the original matrix. A few useful, already known applications of these relationships are also reviewed.
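The exact-arithmetic limit of this relationship is easy to verify numerically: if A = LU is a complete factorization, then taking W = L⁻¹ and Z = U⁻¹ gives W A Z = I, a (trivially) diagonal matrix. Incomplete versions of each side approximate this identity, which is the correspondence the paper studies. A small self-contained check, with all helper names invented here:

```python
def lu_doolittle(A):
    """Dense LU without pivoting: A = L*U with unit lower-triangular L."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def tri_inverse_lower(L):
    """Invert a lower-triangular matrix by forward substitution on I's columns."""
    n = len(L)
    inv = [[0.0] * n for _ in range(n)]
    for col in range(n):
        for i in range(n):
            s = (1.0 if i == col else 0.0) - sum(L[i][k] * inv[k][col] for k in range(i))
            inv[i][col] = s / L[i][i]
    return inv

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(M):
    return [list(r) for r in zip(*M)]

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
L, U = lu_doolittle(A)
W = tri_inverse_lower(L)                         # W = L^{-1}, lower triangular
Z = transpose(tri_inverse_lower(transpose(U)))   # Z = U^{-1}, upper triangular
WAZ = matmul(matmul(W, A), Z)                    # exactly diagonal (the identity) here
```

With incomplete factors the product W A Z is only approximately diagonal, and the paper makes precise which AINV dropping rules correspond to which ILU variants.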
Diagonal Threshold Techniques in Robust Multi-Level ILU Preconditioners for General Sparse Linear Systems
 NUMER. LINEAR ALGEBRA APPL
, 1998
Abstract

Cited by 14 (10 self)
This paper introduces techniques based on a diagonal threshold tolerance for developing multi-elimination and multi-level incomplete LU (ILUM) factorization preconditioners for solving general sparse linear systems. Existing heuristics for finding independent sets are based solely on the adjacency graph of the matrix, and are not robust for matrices arising from certain applications in which the matrices may have small or zero diagonals. New heuristic strategies for finding independent sets, based on both the adjacency graph and the diagonal values of the matrix, are introduced. Analytical bounds for the factorization and preconditioned errors are obtained for the case of a two-level analysis. These bounds provide useful information for designing robust ILUM preconditioners. Extensive numerical experiments are conducted in order to compare the robustness and efficiency of the various heuristic strategies.
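The flavor of such a heuristic can be sketched as a greedy independent-set search that visits large-diagonal rows first and refuses rows whose diagonal falls below a threshold (deferring them to the next level). This is a simple illustration of the diagonal-threshold idea, not the paper's exact strategy, and the function name is invented:

```python
def independent_set_with_diag_threshold(adj, diag, tol):
    """Greedy independent set in the adjacency graph: visit rows in order of
    decreasing |diagonal|, skip rows whose diagonal is below tol (they stay
    in the next-level block), and block the neighbours of every accepted row."""
    n = len(diag)
    order = sorted(range(n), key=lambda i: -abs(diag[i]))
    in_set, blocked = [], set()
    for i in order:
        if abs(diag[i]) < tol or i in blocked:
            continue
        in_set.append(i)
        blocked.update(adj[i])   # neighbours cannot join the independent set
    return sorted(in_set)

adj = [[1], [0, 2], [1, 3], [2, 4], [3]]   # adjacency lists of a chain of 5 nodes
diag = [4.0, 0.01, 3.0, 5.0, 2.0]           # node 1 has a dangerously small diagonal
S = independent_set_with_diag_threshold(adj, diag, tol=0.1)
# node 1 is excluded by the threshold even though the pure graph heuristic
# might happily pick it as a pivot
```

A purely graph-based heuristic would treat node 1 like any other; the diagonal test is what keeps near-zero pivots out of the eliminated block.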
Preconditioning Newton's Method
 IN STUDIES IN NUMERICAL ANALYSIS (G.H. GOLUB, ED), THE MATHEMATICAL ASSOCIATION OF AMERICA
, 1998
Abstract

Cited by 11 (0 self)
The development of algorithms and software for the solution of largescale optimization problems ...
An Evaluation of New NAG Library Solvers for Large Sparse Symmetric Linear Systems
, 1996
Abstract

Cited by 9 (1 self)
This report presents experimental results for new NAG Fortran 77 Library software for the solution of large sparse symmetric linear systems of algebraic equations. Preconditioned conjugate gradient methods introduced at Mark 17 are compared against iterative and direct methods previously available in the library. Test problems include discrete approximations to 2-D elliptic partial differential equations, random-valued, randomly-structured symmetric positive-definite systems, and some symmetric systems from the Harwell-Boeing collection.
A Robust ILU Based on Monitoring the Growth of the Inverse Factors
, 2000
Abstract

Cited by 9 (0 self)
An incomplete LU decomposition with pivoting is presented that progressively monitors the growth of the inverse factors of L and U. The information on the growth of the inverse factors is used as feedback for dropping entries in L and U. This method yields a robust preconditioner in many cases and is often effective, especially when the system is highly indefinite. Numerical examples demonstrate the effectiveness of this approach.
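Why the inverse factors matter can be shown with a post-hoc version of the check (the paper monitors this progressively, inside the factorization; the sketch and its name below are illustrative only): solve L x = eᵢ for each unit vector and record the largest entry of L⁻¹. A factor can look harmless entrywise while its inverse, and hence the triangular solves applied at every iteration, blow up.

```python
def inverse_factor_growth(L):
    """Stability check for a (possibly incomplete) lower-triangular factor:
    forward-solve L*x = e_i for each unit vector and report the largest
    entry of L^{-1}.  Large values signal unstable triangular solves."""
    n = len(L)
    growth = 0.0
    for col in range(n):
        x = [0.0] * n
        for i in range(col, n):
            s = (1.0 if i == col else 0.0) - sum(L[i][k] * x[k] for k in range(col, i))
            x[i] = s / L[i][i]
            growth = max(growth, abs(x[i]))
    return growth

L_good = [[1.0, 0.0], [0.5, 1.0]]
L_bad = [[1.0, 0.0], [-100.0, 1.0]]   # its inverse contains an entry of size 100
g_good = inverse_factor_growth(L_good)
g_bad = inverse_factor_growth(L_bad)
```

Dropping the offending entries of L and U whenever this growth estimate spikes is, in spirit, the feedback loop the abstract describes.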