Results 1 - 9 of 9
Parallel Two Level Block ILU Preconditioning Techniques for Solving Large Sparse Linear Systems
Paral. Comput., 2000
Abstract

Cited by 8 (4 self)
We discuss issues related to domain decomposition and multilevel preconditioning techniques, which are often employed for solving large sparse linear systems in parallel computations. We introduce a class of parallel preconditioning techniques for general sparse linear systems based on a two-level block ILU factorization strategy. We give some new data structures and strategies to construct the local coefficient matrix and the local Schur complement matrix on each processor. The resulting preconditioner is fast and robust for certain classes of large sparse linear systems. Numerical experiments show that our domain-based two-level block ILU preconditioners are more robust and more efficient than some published ILU preconditioners based on Schur complement techniques for parallel sparse matrix solutions.
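To make the two-level idea concrete, here is a minimal dense sketch of forming a local Schur complement for a 2x2 block partition and using it to solve the full system. This is an illustration only, not the paper's parallel ILU implementation: the matrix, the block sizes, and the use of exact solves (where the paper would use ILU approximations) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 5  # k "interior" unknowns, n - k "interface" unknowns (illustrative sizes)
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant, hence invertible

# 2x2 block partition of A
B, E = A[:k, :k], A[:k, k:]
F, C = A[k:, :k], A[k:, k:]

# Local Schur complement: S = C - F B^{-1} E
# (a two-level ILU preconditioner would approximate these solves incompletely)
S = C - F @ np.linalg.solve(B, E)

# Solve A x = b through the block (two-level) splitting
b = rng.standard_normal(n)
y = np.linalg.solve(B, b[:k])           # interior solve
z = np.linalg.solve(S, b[k:] - F @ y)   # interface solve via the Schur complement
x = np.concatenate([y - np.linalg.solve(B, E @ z), z])

assert np.allclose(A @ x, b)  # the block solve reproduces the direct solution
```

In the parallel setting each processor owns one such interior block, and only the (much smaller) Schur complement system couples the subdomains.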
Efficient steady-state solution techniques for variably saturated groundwater flow
 Advances in Water Resources
Abstract

Cited by 6 (4 self)
We consider the simulation of steady-state variably saturated groundwater flow using Richards' equation (RE). The difficulties associated with solving RE numerically are well known. Most discretization approaches for RE lead to nonlinear systems that are large and difficult to solve. The solution of nonlinear systems for steady-state problems can be particularly challenging, since a good initial guess for the steady-state solution is often hard to obtain, and the resulting linear systems may be poorly scaled. Common approaches like Picard iteration or variations of Newton's method have their advantages but perform poorly with standard globalization techniques under certain conditions. Pseudo-transient continuation has been used in computational fluid dynamics for some time to obtain steady-state solutions for problems in which Newton's method with standard line-search strategies fails. It combines aspects of backward Euler time integration and Newton's method to select intermediate estimates of the steady-state solution. Here, we examine the use of pseudo-transient continuation as well as Newton's method combined with standard globalization techniques for steady-state problems in heterogeneous domains. We investigate the methods' performance with direct and preconditioned Krylov iterative linear solvers. We then make recommendations for robust and efficient approaches to obtain steady-state solutions for RE under a range of conditions.
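As a hedged sketch of the pseudo-transient continuation idea (not the paper's RE solver), the classic scalar example F(u) = arctan(u) shows how adding a 1/dt shift to the Jacobian globalizes Newton's method from an initial guess where the undamped iteration would diverge. The switched evolution relaxation (SER) time-step update used below is one common choice; the problem and tolerances are illustrative assumptions.

```python
import numpy as np

def F(u):
    return np.arctan(u)      # residual; the steady state is u = 0

def J(u):
    return 1.0 / (1.0 + u**2)

u, dt = 10.0, 1.0            # undamped Newton diverges for |u| > ~1.39
for _ in range(100):
    r = F(u)
    # backward-Euler-flavoured Newton step: (1/dt + J) du = -F
    u += -r / (1.0 / dt + J(u))
    # switched evolution relaxation (SER): grow dt as the residual drops
    dt *= abs(r) / max(abs(F(u)), 1e-300)
    if abs(F(u)) < 1e-12:
        break

assert abs(u) < 1e-10  # converged to the steady state
```

For small dt the step mimics a backward Euler time step (robust but slow); as the residual shrinks, dt grows and the iteration smoothly becomes plain Newton with quadratic convergence.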
The impact of high performance Computing in the solution of linear systems: trends and problems
, 1999
Abstract

Cited by 5 (0 self)
We review the influence of the advent of high performance computing on the solution of linear equations. We will concentrate on direct methods of solution and consider both the case when the coefficient matrix is dense and when it is sparse. We will examine the current performance of software in this area and speculate on what advances we might expect in the early years of the next century. Keywords: sparse matrices, direct methods, parallelism, matrix factorization, multifrontal methods. AMS(MOS) subject classifications: 65F05, 65F50. Also appeared as Technical Report RAL-TR-1999-072 from Rutherford Appleton Laboratory, Oxfordshire.
Combinatorial problems in solving linear systems
, 2009
Abstract

Cited by 5 (3 self)
Numerical linear algebra and combinatorial optimization are vast subjects, as is their interaction. In virtually all cases, a notion of sparsity must be present for a combinatorial problem to arise. Sparse matrices therefore form the basis of the interaction of these two seemingly disparate subjects. As the core of many of today's numerical linear algebra computations consists of the solution of sparse linear systems by direct or iterative methods, we survey some combinatorial problems, ideas, and algorithms relating to these computations. On the direct methods side, we discuss issues such as matrix ordering; bipartite matching and matrix scaling for better pivoting; and task assignment and scheduling for parallel multifrontal solvers. On the iterative methods side, we discuss preconditioning techniques, including incomplete factorization preconditioners, support graph preconditioners, and algebraic multigrid. In a separate part, we discuss the block triangular form of sparse matrices.
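One concrete instance of the matrix ordering problem mentioned above is reverse Cuthill-McKee, which reorders a sparse symmetric matrix to reduce its bandwidth. The example below is an illustrative setup, not taken from the paper: it hides a tridiagonal structure behind a random symmetric permutation and recovers a narrow band with SciPy's implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def bandwidth(M):
    r, c = M.nonzero()
    return int(np.max(np.abs(r - c)))

n = 50
# Tridiagonal model matrix (bandwidth 1), then hide the band with a random permutation
T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
p = np.random.default_rng(1).permutation(n)
A = csr_matrix(T[np.ix_(p, p)])

perm = reverse_cuthill_mckee(A, symmetric_mode=True)
B = A[perm, :][:, perm]  # symmetric permutation of rows and columns

assert bandwidth(A) > 5   # the scrambled matrix has a wide band
assert bandwidth(B) <= 2  # RCM recovers a (near-)tridiagonal structure
```

A direct solver factoring B instead of A incurs far less fill-in, which is exactly the combinatorial-numerical interplay the survey describes.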
ParIC: A Family of Parallel Incomplete Cholesky Preconditioners
, 2000
Abstract
A class of parallel incomplete factorization preconditioners for the solution of large linear systems is investigated. The approach may be regarded as a generalized domain decomposition method. Adjacent subdomains have to communicate during the setting up of the preconditioner and during its application. Overlap is not necessary to achieve high performance. Fill-in levels are considered in a global way. If necessary, the technique may be implemented as a global reordering of the unknowns. Experimental results are reported for two-dimensional problems. 1 Introduction Krylov subspace based iterative methods are quite popular for solving large sparse preconditioned linear systems B^{-1}Au = B^{-1}b, (1) where Au = b denotes the original system and B denotes a given preconditioning matrix (see, e.g., [2, 10]). The main operations within Krylov subspace methods are the following: 1. sparse matrix-vector multiplication(s); 2. vector updates; 3. dot products ...
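The preconditioned system B^{-1}Au = B^{-1}b and the three kernels just listed can be sketched with a small preconditioned conjugate gradient loop. The example below is an assumption-laden sketch: it uses a Jacobi (diagonal) preconditioner as a simple stand-in for the incomplete Cholesky factorizations the paper studies, applied to an assumed 1-D Laplacian test matrix.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned CG for SPD A; M_inv applies the action of B^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p                      # 1. (sparse) matrix-vector product
        alpha = rz / (p @ Ap)
        x += alpha * p                  # 2. vector updates
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z                  # 3. dot products
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Assumed test problem: 1-D Laplacian; Jacobi diagonal as a stand-in preconditioner
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, lambda r: r / np.diag(A))
assert np.linalg.norm(A @ x - b) < 1e-8
```

Swapping the `M_inv` callback for a parallel incomplete factorization solve is precisely where the preconditioners studied in the paper plug in.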
Spectral Analysis of Parallel Incomplete Factorizations With Implicit Pseudo-Overlap
, 2000
Abstract
Introduction Linear systems arising from boundary value problems such as the diffusion equation can be solved by iterative methods. The speed of convergence depends very much on global properties (a local correction affects the whole solution), whereas for parallelism one wants to split the problem into smaller, (almost) independent subproblems. These two requirements are in conflict [13]. A critical topical question in the use of incomplete factorization based preconditioners in parallel environments is how to overcome the above-mentioned trade-off between a high level of parallelism and the rate of convergence [13,14]. Answering this question requires clearly identifying why there is a trade-off. To this end, Doi and Lichnewsky [8,9] relate this phenomenon to the number of incompatible nodes (any node i which is connected to at least two nodes j and k along the same direction (axis), such that j ...
Fast Prediction of Transonic Aeroelasticity Using Computational Fluid Dynamics
, 2008
Abstract
A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the Author. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the Author. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given.
Parallel Self-verified Method for Solving Linear Systems
Abstract
Abstract. This paper presents the parallelization of a self-verified method for solving dense linear equations. Verified computing provides an interval result that is guaranteed to contain the correct result. The impact of parallel computing on the overall performance of numerical algorithms has been increasing over the last decade. The two main steps of this method that demand a higher computational cost were parallelized: the backward/forward substitution of an LU-decomposed matrix A and an iterative refinement step. Our main contribution is to point out the advantages and drawbacks of our approach, in order to popularize the use of self-verified computation.
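A minimal sketch of the two costly steps named above, under stated assumptions: plain floating point is used in place of the interval arithmetic a truly verified method requires, and the test matrix is an arbitrary well-conditioned example. The pattern is: factor once, then reuse the triangular factors for backward/forward substitution and a few iterative refinement sweeps.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(3)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # assumed well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

lu, piv = lu_factor(A)            # one O(n^3) factorization
x = lu_solve((lu, piv), b)        # backward/forward substitution, O(n^2)
for _ in range(3):
    r = b - A @ x                 # residual (a verified method would enclose it in intervals)
    x += lu_solve((lu, piv), r)   # each refinement sweep reuses the factors

assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 1e-12
```

Because factorization dominates the cost, parallelizing the repeated substitution and refinement sweeps, as the paper does, pays off when many right-hand sides or refinement iterations are needed.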