Results 1 – 10 of 12
Multifrontal Parallel Distributed Symmetric and Unsymmetric Solvers
, 1998
"... We consider the solution of both symmetric and unsymmetric systems of sparse linear equations. A new parallel distributed memory multifrontal approach is described. To handle numerical pivoting efficiently, a parallel asynchronous algorithm with dynamic scheduling of the computing tasks has been dev ..."
Abstract

Cited by 119 (32 self)
We consider the solution of both symmetric and unsymmetric systems of sparse linear equations. A new parallel distributed memory multifrontal approach is described. To handle numerical pivoting efficiently, a parallel asynchronous algorithm with dynamic scheduling of the computing tasks has been developed. We discuss some of the main algorithmic choices and compare both implementation issues and the performance of the LDL^T and LU factorizations. Performance analysis on an IBM SP2 shows the efficiency and the potential of the method. The test problems used are from the Rutherford-Boeing collection and from the PARASOL end users.
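The abstract compares LDL^T and LU factorizations. As an illustration of the symmetric case, here is a minimal dense LDL^T sketch (a toy, not the distributed multifrontal algorithm of the paper), assuming a symmetric matrix whose leading principal minors are nonzero so that no pivoting is needed:

```python
import numpy as np

def ldlt(A):
    """Dense LDL^T factorization without pivoting (toy sketch).
    Assumes A is symmetric with nonzero leading principal minors."""
    n = A.shape[0]
    L = np.eye(n)          # unit lower triangular factor
    d = np.zeros(n)        # diagonal of D
    for j in range(n):
        d[j] = A[j, j] - (L[j, :j] ** 2) @ d[:j]
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - (L[i, :j] * L[j, :j]) @ d[:j]) / d[j]
    return L, d

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
L, d = ldlt(A)
assert np.allclose(L @ np.diag(d) @ L.T, A)
```

For indefinite symmetric matrices, production solvers combine this recurrence with 1x1/2x2 pivoting, which is what makes the parallel asynchronous scheduling discussed in the paper nontrivial.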
An interior point algorithm for large scale nonlinear programming
 SIAM Journal on Optimization
, 1999
"... The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primaldual versions of t ..."
Abstract

Cited by 74 (17 self)
The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming, trust region method.
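The barrier approach replaces inequality constraints with a logarithmic penalty and solves a sequence of smoother subproblems as the barrier parameter shrinks. The following toy sketch traces the central path for minimizing x^2 subject to x >= 1, solving each one-variable subproblem in closed form (nothing like the SQP/trust-region machinery of the paper; it only illustrates the barrier idea):

```python
import math

# Barrier subproblem: minimize x^2 - mu*log(x - 1) over x > 1.
# Stationarity: 2x - mu/(x - 1) = 0  =>  2x^2 - 2x - mu = 0.
def barrier_minimizer(mu):
    """Exact minimizer of the one-dimensional barrier subproblem
    (positive root of the stationarity quadratic)."""
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

mu = 1.0
path = []
for _ in range(20):
    path.append(barrier_minimizer(mu))
    mu *= 0.5                       # shrink the barrier parameter
# the central-path iterates approach the constrained optimum x* = 1
assert abs(path[-1] - 1.0) < 1e-3
```

In the paper each subproblem is itself a large nonlinear program, solved inexactly with trust regions rather than exactly as here.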
MUMPS MUltifrontal Massively Parallel Solver Version 2.0
, 1998
"... We describe aspects of the interface and design of Version 2.0 of the MUltifrontal Massively Parallel Solver MUMPS. This code solves sets of sparse linear equations Ax = b, where the matrix A is unsymmetric. It is written in Fortran 90 and uses MPI for message passing. It also calls the ScaLAPACK c ..."
Abstract

Cited by 19 (1 self)
We describe aspects of the interface and design of Version 2.0 of the MUltifrontal Massively Parallel Solver MUMPS. This code solves sets of sparse linear equations Ax = b, where the matrix A is unsymmetric. It is written in Fortran 90 and uses MPI for message passing. It also calls the ScaLAPACK code, which in turn uses the BLACS. Level 3 BLAS are also used by the code. MUMPS is the direct solver in the PARASOL project, an EU LTR project with twelve partners from five countries. The main aim of PARASOL is to develop a public domain library of sparse codes for distributed memory parallel computers. This report describes the interface to the MUMPS code and the message passing mechanisms that are used in the package. Keywords: multifrontal, sparse solver, distributed memory parallelism, MPI, BLAS, BLACS, ScaLAPACK, PARASOL. AMS(MOS) subject classifications: 65F05, 65F50. Current reports available at http://www.cerfacs.fr/algor/algo reports.html.
Algebraic two-level preconditioners for the Schur complement method
 SIAM J. SCIENTIFIC COMPUTING
, 1998
"... The solution of elliptic problems is challenging on parallel distributed memory computers as their Green's functions are global. To address this issue, we present a set of preconditioners for the Schur complement domain decomposition method. They implement a global coupling mechanism, through coarse ..."
Abstract

Cited by 16 (9 self)
The solution of elliptic problems is challenging on parallel distributed memory computers because their Green's functions are global. To address this issue, we present a set of preconditioners for the Schur complement domain decomposition method. They implement a global coupling mechanism, through coarse space components, similar to the one proposed in [3]. The definition of the coarse space components is algebraic: they are defined using the mesh partitioning information and simple interpolation operators. These preconditioners are implemented on distributed memory computers without introducing any new global synchronization in the preconditioned conjugate gradient iteration. The numerical and parallel scalability of these preconditioners is illustrated on two-dimensional model examples that exhibit anisotropy and/or discontinuity phenomena.
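For readers unfamiliar with the Schur complement method, the following small dense sketch (hypothetical toy data, not the authors' parallel preconditioned code) shows the core reduction: eliminating interior unknowns leaves a smaller system on the interface unknowns alone:

```python
import numpy as np

# Toy partition: interior unknowns I, interface unknowns G (Gamma).
rng = np.random.default_rng(0)
n_i, n_g = 4, 2
A_II = np.diag(rng.uniform(2.0, 3.0, n_i))          # interior block
A_IG = rng.uniform(-0.1, 0.1, (n_i, n_g))           # coupling block
A_GG = 5.0 * np.eye(n_g)                            # interface block
A = np.block([[A_II, A_IG], [A_IG.T, A_GG]])

# Schur complement on the interface: S = A_GG - A_GI A_II^{-1} A_IG
S = A_GG - A_IG.T @ np.linalg.solve(A_II, A_IG)

# Solving S x_G = b_G - A_GI A_II^{-1} b_I gives the interface part
b = rng.uniform(-1.0, 1.0, n_i + n_g)
x_full = np.linalg.solve(A, b)
rhs_G = b[n_i:] - A_IG.T @ np.linalg.solve(A_II, b[:n_i])
x_G = np.linalg.solve(S, rhs_G)
assert np.allclose(x_G, x_full[n_i:])
```

In the method described by the abstract, S is never formed explicitly for the whole problem; the preconditioners act on the interface system, with the coarse space supplying the global coupling.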
The Solution of Augmented Systems
, 1993
"... We examine the solution of sets of linear equations for which the coefficient matrix has the form / H A A T 0 ! where the matrix H is symmetric. We are interested in the case when the matrices H and A are sparse. These augmented systems occur in many application areas, for example in the solu ..."
Abstract

Cited by 12 (3 self)
We examine the solution of sets of linear equations for which the coefficient matrix has the block form

  ( H    A )
  ( A^T  0 )

where the matrix H is symmetric. We are interested in the case when the matrices H and A are sparse. These augmented systems occur in many application areas, for example in the solution of linear programming problems, structural analysis, magnetostatics, differential algebraic systems, constrained optimization, electrical networks, and computational fluid dynamics. We discuss in some detail how they arise in the last three of these applications and consider particular characteristics and methods of solution. We then concentrate on direct methods of solution. We examine issues related to conditioning and scaling, and discuss the design and performance of a code for solving these systems. Keywords: augmented systems, constrained optimization, Stokes problem, indefinite sparse matrices, KKT systems, systems matrix, equilibrium problems, electrical networks, interior poi...
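A minimal dense sketch of such an augmented (KKT) system, using a tiny made-up example rather than the sparse direct code discussed in the paper, shows the two block equations it encodes:

```python
import numpy as np

# Augmented system  [H  A; A^T  0] [x; y] = [f; g]
# i.e. H x + A y = f  (stationarity) and A^T x = g  (constraint).
H = np.array([[4.0, 1.0],
              [1.0, 3.0]])                 # symmetric block
A = np.array([[1.0],
              [1.0]])                      # constraint gradient
K = np.block([[H, A], [A.T, np.zeros((1, 1))]])

f = np.array([1.0, 2.0])
g = np.array([1.0])                        # constraint: x0 + x1 = 1
sol = np.linalg.solve(K, np.concatenate([f, g]))
x, y = sol[:2], sol[2:]

assert np.allclose(A.T @ x, g)             # constraint satisfied
assert np.allclose(H @ x + A @ y, f)       # stationarity satisfied
```

K is symmetric but indefinite, which is why the paper's discussion of pivoting, conditioning, and scaling for indefinite sparse matrices matters.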
On the influence of the orthogonalization scheme on the parallel performance of GMRES
, 1998
"... . In Krylovbased iterative methods, the computation of an orthonormal basis of the Krylov space is a key issue in the algorithms because the many scalar products are often a bottleneck in parallel distributed environments. Using GMRES, we present a comparison of four variants of the GramSchmidt pr ..."
Abstract

Cited by 7 (5 self)
In Krylov-based iterative methods, the computation of an orthonormal basis of the Krylov space is a key issue in the algorithms because the many scalar products are often a bottleneck in parallel distributed environments. Using GMRES, we present a comparison of four variants of the Gram-Schmidt process on distributed memory machines. Our experiments are carried out on an application in astrophysics and on a convection-diffusion example. We show that the iterative classical Gram-Schmidt method outperforms its three competitors in speed and in parallel scalability while keeping robust numerical properties.

1 Introduction. Krylov-based iterative methods for solving linear systems are attractive because they can be rather easily integrated in a parallel distributed environment. This is mainly because they are free from matrix manipulations apart from matrix-vector products, which can often be parallelized. The difficulty is then to find an efficient preconditioner which is good at reducing the nu...
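The parallel appeal of classical Gram-Schmidt is that one pass costs only two matrix-vector products, hence two global reductions, regardless of the basis size, while modified Gram-Schmidt needs one reduction per basis vector; iterating CGS recovers the robustness that a single pass lacks. A minimal sketch, assuming an explicitly stored orthonormal basis (not the paper's GMRES implementation):

```python
import numpy as np

def cgs(V, w):
    """One classical Gram-Schmidt pass: two mat-vecs, so only two
    global reductions in a distributed-memory setting."""
    h = V.T @ w          # all inner products at once (one reduction)
    return w - V @ h     # subtract the projection

def icgs(V, w, passes=2):
    """Iterative (reorthogonalized) classical Gram-Schmidt."""
    for _ in range(passes):
        w = cgs(V, w)
    return w

rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.standard_normal((50, 10)))  # orthonormal basis
w = rng.standard_normal(50)
u = icgs(V, w)
assert np.max(np.abs(V.T @ u)) < 1e-12   # u orthogonal to span(V)
```

This matches the trade-off the abstract reports: iterated CGS keeps the communication pattern of CGS while restoring MGS-like numerical quality.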
The impact of high performance computing in the solution of linear systems: trends and problems
, 1999
"... We review the influence of the advent of high performance computing on the solution of linear equations. We will concentrate on direct methods of solution and consider both the case when the coefficient matrix is dense and when it is sparse. We will examine the current performance of software in thi ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
We review the influence of the advent of high performance computing on the solution of linear equations. We will concentrate on direct methods of solution and consider both the case when the coefficient matrix is dense and when it is sparse. We will examine the current performance of software in this area and speculate on what advances we might expect in the early years of the next century. Keywords: sparse matrices, direct methods, parallelism, matrix factorization, multifrontal methods. AMS(MOS) subject classifications: 65F05, 65F50. Also appeared as Technical Report RAL-TR-1999-072 from Rutherford Appleton Laboratory, Oxfordshire.
Some sparse pattern selection strategies for robust Frobenius norm minimization preconditioners in electromagnetism
, 2000
"... We consider preconditioning strategies for the iterative solution of dense complex symmetric nonHermitian systems arising in computational electromagnetics. We consider in particular sparse approximate inverse preconditioners that use a static nonzero pattern selection. The novelty of our approach c ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
We consider preconditioning strategies for the iterative solution of dense complex symmetric non-Hermitian systems arising in computational electromagnetics. We consider in particular sparse approximate inverse preconditioners that use a static nonzero pattern selection. The novelty of our approach comes from using a different nonzero pattern selection for the original matrix from that for the preconditioner, and from exploiting geometric or topological information from the underlying meshes instead of using methods based on the magnitude of the entries. The numerical and computational efficiency of the proposed preconditioners is illustrated on a set of model problems arising both from academic and from industrial applications. The results of our numerical experiments suggest that the new strategies are viable approaches for the solution of large-scale electromagnetic problems using preconditioned Krylov methods. In particular, our strategies are applicable when fast multipole techniqu...
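A Frobenius-norm minimization preconditioner computes each column of an approximate inverse M by a small least-squares problem restricted to a prescribed sparsity pattern. A dense real toy sketch with a static tridiagonal pattern (the paper instead derives the pattern from mesh geometry or topology and works with dense complex systems):

```python
import numpy as np

def spai_column(A, j, pattern):
    """Column j of a sparse approximate inverse M: solve
    min ||A[:, pattern] m - e_j||_2 over the prescribed pattern."""
    e = np.zeros(A.shape[0])
    e[j] = 1.0
    vals, *_ = np.linalg.lstsq(A[:, pattern], e, rcond=None)
    m = np.zeros(A.shape[1])
    m[pattern] = vals
    return m

# Tridiagonal test matrix; static pattern {j-1, j, j+1} per column.
n = 8
A = np.diag(np.full(n, 4.0)) \
    + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)
M = np.column_stack([
    spai_column(A, j, [k for k in (j - 1, j, j + 1) if 0 <= k < n])
    for j in range(n)
])
# A M should be much closer to the identity than A itself is
assert np.linalg.norm(np.eye(n) - A @ M) < np.linalg.norm(np.eye(n) - A)
```

Because each column's least-squares problem is independent, the construction parallelizes naturally, one reason such preconditioners suit distributed environments.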
A Brief Bibliography of Recent Research and Software for the Parallel Solution of Large Sparse Linear Equations
, 1999
"... We give some pointers to recent work on the parallel solution of sparse linear equations. We consider both iterative and direct methods and combinations of these two approaches and list some Web sites where software may be found. Keywords: sparse linear systems, sparse least squares, sparse normal e ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
We give some pointers to recent work on the parallel solution of sparse linear equations. We consider both iterative and direct methods and combinations of these two approaches, and list some Web sites where software may be found. Keywords: sparse linear systems, sparse least squares, sparse normal equations, mixed model equations, BLUP, cattle breeding, iterative methods, direct methods, preconditioning, block iterative methods. AMS(MOS) subject classifications: 65F05, 65F50. This short report was produced by request after a keynote talk presented by the author at an international conference on Computational Cattle Breeding, held in Helsinki, Finland on 19th and 20th March 1999.
A New Algorithm for Continuation and Bifurcation Analysis of Large Scale Free Surface Flows
, 2004
"... A New Algorithm for Continuation and Bifurcation Analysis of Large Scale Free Surface Flows by Zenaida Castillo This thesis presents a new algorithm to find and follow particular solutions of parameterized nonlinear systems. Important applications often arise after spatial discretization of time dep ..."
Abstract
A New Algorithm for Continuation and Bifurcation Analysis of Large Scale Free Surface Flows, by Zenaida Castillo. This thesis presents a new algorithm to find and follow particular solutions of parameterized nonlinear systems. Important applications often arise after spatial discretization of time dependent PDEs. We embed a block eigenvalue solver in a continuation framework for the computation of some specific eigenvalues of large Jacobian matrices that depend on one or more parameters. The new approach is then employed to study the behavior of an industrial process referred to as coating. Stability analysis of the discretized system that models this process is important because it provides alternatives for changing parameters in order to improve the quality of the final product or to increase productivity. Experiments on several problems show the reliability of the new approach in the accurate detection of critical points. Further analysis of two-dimensional coating flow problems reveals that computational results are competitive with those of previous continuation approaches. As a byproduct, one obtains information about the stability of the process with no additional cost. Due to the size and structure of the matrices generated in three-dimensional free surface flow applications, it is necessary to use a general iterative linear solver, such as GMRES. However, GMRES displays a very slow rate of convergence as a consequence of the poor conditioning in the coefficient matrices. To speed up GMRES convergence, we developed and implemented a scalable approximate sparse inverse preconditioner. Numerical experiments demonstrate that this preconditioner greatly improves the convergence of the method. Results illustrate the effectiveness of the preconditioner on very large fr...