Solving unsymmetric sparse systems of linear equations with PARDISO
Future Generation Computer Systems, 2004
Cited by 195 (11 self)
Abstract:
Supernode partitioning for unsymmetric matrices, together with complete block diagonal supernode pivoting and asynchronous computation, can achieve high gigaflop rates for parallel sparse LU factorization on shared-memory parallel computers. Progress in weighted graph matching algorithms helps to extend these concepts further, and unsymmetric pre-permutation of rows is used to place large matrix entries on the diagonal. Complete block diagonal supernode pivoting allows dynamic interchanges of columns and rows during the factorization process. Level-3 BLAS efficiency is retained, and an advanced two-level left–right looking scheduling scheme results in good speedup on SMP machines. These algorithms have been integrated into the recent unsymmetric version of the PARDISO solver. Experiments demonstrate that a wide set of unsymmetric linear systems can be solved and that high performance is consistently achieved for large sparse unsymmetric matrices from real-world applications. Key words: computational sciences, numerical linear algebra, direct solver, unsymmetric linear systems
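PARDISO itself is distributed with Intel oneMKL and is not sketched here; as a minimal stand-in, SciPy's SuperLU-based sparse LU illustrates the same factorize-then-solve workflow the abstract describes (fill-reducing permutation, pivoting during factorization, then triangular solves). The matrix below is a made-up toy example.

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small unsymmetric sparse system A x = b. splu (SuperLU) stands in for a
# direct sparse LU solver such as PARDISO: it likewise applies a
# fill-reducing column permutation and performs pivoting during LU.
A = csc_matrix([[4.0, 1.0, 0.0],
                [2.0, 5.0, 1.0],
                [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)      # symbolic analysis + numerical factorization
x = lu.solve(b)   # forward/backward triangular solves
print(np.linalg.norm(A @ x - b))  # residual near machine precision
```

The same analyze/factorize/solve split is what lets direct solvers amortize the expensive symbolic phase over many right-hand sides.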
WSMP: Watson sparse matrix package part I—direct solution of symmetric sparse systems
IBM T. J. Watson Research Center, Yorktown Heights, 2010
A numerical evaluation of HSL packages for the direct solution of large sparse, symmetric linear systems of equations
ACM Transactions on Mathematical Software
Cited by 31 (8 self)
Abstract:
In recent years, a number of new direct solvers for large sparse, symmetric linear systems of equations have been added to the mathematical software library HSL. These include solvers designed for positive-definite systems as well as solvers principally intended for indefinite problems. The available choice can make it difficult for users to know which solver is the most appropriate for their needs. In this study, we use performance profiles as a tool for evaluating and comparing the performance of the HSL solvers on an extensive set of test problems taken from a range of practical applications.
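The performance-profile methodology (due to Dolan and Moré) used in this comparison is easy to sketch: for each problem, divide every solver's runtime by the best runtime on that problem, then plot the fraction of problems each solver handles within a factor tau of the best. A minimal implementation with illustrative timings (the numbers below are made up, not data from the paper):

```python
import numpy as np

def performance_profile(times, taus):
    """Dolan-More performance profiles.
    times[p, s] = runtime of solver s on problem p (np.inf for a failure).
    Returns rho[t, s] = fraction of problems solver s solves within a
    factor taus[t] of the best solver on that problem."""
    best = times.min(axis=1, keepdims=True)  # per-problem best time
    ratios = times / best                    # performance ratios >= 1
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Illustrative timings: 3 problems x 2 solvers.
times = np.array([[1.0, 2.0],
                  [3.0, 1.5],
                  [2.0, 2.0]])
rho = performance_profile(times, taus=[1.0, 2.0])
# rho[0] is each solver's win fraction (ties count for both solvers);
# rho[1] is the fraction solved within twice the best time.
```

At tau = 1 the profile reads off how often a solver is (jointly) fastest; as tau grows it measures robustness.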
Improved symbolic and numerical factorization algorithms for unsymmetric sparse matrices
SIAM Journal on Matrix Analysis and Applications, 2002
Cited by 21 (6 self)
Abstract:
We present algorithms for the symbolic and numerical factorization phases in the direct solution of sparse unsymmetric systems of linear equations. We have modified a classical symbolic factorization algorithm for unsymmetric matrices to inexpensively compute minimal elimination structures. We give an efficient algorithm to compute a near-minimal data-dependency graph for unsymmetric multifrontal factorization that is valid irrespective of the amount of dynamic pivoting performed during factorization. Finally, we describe an unsymmetric-pattern multifrontal algorithm for Gaussian elimination with partial pivoting that uses the task- and data-dependency graphs computed during the symbolic phase. These algorithms have been implemented in WSMP, an industrial-strength sparse solver package, and have enabled WSMP to significantly outperform other similar solvers. We present experimental results to demonstrate the merits of the new algorithms.
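A toy version of symbolic factorization conveys the core idea: run Gaussian elimination on the nonzero pattern alone, recording the fill-in that numerical factorization would create. This naive set-based sketch is for illustration only; the paper's algorithms compute such elimination structures far more cheaply.

```python
def symbolic_fill(pattern, n):
    """Naive symbolic Gaussian elimination on an n x n nonzero pattern.
    pattern: set of (row, col) index pairs. Returns the pattern plus
    fill-in: eliminating pivot k creates an entry (i, j) whenever
    (i, k) and (k, j) are both nonzero (originally or by fill), i, j > k."""
    filled = set(pattern)
    for k in range(n):
        rows = [i for i in range(k + 1, n) if (i, k) in filled]
        cols = [j for j in range(k + 1, n) if (k, j) in filled]
        for i in rows:
            for j in cols:
                filled.add((i, j))
    return filled

# Arrowhead pattern pointing the "wrong way": eliminating row/column 0
# first fills the entire trailing submatrix.
n = 4
arrow = {(0, j) for j in range(n)} | {(i, 0) for i in range(n)} \
        | {(i, i) for i in range(n)}
print(len(arrow), len(symbolic_fill(arrow, n)))  # → 10 16
```

Predicting this fill before touching any numerical values is what lets a solver preallocate storage and schedule work, which is exactly where cheaper symbolic algorithms pay off.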
Using dense storage to solve small sparse linear systems
ACM Transactions on Mathematical Software, 2007
Cited by 18 (6 self)
Abstract:
A data structure based on dense storage is used to build a linear solver specialized for relatively small sparse systems. The proposed solver, optimized for runtime performance at the expense of memory footprint, outperforms widely used direct and sparse solvers for systems with between 100 and 3000 equations. A multithreaded version of the solver is shown to give some speedup for problems with medium fill-in, while it gives no benefit for very sparse problems. Categories and Subject Descriptors: G.1.3 [Numerical Analysis]: Numerical Linear Algebra—Linear systems (direct and iterative methods), Sparse, structured, and very large systems (direct and iterative methods); G.4 [Mathematical Software]: Algorithm design and analysis; E.1 [Data
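The basic trick is simple to sketch: scatter the sparse entries into a dense array and hand the system to dense LAPACK, whose cache-friendly kernels tend to win at these sizes. This is a hypothetical minimal version, not the paper's data structure, which adds further layout optimizations.

```python
import numpy as np

def solve_small_sparse(n, triplets, b):
    """Solve A x = b for a small sparse A given as COO triplets (i, j, v)
    by scattering into dense storage and calling LAPACK via numpy.
    For hundreds to a few thousand unknowns, the dense kernel's memory
    locality can beat general sparse data structures."""
    A = np.zeros((n, n))
    for i, j, v in triplets:
        A[i, j] += v  # duplicate entries accumulate, as in COO assembly
    return np.linalg.solve(A, b)

triplets = [(0, 0, 4.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 3.0)]
x = solve_small_sparse(2, triplets, np.array([1.0, 2.0]))
# x solves [[4, 1], [1, 3]] x = [1, 2]
```

The trade-off matches the abstract: O(n^2) memory regardless of sparsity, in exchange for dense-BLAS throughput.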
Fast exact planning in markov decision processes
In ICAPS, 2005
Cited by 15 (3 self)
Abstract:
We study the problem of computing the optimal value function for a Markov decision process with positive costs. Computing this function quickly and accurately is a basic step in many schemes for deciding how to act in stochastic environments. There are efficient algorithms which compute value functions for special types of MDPs: for deterministic MDPs with S states and A actions, Dijkstra's algorithm runs in time O(AS log S). And, in single-action MDPs (Markov chains), standard linear-algebraic algorithms find the value function in time O(S^3), or faster by taking advantage of sparsity or good conditioning. Algorithms for solving general MDPs can take much longer: we are not aware of any speed guarantees better than those for comparably sized linear programs. We present a family of algorithms which reduce to Dijkstra's algorithm when applied to deterministic MDPs, and to standard techniques for solving linear equations when applied to Markov chains. More importantly, we demonstrate experimentally that these algorithms perform well when applied to MDPs which "almost" have the required special structure.
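For intuition, here is the textbook baseline the paper improves on: value iteration over a cost-based MDP, checked against the direct linear-algebraic solve (I - gamma*P)V = c that the abstract mentions for the single-action case. The MDP below is illustrative, and discounting is used purely so the toy iteration converges.

```python
import numpy as np

def value_iteration(P, c, gamma=0.9, tol=1e-12):
    """P[a, s, s'] = transition probabilities, c[a, s] = positive costs.
    Returns V minimizing expected total discounted cost (the discount is
    only to guarantee convergence of this toy example)."""
    V = np.zeros(P.shape[1])
    while True:
        Q = c + gamma * (P @ V)  # (A, S) action-value costs
        V_new = Q.min(axis=0)    # greedy Bellman backup
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new

# Single-action MDP (a Markov chain): value iteration must agree with the
# direct solve (I - gamma * P) V = c.
P = np.array([[[0.5, 0.5],
               [0.2, 0.8]]])
c = np.array([[1.0, 2.0]])
V = value_iteration(P, c)
V_direct = np.linalg.solve(np.eye(2) - 0.9 * P[0], c[0])
```

The paper's contribution is precisely to avoid this generic fixed-point sweep when the MDP is close to deterministic or close to a chain.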
Task scheduling in an asynchronous distributed memory multifrontal solver
 SIAM Journal on Matrix Analysis and Applications
WSMP: Watson Sparse Matrix Package
, 2000
Cited by 10 (0 self)
Part II – direct solution of general sparse systems Version 10.9
Efficient steady-state solution techniques for variably saturated groundwater flow
 Advances in Water Resources
Cited by 8 (6 self)
Abstract:
We consider the simulation of steady-state variably saturated groundwater flow using Richards' equation (RE). The difficulties associated with solving RE numerically are well known. Most discretization approaches for RE lead to nonlinear systems that are large and difficult to solve. The solution of nonlinear systems for steady-state problems can be particularly challenging, since a good initial guess for the steady-state solution is often hard to obtain, and the resulting linear systems may be poorly scaled. Common approaches like Picard iteration or variations of Newton's method have their advantages but perform poorly with standard globalization techniques under certain conditions. Pseudo-transient continuation has been used in computational fluid dynamics for some time to obtain steady-state solutions for problems in which Newton's method with standard line-search strategies fails. It combines aspects of backward Euler time integration and Newton's method to select intermediate estimates of the steady-state solution. Here, we examine the use of pseudo-transient continuation as well as Newton's method combined with standard globalization techniques for steady-state problems in heterogeneous domains. We investigate the methods' performance with direct and preconditioned Krylov iterative linear solvers. We then make recommendations for robust and efficient approaches to obtain steady-state solutions for RE under a range of conditions.
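Pseudo-transient continuation is easy to sketch for a generic nonlinear system F(x) = 0: take damped Newton steps (I/dt + J(x)) dx = -F(x) and grow dt as the residual drops ("switched evolution relaxation"), recovering plain Newton in the limit. The minimal illustration below uses a toy scalar problem, not the paper's Richards'-equation discretization.

```python
import numpy as np

def psi_tc(F, J, x0, dt0=1e-2, tol=1e-10, max_iter=500):
    """Pseudo-transient continuation for F(x) = 0.
    Each step solves (I / dt + J(x)) dx = -F(x), a backward-Euler-like
    damped Newton step; dt is grown in inverse proportion to the residual
    norm, so the iteration turns into plain Newton near the steady state."""
    x = np.asarray(x0, dtype=float)
    r0 = np.linalg.norm(F(x))
    for _ in range(max_iter):
        r = np.linalg.norm(F(x))
        if r < tol:
            break
        dt = dt0 * r0 / r  # switched evolution relaxation update
        dx = np.linalg.solve(np.eye(x.size) / dt + J(x), -F(x))
        x = x + dx
    return x

# Toy stand-in problem: find the positive root of x^2 - 2 from a poor
# initial guess; the early heavily damped steps keep the iterate stable.
F = lambda x: np.array([x[0] ** 2 - 2.0])
J = lambda x: np.array([[2.0 * x[0]]])
x = psi_tc(F, J, [10.0])
```

Small dt makes the early steps close to explicit time stepping (robust but slow); large dt makes the late steps pure Newton (fast near the solution), which is the trade-off the abstract describes.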