Results 1–10 of 56
Performance of various computers using standard linear equations software, 2003
Cited by 328 (20 self)
Abstract:
This report compares the performance of different computer systems in solving dense systems of linear equations. The comparison involves approximately a hundred computers, ranging from the Earth Simulator to personal computers.
Random walks for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
Cited by 218 (18 self)
Abstract:
A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs. Index Terms: image segmentation, interactive segmentation, graph theory, random walks, combinatorial Dirichlet problem, harmonic functions, Laplace equation, graph cuts, boundary completion.
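The combinatorial Dirichlet problem in this abstract has a compact toy illustration: with uniform edge weights, the probability that a random walker first reaches a given seed is the harmonic function interpolating the seed values, which even a simple Gauss-Seidel relaxation recovers. The following is a minimal sketch on a made-up 1-D "image" of five pixels, not the paper's implementation (which solves the sparse linear system directly):

```python
# Toy random-walker segmentation on a 1-D chain of 5 pixels.
# Pixel 0 is seeded with label A (probability 1.0) and pixel 4 with
# label B (probability 0.0 of first reaching seed A). With uniform
# edge weights the first-arrival probability is the harmonic function
# interpolating the seeds, so repeatedly replacing each unseeded
# value by the mean of its neighbours converges to the answer.

n = 5
seeded = {0: 1.0, 4: 0.0}      # seed pixels and their label-A values
prob = [0.0] * n
for i, v in seeded.items():
    prob[i] = v

for _ in range(500):           # Gauss-Seidel sweeps until harmonic
    for i in range(n):
        if i in seeded:
            continue
        prob[i] = 0.5 * (prob[i - 1] + prob[i + 1])

# Assign each pixel the label with the greatest probability.
labels = ['A' if p >= 0.5 else 'B' for p in prob]
print(prob)     # converges to the linear interpolant [1.0, 0.75, 0.5, 0.25, 0.0]
print(labels)
```

The harmonic result (straight-line interpolation between the seeds) is exactly the "connection to discrete potential theory" the abstract mentions; on a 2-D image the same relaxation runs over 4-neighbourhoods with image-dependent edge weights.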
ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems by Implicitly Restarted Arnoldi Methods, 1997
Cited by 136 (14 self)
Abstract:
This document is intended to provide a cursory overview of the Implicitly Restarted Arnoldi/Lanczos Method that this software is based upon. The goal is to provide some understanding of the underlying algorithm, expected behavior, additional references, and capabilities as well as limitations of the software.
Software libraries for linear algebra computations on high performance computers. SIAM Review, 1995
Cited by 68 (17 self)
Abstract:
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct highe...
Robust approximate inverse preconditioning for the conjugate gradient method. SIAM J. Sci. Comput., 2000
Cited by 48 (11 self)
Abstract:
We present a variant of the AINV factorized sparse approximate inverse algorithm which is applicable to any symmetric positive definite matrix. The new preconditioner is breakdown-free and, when used in conjunction with the conjugate gradient method, results in a reliable solver for highly ill-conditioned linear systems. We also investigate an alternative approach to a stable approximate inverse algorithm, based on the idea of diagonally compensated reduction of matrix entries. The results of numerical tests on challenging linear systems arising from finite element modeling of elasticity and diffusion problems are presented.
Performance Evaluation of a New Parallel Preconditioner. In Proceedings of the Ninth International Parallel Processing Symposium, 1995
Cited by 26 (2 self)
Abstract:
The linear systems associated with large, sparse, symmetric, positive definite matrices are often solved iteratively using the preconditioned conjugate gradient method. We have developed a new class of preconditioners, support tree preconditioners, that are based on the connectivity of the graphs corresponding to the matrices and are well-structured for parallel implementation. In this paper, we evaluate the performance of support tree preconditioners by comparing them against two common types of preconditioners: diagonal scaling, and incomplete Cholesky. Support tree preconditioners require less overall storage and less work per iteration than incomplete Cholesky preconditioners. In terms of total execution time, support tree preconditioners outperform both diagonal scaling and incomplete Cholesky preconditioners.
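Several entries above compare preconditioners for the conjugate gradient method; the simplest baseline they mention, diagonal (Jacobi) scaling, fits in a short sketch. This is an illustrative pure-Python version on a made-up dense SPD system, not any of the papers' codes; a real solver would use sparse storage and a stronger preconditioner.

```python
# Preconditioned conjugate gradient (PCG) with diagonal (Jacobi)
# scaling, the baseline preconditioner discussed above. Dense lists
# stand in for the sparse matrices a production code would use.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    M_inv = [1.0 / A[i][i] for i in range(n)]    # Jacobi preconditioner
    x = [0.0] * n
    r = b[:]                                     # residual of x = 0
    z = [mi * ri for mi, ri in zip(M_inv, r)]    # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:               # converged
            break
        z = [mi * ri for mi, ri in zip(M_inv, r)]
        rz_new = dot(r, z)
        beta = rz_new / rz                       # update search direction
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Small SPD test system: a 1-D Laplacian with Dirichlet boundary terms.
A = [[ 2.0, -1.0,  0.0,  0.0],
     [-1.0,  2.0, -1.0,  0.0],
     [ 0.0, -1.0,  2.0, -1.0],
     [ 0.0,  0.0, -1.0,  2.0]]
b = [1.0, 0.0, 0.0, 1.0]
x = pcg(A, b)
residual = [bi - axi for bi, axi in zip(b, matvec(A, x))]
print(x)    # -> approximately [1.0, 1.0, 1.0, 1.0]
```

Swapping `M_inv` for an incomplete-Cholesky or support-tree solve is exactly the design axis the paper above evaluates: each choice trades setup cost and per-iteration work against iteration count.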
Stability of the diagonal pivoting method with partial pivoting. SIAM J. Matrix Anal. Appl., 1995
Cited by 22 (9 self)
Abstract:
LAPACK and LINPACK both solve symmetric indefinite linear systems using the diagonal pivoting method with the partial pivoting strategy of Bunch and Kaufman [Math. Comp., 31 (1977), pp. 163–179]. No proof of the stability of this method has appeared in the literature. It is tempting to argue that the diagonal pivoting method is stable for a given pivoting strategy if the growth factor is small. We show that this argument is false in general and give a sufficient condition for stability. This condition is not satisfied by the partial pivoting strategy because the multipliers are unbounded. Nevertheless, using a more specific approach we are able to prove the stability of partial pivoting, thereby filling a gap in the body of theory supporting LAPACK and LINPACK.
Nested Krylov Methods and Preserving the Orthogonality, 1993
Cited by 22 (3 self)
Abstract:
In this article we will consider GMRES and BICGSTAB as inner methods. In the next section we will discuss the implications of the orthogonalization in the inner method. It will be proved that this leads to an optimal approximation over the space spanned by both the outer and the inner iteration vectors. It also introduces a potential problem: the possibility of breakdown in the generation of the Krylov space in the inner iteration, since we iterate with a singular operator. We will show, however, that such a breakdown can never happen before a specific (generally large) number of iterations. Furthermore, we will also show how to remedy such a breakdown. We will also discuss the efficient implementation of these methods and how we can truncate the outer GCR iteration. Outlines of the algorithms can be found in [7], [2].
CONSEQUENCES OF INNER ORTHOGONALIZATION
To keep this section concise, we will only give a short indication of the proofs or omit them completely. The proofs can be found in [2]. Throughout the rest of this article we will use the following notations:
Implicitly Restarted Arnoldi/Lanczos Methods for Large-Scale Eigenvalue Calculations, 1996
Cited by 20 (3 self)
Abstract:
Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually nonexistent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
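The Arnoldi process underlying the ARPACK entries above builds an orthonormal Krylov basis V and a small upper-Hessenberg matrix H whose eigenvalues (Ritz values) approximate those of A; implicit restarting then compresses that factorization. A bare pure-Python sketch of the basic, unrestarted iteration follows, with a made-up matrix and starting vector just to show the factorization it produces; it is not ARPACK's implementation.

```python
# Basic (unrestarted) Arnoldi iteration with modified Gram-Schmidt.
# Produces the factorization A V_k = V_{k+1} H_k, where the columns
# of V are orthonormal and H is (k+1) x k upper Hessenberg.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def arnoldi(A, v0, k):
    norm0 = dot(v0, v0) ** 0.5
    V = [[vi / norm0 for vi in v0]]            # first basis vector
    H = [[0.0] * k for _ in range(k + 1)]
    for j in range(k):
        w = matvec(A, V[j])
        for i in range(j + 1):                 # orthogonalize against V
            H[i][j] = dot(V[i], w)
            w = [wi - H[i][j] * vi for wi, vi in zip(w, V[i])]
        H[j + 1][j] = dot(w, w) ** 0.5
        if H[j + 1][j] < 1e-12:                # exact invariant subspace
            break
        V.append([wi / H[j + 1][j] for wi in w])
    return V, H

# Made-up example matrix and starting vector.
A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]
V, H = arnoldi(A, [1.0, 0.0, 0.0, 0.0], 3)
# V is orthonormal; the eigenvalues of the leading k x k block of H
# are the Ritz values approximating the extremal eigenvalues of A.
```

For symmetric A, as here, H is tridiagonal and the process reduces to the Lanczos method; the implicit restarting discussed above keeps k small by discarding unwanted Ritz directions instead of growing V indefinitely.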
A Multiscale Method for the Double Layer Potential Equation on a Polyhedron. In H.P. Dikshit and C.A. Micchelli, editors, Advances in Computational Mathematics, pages 15–57, World Scientific Publ., 1994
Cited by 18 (9 self)
Abstract:
This paper is concerned with the numerical solution of the double layer potential equation on polyhedra. Specifically, we consider collocation schemes based on multiscale decompositions of piecewise linear finite element spaces defined on polyhedra. An essential difficulty is that the resulting linear systems are not sparse. However, for uniform grids and periodic problems one can show that the use of multiscale bases gives rise to matrices that can be well approximated by sparse matrices in such a way that the solutions to the perturbed equations still exhibit sufficient accuracy. Our objective is to explore to what extent the presence of corners and edges in the domain as well as the lack of uniform discretizations affects the performance of such schemes. Here we propose a concrete algorithm, describe its ingredients, discuss some consequences, future perspectives, and open questions, and present the results of numerical experiments for several test domains including nonconvex doma...