Results 1–10 of 23
Discrete-dipole approximation for scattering calculations
 Journal of the Optical Society of America A
, 1994
Abstract

Cited by 119 (5 self)
The discrete-dipole approximation (DDA) for scattering calculations, including the relationship between the DDA and other methods, is reviewed. Computational considerations, i.e., the use of complex-conjugate gradient algorithms and fast-Fourier-transform methods, are discussed. We test the accuracy of the DDA by using the DDA to compute scattering and absorption by isolated, homogeneous spheres as well as by targets consisting of two contiguous spheres. It is shown that, for dielectric materials (|m| ≲ 2), the DDA permits calculations of scattering and absorption that are accurate to within a few percent.
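The conjugate-gradient machinery this abstract mentions is, at heart, a Krylov iteration that needs only matrix-vector products (which the DDA accelerates with FFTs). A minimal sketch of a plain conjugate-gradient solver for a Hermitian positive-definite complex system follows; note the DDA literature uses complex-symmetric variants, and the toy matrix here is not a DDA interaction matrix:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-10, maxiter=200):
    """Solve A x = b where A is Hermitian positive definite and is
    available only through the matvec callable (in the DDA, this
    product is where the FFT acceleration enters)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / np.vdot(p, Ap)   # step length along search direction
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.vdot(r, r)
        if np.sqrt(abs(rs_new)) < tol:
            break
        p = r + (rs_new / rs) * p     # conjugate search direction update
        rs = rs_new
    return x

# Toy Hermitian positive-definite complex system for illustration.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
A = M.conj().T @ M + 50 * np.eye(50)          # HPD by construction
b = rng.standard_normal(50) + 1j * rng.standard_normal(50)
x = conjugate_gradient(lambda v: A @ v, b)
print(np.linalg.norm(A @ x - b))              # small residual
```

The point of the matvec abstraction is that the solver never forms or stores A explicitly, which is exactly what makes FFT-based products attractive for the DDA's dense interaction matrices.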
Software libraries for linear algebra computations on high performance computers
 SIAM REVIEW
, 1995
Abstract

Cited by 73 (17 self)
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct highe...
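The block-partitioned idea this abstract stresses can be illustrated with a toy blocked matrix multiply: each panel product is one dense sub-multiplication (in a real library, a Level 3 BLAS gemm call), so data brought into fast memory is reused many times before eviction. A NumPy sketch for illustration only, not ScaLAPACK code:

```python
import numpy as np

def blocked_matmul(A, B, bs=64):
    """Block-partitioned C = A @ B. Each bs-by-bs panel product is a
    dense sub-multiplication, so entries loaded into cache are reused
    roughly bs times before being evicted (the Level 3 BLAS payoff)."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for i in range(0, n, bs):
        for j in range(0, m, bs):
            for l in range(0, k, bs):
                # NumPy clamps out-of-range slices, so ragged edge
                # blocks are handled automatically.
                C[i:i+bs, j:j+bs] += A[i:i+bs, l:l+bs] @ B[l:l+bs, j:j+bs]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 150))
B = rng.standard_normal((150, 100))
assert np.allclose(blocked_matmul(A, B), A @ B)
```

On a distributed-memory machine the same blocking also amortizes message startup costs: each communicated panel carries bs² entries instead of one row or column at a time.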
Large Dense Numerical Linear Algebra in 1993: The Parallel Computing Influence
 International Journal of Supercomputer Applications
, 1994
Abstract

Cited by 41 (2 self)
This paper surveys the current state of applications of large dense numerical linear algebra, and the influence of parallel computing. Furthermore, we attempt to crystallize many important ideas that we feel have sometimes been misunderstood in the rush to write fast programs.

1 Introduction

This paper represents my continuing efforts to track the status of large dense linear algebra problems. The goal is to shatter the barriers that separate the various interested communities while commenting on the influence of parallel computing. A secondary goal is to crystallize the most important ideas that have all too often been obscured by the details of machines and algorithms. Parallel supercomputing is in the spotlight. In the race towards the proliferation of papers on person X's experiences with machine Y (and why his algorithm runs faster than person Z's), sometimes we have lost sight of the applications for which these algorithms are meant to be useful. This paper concentrates on la...
The Design of Linear Algebra Libraries for High Performance Computers
, 1993
Abstract

Cited by 20 (2 self)
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct ...
The First Annual Large Dense Linear System Survey
 Int. Rept. Univ. California, Berkeley CA
, 1991
Abstract

Cited by 8 (2 self)
In the March 24, 1991 issue of NA Digest, I submitted a questionnaire asking who was solving large dense linear systems of equations. Based on the responses, nearly all large dense linear systems today arise from either the benchmarking of supercomputers or applications involving the influence of a two-dimensional boundary on three-dimensional space. Not surprisingly, the area of computational aerodynamics or aeroelectromechanics represents an important commercial application requiring the solution of such systems. The largest unstructured matrix that has been factored using Gaussian elimination was a complex matrix of size 55,296. The largest dense matrix solved on a Sun using an iterative method was a real matrix of size 20,000. It is unclear at this time whether dense methods are truly needed at all for huge matrices. It is intended to survey users every year with the hope of including more applications as I am made aware of them.

1 Introduction

The idea to poll solvers of large d...
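The factorizations this survey counts are ordinary Gaussian elimination, which costs roughly 2n³/3 flops on a dense n-by-n matrix; that cubic growth is why a 55,296-square complex factorization was a milestone. An illustrative dense solver with partial pivoting is sketched below; production codes use LAPACK, not hand-rolled loops:

```python
import numpy as np

def lu_solve_dense(A, b):
    """Gaussian elimination with partial pivoting followed by back
    substitution. Illustrative only: O(n^3) work, O(n^2) data, and
    no blocking, so it will not approach library performance."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # partial pivot row
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]             # elimination multiplier
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 40))
b = rng.standard_normal(40)
assert np.allclose(lu_solve_dense(A, b), np.linalg.solve(A, b))
```

At n = 55,296 this factorization takes about 1.1 × 10¹⁴ flops and the complex matrix alone occupies roughly 49 GB in double precision, which makes the survey's interest in whether dense methods are needed at all quite concrete.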
CRPC Research into Linear Algebra Software for High Performance Computers
, 1994
Abstract

Cited by 4 (2 self)
In this paper we look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high performance computers. We focus on the design of the distributed memory version of LAPACK, and on an object-oriented interface to LAPACK. The templates project aims at making the task of developing sparse linear algebra software simpler and easier. Reusable software templates are provided that the user can then customize to modify and optimize a particular algorithm, and hence build more complex applications. ARPACK is a software package for solving large scale eigenvalue problems, and is based on an implicitly restarted variant of the Arnoldi scheme. The paper focuses on issues impact...
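ARPACK's implicitly restarted Arnoldi scheme survives today behind SciPy's sparse eigensolvers. The sketch below uses scipy.sparse.linalg.eigsh, SciPy's wrapper around ARPACK's symmetric (Lanczos) driver, on a diagonal matrix whose eigenvalues are known, to show the workflow the abstract describes: a few extremal eigenvalues computed from matrix-vector products only, without ever forming a dense factorization:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Sparse diagonal test matrix with known, well-separated eigenvalues
# 1, 2, ..., n. Illustrative only; any sparse symmetric operator works.
n = 1000
A = diags(np.arange(1, n + 1, dtype=float))

# eigsh calls ARPACK's implicitly restarted iteration: it builds a
# small Krylov subspace from matvecs, restarts it to stay compact,
# and returns the k requested extremal eigenvalues.
vals = eigsh(A, k=3, which='LA', return_eigenvectors=False)

assert np.allclose(np.sort(vals), [998.0, 999.0, 1000.0])
```

For nonsymmetric operators the corresponding wrapper is scipy.sparse.linalg.eigs, which calls ARPACK's general Arnoldi driver.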
Dense Linear Algebra on Distributed Heterogeneous Hardware with a Symbolic DAG Approach
, 2012
Abstract

Cited by 3 (1 self)
Among the various factors that drive the momentous changes occurring in the design of microprocessors and high-end systems [1], three stand out as especially notable: 1. the number of transistors per chip will continue the current trend, i.e. double roughly every 18 months, while the speed of processor clocks will cease to in