Results 1–6 of 6
A New Array Operation
 Graph Reduction: Proceedings of a Workshop
, 1986
"... This paper proposes a new solution, which is a variant on the "monolothic" approach to array operations. The new solution is also not completely satisfactory, but does have advantages complementary to other proposals. It makes a class of programs easy to express, notably those involving the construc ..."
Abstract

Cited by 15 (0 self)
 Add to MetaCart
This paper proposes a new solution, which is a variant on the "monolithic" approach to array operations. The new solution is also not completely satisfactory, but it has advantages complementary to other proposals. It makes a class of programs easy to express, notably those involving the construction of histograms. It also allows for parallel implementations without the need to introduce nondeterminism. The work reported here was motivated by discussions at the Workshop on Graph Reduction, of which this is the proceedings. In particular, it was motivated by the contributions of Arvind and Paul Hudak (see this volume), and especially Arvind's observation that the histogram problem is difficult to solve in a satisfactory way. After the first draft of this paper was written, I discovered that Simon Peyton Jones had independently suggested the same idea, also prompted by one of Arvind's talks. After the second draft was written, I discovered that Guy Steele and Daniel Hillis use a very similar notion in Connection Machine Lisp [SH86]. Apparently the simple innovation described in this paper is an idea whose time has come. This paper is organized as follows. Section 1 briefly surveys previously proposed array operations. Section 2 introduces the new operation. Section 3 gives a small example of its use. Section 4 discusses two variations on the array operation. Section 5 concludes.
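The histogram construction the abstract highlights is easy to express with a monolithic accumulating array operation. The following Python sketch is illustrative only (the function name `accum_array` and its signature are modeled on the similar operation that survives in Haskell as `accumArray`, not copied from the paper):

```python
import operator

def accum_array(f, init, bounds, assocs):
    """Build an array monolithically: start every cell at `init`, then fold
    each (index, value) pair into its cell with `f`.  Duplicate indices are
    combined rather than treated as an error, which is what makes histograms
    easy to express."""
    lo, hi = bounds
    arr = [init] * (hi - lo + 1)
    for i, v in assocs:
        arr[i - lo] = f(arr[i - lo], v)
    return arr

# Histogram of the data [1, 1, 3, 0, 3, 3] over the index range 0..4:
hist = accum_array(operator.add, 0, (0, 4), [(x, 1) for x in [1, 1, 3, 0, 3, 3]])
# hist == [1, 2, 0, 3, 0]
```

If the combining function is associative and commutative, the pairs can be folded in any order, which is how a parallel implementation can avoid observable nondeterminism.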
Matrix Algebra and Applicative Programming
 Functional Programming Languages and Computer Architecture (Proceedings)
, 1987
"... General Term: Algorithms. The broad problem of matrix algebra is taken up from the perspective of functional programming. Akey question is how arrays should be represented in order to admit good implementations of wellknown e cient algorithms, and whether functional architecture sheds any new ligh ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
General Term: Algorithms. The broad problem of matrix algebra is taken up from the perspective of functional programming. A key question is how arrays should be represented in order to admit good implementations of well-known efficient algorithms, and whether functional architecture sheds any new light on these or other solutions. It relates directly to disarming the "aggregate update" problem. The major thesis is that 2^d-ary trees should be used to represent d-dimensional arrays; examples are matrix operations (d = 2), and a particularly interesting vector (d = 1) algorithm. Sparse and dense matrices are represented homogeneously, but at some overhead that appears tolerable; encouraging results are reviewed and extended. A Pivot Step algorithm is described which offers optimal stability at no extra cost for searching. The new results include proposed sparseness measures for matrices, improved performance of stable matrix inversion through repeated pivoting while deep within a matrix tree (extendible to solving linear systems), and a clean matrix derivation of the vector algorithm for the fast Fourier transform. Running code is offered in the appendices.
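The 2^d-ary-tree thesis for d = 2 can be sketched in a few lines of Python (chosen here purely for illustration; the paper's own code is not reproduced). A matrix is a scalar leaf, a 4-tuple of quadrant subtrees, or None for an all-zero block; the None case is what lets sparse and dense matrices share one homogeneous representation:

```python
# Quadtree matrix sketch: a node is a scalar leaf (1x1 block), a 4-tuple
# (nw, ne, sw, se) of quadrant subtrees, or None for an all-zero block.

def qadd(a, b):
    """Matrix addition by structural recursion on the quadtree."""
    if a is None:
        return b          # 0 + B = B
    if b is None:
        return a          # A + 0 = A
    if isinstance(a, tuple):
        return tuple(qadd(x, y) for x, y in zip(a, b))
    return a + b          # scalar leaves

# 2x2 example: the quadrants (nw, ne, sw, se) are the four scalar entries.
a = (1, 2, 3, 4)
b = (10, None, None, 40)   # a sparse matrix with two zero entries
# qadd(a, b) == (11, 2, 3, 44)
```

Note that sparsity is exploited with no special-case code at all: a zero block simply terminates the recursion early.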
Using randomization to make recursive matrix algorithms practical
"... Abstract Recursive block decomposition algorithms (also known as quadtree algorithms when the blocks are all square) have been proposed to solve wellknown problems such as matrix addition, multiplication, inversion, determinant computation, block LDU decomposition, and Cholesky and QR factorization ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Recursive block decomposition algorithms (also known as quadtree algorithms when the blocks are all square) have been proposed to solve well-known problems such as matrix addition, multiplication, inversion, determinant computation, block LDU decomposition, and Cholesky and QR factorization. Until now, such algorithms have been seen as impractical, since they require leading submatrices of the input matrix to be invertible (which is rarely guaranteed). We show how to randomize an input matrix to guarantee that submatrices meet these requirements, and to make recursive block decomposition methods practical on well-conditioned input matrices. The resulting algorithms are elegant, and we show the recursive programs can perform well for both dense and sparse matrices, although with randomization dense computations seem most practical. By 'homogenizing' the input, randomization provides a way to avoid degeneracy in numerical problems that permits simple recursive quadtree algorithms to solve these problems.

1 Introduction

We have been investigating alternative computation schemes for large-scale matrix computations. A natural functional programming approach called recursive block decomposition (or quadtree decomposition when the blocks are all square) operates via divide-and-conquer recursion.

1.1 Recursive Block Decomposition Algorithms

The basic idea here is that when a matrix is decomposed into smaller blocks, many useful functions of the matrix can be computed recursively. A natural question is whether recursive programming can play a practical role in numerical computation, although today most numerical algorithms are programmed iteratively.
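The recursive shape these algorithms share is easiest to see in multiplication. A minimal Python sketch (illustrative only, not the paper's code), using a quadtree representation where a node is a scalar leaf, a 4-tuple of quadrants (a11, a12, a21, a22), or None for an all-zero block:

```python
def qadd(a, b):
    """Quadtree matrix addition; None denotes an all-zero block."""
    if a is None:
        return b
    if b is None:
        return a
    if isinstance(a, tuple):
        return tuple(qadd(x, y) for x, y in zip(a, b))
    return a + b

def qmul(a, b):
    """Recursive block (quadtree) multiplication: eight sub-multiplications
    and four sub-additions per level.  Zero blocks annihilate, so sparse
    operands prune the recursion with no extra code."""
    if a is None or b is None:
        return None                      # 0 * B = A * 0 = 0
    if not isinstance(a, tuple):
        return a * b                     # scalar leaves
    a11, a12, a21, a22 = a
    b11, b12, b21, b22 = b
    return (qadd(qmul(a11, b11), qmul(a12, b21)),
            qadd(qmul(a11, b12), qmul(a12, b22)),
            qadd(qmul(a21, b11), qmul(a22, b21)),
            qadd(qmul(a21, b12), qmul(a22, b22)))

identity2 = (1, None, None, 1)           # 2x2 identity, stored sparsely
# qmul(identity2, (1, 2, 3, 4)) == (1, 2, 3, 4)
```

The factorizations the paper targets (inversion, LDU, Cholesky, QR) follow the same divide-and-conquer shape but additionally need the leading blocks to be invertible, which is exactly the requirement the randomization step is introduced to guarantee.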
Matrix Algorithms using Quadtrees (Invited Talk, ATABLE92)
 in Proc. ATABLE92
, 1992
"... Many scheduling and synchronization problems for largescale multiprocessing can be overcome using functional (or applicative) programming. With this observation, it is strange that so much attention within the functional programming community has focused on the "aggregate update problem" [10]: esse ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Many scheduling and synchronization problems for large-scale multiprocessing can be overcome using functional (or applicative) programming. With this observation, it is strange that so much attention within the functional programming community has focused on the "aggregate update problem" [10]: essentially, how to implement FORTRAN arrays. This situation is strange because in-place updating of aggregates belongs more to uniprocessing than to mathematics. Several years ago functional style drew me to the treatment of d-dimensional arrays as 2^d-ary trees; in particular, matrices become quaternary trees or quadtrees. This convention yields efficient recopying-cum-update of any array; recursive, algebraic decomposition of conventional arithmetic algorithms; and uniform representations and algorithms for both dense and sparse matrices. For instance, any nonsingular subtree is a candidate as the pivot block for Gaussian elimination; the restriction actually helps identification of pivot b...
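The "efficient recopying-cum-update" comes from path copying: a functional update rebuilds only the O(log n) spine from root to leaf and shares every untouched subtree, so the old array remains valid without any in-place mutation. A hypothetical Python sketch (the name `qset` and the scalar-leaf quadtree encoding are illustrative, not from the talk):

```python
def qset(t, i, j, n, v):
    """Functionally set element (i, j) of an n x n quadtree t (n a power of 2,
    scalar leaves, quadrants ordered nw, ne, sw, se).  Only the O(log n) path
    from root to leaf is rebuilt; every other subtree is shared between the
    old and new matrices, sidestepping the aggregate update problem."""
    if n == 1:
        return v
    h = n // 2
    nw, ne, sw, se = t
    if i < h and j < h:
        return (qset(nw, i, j, h, v), ne, sw, se)
    if i < h:
        return (nw, qset(ne, i, j - h, h, v), sw, se)
    if j < h:
        return (nw, ne, qset(sw, i - h, j, h, v), se)
    return (nw, ne, sw, qset(se, i - h, j - h, h, v))

m = (1, 2, 3, 4)           # 2x2 matrix as scalar quadrants
m2 = qset(m, 1, 0, 2, 9)   # m2 == (1, 2, 9, 4); m itself is unchanged
```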
Parallel algorithms for robust broadband MVDR beamforming
 Journal of Computational Acoustics
, 2002
"... Rapid advancements in adaptive sonar beamforming algorithms have greatly increased the computation and communication demands on beamforming arrays, particularly for applications that require inarray autonomous operation. By coupling each transducer node in a distributed array with a microprocessor, ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Rapid advancements in adaptive sonar beamforming algorithms have greatly increased the computation and communication demands on beamforming arrays, particularly for applications that require in-array autonomous operation. By coupling each transducer node in a distributed array with a microprocessor, and networking them together, embedded parallel processing for adaptive beamformers can significantly reduce execution time, power consumption, and cost, and increase scalability and dependability. In this paper, the basic narrowband Minimum Variance Distortionless Response (MVDR) beamformer is enhanced by incorporating broadband processing, a technique to enhance the robustness of the algorithm, and speedup of the matrix inversion task using sequential regression. Using this Robust Broadband MVDR (RBMVDR) algorithm as a sequential baseline, two novel parallel algorithms are developed and analyzed. Performance results are included, among them execution time, scaled speedup, parallel efficiency, result latency, and memory utilization. The testbed used is a distributed system comprised of a cluster of personal computers connected by a conventional network.
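The narrowband MVDR solution underlying this work is the standard closed form w = R^-1 d / (d^H R^-1 d), where R is the sample covariance matrix and d the steering vector. A NumPy sketch, using diagonal loading as a generic robustness device (the paper's specific robustness technique and its sequential-regression matrix inversion are not reproduced here):

```python
import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    """Standard narrowband MVDR weights w = R^-1 d / (d^H R^-1 d).
    `loading` adds a scaled identity to R (diagonal loading), one common way
    to keep the inversion well conditioned; it stands in for, and may differ
    from, the robustness technique used in the paper."""
    n = R.shape[0]
    Rl = R + loading * (np.trace(R).real / n) * np.eye(n)
    Ri_d = np.linalg.solve(Rl, d)          # R^-1 d without forming R^-1
    return Ri_d / (d.conj() @ Ri_d)

# Unit-power uncorrelated noise (R = I) and a look-direction steering vector:
R = np.eye(2)
d = np.array([1.0, 0.0])
w = mvdr_weights(R, d)
# distortionless constraint: d^H w == 1
```

The distortionless constraint d^H w = 1 holds by construction, since the denominator normalizes the response in the look direction.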
A Parallel Algorithm For Distributed Minimum Variance Distortionless Response Beamforming
, 1999
"... .......................................................................................................................vi CHAPTERS 1 INTRODUCTION........................................................................................................7 2 BACKGROUND..................................... ..."
Abstract
 Add to MetaCart
Table of contents (excerpt): 1 Introduction; 2 Background (2.1 MPI, 2.2 Threads, 2.3 Performance measurements); 3 Overview of MVDR Algorithm; 4 Sequential MVDR Algorithm and Performance (4.1 Sequential MVDR tasks ...)