Results 1–9 of 9
Evolution and Application of Functional Programming Languages
 ACM Computing Surveys
, 1989
Abstract

Cited by 45 (0 self)
The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas, such as parallelism, nondeterminism, input/output, and state-oriented computations, are examined with the goal of predicting the future development and application of functional languages.
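To make two of the listed features concrete, here is a minimal Python sketch of higher-order functions and lazy evaluation, with generators standing in for a lazy functional language; the function names are illustrative, not from the survey:

```python
from itertools import islice

def integers_from(n):
    """Lazily generate the infinite stream n, n+1, n+2, ..."""
    while True:
        yield n
        n += 1

def lazy_map(f, xs):
    """Higher-order function: apply f to each element of a lazy stream."""
    return (f(x) for x in xs)

# Only the demanded prefix of the infinite stream is ever computed.
squares = lazy_map(lambda x: x * x, integers_from(1))
print(list(islice(squares, 5)))  # [1, 4, 9, 16, 25]
```

Laziness means the infinite stream above is safe: elements are produced only when demanded by `islice`.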
SVP: a Model Capturing Sets, Streams, and Parallelism
 In Proceedings of the 18th VLDB Conference
, 1992
Abstract

Cited by 22 (0 self)
We describe the SVP data model. The goal of SVP is to model both set and stream data, and to model parallelism in bulk data processing. SVP also shows promise for other parallel processing applications. SVP models collections, which include sets and streams as special cases. Collections are represented as ordered tree structures, and divide-and-conquer mappings are easily defined on these structures. We show that many useful database mappings (queries) have a divide-and-conquer format when specified using collections, and that this specification exposes parallelism. We formalize a class of divide-and-conquer mappings on collections called SVP-transducers. SVP-transducers generalize aggregates, set mappings, stream transductions, and scan computations. At the same time, they have a rigorous semantics based on continuity with respect to collection orderings, and permit implicit specification of both independent and pipeline parallelism.
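The "divide-and-conquer format" over tree-structured collections can be sketched as follows. This is an illustrative Python rendering, not the SVP formalism itself, and all names are hypothetical:

```python
def dc_map_reduce(f, combine, coll):
    """Divide-and-conquer mapping over a collection represented as an
    ordered binary tree: a leaf is ('leaf', x); an internal node is
    ('node', left, right). The two subtrees are independent, which is
    what exposes parallelism."""
    if coll[0] == 'leaf':
        return f(coll[1])
    _, left, right = coll
    # The two recursive calls could run in parallel.
    return combine(dc_map_reduce(f, combine, left),
                   dc_map_reduce(f, combine, right))

# A three-element collection {1, 2, 3} as an ordered tree.
tree = ('node', ('node', ('leaf', 1), ('leaf', 2)), ('leaf', 3))
print(dc_map_reduce(lambda x: x * x, lambda a, b: a + b, tree))  # 14
```

Aggregates, set mappings, and scans all fit this shape by varying `f` and `combine`.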
Equal Rights for Functional Objects or, The More Things Change, The More They Are the Same
, 1993
Abstract

Cited by 22 (7 self)
DATA TYPES A. Comparing Type Objects There has been as much confusion over type identity as there has been over object identity, although the type identity problem is usually referred to as the type equivalence problem [Aho86,s.6.3] [Wegbreit74] [Welsh77]. The type identity problem is to determine when two types are equal, so that type checking can be done in a programming language. Algol68 takes the point of view of "structural" equivalence, in which non-recursive types that are built up from primitive types using the same type constructors in the same order should compare equal, while Ada takes the point of view of "name" equivalence, in which types are equivalent if and only if they have the same name. We will ignore the software engineering issues of which kind of type equivalence makes for better-engineered programs, and focus on the basic issue of type equivalence itself. We note that if a type system offers the type TYPE, i.e., it offers first-class representations of typ...
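A loose Python analogy for the two views of type equivalence (this is not the actual Algol 68 or Ada rule; `Meters`, `UserId`, and `OrderId` are invented names):

```python
from typing import NewType

# Structural view (Algol-68 style): an alias built from the same
# constructor denotes the very same type, so the two compare equal.
Meters = list
print(Meters is list)  # True

# Name view (Ada style): NewType mints a distinct named type even
# though the underlying representation is int in both cases.
UserId = NewType('UserId', int)
OrderId = NewType('OrderId', int)
print(UserId is OrderId)  # False: different names, same representation
```

Under structural equivalence the two `int`-based types above would be interchangeable; under name equivalence they are not.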
How to Eliminate Pivoting from Gaussian Elimination: By Randomizing Instead
, 1995
Abstract

Cited by 7 (5 self)
Gaussian elimination is probably the best known and most widely used method for solving linear systems, computing determinants, and finding matrix decompositions. While the basic elimination procedure is simple to state and implement, it becomes more complicated with the addition of a pivoting procedure, which handles degenerate matrices having zero elements on the diagonal. Pivoting can significantly complicate the algorithm, increase data movement, and reduce speed, particularly on high-performance computers. In this paper we propose an alternative scheme for performing Gaussian elimination that first preconditions the input matrix by multiplying it with random matrices, whose inverses can be applied subsequently. At the expense of these multiplications, and making the linear system dense if it was not already, this approach makes the system `nondegenerate' (subsystems have full rank) with probability 1. This preconditioning has the effect of (almost certainly) eliminating the ...
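A small pure-Python sketch of the idea, under illustrative assumptions (a 2x2 system, uniform random preconditioners, and invented helper names): multiply the system by random matrices so that no-pivot elimination succeeds, then undo the multiplications.

```python
import random

random.seed(0)  # for a reproducible illustration

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def no_pivot_solve(A, b):
    """Gaussian elimination with NO pivoting; fails on a zero pivot."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        if abs(M[k][k]) < 1e-12:
            raise ZeroDivisionError('zero pivot')
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def random_matrix(n):
    # Entries bounded away from 0; nonsingular with probability 1.
    return [[random.uniform(0.5, 1.5) for _ in range(n)] for _ in range(n)]

# A has a zero leading diagonal entry, so plain no-pivot elimination fails.
A = [[0.0, 1.0], [1.0, 0.0]]
b = [2.0, 3.0]
U, V = random_matrix(2), random_matrix(2)
UAV = matmul(matmul(U, A), V)                        # precondition: U A V
Ub = [sum(U[i][j] * b[j] for j in range(2)) for i in range(2)]
y = no_pivot_solve(UAV, Ub)                          # solve (U A V) y = U b
x = [sum(V[i][j] * y[j] for j in range(2)) for i in range(2)]  # x = V y
print([round(v, 6) for v in x])  # [3.0, 2.0]
```

The cost is the extra multiplications (and possible fill-in), traded against the data movement that pivoting would require.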
A Toolkit for Parallel Functional Programming
, 1995
Abstract

Cited by 4 (2 self)
... this paper is on writing parallel programs, we will not say more about programming in Miranda.
A Randomizing Butterfly Transformation Useful in Block Matrix Computations
, 1995
Abstract

Cited by 3 (3 self)
We present a new randomization scheme that preconditions an input linear system by multiplying it with nonsingular random matrices. At the expense of these multiplications, and making the linear system dense if it was not already, this approach makes blocks of the resulting system nonsingular with probability 1. Specifically, in this paper we consider random matrices that are `random butterflies', and call their application a Random Butterfly Transformation (RBT). RBTs can be performed efficiently, and in particular should be useful on parallel and vector machines with architectures that support FFT-like computations. Block Gaussian elimination is an important example of an algorithm with which this RBT can be used. Gaussian elimination is complicated by pivoting, which handles degenerate matrices having zero elements on the diagonal. Pivoting can significantly complicate the algorithm, increase data movement, and reduce speed, particularly on high-performance computers. We show that t...
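A sketch of one common block form for a random butterfly, B = (1/sqrt 2)[[R0, R1], [R0, -R1]] with R0, R1 random diagonal matrices; the size, distribution, and function names here are assumptions for illustration, not the paper's exact construction:

```python
import math
import random

def random_butterfly(n):
    """Return diagonals (r0, r1) defining the 2n x 2n butterfly
    B = (1/sqrt 2) [[R0, R1], [R0, -R1]].  Entries are drawn from
    uniform(0.5, 1.5), so they are nonzero with probability 1 and
    B is nonsingular."""
    r0 = [random.uniform(0.5, 1.5) for _ in range(n)]
    r1 = [random.uniform(0.5, 1.5) for _ in range(n)]
    return r0, r1

def apply_butterfly(r0, r1, x):
    """Compute B @ x in O(n) using the butterfly structure, analogous
    to one stage of an FFT (no 2n x 2n matrix is ever formed)."""
    n = len(r0)
    s = 1.0 / math.sqrt(2.0)
    top = [s * (r0[i] * x[i] + r1[i] * x[n + i]) for i in range(n)]
    bot = [s * (r0[i] * x[i] - r1[i] * x[n + i]) for i in range(n)]
    return top + bot

r0, r1 = random_butterfly(2)
x = [1.0, 2.0, 3.0, 4.0]
y = apply_butterfly(r0, r1, x)
print(len(y))  # 4
```

Because each output entry touches only two input entries, applying a butterfly (or a product of a few of them) is far cheaper than a general dense matrix multiply.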
Using randomization to make recursive matrix algorithms practical
Abstract

Cited by 2 (0 self)
Recursive block decomposition algorithms (also known as quadtree algorithms when the blocks are all square) have been proposed to solve well-known problems such as matrix addition, multiplication, inversion, determinant computation, block LDU decomposition, and Cholesky and QR factorization. Until now, such algorithms have been seen as impractical, since they require leading submatrices of the input matrix to be invertible (which is rarely guaranteed). We show how to randomize an input matrix to guarantee that submatrices meet these requirements, and to make recursive block decomposition methods practical on well-conditioned input matrices. The resulting algorithms are elegant, and we show the recursive programs can perform well for both dense and sparse matrices, although with randomization dense computations seem most practical. By `homogenizing' the input, randomization provides a way to avoid degeneracy in numerical problems that permits simple recursive quadtree algorithms to solve these problems. We have been investigating alternative computation schemes for large-scale matrix computations. A natural functional programming approach called recursive block decomposition (or quadtree decomposition when the blocks are all square) operates via divide-and-conquer recursion. The basic idea here is that when a matrix is decomposed into smaller blocks, many useful functions of the matrix can be computed recursively. A natural question is whether recursive programming can play a practical role in numerical computation, although today most numerical algorithms are programmed iteratively.
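The quadtree style itself can be shown in a short Python sketch (the representation and names are invented for illustration): a matrix is either a scalar leaf or a 4-tuple of quadrants, and addition and multiplication recurse on the quadrants.

```python
def qadd(a, b):
    """Quadtree matrix addition: a matrix is either a number (1x1 leaf)
    or a 4-tuple (nw, ne, sw, se) of equal-size quadrants."""
    if isinstance(a, tuple):
        return tuple(qadd(x, y) for x, y in zip(a, b))
    return a + b

def qmul(a, b):
    """Quadtree matrix multiplication by divide-and-conquer: each
    result quadrant is a sum of two products of input quadrants."""
    if not isinstance(a, tuple):
        return a * b
    anw, ane, asw, ase = a
    bnw, bne, bsw, bse = b
    return (qadd(qmul(anw, bnw), qmul(ane, bsw)),
            qadd(qmul(anw, bne), qmul(ane, bse)),
            qadd(qmul(asw, bnw), qmul(ase, bsw)),
            qadd(qmul(asw, bne), qmul(ase, bse)))

# The 2x2 identity times the 2x2 matrix [[1, 2], [3, 4]]:
I = (1, 0, 0, 1)
A = (1, 2, 3, 4)
print(qmul(I, A))  # (1, 2, 3, 4)
```

The same two functions work unchanged on deeper trees (larger matrices), and a sparse block can be represented by a single zero leaf, which is where the uniform dense/sparse treatment comes from.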
Matrix Algorithms using Quadtrees (Invited Talk, ATABLE-92)
 in Proc. ATABLE-92
, 1992
Abstract

Cited by 1 (0 self)
Many scheduling and synchronization problems for large-scale multiprocessing can be overcome using functional (or applicative) programming. With this observation, it is strange that so much attention within the functional programming community has focused on the "aggregate update problem" [10]: essentially how to implement FORTRAN arrays. This situation is strange because in-place updating of aggregates belongs more to uniprocessing than to mathematics. Several years ago functional style drew me to treatment of d-dimensional arrays as 2^d-ary trees; in particular, matrices become quaternary trees or quadtrees. This convention yields efficient recopying-cum-update of any array; recursive, algebraic decomposition of conventional arithmetic algorithms; and uniform representations and algorithms for both dense and sparse matrices. For instance, any nonsingular subtree is a candidate as the pivot block for Gaussian elimination; the restriction actually helps identification of pivot b...
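The "efficient recopying-cum-update" claim can be sketched in Python: a purely functional update of one matrix element recopies only the root-to-leaf path and shares every untouched quadrant with the original (the representation and names here are illustrative):

```python
def qupdate(tree, size, i, j, value):
    """Functionally set element (i, j) of a size x size quadtree matrix
    (a number at a leaf, or a 4-tuple (nw, ne, sw, se) of quadrants).
    Only the O(log size) nodes on the path to the leaf are rebuilt;
    the other three quadrants at each level are shared, not copied."""
    if size == 1:
        return value
    h = size // 2
    nw, ne, sw, se = tree
    if i < h and j < h:
        return (qupdate(nw, h, i, j, value), ne, sw, se)
    if i < h:
        return (nw, qupdate(ne, h, i, j - h, value), sw, se)
    if j < h:
        return (nw, ne, qupdate(sw, h, i - h, j, value), se)
    return (nw, ne, sw, qupdate(se, h, i - h, j - h, value))

A = (1, 2, 3, 4)            # the 2x2 matrix [[1, 2], [3, 4]]
B = qupdate(A, 2, 0, 1, 9)  # set A[0][1] = 9, without mutating A
print(A, B)  # (1, 2, 3, 4) (1, 9, 3, 4)
```

Because the original tree is untouched, concurrent readers never need to synchronize with updaters, which is the multiprocessing advantage the abstract alludes to.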