Results 1 - 4 of 4
Sampling and Analytical Techniques for Data Distribution of Parallel Sparse Computation
, 1997
Abstract

Cited by 9 (7 self)
We present a compile-time method to select compression and distribution schemes for sparse matrices that are computed using Fortran 90 array intrinsic operations. The selection process samples input sparse matrices to determine their sparsity structures. It is also guided by cost functions of various sparse routines as measured on the target machine. The Fortran 90 array expression is then transformed into a sparse array expression that calls the selected compression and distribution routines.

1 Introduction
It has long been a challenging research topic to devise general guidelines for selecting efficient compression and distribution schemes for parallel execution of sparse matrix computations. We feel that this problem is difficult for at least the following three reasons. First, the cost of a sparse matrix computation depends greatly on the structures (i.e., the distributions of nonzero elements) of its input matrices [2]. Such information, however, may not be available at c...
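The sampling idea in this abstract can be illustrated with a minimal sketch: probe random entries to estimate nonzero density, then compare simple storage-cost estimates for a dense layout versus compressed row storage. The class name, threshold, and cost model below are illustrative assumptions, not the paper's actual method.

```java
import java.util.Random;

public class SchemeSelector {
    enum Scheme { DENSE, COMPRESSED_ROW }

    // Estimate nonzero density by probing `samples` random entries.
    static double sampleDensity(double[][] m, int samples, long seed) {
        Random rng = new Random(seed);
        int nonzero = 0;
        for (int s = 0; s < samples; s++) {
            int i = rng.nextInt(m.length);
            int j = rng.nextInt(m[0].length);
            if (m[i][j] != 0.0) nonzero++;
        }
        return (double) nonzero / samples;
    }

    // Choose the scheme with the lower estimated storage cost:
    // dense stores every entry; CSR stores roughly two words per
    // nonzero plus one row pointer per row (a toy cost function).
    static Scheme select(double[][] m, int samples) {
        double density = sampleDensity(m, samples, 42L);
        double denseCost = (double) m.length * m[0].length;
        double csrCost = 2.0 * density * m.length * m[0].length + m.length + 1;
        return csrCost < denseCost ? Scheme.COMPRESSED_ROW : Scheme.DENSE;
    }

    public static void main(String[] args) {
        double[][] sparse = new double[100][100];
        sparse[3][7] = 1.0;                       // almost all zeros
        double[][] dense = new double[100][100];
        for (double[] row : dense) java.util.Arrays.fill(row, 1.0);
        System.out.println(select(sparse, 200));  // COMPRESSED_ROW
        System.out.println(select(dense, 200));   // DENSE
    }
}
```

In the paper's setting the cost functions are measured on the target machine rather than fixed constants, and the decision also covers distribution across processors.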
Towards Automatic Support of Parallel Sparse Computation in Java with Continuous Compilation
, 1997
Abstract

Cited by 6 (4 self)
In this paper, we present a generic matrix class in Java and a runtime environment with continuous compilation aiming to support automatic parallelization of sparse computations in distributed environments. Our package comes with a collection of matrix classes, including operators for dense, sparse, and parallel matrices on distributed-memory environments. In our environment, a program such as a conjugate gradient solver is written by users with high-level generic matrix notations in Java. At runtime, with the help of profiling information and a cost model, our runtime system employs continuous compilation schemes to rewrite the matrix notations into corresponding parallel operations. Our system is particularly useful in optimizing sparse computations in distributed environments. Our runtime compilation environment selects compression and distribution schemes for sparse matrices on distributed-memory environments according to the access patterns of the programs and the nonzero ...
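The flavor of a generic matrix class whose concrete representation is chosen at run time can be sketched as follows. All class names, the hash-map sparse storage, and the density threshold are invented for illustration; the paper's system additionally uses profiling information and rewrites operations continuously.

```java
import java.util.HashMap;
import java.util.Map;

interface Matrix {
    double get(int i, int j);
}

class DenseMatrix implements Matrix {
    final double[][] data;
    DenseMatrix(double[][] data) { this.data = data; }
    public double get(int i, int j) { return data[i][j]; }
}

class SparseMatrix implements Matrix {
    final Map<Long, Double> entries = new HashMap<>();
    final int cols;
    SparseMatrix(double[][] data) {
        this.cols = data[0].length;
        for (int i = 0; i < data.length; i++)
            for (int j = 0; j < cols; j++)
                if (data[i][j] != 0.0) entries.put((long) i * cols + j, data[i][j]);
    }
    public double get(int i, int j) {
        return entries.getOrDefault((long) i * cols + j, 0.0);
    }
}

public class MatrixFactory {
    // Pick a representation at run time: keyed storage pays off only
    // when the matrix is mostly zeros (the 10% threshold is illustrative).
    public static Matrix of(double[][] data) {
        int nnz = 0;
        for (double[] row : data) for (double v : row) if (v != 0.0) nnz++;
        double density = (double) nnz / ((double) data.length * data[0].length);
        return density < 0.1 ? new SparseMatrix(data) : new DenseMatrix(data);
    }
}
```

User code programs against `Matrix` only, so the runtime is free to swap representations as the observed access pattern changes.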
A Functional Perspective of Array Primitives
 In 2nd Fuji Int. Workshop on Functional and Logic Programming
, 1996
Abstract

Cited by 3 (0 self)
We propose a set of array primitives based on experience from structural functional programming. We argue that these primitives provide the right level of abstraction for array computation. These primitives are derived from various perspectives of arrays, with each perspective imposing a particular algebraic structure and demanding specific efficiency requirements. We follow the Bird-Meertens formalism [7] [22], in particular the approach used by Meijer, Fokkinga, and Paterson [23], when designing these primitives, but also take into consideration efficiency issues in their implementations.

1. Motivation
Functional programming languages, with the exceptions of APL and SISAL, are not well known for large-scale scientific computation. We identify inadequate language support for array operations (again with apologies to APL and SISAL) as a major reason why they are not widely used for scientific applications. Some functional languages provide little support for array operations, which...
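The kind of primitives meant here, in the Bird-Meertens style, can be sketched with map, zipWith, and reduce over arrays, each admitting an obvious efficient loop implementation. The names and signatures below are illustrative, not the paper's actual primitive set.

```java
import java.util.function.DoubleBinaryOperator;
import java.util.function.DoubleUnaryOperator;

public class ArrayPrims {
    // Apply f to every element, producing a new array.
    static double[] map(DoubleUnaryOperator f, double[] xs) {
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) out[i] = f.applyAsDouble(xs[i]);
        return out;
    }

    // Combine two equal-length arrays element-wise.
    static double[] zipWith(DoubleBinaryOperator f, double[] xs, double[] ys) {
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) out[i] = f.applyAsDouble(xs[i], ys[i]);
        return out;
    }

    // Fold the array down to one value, starting from `unit`.
    static double reduce(DoubleBinaryOperator f, double unit, double[] xs) {
        double acc = unit;
        for (double x : xs) acc = f.applyAsDouble(acc, x);
        return acc;
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 3};
        double[] b = {4, 5, 6};
        // A dot product falls out as a composition of zipWith and reduce.
        double dot = reduce((x, y) -> x + y, 0.0, zipWith((x, y) -> x * y, a, b));
        System.out.println(dot); // 32.0
    }
}
```

The algebraic point is that compositions like `reduce . zipWith` obey fusion laws, so a compiler can merge them into a single loop without intermediate arrays.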
Compilation of Bottom-Up Evaluation for a Pure Logic Programming Language
Abstract
Abstraction in programming languages is usually achieved at the price of run-time efficiency. This thesis presents a compilation scheme for the Starlog logic programming language. In spite of being very abstract, Starlog can be compiled to an efficient executable form. Starlog implements stratified negation and includes logically pure facilities for input and output, aggregation, and destructive assignment. The main new work described in this thesis is (1) a bottom-up evaluation technique that is optimised for Starlog programs, (2) a static indexing structure that allows significant compile-time optimisation, (3) an intermediate language to represent bottom-up logic programs, and (4) an evaluation of automatic data structure selection techniques. It is shown empirically that the performance of compiled Starlog programs can be competitive with that of equivalent hand-coded programs.

Acknowledgements This thesis was only possible with the wisdom, patience, and optimism of my supervisor, John Cleary. I would also like to thank the other members of the Starlog project (some
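The general technique the thesis's Starlog-specific scheme builds on, semi-naive bottom-up evaluation, can be sketched for the classic transitive-closure program `path(X,Z) :- edge(X,Y), path(Y,Z)`: each round joins only against the facts derived in the previous round, until no new facts appear. This is a textbook Datalog sketch, not code from the thesis.

```java
import java.util.HashSet;
import java.util.Set;

public class BottomUp {
    record Pair(int a, int b) {}

    static Set<Pair> transitiveClosure(Set<Pair> edges) {
        Set<Pair> path = new HashSet<>(edges);   // path(X,Y) :- edge(X,Y).
        Set<Pair> delta = new HashSet<>(edges);  // facts new in the last round
        while (!delta.isEmpty()) {
            Set<Pair> next = new HashSet<>();
            // Join edge(X,Y) with only the *new* path(Y,Z) facts,
            // so no derivation is recomputed across rounds.
            for (Pair e : edges)
                for (Pair p : delta)
                    if (e.b() == p.a()) {
                        Pair derived = new Pair(e.a(), p.b());
                        if (!path.contains(derived)) next.add(derived);
                    }
            path.addAll(next);
            delta = next;
        }
        return path;                             // fixpoint reached
    }

    public static void main(String[] args) {
        Set<Pair> edges = Set.of(new Pair(1, 2), new Pair(2, 3), new Pair(3, 4));
        System.out.println(transitiveClosure(edges).size()); // 6 paths
    }
}
```

The thesis goes further than this sketch: its static indexing structure replaces the nested-loop join, and data structures for the fact sets are selected automatically.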