Results 1 - 9 of 9
Equational term graph rewriting
 FUNDAMENTA INFORMATICAE
, 1996
"... We present an equational framework for term graph rewriting with cycles. The usual notion of homomorphism is phrased in terms of the notion of bisimulation, which is wellknown in process algebra and concurrency theory. Specifically, a homomorphism is a functional bisimulation. We prove that the bis ..."
Abstract

Cited by 71 (8 self)
We present an equational framework for term graph rewriting with cycles. The usual notion of homomorphism is phrased in terms of the notion of bisimulation, which is well-known in process algebra and concurrency theory. Specifically, a homomorphism is a functional bisimulation. We prove that the bisimilarity class of a term graph, partially ordered by functional bisimulation, is a complete lattice. It is shown how Equational Logic induces a notion of copying and substitution on term graphs, or systems of recursion equations, and also suggests the introduction of hidden or nameless nodes in a term graph. Hidden nodes can be used only once. The general framework of term graphs with copying is compared with the more restricted copying facilities embodied in the µ-rule, and translations are given between term graphs and µ-expressions. Using these, a proof system is given for µ-expressions that is complete for the semantics given by infinite tree unwinding. Next, orthogonal term graph rewrite ...
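The abstract's central notion, a homomorphism as a functional bisimulation, can be made concrete. The following is an illustrative sketch in Python, not the paper's formalism: a term graph is encoded as a dict mapping a node to its label and successor list, and a node map is checked for being a functional bisimulation, i.e. preserving labels and successors pointwise. The encoding and all names are my own assumptions.

```python
# Term graph: node -> (label, [successor nodes]). A map hmap from graph g
# to graph h is a functional bisimulation when it preserves each node's
# label and maps its successor list pointwise onto the target's.

def is_functional_bisimulation(g, h, hmap):
    for node, (label, succs) in g.items():
        label2, succs2 = h[hmap[node]]
        if label != label2:
            return False
        if [hmap[s] for s in succs] != succs2:  # pointwise successor check
            return False
    return True

# g: x = f(y, z); y = a; z = a   (two separate copies of a)
g = {"x": ("f", ["y", "z"]), "y": ("a", []), "z": ("a", [])}
# h: u = f(v, v); v = a          (the two copies collapsed by sharing)
h = {"u": ("f", ["v", "v"]), "v": ("a", [])}
hmap = {"x": "u", "y": "v", "z": "v"}
print(is_functional_bisimulation(g, h, hmap))  # True
```

Collapsing the two copies of `a` moves within the bisimilarity class; the paper's result is that this class, ordered by such functional bisimulations, forms a complete lattice.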
Cyclic Lambda Calculi
, 1997
"... . We precisely characterize a class of cyclic lambdagraphs, and then give a sound and complete axiomatization of the terms that represent a given graph. The equational axiom system is an extension of lambda calculus with the letrec construct. In contrast to current theories, which impose restrictio ..."
Abstract

Cited by 36 (5 self)
We precisely characterize a class of cyclic lambda-graphs, and then give a sound and complete axiomatization of the terms that represent a given graph. The equational axiom system is an extension of lambda calculus with the letrec construct. In contrast to current theories, which impose restrictions on where the rewriting can take place, our theory is very liberal, e.g., it allows rewriting under lambda-abstractions and on cycles. As shown previously, the reduction theory is non-confluent. We thus introduce an approximate notion of confluence. Using this notion we define the infinite normal form or Lévy-Longo tree of a cyclic term. We show that the infinite normal form defines a congruence on the set of terms. We relate our cyclic lambda calculus to the traditional lambda calculus and to the infinitary lambda calculus. Since most implementations of non-strict functional languages rely on sharing to avoid repeating computations, we develop a variant of our calculus that enforces the ...
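The idea of reading a cyclic (letrec-style) term through its infinite unwinding can be sketched with a toy unwinder that cuts the infinite tree off at a finite depth. This is a hedged illustration under my own term encoding, not the paper's calculus; it assumes every equation is guarded (a bare `x = x` would loop).

```python
# Terms: a string is a recursion variable if it appears in env, any other
# value is a leaf, and a tuple (fun, arg, ...) is a function application.

def unwind(env, term, depth):
    """Unwind `term` under the recursion equations `env` to a finite tree
    of the given depth; branches are cut off with the symbol 'bot'."""
    if depth == 0:
        return "bot"
    if isinstance(term, str) and term in env:
        return unwind(env, env[term], depth)  # assumes guarded equations
    if not isinstance(term, tuple):
        return term
    fun, *args = term
    return (fun,) + tuple(unwind(env, a, depth - 1) for a in args)

# letrec x = Cons(1, x) in x  -- the infinite list 1 : 1 : 1 : ...
env = {"x": ("Cons", 1, "x")}
print(unwind(env, "x", 3))
# ('Cons', 1, ('Cons', 1, ('Cons', 'bot', 'bot')))
```

The sequence of deeper and deeper cut-offs approximates the infinite normal form; the paper's Lévy-Longo trees play this role for cyclic lambda terms.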
A Provably Time-Efficient Parallel Implementation of Full Speculation
 In Proceedings of the 23rd ACM Symposium on Principles of Programming Languages
, 1996
"... Speculative evaluation, including leniency and futures, is often used to produce high degrees of parallelism. Existing speculative implementations, however, may serialize computation because of their implementation of queues of suspended threads. We give a provably efficient parallel implementation ..."
Abstract

Cited by 17 (5 self)
Speculative evaluation, including leniency and futures, is often used to produce high degrees of parallelism. Existing speculative implementations, however, may serialize computation because of their implementation of queues of suspended threads. We give a provably efficient parallel implementation of a speculative functional language on various machine models. The implementation includes proper parallelization of the necessary queuing operations on suspended threads. Our target machine models are a butterfly network, hypercube, and PRAM. To prove the efficiency of our implementation, we provide a cost model using a profiling semantics and relate the cost model to implementations on the parallel machine models.

1 Introduction

Futures, lenient languages, and several implementations of graph reduction for lazy languages all use speculative evaluation (call-by-speculation [15]) to expose parallelism. The basic idea of speculative evaluation, in this context, is that the evaluation of a...
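The future construct the abstract refers to can be illustrated with Python's standard `concurrent.futures`; this is a minimal analogue of my own, not the paper's implementation. Both branches of a sum are spawned speculatively, and `result()` plays the role of the "touch" that suspends the consumer until the value is ready.

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """Naive Fibonacci, standing in for an expensive computation."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

with ThreadPoolExecutor(max_workers=2) as pool:
    # Spawn both operands as futures; work proceeds in parallel.
    a = pool.submit(fib, 20)
    b = pool.submit(fib, 21)
    # Touching the futures blocks until each value is available.
    print(a.result() + b.result())  # 17711
```

The paper's concern is precisely what this toy hides: making the queues of threads suspended on unevaluated futures themselves parallel, so they do not become a serial bottleneck.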
Communication Studies of DMP and SMP Machines
, 1997
"... Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and communication capabilities of parallel machines. In particular, we explicate the communication capabilities of ..."
Abstract

Cited by 11 (0 self)
Understanding the interplay between machines and problems is key to obtaining high performance on parallel machines. This paper investigates the interplay between programming paradigms and the communication capabilities of parallel machines. In particular, we explicate the communication capabilities of the IBM SP2 distributed-memory multiprocessor and the SGI POWER CHALLENGEarray symmetric multiprocessor. Two benchmark problems, bitonic sorting and the Fast Fourier Transform, are selected for experiments. Communication-efficient algorithms are developed to exploit the overlapping capabilities of the machines. Programs are written in the Message Passing Interface (MPI) for portability, and identical codes are used for both machines. Various data sizes and message sizes are used to test the machines' communication capabilities. Experimental results indicate that the communication performance of the multiprocessors is consistent with the size of messages. The SP2 is sensitive to message size but yields ...
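For reference, the bitonic sorting benchmark named in the abstract follows a fixed compare-exchange network. The sketch below is my own sequential reference implementation of the standard algorithm (input length a power of two), not the authors' MPI code; in the paper's setting each compare-exchange across halves corresponds to a message exchange between processors.

```python
def bitonic_sort(a, ascending=True):
    """Sort a power-of-two-length list via the bitonic sorting network."""
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    # Build a bitonic sequence: ascending first half, descending second.
    first = bitonic_sort(a[:half], True)
    second = bitonic_sort(a[half:], False)
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    """Merge a bitonic sequence into sorted order."""
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    a = list(a)
    for i in range(half):  # compare-exchange across the two halves
        out_of_order = a[i] > a[i + half] if ascending else a[i] < a[i + half]
        if out_of_order:
            a[i], a[i + half] = a[i + half], a[i]
    return bitonic_merge(a[:half], ascending) + bitonic_merge(a[half:], ascending)

print(bitonic_sort([7, 3, 6, 2, 5, 1, 4, 0]))  # [0, 1, 2, 3, 4, 5, 6, 7]
```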
Relating Graph and Term Rewriting via Böhm Models
 in Engineering, Communication and Computing 7
, 1993
"... . Dealing properly with sharing is important for expressing some of the common compiler optimizations, such as common subexpressions elimination, lifting of free expressions and removal of invariants from a loop, as sourcetosource transformations. Graph rewriting is a suitable vehicle to accommoda ..."
Abstract

Cited by 8 (4 self)
Dealing properly with sharing is important for expressing some of the common compiler optimizations, such as common subexpression elimination, lifting of free expressions and removal of invariants from a loop, as source-to-source transformations. Graph rewriting is a suitable vehicle to accommodate these concerns. In [4] we have presented a term model for graph rewriting systems (GRSs) without interfering rules, and shown the partial correctness of the aforementioned optimizations. In this paper we define a different model for GRSs, which allows us to prove total correctness of those optimizations. Unlike [4], we will discard sharing from our observations and introduce more restrictions on the rules. We will introduce the notion of Böhm tree for GRSs, and show that in a system without interfering and non-left-linear rules (orthogonal GRSs), Böhm tree equivalence defines a congruence. Total correctness then follows in a straightforward way from showing that if a program M co...
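The common subexpression elimination the abstract mentions is exactly the kind of sharing-introducing, source-to-source transformation that graph rewriting models. A toy sketch of it over a tiny expression-tree encoding (my own, not the paper's GRS framework):

```python
def cse(expr):
    """Hash-cons an expression tree: identical subtrees get one shared
    binding. Returns (bindings in dependency order, name of the root)."""
    table, bindings = {}, []

    def walk(e):
        if not isinstance(e, tuple):     # variable or literal leaf
            return e
        key = (e[0],) + tuple(walk(a) for a in e[1:])
        if key not in table:             # first occurrence: bind it
            table[key] = f"t{len(table)}"
            bindings.append((table[key], key))
        return table[key]                # later occurrences: reuse name

    return bindings, walk(expr)

# (x*y) + (x*y): the duplicated product becomes a single shared node.
print(cse(("+", ("*", "x", "y"), ("*", "x", "y"))))
# ([('t0', ('*', 'x', 'y')), ('t1', ('+', 't0', 't0'))], 't1')
```

The resulting bindings are a system of equations describing a term graph in which the two occurrences of `x*y` share one node; the paper's question is when such transformations are totally correct.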
pH Language Reference Manual, Version 1.0
 Tech. Rep. CSG-Memo-369, MIT Computation Structures Group
, 1995
"... This document must be read in conjunction with the Haskell Report [2] since it describes only the extensions. In Section 2 we present the syntax extensions proper. In Section 3 we present some examples to give a flavor of the language. In Section 4 we present commentary and rationale. Future version ..."
Abstract
This document must be read in conjunction with the Haskell Report [2] since it describes only the extensions. In Section 2 we present the syntax extensions proper. In Section 3 we present some examples to give a flavor of the language. In Section 4 we present commentary and rationale. Future versions of this document will address topics such as loop pragmas, data and work distribution, etc. Background
Multithreaded Systems
"... Machine (TAM) TAM [Culler93] has its roots in the dataflow model of execution, but can be understood independently of dataflow. A language called Threaded Machine Language, TL0, was designed to permit programming using the TAM model. TAM recognizes three major storage resourcescodeblocks, frames ..."
Abstract
Threaded Abstract Machine (TAM)

TAM [Culler93] has its roots in the dataflow model of execution, but can be understood independently of dataflow. A language called Threaded Machine Language, TL0, was designed to permit programming using the TAM model. TAM recognizes three major storage resources (codeblocks, frames, and structures) and the existence of critical processor resources, such as registers. A program is represented by a collection of reentrant codeblocks, corresponding roughly to individual functions or loop bodies in the high-level program text. A codeblock comprises a collection of threads and inlets. Invoking a codeblock involves allocating a frame (much like a conventional call frame), depositing argument values into locations within the frame, and enabling threads within the codeblock for execution. Instructions may refer to registers and to slots in the current frame: the compiler statically determines the frame size for each codeblock and is responsible for correctly using sl...
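The invocation sequence described above (allocate a frame, deposit arguments, enable threads) can be modeled loosely in a few lines. This is my own toy rendering of the TAM scheduling idea, not TL0 or any real TAM implementation.

```python
class Codeblock:
    """A reentrant codeblock: a statically known frame size plus a
    collection of threads, each a function over the activation frame."""
    def __init__(self, nslots, threads):
        self.nslots = nslots
        self.threads = threads

def invoke(codeblock, args, ready_queue):
    frame = [None] * codeblock.nslots     # allocate an activation frame
    frame[:len(args)] = args              # deposit argument values
    for thread in codeblock.threads:      # enable the codeblock's threads
        ready_queue.append((thread, frame))
    return frame

def run(ready_queue):
    """Drain the ready queue, running each enabled thread on its frame."""
    results = []
    while ready_queue:
        thread, frame = ready_queue.pop(0)
        results.append(thread(frame))
    return results

def double_thread(frame):
    frame[1] = frame[0] * 2               # compute into a frame slot
    return frame[1]

double = Codeblock(2, [double_thread])
queue = []
invoke(double, [21], queue)
print(run(queue))  # [42]
```

Because each invocation gets its own frame while the codeblock is shared, the same codeblock can be active in many invocations at once, which is what makes it reentrant.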
Thread Programming in SIMPL
, 2000
"... Advances in parallel architectures in recent years fueled the demand of a new class of programming languages that can simplify and promote parallel programming on a wide variety of parallel computers. This article introduces a new explicitly parallel programming language, called SIMPL. Three constru ..."
Abstract
Advances in parallel architectures in recent years have fueled demand for a new class of programming languages that can simplify and promote parallel programming on a wide variety of parallel computers. This article introduces a new explicitly parallel programming language, called SIMPL. Three constructs are designed to support parallelism and to coordinate threads. The thread statement is used for forking threads. The lock and wait statements are used for synchronization. SIMPL is designed to support functional and data parallelism and to run on a wide variety of architectures. The simplicity and expressiveness of the parallel constructs are demonstrated. The implementation of these constructs is also discussed.

1. Introduction

Parallel programming languages differ greatly in their support for parallel programming. Some languages, such as High Performance Fortran [Koelbel 1994], are designed to support data or array parallelism. Other languages, such as Ada [Ada 1995], are designed to ...
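SIMPL's three constructs have rough counterparts in Python's standard `threading` module, which gives a concrete (if loose) analogue: `Thread` for the thread statement, `Lock` for the lock statement, and `Condition.wait` for the wait statement. The mapping is my own; SIMPL's actual semantics may differ.

```python
import threading

counter = 0
finished = 0
lock = threading.Lock()               # "lock": mutual exclusion
done = threading.Condition(lock)

def worker():
    global counter, finished
    with lock:                        # critical section
        counter += 1
        finished += 1
        done.notify_all()             # wake anyone waiting on progress

# "thread": fork four workers.
threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()

# "wait": block until all four workers have finished.
with lock:
    while finished < 4:
        done.wait()
print(counter)  # 4
```

Note the wait is wrapped in a `while` loop re-checking the condition, the standard guard against spurious wakeups when using condition variables.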
A TYPE-BASED APPROACH TO PARALLELIZATION
"... During the course of this research I benefited enormously from the knowledge and expertise of my adviser, Associate Professor Khoo Siau Cheng. I especially thank him for his patient guidance on constructing the PType system as well as proving its correctness. I also thank him for his hospitality and ..."
Abstract
During the course of this research I benefited enormously from the knowledge and expertise of my adviser, Associate Professor Khoo Siau Cheng. I especially thank him for his patient guidance on constructing the PType system as well as proving its correctness. I also thank him for his hospitality and kindness in general. I would like to thank Associate Professor Chin Wei Ngan and Associate Professor Hu Zhenjiang for giving me the chance of a research collaboration at the Information Processing Laboratory of the Department of Mathematical Informatics at the University of Tokyo for six weeks. I thank Hu for hosting my visit, for his patience in teaching me the concept of context preservation, and for valuable discussions leading to the idea of the PType system. I am also grateful to Chin Wei Ngan, Neil Jones, Martin Rinard and Alan Edelman for sitting in my presentation on the PType system and giving valuable feedback. Thanks to all my colleagues in the PLS (Programming Language and System) laboratory II of SoC for their moral support. Especially, I thank Corneliu Popeea and Razvan Musaloiu-E. for helping me resolve some system problems.