Results 1–10 of 38
A Provable Time and Space Efficient Implementation of NESL
In International Conference on Functional Programming, 1996
Abstract

Cited by 82 (9 self)
In this paper we prove time and space bounds for the implementation of the programming language NESL on various parallel machine models. NESL is a sugared typed λ-calculus with a set of array primitives and an explicit parallel map over arrays. Our results extend previous work on provable implementation bounds for functional languages by considering space and by including arrays. For modeling the cost of NESL we augment a standard call-by-value operational semantics to return two cost measures: a DAG representing the sequential dependence in the computation, and a measure of the space taken by a sequential implementation. We show that a NESL program with w work (nodes in the DAG), d depth (levels in the DAG), and s sequential space can be implemented on a p-processor butterfly network, hypercube, or CRCW PRAM using O(w/p + d log p) time and O(s + dp log p) reachable space. For programs with sufficient parallelism these bounds are optimal in that they give linear speedup and use space within a constant factor of the sequential space.
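The work/depth cost model underlying these bounds can be illustrated with a small sketch (hypothetical code, not from the paper): work is the number of nodes in the computation DAG, depth is the number of levels on the longest dependence path.

```python
# Sketch (not from the paper): work and depth of a computation DAG.
# A DAG is a dict mapping each node to the list of nodes it depends on.
def work(dag):
    """Work = total number of nodes (operations)."""
    return len(dag)

def depth(dag):
    """Depth = number of levels on the longest dependence path."""
    memo = {}
    def d(node):
        if node not in memo:
            preds = dag[node]
            memo[node] = 1 + (max(d(p) for p in preds) if preds else 0)
        return memo[node]
    return max(d(n) for n in dag)

# Example: summing four values as a balanced tree of additions.
dag = {
    "a": [], "b": [], "c": [], "d": [],
    "ab": ["a", "b"], "cd": ["c", "d"],
    "abcd": ["ab", "cd"],
}
w, dep = work(dag), depth(dag)   # 7 nodes of work across 3 levels
```

With p processors, the paper's bound then specializes these two measures to O(w/p + d log p) time on the networks considered.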
Semantics of Memory Management for Polymorphic Languages
In 1st Workshop on Higher Order Operational Techniques in Semantics, A. Gordon and A. Pitts, Eds., Publications of the Newton Institute, 1997
Abstract

Cited by 42 (8 self)
We present a static and dynamic semantics for an abstract machine that evaluates expressions of a polymorphic programming language. Unlike traditional semantics, our abstract machine exposes many important issues of memory management, such as value sharing and control representation. We prove the soundness of the static semantics with respect to the dynamic semantics using traditional techniques. We then show how these same techniques may be used to establish the soundness of various memory management strategies, including type-based, tag-free garbage collection, tail-call elimination, and environment strengthening.
Abstracting Abstract Machines
2010
Abstract

Cited by 33 (20 self)
We describe a derivational approach to abstract interpretation that yields novel and transparently sound static analyses when applied to well-established abstract machines. To demonstrate the technique and support our claim, we transform the CEK machine of Felleisen and Friedman, a lazy variant of Krivine's machine, and the stack-inspecting CM machine of Clements and Felleisen into abstract interpretations of themselves. The resulting analyses bound temporal ordering of program events; predict return-flow and stack-inspection behavior; and approximate the flow and evaluation of by-need parameters. For all of these machines, we find that a series of well-known concrete machine refactorings, plus a technique we call store-allocated continuations, leads to machines that abstract into static analyses simply by bounding their stores. We demonstrate that the technique scales up uniformly to allow static analysis of realistic language features, including tail calls, conditionals, side effects, exceptions, first-class continuations, and even garbage collection.
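The starting point of this recipe can be sketched on a tiny language (an illustrative sketch, not the paper's artifact): a CEK-style machine whose continuations are allocated in the store. Bounding the store's address space, as the abstract describes, would then turn the transition relation into a finite abstraction.

```python
# Minimal CEK-style machine with store-allocated continuations
# (an illustrative sketch of the paper's starting point, not its artifact).
import itertools

# Terms: ("var", x) | ("lam", x, body) | ("app", f, a)
fresh = itertools.count(1)           # concrete allocator: fresh addresses

def inject(term):
    store = {0: ("halt",)}           # continuations live in the store
    return (term, {}, store, 0)      # (control, env, store, kont address)

def step(state):
    ctrl, env, store, k = state
    tag = ctrl[0]
    if tag == "var":
        lam, env2 = env[ctrl[1]]     # look up the closure bound to x
        return (lam, env2, store, k)
    if tag == "app":
        a = next(fresh)              # allocate a new continuation address
        store = dict(store)
        store[a] = ("arg", ctrl[2], env, k)
        return (ctrl[1], env, store, a)
    # tag == "lam": the control is a value; inspect the continuation
    kont = store[k]
    if kont[0] == "arg":             # evaluate the pending argument next
        _, arg, kenv, k2 = kont
        a = next(fresh)
        store = dict(store)
        store[a] = ("fun", ctrl, env, k2)
        return (arg, kenv, store, a)
    if kont[0] == "fun":             # apply the waiting function
        _, (_, x, body), fenv, k2 = kont
        env2 = dict(fenv); env2[x] = (ctrl, env)
        return (body, env2, store, k2)
    return None                      # ("halt",): final state

def run(term):
    state = inject(term)
    while True:
        nxt = step(state)
        if nxt is None:
            return state[0]          # final value (a lambda term)
        state = nxt

# ((lambda x. x) (lambda y. y)) evaluates to (lambda y. y)
prog = ("app", ("lam", "x", ("var", "x")), ("lam", "y", ("var", "y")))
```

Abstracting this machine would amount to replacing `fresh` with a finite allocator (e.g., reusing one address per syntactic site) and letting store entries hold sets of values and continuations.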
Monadic State: Axiomatization and Type Safety
In Proceedings of the 2nd ACM SIGPLAN International Conference on Functional Programming (ICFP '97), 1997
Abstract

Cited by 24 (5 self)
Type safety of imperative programs is an area fraught with difficulty and requiring great care. The SML solution to the problem, originally involving imperative type variables, has recently been simplified to the syntactic-value restriction. In Haskell, the problem is addressed in a rather different way using explicit monadic state. We present an operational semantics for state in Haskell and the first full proof of type safety. We demonstrate that the semantic notion of value provided by the explicit monadic types is able to avoid any problems with generalization. When Launchbury and Peyton Jones introduced encapsulated monadic state [11, 12], it came equipped with a denotational semantics and a model-theoretic proof that different state threads did not interact with each other. The encapsulation operator runST had a type which statically guaranteed freedom of interaction, and the guarantee relied on a parametricity proof. What the paper failed to provide was any form...
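The flavor of explicit monadic state discussed here can be sketched in Python (an illustrative analogue of Haskell's State monad; the names are hypothetical, not the Haskell API): a stateful computation is a function from a state to a (result, new state) pair, with unit and bind threading the state.

```python
# Python analogue of explicit monadic state (illustrative sketch only).
def unit(x):
    """return: inject a pure value without touching the state."""
    return lambda s: (x, s)

def bind(m, f):
    """>>=: run m, feed its result to f, thread the state through."""
    def run(s):
        x, s2 = m(s)
        return f(x)(s2)
    return run

def get():
    return lambda s: (s, s)       # read the current state

def put(s2):
    return lambda s: (None, s2)   # overwrite the state

def run_state(m, s0):
    return m(s0)                  # analogue of runState

# A tiny stateful program: increment a counter twice, return its value.
tick = bind(get(), lambda n: bind(put(n + 1), lambda _: unit(n + 1)))
prog = bind(tick, lambda _: tick)
```

In Haskell the type of runST additionally quantifies over the state thread, which is what statically prevents two threads from interacting; the Python sketch cannot express that part.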
Set-Based Analysis for Full Scheme and Its Use in Soft-Typing
1995
Abstract

Cited by 21 (6 self)
Set-Based Analysis is an efficient and accurate program analysis for higher-order languages. It exploits an intuitive notion of approximation that treats program variables as sets of values. We present a new derivation of set-based analysis, based on a reduction semantics, that substantially simplifies previous formulations. Most importantly, the derivation easily extends from a functional core language to include imperative features such as assignments and first-class continuations, and supports the first correctness proof of set-based analysis for these imperative features. The paper includes an implementation of the derived analysis for a Scheme-like language, and describes a soft-typing algorithm that eliminates type checks based on the information produced by the analysis.
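The "variables as sets of values" idea can be sketched as a small constraint fixpoint (hypothetical code, not the paper's reduction-semantics derivation): subset constraints between abstract value sets are iterated until nothing changes.

```python
# Sketch of set-based analysis as subset-constraint solving
# (illustrative only; the paper derives it from a reduction semantics).
def solve(constraints, variables):
    """constraints: list of (src, dst) meaning values(src) is a subset of
    values(dst), where src is ("const", v) or ("var", name)."""
    sets = {v: set() for v in variables}
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for src, dst in constraints:
            vals = {src[1]} if src[0] == "const" else sets[src[1]]
            if not vals <= sets[dst]:
                sets[dst] |= vals
                changed = True
    return sets

# (define x 1) (define y x) (set! y 2):
# y may hold anything x does, plus the assigned 2.
cs = [(("const", 1), "x"), (("var", "x"), "y"), (("const", 2), "y")]
result = solve(cs, ["x", "y"])
```

A soft-typing pass would then drop a run-time type check wherever the computed set for a variable contains only values of the expected type.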
Introspective Pushdown Analysis of Higher-Order Programs
Abstract

Cited by 20 (12 self)
In the static analysis of functional programs, pushdown flow analysis and abstract garbage collection skirt just inside the boundaries of soundness and decidability. Alone, each method reduces analysis times and boosts precision by orders of magnitude. This work illuminates and conquers the theoretical challenges that stand in the way of combining the power of these techniques. The challenge in marrying these techniques is not subtle: computing the reachable control states of a pushdown system relies on limiting access during transition to the top of the stack; abstract garbage collection, on the other hand, needs full access to the entire stack to compute a root set, just as concrete collection does. Introspective pushdown systems resolve this conflict. They provide enough access to the stack to allow abstract garbage collection, but they remain restricted enough to compute control-state reachability, thereby enabling the sound and precise product of pushdown analysis and abstract garbage collection. Experiments reveal synergistic interplay between the techniques, and the fusion demonstrates "better-than-both-worlds" precision.
A Provably Time-Efficient Parallel Implementation of Full Speculation
In Proceedings of the 23rd ACM Symposium on Principles of Programming Languages, 1996
Abstract

Cited by 17 (5 self)
Speculative evaluation, including leniency and futures, is often used to produce high degrees of parallelism. Existing speculative implementations, however, may serialize computation because of their implementation of queues of suspended threads. We give a provably efficient parallel implementation of a speculative functional language on various machine models. The implementation includes proper parallelization of the necessary queuing operations on suspended threads. Our target machine models are a butterfly network, hypercube, and PRAM. To prove the efficiency of our implementation, we provide a cost model using a profiling semantics and relate the cost model to implementations on the parallel machine models. Futures, lenient languages, and several implementations of graph reduction for lazy languages all use speculative evaluation (call-by-speculation [15]) to expose parallelism. The basic idea of speculative evaluation, in this context, is that the evaluation of a...
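The suspended-thread behavior the abstract highlights can be illustrated with a minimal future (hypothetical code, not the paper's implementation): the body runs speculatively in its own thread, and a consumer that touches an unfinished future suspends until the value arrives.

```python
import threading

class Future:
    """Minimal speculative future (illustrative sketch, not the paper's
    implementation): the thunk runs eagerly in its own thread, and
    touching an unfinished future blocks until the value is ready."""
    def __init__(self, thunk):
        self._done = threading.Event()
        self._value = None
        def run():
            self._value = thunk()
            self._done.set()       # wake every suspended toucher
        threading.Thread(target=run, daemon=True).start()

    def touch(self):
        self._done.wait()          # suspend until the producer finishes
        return self._value

# Speculatively evaluate two summands in parallel, then combine them.
a = Future(lambda: sum(range(1000)))
b = Future(lambda: sum(range(1000, 2000)))
total = a.touch() + b.touch()
```

Here the Event hides the queue of suspended touchers; the paper's contribution is implementing that queuing with operations that provably do not serialize the computation.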
The Formal Relationship Between Direct and Continuation-Passing Style Optimizing Compilers: A Synthesis of Two Paradigms
1994
Abstract

Cited by 15 (0 self)
Compilers for higher-order programming languages like Scheme, ML, and Lisp can be broadly characterized as either "direct compilers" or "continuation-passing style (CPS) compilers", depending on their main intermediate representation. Our central result is a precise correspondence between the two compilation strategies. Starting from...
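The CPS side of this correspondence can be sketched with a standard call-by-value CPS transform (an illustrative textbook version, not either compiler's actual intermediate representation):

```python
# Call-by-value CPS transform for the lambda calculus
# (illustrative textbook sketch, not the compilers' IR).
import itertools
_fresh = (f"k{i}" for i in itertools.count())

# Source terms: ("var", x) | ("lam", x, body) | ("app", f, a)
def cps(term, k):
    """Transform term so it passes its result to continuation term k."""
    tag = term[0]
    if tag == "var":
        return ("app", k, term)
    if tag == "lam":
        x, body = term[1], term[2]
        k2 = next(_fresh)
        # a source lambda becomes a lambda that also takes its continuation
        return ("app", k, ("lam", x, ("lam", k2, cps(body, ("var", k2)))))
    # application: evaluate function, then argument, then call with k
    f, a = term[1], term[2]
    fv, av = next(_fresh), next(_fresh)
    return cps(f, ("lam", fv,
               cps(a, ("lam", av,
                   ("app", ("app", ("var", fv), ("var", av)), k)))))

# A variable simply gets handed to the current continuation.
cps_var = cps(("var", "x"), ("var", "halt"))
```

A direct compiler keeps the source shape and recovers the same information through analyses on its direct-style IR; the paper's result makes that correspondence precise.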
Pushdown Control-Flow Analysis of Higher-Order Programs
Abstract

Cited by 14 (9 self)
Context-free approaches to static analysis gain precision over classical approaches by perfectly matching returns to call sites, a property that eliminates spurious interprocedural paths. Vardoulakis and Shivers's recent formulation of CFA2 showed that it is possible (if expensive) to apply context-free methods to higher-order languages and gain the same boost in precision achieved over first-order programs. To this young body of work on context-free analysis of higher-order programs, we contribute a pushdown control-flow analysis framework, which we derive as an abstract interpretation of a CESK machine with an unbounded stack. One instantiation of this framework marks the first polyvariant pushdown analysis of higher-order programs; another marks the first polynomial-time analysis. In the end, we arrive at a framework for control-flow analysis that can efficiently compute pushdown generalizations of classical control-flow analyses.
Extending the Loop Language with Higher-Order Procedural Variables
Special issue of ACM TOCL on Implicit Computational Complexity, 2010
Abstract

Cited by 13 (9 self)
We extend Meyer and Ritchie's Loop language with higher-order procedures and procedural variables and we show that the resulting programming language (called Loop ω) is a natural imperative counterpart of Gödel's System T. The argument is twofold: 1. we define a translation of the Loop ω language into System T and we prove that this translation actually provides a lockstep simulation; 2. using a converse translation, we show that Loop ω is expressive enough to encode any term of System T. Moreover, we define the "iteration rank" of a Loop ω program, which corresponds to the classical notion of "recursion rank" in System T, and we show that both translations preserve ranks. Two applications of these results in the area of implicit complexity are described.
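The correspondence between bounded iteration and System T's recursor can be sketched in Python (hypothetical helpers, not the paper's encoding): a Loop-style for-loop computes f applied n times to x, and the recursor can be recovered from iteration by threading the loop counter through the state.

```python
# Sketch (not the paper's encoding): bounded iteration vs. the recursor.
def loop(n, f, x):
    """Loop-language iteration: apply f exactly n times to x."""
    for _ in range(n):
        x = f(x)
    return x

def rec(n, step, base):
    """System T style recursor: rec(0) = base, rec(k+1) = step(k, rec(k)).
    Encoded with loop by threading the counter k through the state."""
    k, acc = loop(n, lambda s: (s[0] + 1, step(s[0], s[1])), (0, base))
    return acc

# Addition and multiplication by iteration, as in the Loop language.
add = lambda m, n: loop(n, lambda x: x + 1, m)
mul = lambda m, n: loop(n, lambda x: add(x, m), 0)
# Predecessor needs the recursor's access to the counter: pred(k+1) = k.
pred = lambda n: rec(n, lambda k, _: k, 0)
```

The pairing trick inside rec mirrors the classical encoding of the recursor from the iterator, which is one direction of the lockstep simulation the abstract describes.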