Results 1–10 of 217
The Oz Programming Model
Computer Science Today, Lecture Notes in Computer Science, 1995
Cited by 315 (11 self)
The Oz Programming Model (OPM) is a concurrent programming model subsuming higher-order functional and object-oriented programming as facets of a general model. This is particularly interesting for concurrent object-oriented programming, for which no comprehensive formal model existed until now. The model ...
Parameter-Passing and the Lambda Calculus
1991
Cited by 216 (24 self)
The choice of a parameter-passing technique is an important decision in the design of a high-level programming language. To clarify some of the semantic aspects of the decision, we develop, analyze, and compare modifications of the λ-calculus for the most common parameter-passing techniques, i.e., call-by-value and call-by-name combined with pass-by-worth and pass-by-reference, respectively. More specifically, for each parameter-passing technique we provide 1. a program rewriting semantics for a language with side-effects and first-class procedures based on the respective parameter-passing technique; 2. an equational theory that is derived from the rewriting semantics in a uniform manner; 3. a formal analysis of the correspondence between the calculus and the semantics; and 4. a strong normalization theorem for the imperative fragment of the theory (when applicable). A comparison of the various systems reveals that Algol's call-by-name indeed satisfies the well-known β-rule of the orig...
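The call-by-value/call-by-name distinction this abstract studies can be sketched with thunks. The following is an illustrative toy in Python, not the paper's calculus; the function names are hypothetical:

```python
# Hypothetical sketch: call-by-value evaluates an argument once,
# call-by-name passes the unevaluated expression (a thunk) and
# re-evaluates it at every use site.

def cbv(f, arg_expr):
    # call-by-value: evaluate the argument once, up front
    return f(arg_expr())

def cbn(f, arg_expr):
    # call-by-name: hand the thunk itself to the function body
    return f(arg_expr)

evals = []

def expensive():
    evals.append(1)          # record each evaluation of the argument
    return 21

# call-by-value: the argument is evaluated exactly once
r1 = cbv(lambda v: v + v, expensive)
# call-by-name: each use of the thunk re-evaluates it
r2 = cbn(lambda t: t() + t(), expensive)

print(r1, r2, len(evals))  # 42 42 3
```

Both strategies agree on the answer here; they differ in how often the argument's effects and work occur, which is exactly where side-effects make the choice observable.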
Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell
Engineering theories of software construction, 2001
Cited by 113 (1 self)
Functional programming may be beautiful, but to write real applications we must grapple with awkward real-world issues: input/output, robustness, concurrency, and interfacing to programs written in other languages. These lecture notes give an overview of the techniques that have been developed by the Haskell community to address these problems. I introduce various proposed extensions to Haskell along the way, and I offer an operational semantics that explains what these extensions mean. This tutorial was given at the Marktoberdorf Summer School 2000. It appears in the book "Engineering theories of software construction, Marktoberdorf Summer School 2000", ed. C.A.R. Hoare, M. Broy, and R. Steinbrueggen, NATO ASI Series, IOS Press, 2001, pp. 47–96. This version has a few errors corrected compared with the published version. Change summary: Apr 2005: some examples added to Section 5.2.2. March 2002: substantial revision.
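The core monadic idea — I/O actions are first-class values that are composed with bind and only performed by an interpreter — can be sketched outside Haskell. A minimal toy model in Python, with hypothetical `Pure`/`Put` constructors standing in for a tiny action language, not the paper's semantics:

```python
# Toy model of monadic I/O: actions are data, composed with bind;
# only the interpreter `run` actually performs effects.

class Pure:
    """A finished action carrying a result value."""
    def __init__(self, value):
        self.value = value

class Put:
    """Output one line, then continue with the next action."""
    def __init__(self, line, next_action):
        self.line, self.next_action = line, next_action

def bind(action, k):
    # sequence two actions: run `action`, feed its result to `k`
    if isinstance(action, Pure):
        return k(action.value)
    return Put(action.line, bind(action.next_action, k))

def run(action, output):
    # the only place effects happen: walk the action structure
    while isinstance(action, Put):
        output.append(action.line)
        action = action.next_action
    return action.value

prog = bind(Put("hello", Pure(1)), lambda n: Put("world", Pure(n + 1)))
out = []
result = run(prog, out)
print(result, out)  # 2 ['hello', 'world']
```

Because `prog` is just a value, building it performs no I/O; effects occur only when `run` interprets it, mirroring the separation the lecture notes exploit.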
Abstract Models of Memory Management
1995
Cited by 95 (18 self)
Most specifications of garbage collectors concentrate on the low-level algorithmic details of how to find and preserve accessible objects. Often, they focus on bit-level manipulations such as "scanning stack frames," "marking objects," "tagging data," etc. While these details are important in some contexts, they often obscure the more fundamental aspects of memory management: what objects are garbage and why? We develop a series of calculi that are just low-level enough that we can express allocation and garbage collection, yet are sufficiently abstract that we may formally prove the correctness of various memory management strategies. By making the heap of a program syntactically apparent, we can specify memory actions as rewriting rules that allocate values on the heap and automatically dereference pointers to such objects when needed. This formulation permits the specification of garbage collection as a relation that removes portions of the heap without affecting the outcome of the evaluation. Our high-level approach allows us to specify in a compact manner a wide variety of memory management techniques, including standard trace-based garbage collection (i.e., the family of copying and mark/sweep collection algorithms), generational collection, and type-based, tag-free collection. Furthermore, since the definition of garbage is based on the semantics of the underlying language instead of the conservative approximation of inaccessibility, we are able to specify and prove the idea that type inference can be used to collect some objects that are accessible but never used.
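The abstract's high-level view — the heap as a finite map, and collection as removing bindings unreachable from the roots without affecting evaluation — can be sketched concretely. A hypothetical Python illustration, not the paper's calculi:

```python
# The heap is a map from locations to values; a value's fields are
# themselves locations. Garbage = anything unreachable from the roots.

def reachable(heap, roots):
    seen = set()
    stack = list(roots)
    while stack:
        loc = stack.pop()
        if loc in seen or loc not in heap:
            continue
        seen.add(loc)
        stack.extend(heap[loc])   # follow every pointer field
    return seen

def collect(heap, roots):
    # removing unreachable bindings cannot change the outcome
    live = reachable(heap, roots)
    return {loc: fields for loc, fields in heap.items() if loc in live}

heap = {"a": ["b"], "b": [], "c": ["c"], "d": ["a"]}
print(sorted(collect(heap, {"a"})))  # ['a', 'b']  ('c' and 'd' are garbage)
```

Note that `c` is garbage even though it points to itself, and `d` is garbage even though it points at live data: reachability is judged only from the roots.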
Once Upon a Type
In Functional Programming Languages and Computer Architecture, 1995
Cited by 90 (2 self)
A number of useful optimisations are enabled if we can determine when a value is accessed at most once. We extend the Hindley-Milner type system with uses, yielding a type-inference-based program analysis which determines when values are accessed at most once. Our analysis can handle higher-order functions and data structures, and admits principal types for terms. Unlike previous analyses, we prove our analysis sound with respect to call-by-need reduction. Call-by-name reduction does not provide an accurate model of how often a value is used during lazy evaluation, since it duplicates work which would actually be shared in a real implementation. Our type system can easily be modified to analyse usage in a call-by-value language. 1 Introduction This paper describes a method for determining when a value is used at most once. Our method is based on a simple modification of the Hindley-Milner type system. Each type is labelled to indicate whether the corresponding value is used at most onc...
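The property being inferred — "this value is used at most once" — can be approximated crudely by counting syntactic occurrences of a bound variable. The paper's analysis is type-based and far more precise; this Python sketch with a hypothetical tuple encoding of terms only illustrates the question being asked:

```python
# Crude syntactic usage count for a tiny lambda-term representation:
# terms are ("var", name) | ("app", fun, arg) | ("lam", name, body).

def uses(term, var):
    kind = term[0]
    if kind == "var":
        return 1 if term[1] == var else 0
    if kind == "app":
        return uses(term[1], var) + uses(term[2], var)
    if kind == "lam":
        # an inner binding of the same name shadows `var`
        return 0 if term[1] == var else uses(term[2], var)

once = ("app", ("var", "x"), ("var", "y"))
twice = ("app", ("var", "x"), ("var", "x"))
print(uses(once, "x"), uses(twice, "x"))  # 1 2
```

A binding whose count is at most one can be inlined without duplicating work, which is one of the optimisations such an analysis enables.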
A transformation-based optimiser for Haskell
1998
Cited by 89 (12 self)
Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of "compilation by transformation" as its war-cry, trying to express as much as possible of the compilation process in the form of program transformations. This paper reports on our practical experience of the transformational approach to compilation, in the context of a substantial compiler.
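One classic example of a correctness-preserving, performance-improving transformation of the kind described here is inlining a let-binding that is used exactly once. A hypothetical Python sketch over a toy expression AST, not GHC's actual Core representation or inliner:

```python
# Toy AST: ("var", name) | ("lit", n) | ("add", l, r) | ("let", v, rhs, body).
# Inlining a binding used at most once saves an allocation and
# cannot duplicate work, so it preserves both meaning and cost.

def count(term, var):
    kind = term[0]
    if kind == "var":
        return 1 if term[1] == var else 0
    if kind == "add":
        return count(term[1], var) + count(term[2], var)
    return 0  # literals

def subst(term, var, repl):
    kind = term[0]
    if kind == "var":
        return repl if term[1] == var else term
    if kind == "add":
        return ("add", subst(term[1], var, repl), subst(term[2], var, repl))
    return term

def inline_let(var, rhs, body):
    if count(body, var) <= 1:       # safe: no work duplication
        return subst(body, var, rhs)
    return ("let", var, rhs, body)  # keep the binding otherwise

body = ("add", ("var", "x"), ("lit", 1))
print(inline_let("x", ("lit", 41), body))  # ('add', ('lit', 41), ('lit', 1))
```

A real compiler chains many such small rewrites, each individually provable correct, which is the approach the paper reports on.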
Models of Sharing Graphs: A Categorical Semantics of let and letrec
1997
Cited by 75 (9 self)
A general abstract theory for computation involving shared resources is presented. We develop the models of sharing graphs, also known as term graphs, in terms of both syntax and semantics. According to the complexity of the permitted form of sharing, we consider four situations of sharing graphs. The simplest is first-order acyclic sharing graphs represented by let-syntax, and the others are extensions with higher-order constructs (lambda calculi) and/or cyclic sharing (recursive letrec binding). For each of the four settings, we provide the equational theory for representing the sharing graphs, and identify the class of categorical models which are shown to be sound and complete for the theory. The emphasis is put on the algebraic nature of sharing graphs, which leads us to the semantic account of them. We describe the models in terms of the notions of symmetric monoidal categories and functors, additionally with symmetric monoidal adjunctions and traced ...
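The cyclic sharing that letrec introduces can be seen concretely: `letrec ones = 1 : ones` builds a finite graph with a back-edge, not an infinite tree. A hypothetical Python encoding of that one-node sharing graph, nothing like the paper's categorical models:

```python
# letrec ones = 1 : ones, as a sharing graph: one cons node whose
# tail edge points back at itself.

class Node:
    def __init__(self, head):
        self.head = head
        self.tail = None   # filled in afterwards to close the cycle

ones = Node(1)
ones.tail = ones           # the back-edge: the whole graph is one node

# any finite prefix of the infinite unfolding is recovered by walking edges
n, xs = ones, []
for _ in range(3):
    xs.append(n.head)
    n = n.tail
print(xs)  # [1, 1, 1]
```

The tree semantics of this term is infinite, but the graph is finite; equational theories of sharing graphs make that distinction precise.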
The π-Calculus in Direct Style
1997
Cited by 70 (2 self)
We introduce a calculus which is a direct extension of both the λ- and the π-calculi. We give a simple type system for it, that encompasses both Curry's type inference for the λ-calculus, and Milner's sorting for the π-calculus as particular cases of typing. We observe that the various continuation-passing-style transformations for λ-terms, written in our calculus, actually correspond to encodings already given by Milner and others for evaluation strategies of λ-terms into the π-calculus. Furthermore, the associated sortings correspond to well-known double-negation translations on types. Finally we provide an adequate CPS transform from our calculus to the π-calculus. This shows that the latter may be regarded as an "assembly language", while our calculus seems to provide a better programming notation for higher-order concurrency.
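The direct-style/CPS contrast at the heart of this abstract is easy to show on a tiny function. An illustrative Python sketch (hypothetical names, nothing to do with the paper's actual transform):

```python
# Direct style returns a result; continuation-passing style (CPS)
# instead passes the result to an explicit "rest of the computation".

def add_direct(x, y):
    return x + y

def add_cps(x, y, k):
    # no return value: hand the result to the continuation k
    k(x + y)

results = []
# (1 + 2) + 39 in CPS: the outer addition is the continuation of the inner
add_cps(1, 2, lambda r: add_cps(r, 39, results.append))
print(add_direct(1, 2), results)  # 3 [42]
```

Making the continuation explicit is what lets λ-terms be encoded into the π-calculus, where a continuation becomes a channel on which the result is sent.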
Operational Semantics for Declarative Multi-Paradigm Languages
Journal of Symbolic Computation, 2005
Cited by 68 (29 self)
In this paper we define an operational semantics for functional logic languages covering notions like laziness, sharing, concurrency, non-determinism, etc. Such a semantics is not only important to provide appropriate language definitions to reason about programs and check the correctness of implementations, but it is also a basis to develop language-specific tools, like program tracers, profilers, optimizers, etc. First, we define a "big-step" semantics in natural style to relate expressions and their evaluated results. Since this semantics is not sufficient to cover concurrency, search strategies, or to reason about costs associated to particular computations, we also define a "small-step" operational semantics covering the features of modern functional logic languages.
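The big-step/small-step distinction the abstract draws can be illustrated on a language of nothing but additions. A hypothetical Python sketch, far simpler than the functional logic semantics the paper defines:

```python
# Expressions: an int (a value) or a pair (l, r) meaning l + r.

def big_step(e):
    # big-step: relate an expression directly to its final value
    if isinstance(e, int):
        return e
    l, r = e
    return big_step(l) + big_step(r)

def small_step(e):
    # small-step: perform exactly one reduction (leftmost-innermost)
    l, r = e
    if isinstance(l, tuple):
        return (small_step(l), r)
    if isinstance(r, tuple):
        return (l, small_step(r))
    return l + r

e = ((1, 2), (3, 4))
print(big_step(e))        # 10

steps = [e]
while isinstance(steps[-1], tuple):
    steps.append(small_step(steps[-1]))
print(len(steps) - 1)     # 3 individual reduction steps to reach 10
```

Big-step gives the result in one judgement; small-step exposes each intermediate state, which is what makes it suitable for reasoning about costs, interleaving, and search strategies.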
Compiling Haskell by program transformation: a report from the trenches
In Proc. European Symp. on Programming, 1996
Cited by 66 (4 self)
Many compilers do some of their work by means of correctness-preserving, and hopefully performance-improving, program transformations. The Glasgow Haskell Compiler (GHC) takes this idea of "compilation by transformation" as its war-cry, trying to express as much as possible of the compilation process in the form of program transformations. This paper reports on our practical experience of the transformational approach to compilation, in the context of a substantial compiler. The paper appears in the Proceedings of the European Symposium on Programming, Linkoping, April 1996. 1 Introduction Using correctness-preserving transformations as a compiler optimisation is a well-established technique (Aho, Sethi & Ullman [1986]; Bacon, Graham & Sharp [1994]). In the functional programming area especially, the idea of compilation by transformation has received quite a bit of attention (Appel [1992]; Fradet & Metayer [1991]; Kelsey [1989]; Kelsey & Hudak [1989]; Kranz [1988]; Steele [1978]). ...