Results 1–10 of 35
Programming with bananas, lenses, envelopes and barbed wire
In FPCA, 1991
Cited by 327 (12 self)
Abstract: We develop a calculus for lazy functional programming based on recursion operators associated with data type definitions. For these operators we derive various algebraic laws that are useful in deriving and manipulating programs. We shall show that all example functions in Bird and Wadler's "Introduction to Functional Programming" can be expressed using these operators.
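The recursion operators the abstract refers to include the catamorphism, written in the paper with "banana brackets". A minimal Python sketch of the list catamorphism, with two Bird-and-Wadler-style functions expressed through it (all names here are illustrative, not from the paper):

```python
def cata(nil, cons, xs):
    """List catamorphism: replace [] by `nil` and each cons cell by `cons`.

    The paper writes this operator with banana brackets, (|nil, cons|).
    """
    acc = nil
    for x in reversed(xs):  # fold from the right, like foldr
        acc = cons(x, acc)
    return acc

# Familiar list functions expressed as catamorphisms:
def length(xs):
    return cata(0, lambda _, n: 1 + n, xs)

def concat(xss):
    return cata([], lambda ys, rest: ys + rest, xss)

print(length([10, 20, 30]))   # 3
print(concat([[1], [2, 3]]))  # [1, 2, 3]
```

The algebraic laws mentioned in the abstract (e.g. fusion) are equations about `cata` that hold for every choice of `nil` and `cons`.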
Proving Concurrent Constraint Programs Correct
1994
Cited by 60 (13 self)
Abstract: We develop a compositional proof system for the partial correctness of concurrent constraint programs. Soundness and (relative) completeness of the system are proved with respect to a denotational semantics based on the notion of strongest postcondition. The strongest postcondition semantics provides a justification of the declarative nature of concurrent constraint programs, since it makes it possible to view programs as theories in the specification logic.

1 Introduction
Concurrent constraint programming ([24, 25, 26]) (ccp, for short) is a concurrent programming paradigm which derives from replacing the store-as-valuation conception of von Neumann computing by the store-as-constraint model. Its computational model is based on a global store, represented by a constraint, which expresses some partial information on the values of the variables involved in the computation. The concurrent execution of different processes, which interact through the common store, refines the partial information of...
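The store-as-constraint model can be illustrated with a toy store whose only operations are the ccp primitives tell (add a constraint) and ask (check entailment). This is a hedged sketch: a real ccp system works over a proper constraint system with entailment between partial informations, while the class below only handles ground facts.

```python
class Store:
    """Toy ccp store: a monotonically growing conjunction of constraints.

    Constraints here are plain strings standing for ground facts.
    """

    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        """Refine the partial information; information is never retracted."""
        self.facts.add(fact)

    def ask(self, fact):
        """Entailment check; a ccp process would suspend until this holds."""
        return fact in self.facts

store = Store()
store.tell("x > 0")
print(store.ask("x > 0"))  # True
print(store.ask("y > 0"))  # False
```

Monotonicity of `tell` is what makes the strongest-postcondition view work: the store only ever gains information.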
Strictness Analysis in Logical Form
1991
Cited by 47 (2 self)
Abstract: This paper presents a framework for comparing two strictness analysis techniques: abstract interpretation and non-standard type inference. The comparison is based on the representation of a lattice by its ideals. A formal system for deducing inclusions between ideals of a lattice is presented and proved sound and complete. Viewing the ideals as strictness properties, we use the formal system to define a program logic for deducing strictness properties of expressions in a typed lambda calculus. This strictness logic is shown to be sound and complete with respect to the abstract interpretation, which establishes the main result that strictness analysis by type inference and by abstract interpretation are equally powerful techniques.

1 Introduction
Abstract interpretation is a well-established technique for static analysis of programs. Its virtue is its strong connection with denotational semantics, which provides a means of proving the analysis correct. Its vice is that the process of...
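Strictness analysis by abstract interpretation is commonly carried out over the two-point domain {0, 1}, where 0 means "certainly undefined" (bottom) and 1 means "possibly defined"; a function is strict in an argument exactly when its abstract version sends 0 to 0 there. A small sketch under that standard setup (the abstract functions are illustrative, not taken from the paper):

```python
# Two-point domain: 0 = certainly undefined (bottom), 1 = possibly defined.

def abs_plus(x, y):
    # + evaluates both arguments, so the result is undefined if either is
    return x & y

def abs_cond(c, t, e):
    # if-then-else needs its condition and one of its two branches
    return c & (t | e)

def strict_in(f, arity, i):
    """f is strict in argument i iff bottom there forces bottom out."""
    args = [1] * arity
    args[i] = 0
    return f(*args) == 0

print(strict_in(abs_plus, 2, 0))  # True: + is strict in its first argument
print(strict_in(abs_cond, 3, 1))  # False: the then-branch may go unevaluated
```

The paper's result is that the same facts are derivable in a strictness logic over ideals, so the type-inference view loses no precision.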
What Not to Do When Writing an Interpreter for Specialisation
In Danvy et al., 1996
Cited by 32 (3 self)
Abstract: A partial evaluator, given a program and a known "static" part of its input data, outputs a specialised or residual program in which computations depending only on the static data have been performed in advance. Ideally the partial evaluator would be a "black box" able to extract nontrivial static computations whenever possible, which never fails to terminate, and which always produces residual programs of reasonable size and maximal efficiency, so that all possible static computations have been done. Practically speaking, partial evaluators often fall short of this goal; they sometimes loop, sometimes pessimise, and can explode code size. A partial evaluator is analogous to a spirited horse: while impressive results can be obtained when used well, the user must know what he or she is doing. Our thesis is that this knowledge can be communicated to new users of these tools. This paper presents a series of examples, concentrating on a quite broad and on the whole quite successful application...
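The specialisation described in the abstract can be shown on the classic power example: with the exponent static, all recursion on it is performed in advance, leaving a residual program over the dynamic base alone. A toy hand-specialiser, not a real partial evaluator (names are illustrative):

```python
def specialize_power(n):
    """Residual program for pow(x, n) with static exponent n.

    The loop over n runs now, at specialisation time; only the
    multiplications over the dynamic input x remain in the output.
    """
    expr = "1"
    for _ in range(n):
        expr = "x" if expr == "1" else f"x * {expr}"
    return f"lambda x: {expr}"

src = specialize_power(3)
print(src)        # lambda x: x * x * x
cube = eval(src)
print(cube(2))    # 8
```

The pitfalls the paper catalogues show up even here in miniature: a large static exponent already "explodes" the residual code size linearly.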
Science, Computational Science and Computer Science: At a Crossroads
Comm. ACM, 1993
Cited by 29 (2 self)
Abstract: We describe computational science as an interdisciplinary approach to doing science on computers. Our purpose is to introduce computational science as a legitimate interest of computer scientists. We present a foundation for computational science based on the need to incorporate computation at the scientific level; i.e., computational aspects must be considered when a model is formulated. We next present some obstacles to computer scientists' participation in computational science, including a cultural bias in computer science that inhibits participation. Finally, we look at some areas of conventional computer science and indicate areas of mutual interest between computational science and computer science.
Keywords: education, computational science.

1 What is Computational Science?
In December 1991, the U.S. Congress passed the High Performance Computing and Communications Act, commonly known as the HPCC. This act focuses on several aspects of computing technology, but two have...
A Provably Correct Compiler Generator
1992
Cited by 28 (3 self)
Abstract: We have designed, implemented, and proved the correctness of a compiler generator that accepts action semantic descriptions of imperative programming languages. The generated compilers emit absolute code for an abstract RISC machine language that currently is assembled into code for the SPARC and the HP Precision Architecture. Our machine language needs no run-time type checking and is thus more realistic than those considered in previous compiler proofs. We use solely algebraic specifications; proofs are given in the initial model.

1 Introduction
The previous approaches to proving correctness of compilers for non-trivial languages all use target code with run-time type checking. The following semantic rule is typical for these target languages:

(FIRST : C, ⟨v1, v2⟩ : S) → (C, v1 : S)

The rule describes the semantics of an instruction that extracts the first component of the top element of the stack, provided that the top element is a pair. If not, then it is implicit that the...
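The quoted rule reads off directly as one transition of a stack machine; the `isinstance` check below is precisely the run-time type check that the paper's typed target language makes unnecessary. A minimal sketch (names are illustrative):

```python
def step(code, stack):
    """One machine step. For FIRST this implements
    (FIRST : C, <v1, v2> : S) -> (C, v1 : S)."""
    instr, *rest_code = code
    if instr == "FIRST":
        top, *rest_stack = stack
        if not isinstance(top, tuple) or len(top) != 2:
            # the run-time type check a well-typed target language avoids
            raise TypeError("FIRST expects a pair on top of the stack")
        return rest_code, [top[0], *rest_stack]
    raise ValueError(f"unknown instruction {instr!r}")

code, stack = step(["FIRST"], [(1, 2), 99])
print(code, stack)  # [] [1, 99]
```

In the paper's setting the compiler's correctness proof guarantees the top of the stack is always a pair when FIRST executes, so the check can be dropped.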
The Expressive Power of Higher-order Types or, Life without CONS
2001
Cited by 28 (1 self)
Abstract: Compare first-order functional programs with higher-order programs allowing functions as function parameters. Can the first program class solve fewer problems than the second? The answer is no: both classes are Turing complete, meaning that they can compute all partial recursive functions. In particular, higher-order values may be first-order simulated by using the list constructor 'cons' to build function closures. This paper uses complexity theory to prove some expressivity results about small programming languages that are less than Turing complete. Complexity classes of decision problems are used to characterize the expressive power of functional programming language features. An example: second-order programs are more powerful than first-order, since a function f of type [Bool] → Bool is computable by a cons-free first-order functional program if and only if f is in PTIME, whereas f is computable by a cons-free second-order program if and only if f is in EXPTIME. Exact characterizations are given for those problems of type [Bool] → Bool solvable by programs with several combinations of operations on data: presence or absence of constructors; the order of data values: 0, 1, or higher; and program control structures: general recursion, tail recursion, primitive recursion.
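The first-order simulation mentioned in the abstract is defunctionalization: each higher-order value becomes a data structure built with 'cons' (here a Python tuple) carrying its captured values, and a single first-order interpreter applies it. A hedged sketch with illustrative names:

```python
def make_adder(n):
    return ("adder", n)            # closure as data: tag plus captured value

def compose(f, g):
    return ("compose", f, g)       # closures may capture other closures

def apply_clo(clo, x):
    """First-order interpreter for the closure representations above."""
    tag = clo[0]
    if tag == "adder":
        return x + clo[1]
    if tag == "compose":
        return apply_clo(clo[1], apply_clo(clo[2], x))
    raise ValueError(f"unknown closure tag {tag!r}")

add3 = make_adder(3)
print(apply_clo(add3, 4))                             # 7
print(apply_clo(compose(add3, make_adder(10)), 0))    # 13
```

The paper's point is what happens when this trick is forbidden: without 'cons' the closure data structures cannot be built, and order of data then genuinely separates complexity classes.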
A Security Flow Control Algorithm and Its Denotational Semantics Correctness Proof
1992
Cited by 26 (0 self)
Abstract: We derive a security flow control algorithm for message-based, modular systems and prove the algorithm correct. The development is noteworthy because it is completely rigorous: the flow control algorithm is derived as an abstract interpretation of the denotational semantics of the programming language for the modular system, and the correctness proof is a proof by logical relations of the congruence between the denotational semantics and its abstract interpretation. Effectiveness is also addressed: we give conditions under which an abstract interpretation can be computed as a traditional iterative data flow analysis, and we prove that our security flow control algorithm satisfies the conditions. We also show that symbolic expressions (that is, data flow values that contain unknowns) can be used in a convergent, iterative analysis. An important consequence of the latter result is that the security flow control algorithm can analyze individual modules in a system for well-formedness and...
SVP: a Model Capturing Sets, Streams, and Parallelism
In Proceedings of the 18th VLDB Conference, 1992
Cited by 22 (0 self)
Abstract: We describe the SVP data model. The goal of SVP is to model both set and stream data, and to model parallelism in bulk data processing. SVP also shows promise for other parallel processing applications. SVP models collections, which include sets and streams as special cases. Collections are represented as ordered tree structures, and divide-and-conquer mappings are easily defined on these structures. We show that many useful database mappings (queries) have a divide-and-conquer format when specified using collections, and that this specification exposes parallelism. We formalize a class of divide-and-conquer mappings on collections called SVP-transducers. SVP-transducers generalize aggregates, set mappings, stream transductions, and scan computations. At the same time, they have a rigorous semantics based on continuity with respect to collection orderings, and permit implicit specification of both independent and pipeline parallelism.

1 Introduction
Achieving parallelism in bulk data...
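A divide-and-conquer mapping over ordered trees can be sketched with one function per constructor; because the two recursive calls touch disjoint subtrees, they are the natural sites for the independent parallelism the abstract mentions. Representation and names below are illustrative, not the SVP formalism itself:

```python
def dc(leaf, node, t):
    """Divide-and-conquer mapping: one function per tree constructor."""
    if t[0] == "leaf":
        return leaf(t[1])
    _, left, right = t
    # the two recursive calls are independent and could run in parallel
    return node(dc(leaf, node, left), dc(leaf, node, right))

# the collection 1, 2, 3 as an ordered tree
tree = ("node", ("node", ("leaf", 1), ("leaf", 2)), ("leaf", 3))

print(dc(lambda x: x, lambda a, b: a + b, tree))    # 6  (an aggregate)
print(dc(lambda x: [x], lambda a, b: a + b, tree))  # [1, 2, 3]  (flatten)
```

Choosing an order-sensitive `node` function models stream transductions, while an associative-commutative one models set mappings, which is roughly how one mapping class covers both cases.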
An Automatically Generated and Provably Correct Compiler for a Subset of Ada
In IEEE International Conference on Computer Languages, 1992
Cited by 14 (3 self)
Abstract: We describe the automatic generation of a provably correct compiler for a non-trivial subset of Ada. The compiler is generated from an action semantic description; it emits absolute code for an abstract RISC machine language that currently is assembled into code for the SPARC and the HP Precision Architecture. The generated code is an order of magnitude better than what is produced by compilers generated by the classical systems of Mosses, Paulson, and Wand. The use of action semantics makes the processable language specification easy to read and pleasant to work with. In Proc. ICCL'92, Fourth IEEE International Conference on Computer Languages, pages 117-126.

1 Introduction
The purpose of a language designer's workbench, envisioned by Pleban, is to drastically improve the language design process. The major components in such a workbench are:
- A specification language whose specifications are easily maintainable, and accessible without knowledge of the underlying theory; and...